@@ -1,676 +1,681 b'' | |||
|
1 | 1 | # bundle2.py - generic container format to transmit arbitrary data. |
|
2 | 2 | # |
|
3 | 3 | # Copyright 2013 Facebook, Inc. |
|
4 | 4 | # |
|
5 | 5 | # This software may be used and distributed according to the terms of the |
|
6 | 6 | # GNU General Public License version 2 or any later version. |
|
7 | 7 | """Handling of the new bundle2 format |
|
8 | 8 | |
|
9 | 9 | The goal of bundle2 is to act as an atomic container to transmit a set of |

10 | 10 | payloads in an application-agnostic way. It consists of a sequence of "parts" |
|
11 | 11 | that will be handed to and processed by the application layer. |
|
12 | 12 | |
|
13 | 13 | |
|
14 | 14 | General format architecture |
|
15 | 15 | =========================== |
|
16 | 16 | |
|
17 | 17 | The format is architected as follows: |
|
18 | 18 | |
|
19 | 19 | - magic string |
|
20 | 20 | - stream level parameters |
|
21 | 21 | - payload parts (any number) |
|
22 | 22 | - end of stream marker. |
|
23 | 23 | |
|
24 | 24 | The binary format |
|
25 | 25 | ============================ |
|
26 | 26 | |
|
27 | 27 | All numbers are unsigned and big-endian. |
|
28 | 28 | |
|
29 | 29 | stream level parameters |
|
30 | 30 | ------------------------ |
|
31 | 31 | |
|
32 | 32 | The binary format is as follows: |
|
33 | 33 | |
|
34 | 34 | :params size: (16 bits integer) |
|
35 | 35 | |
|
36 | 36 | The total number of bytes used by the parameters |

37 | 37 | |

38 | 38 | :params value: arbitrary number of bytes |
|
39 | 39 | |
|
40 | 40 | A blob of `params size` containing the serialized version of all stream level |
|
41 | 41 | parameters. |
|
42 | 42 | |
|
43 | 43 | The blob contains a space separated list of parameters. Parameters with a value |
|
44 | 44 | are stored in the form `<name>=<value>`. Both name and value are urlquoted. |
|
45 | 45 | |
|
46 | 46 | Empty names are obviously forbidden. |
|
47 | 47 | |
|
48 | 48 | Names MUST start with a letter. If this first letter is lower case, the |

49 | 49 | parameter is advisory and can be safely ignored. However, when the first |

50 | 50 | letter is capital, the parameter is mandatory and the bundling process MUST |

51 | 51 | stop if it is not able to process it. |
|
52 | 52 | |
|
53 | 53 | Stream parameters use a simple textual format for two main reasons: |
|
54 | 54 | |
|
55 | 55 | - Stream level parameters should remain simple and we want to discourage any |
|
56 | 56 | crazy usage. |
|
57 | 57 | - Textual data allows easy human inspection of a bundle2 header in case of |
|
58 | 58 | troubles. |
|
59 | 59 | |
|
60 | 60 | Any applicative level options MUST go into a bundle2 part instead. |
|
61 | 61 | |
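A minimal sketch, mirroring the stream-level layout just described rather than quoting the module's own emitter (standalone Python 2; the helper name is this reviewer's invention):

    import struct
    import urllib

    def encodestreamparams(params):
        # params: list of (name, value-or-None) pairs
        blocks = []
        for name, value in params:
            block = urllib.quote(name)
            if value is not None:
                block = '%s=%s' % (block, urllib.quote(value))
            blocks.append(block)
        blob = ' '.join(blocks)
        # 16-bit big-endian size, then the space separated, urlquoted blob
        return struct.pack('>H', len(blob)) + blob

    # lower case names are advisory; a capitalized name would be mandatory
    encodestreamparams([('caps', 'a|b'), ('simple', None)])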
|
62 | 62 | Payload part |
|
63 | 63 | ------------------------ |
|
64 | 64 | |
|
65 | 65 | The binary format is as follows: |
|
66 | 66 | |
|
67 | 67 | :header size: (16 bits integer) |
|
68 | 68 | |
|
69 | 69 | The total number of bytes used by the part headers. When the header is empty |
|
70 | 70 | (size = 0) this is interpreted as the end of stream marker. |
|
71 | 71 | |
|
72 | 72 | :header: |
|
73 | 73 | |
|
74 | 74 | The header defines how to interpret the part. It contains two pieces of |
|
75 | 75 | data: the part type, and the part parameters. |
|
76 | 76 | |
|
77 | 77 | The part type is used to route the part to an application level handler that |

78 | 78 | can interpret the payload. |
|
79 | 79 | |
|
80 | 80 | Part parameters are passed to the application level handler. They are |
|
81 | 81 | meant to convey information that will help the application level object to |
|
82 | 82 | interpret the part payload. |
|
83 | 83 | |
|
84 | 84 | The binary format of the header is as follows: |
|
85 | 85 | |
|
86 | 86 | :typesize: (one byte) |
|
87 | 87 | |
|
88 | 88 | :parttype: alphanumerical part name |
|
89 | 89 | |
|
90 | 90 | :partid: A 32bits integer (unique in the bundle) that can be used to refer |
|
91 | 91 | to this part. |
|
92 | 92 | |
|
93 | 93 | :parameters: |
|
94 | 94 | |
|
95 | 95 | A part's parameters may have arbitrary content; the binary structure is:: |
|
96 | 96 | |
|
97 | 97 | <mandatory-count><advisory-count><param-sizes><param-data> |
|
98 | 98 | |
|
99 | 99 | :mandatory-count: 1 byte, number of mandatory parameters |
|
100 | 100 | |
|
101 | 101 | :advisory-count: 1 byte, number of advisory parameters |
|
102 | 102 | |
|
103 | 103 | :param-sizes: |
|
104 | 104 | |
|
105 | 105 | N couples of bytes, where N is the total number of parameters. Each |

106 | 106 | couple contains (<size-of-key>, <size-of-value>) for one parameter. |
|
107 | 107 | |
|
108 | 108 | :param-data: |
|
109 | 109 | |
|
110 | 110 | A blob of bytes from which each parameter key and value can be |
|
111 | 111 | retrieved using the list of size couples stored in the previous |
|
112 | 112 | field. |
|
113 | 113 | |
|
114 | 114 | Mandatory parameters come first, then the advisory ones. |
|
115 | 115 | |
|
116 | 116 | :payload: |
|
117 | 117 | |
|
118 | 118 | payload is a series of `<chunksize><chunkdata>`. |
|
119 | 119 | |
|
120 | 120 | `chunksize` is a 32 bits integer, `chunkdata` are plain bytes (as many as |

121 | 121 | `chunksize` says). The payload part is concluded by a zero size chunk. |
|
122 | 122 | |
|
123 | 123 | The current implementation always produces either zero or one chunk. |
|
124 | 124 | This is an implementation limitation that will ultimately be lifted. |
|
125 | 125 | |
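Putting the header fields above together, a rough decoding sketch (Python 2; assumes `blob` is one complete header already read off the stream using the size prefix, and the function name is illustrative only):

    import struct

    def decodepartheader(blob):
        typesize = struct.unpack('>B', blob[0:1])[0]
        parttype = blob[1:1 + typesize]
        offset = 1 + typesize
        partid = struct.unpack('>I', blob[offset:offset + 4])[0]
        offset += 4
        mancount, advcount = struct.unpack('>BB', blob[offset:offset + 2])
        offset += 2
        nbparams = mancount + advcount
        sizes = struct.unpack('>' + 'BB' * nbparams,
                              blob[offset:offset + 2 * nbparams])
        offset += 2 * nbparams
        params = []
        for keysize, valsize in zip(sizes[::2], sizes[1::2]):
            params.append((blob[offset:offset + keysize],
                           blob[offset + keysize:offset + keysize + valsize]))
            offset += keysize + valsize
        # mandatory parameters come first, then the advisory ones
        return parttype, partid, params[:mancount], params[mancount:]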
|
126 | 126 | Bundle processing |
|
127 | 127 | ============================ |
|
128 | 128 | |
|
129 | 129 | Each part is processed in order using a "part handler". Handlers are registered |
|
130 | 130 | for a certain part type. |
|
131 | 131 | |
|
132 | 132 | The matching of a part to its handler is case insensitive. The case of the |
|
133 | 133 | part type is used to know if a part is mandatory or advisory. If the part type |

134 | 134 | contains any uppercase char it is considered mandatory. When no handler is |

135 | 135 | known for a mandatory part, the process is aborted and an exception is raised. |

136 | 136 | If the part is advisory and no handler is known, the part is ignored. When the |

137 | 137 | process is aborted, the full bundle is still read from the stream to keep the |

138 | 138 | channel usable. But none of the parts read after an abort are processed. In the |

139 | 139 | future, dropping the stream may become an option for channels we do not care to |
|
140 | 140 | preserve. |
|
141 | 141 | """ |
|
142 | 142 | |
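A tiny sketch of that case convention (the module performs the equivalent check inline during processing; this helper is illustrative only):

    def ismandatory(parttype):
        # any uppercase character in the part type marks the part mandatory;
        # handler lookup uses the lower-cased name either way
        return parttype != parttype.lower()

    assert ismandatory('CHANGEGROUP')
    assert not ismandatory('changegroup')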
|
143 | 143 | import util |
|
144 | 144 | import struct |
|
145 | 145 | import urllib |
|
146 | 146 | import string |
|
147 | 147 | |
|
148 | 148 | import changegroup |
|
149 | 149 | from i18n import _ |
|
150 | 150 | |
|
151 | 151 | _pack = struct.pack |
|
152 | 152 | _unpack = struct.unpack |
|
153 | 153 | |
|
154 | 154 | _magicstring = 'HG20' |
|
155 | 155 | |
|
156 | 156 | _fstreamparamsize = '>H' |
|
157 | 157 | _fpartheadersize = '>H' |
|
158 | 158 | _fparttypesize = '>B' |
|
159 | 159 | _fpartid = '>I' |
|
160 | 160 | _fpayloadsize = '>I' |
|
161 | 161 | _fpartparamcount = '>BB' |
|
162 | 162 | |
|
163 | 163 | preferedchunksize = 4096 |
|
164 | 164 | |
|
165 | 165 | def _makefpartparamsizes(nbparams): |
|
166 | 166 | """return a struct format to read part parameter sizes |
|
167 | 167 | |
|
168 | 168 | The number parameters is variable so we need to build that format |
|
169 | 169 | dynamically. |
|
170 | 170 | """ |
|
171 | 171 | return '>'+('BB'*nbparams) |
|
172 | 172 | |
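For instance, assuming the helper behaves as written, two parameters yield the format '>BBBB':

    fmt = _makefpartparamsizes(2)   # '>BBBB'
    struct.calcsize(fmt)            # 4: one (key size, value size) byte pair per parameter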
|
173 | 173 | parthandlermapping = {} |
|
174 | 174 | |
|
175 | 175 | def parthandler(parttype): |
|
176 | 176 | """decorator that registers a function as a bundle2 part handler |
|
177 | 177 | |
|
178 | 178 | eg:: |
|
179 | 179 | |
|
180 | 180 | @parthandler('myparttype') |
|
181 | 181 | def myparttypehandler(...): |
|
182 | 182 | '''process a part of type "my part".''' |
|
183 | 183 | ... |
|
184 | 184 | """ |
|
185 | 185 | def _decorator(func): |
|
186 | 186 | lparttype = parttype.lower() # enforce lower case matching. |
|
187 | 187 | assert lparttype not in parthandlermapping |
|
188 | 188 | parthandlermapping[lparttype] = func |
|
189 | 189 | return func |
|
190 | 190 | return _decorator |
|
191 | 191 | |
|
192 | 192 | class unbundlerecords(object): |
|
193 | 193 | """keep record of what happens during an unbundle |
|
194 | 194 | |
|
195 | 195 | New records are added using `records.add('cat', obj)`. Where 'cat' is a |
|
196 | 196 | category of record and obj is an arbitrary object. |
|
197 | 197 | |
|
198 | 198 | `records['cat']` will return all entries of this category 'cat'. |
|
199 | 199 | |
|
200 | 200 | Iterating on the object itself will yield `('category', obj)` tuples |
|
201 | 201 | for all entries. |
|
202 | 202 | |
|
203 | 203 | All iterations happen in chronological order. |
|
204 | 204 | """ |
|
205 | 205 | |
|
206 | 206 | def __init__(self): |
|
207 | 207 | self._categories = {} |
|
208 | 208 | self._sequences = [] |
|
209 | 209 | self._replies = {} |
|
210 | 210 | |
|
211 | 211 | def add(self, category, entry, inreplyto=None): |
|
212 | 212 | """add a new record of a given category. |
|
213 | 213 | |
|
214 | 214 | The entry can then be retrieved in the list returned by |
|
215 | 215 | self['category'].""" |
|
216 | 216 | self._categories.setdefault(category, []).append(entry) |
|
217 | 217 | self._sequences.append((category, entry)) |
|
218 | 218 | if inreplyto is not None: |
|
219 | 219 | self.getreplies(inreplyto).add(category, entry) |
|
220 | 220 | |
|
221 | 221 | def getreplies(self, partid): |
|
222 | 222 | """get the subrecords that reply to a specific part""" |
|
223 | 223 | return self._replies.setdefault(partid, unbundlerecords()) |
|
224 | 224 | |
|
225 | 225 | def __getitem__(self, cat): |
|
226 | 226 | return tuple(self._categories.get(cat, ())) |
|
227 | 227 | |
|
228 | 228 | def __iter__(self): |
|
229 | 229 | return iter(self._sequences) |
|
230 | 230 | |
|
231 | 231 | def __len__(self): |
|
232 | 232 | return len(self._sequences) |
|
233 | 233 | |
|
234 | 234 | def __nonzero__(self): |
|
235 | 235 | return bool(self._sequences) |
|
236 | 236 | |
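A short usage sketch of this recording API (the category name and payloads here are arbitrary examples, not values the module produces):

    records = unbundlerecords()
    records.add('changegroup', {'return': 1})
    records.add('changegroup', {'return': 0}, inreplyto=0)
    records['changegroup']        # ({'return': 1}, {'return': 0}), insertion order
    list(records.getreplies(0))   # [('changegroup', {'return': 0})]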
|
237 | 237 | class bundleoperation(object): |
|
238 | 238 | """an object that represents a single bundling process |
|
239 | 239 | |
|
240 | 240 | Its purpose is to carry unbundle-related objects and states. |
|
241 | 241 | |
|
242 | 242 | A new object should be created at the beginning of each bundle processing. |
|
243 | 243 | The object is to be returned by the processing function. |
|
244 | 244 | |
|
245 | 245 | The object has very little content now; it will ultimately contain: |

246 | 246 | * access to the repo the bundle is applied to, |
|
247 | 247 | * a ui object, |
|
248 | 248 | * a way to retrieve a transaction to add changes to the repo, |
|
249 | 249 | * a way to record the result of processing each part, |
|
250 | 250 | * a way to construct a bundle response when applicable. |
|
251 | 251 | """ |
|
252 | 252 | |
|
253 | 253 | def __init__(self, repo, transactiongetter): |
|
254 | 254 | self.repo = repo |
|
255 | 255 | self.ui = repo.ui |
|
256 | 256 | self.records = unbundlerecords() |
|
257 | 257 | self.gettransaction = transactiongetter |
|
258 | 258 | self.reply = None |
|
259 | 259 | |
|
260 | 260 | class TransactionUnavailable(RuntimeError): |
|
261 | 261 | pass |
|
262 | 262 | |
|
263 | 263 | def _notransaction(): |
|
264 | 264 | """default method to get a transaction while processing a bundle |
|
265 | 265 | |
|
266 | 266 | Raise an exception to highlight the fact that no transaction was expected |
|
267 | 267 | to be created""" |
|
268 | 268 | raise TransactionUnavailable() |
|
269 | 269 | |
|
270 | 270 | def processbundle(repo, unbundler, transactiongetter=_notransaction): |
|
271 | 271 | """This function processes a bundle, applying effects to/from a repo |
|
272 | 272 | |
|
273 | 273 | It iterates over each part then searches for and uses the proper handling |
|
274 | 274 | code to process the part. Parts are processed in order. |
|
275 | 275 | |
|
276 | 276 | This is a very early version of this function that will be strongly reworked |
|
277 | 277 | before final usage. |
|
278 | 278 | |
|
279 | 279 | An unknown mandatory part will abort the process. |
|
280 | 280 | """ |
|
281 | 281 | op = bundleoperation(repo, transactiongetter) |
|
282 | 282 | # todo: |
|
283 | # - only create reply bundle if requested. | |
|
284 | op.reply = bundle20(op.ui) | |
|
285 | # todo: | |
|
286 | 283 | # - replace this with an init function soon. |
|
287 | 284 | # - exception catching |
|
288 | 285 | unbundler.params |
|
289 | 286 | iterparts = unbundler.iterparts() |
|
290 | 287 | part = None |
|
291 | 288 | try: |
|
292 | 289 | for part in iterparts: |
|
293 | 290 | parttype = part.type |
|
294 | 291 | # part keys are matched lower case |
|
295 | 292 | key = parttype.lower() |
|
296 | 293 | try: |
|
297 | 294 | handler = parthandlermapping[key] |
|
298 | 295 | op.ui.debug('found a handler for part %r\n' % parttype) |
|
299 | 296 | except KeyError: |
|
300 | 297 | if key != parttype: # mandatory parts |
|
301 | 298 | # todo: |
|
302 | 299 | # - use a more precise exception |
|
303 | 300 | raise |
|
304 | 301 | op.ui.debug('ignoring unknown advisory part %r\n' % key) |
|
305 | 302 | # consuming the part |
|
306 | 303 | part.read() |
|
307 | 304 | continue |
|
308 | 305 | |
|
309 | 306 | # handler is called outside the above try block so that we don't |
|
310 | 307 | # risk catching KeyErrors from anything other than the |
|
311 | 308 | # parthandlermapping lookup (any KeyError raised by handler() |
|
312 | 309 | # itself represents a defect of a different variety). |
|
313 | 310 | handler(op, part) |
|
314 | 311 | part.read() |
|
315 | 312 | except Exception: |
|
316 | 313 | if part is not None: |
|
317 | 314 | # consume the bundle content |
|
318 | 315 | part.read() |
|
319 | 316 | for part in iterparts: |
|
320 | 317 | # consume the bundle content |
|
321 | 318 | part.read() |
|
322 | 319 | raise |
|
323 | 320 | return op |
|
324 | 321 | |
|
325 | 322 | class bundle20(object): |
|
326 | 323 | """represent an outgoing bundle2 container |
|
327 | 324 | |
|
328 | 325 | Use the `addparam` method to add stream level parameters and `addpart` to |
|
329 | 326 | populate it. Then call `getchunks` to retrieve all the binary chunks of |
|
330 | 327 | data that compose the bundle2 container.""" |
|
331 | 328 | |
|
332 | 329 | def __init__(self, ui): |
|
333 | 330 | self.ui = ui |
|
334 | 331 | self._params = [] |
|
335 | 332 | self._parts = [] |
|
336 | 333 | |
|
337 | 334 | def addparam(self, name, value=None): |
|
338 | 335 | """add a stream level parameter""" |
|
339 | 336 | if not name: |
|
340 | 337 | raise ValueError('empty parameter name') |
|
341 | 338 | if name[0] not in string.letters: |
|
342 | 339 | raise ValueError('non letter first character: %r' % name) |
|
343 | 340 | self._params.append((name, value)) |
|
344 | 341 | |
|
345 | 342 | def addpart(self, part): |
|
346 | 343 | """add a new part to the bundle2 container |
|
347 | 344 | |
|
348 | 345 | Parts contain the actual applicative payload.""" |
|
349 | 346 | assert part.id is None |
|
350 | 347 | part.id = len(self._parts) # very cheap counter |
|
351 | 348 | self._parts.append(part) |
|
352 | 349 | |
|
353 | 350 | def getchunks(self): |
|
354 | 351 | self.ui.debug('start emission of %s stream\n' % _magicstring) |
|
355 | 352 | yield _magicstring |
|
356 | 353 | param = self._paramchunk() |
|
357 | 354 | self.ui.debug('bundle parameter: %s\n' % param) |
|
358 | 355 | yield _pack(_fstreamparamsize, len(param)) |
|
359 | 356 | if param: |
|
360 | 357 | yield param |
|
361 | 358 | |
|
362 | 359 | self.ui.debug('start of parts\n') |
|
363 | 360 | for part in self._parts: |
|
364 | 361 | self.ui.debug('bundle part: "%s"\n' % part.type) |
|
365 | 362 | for chunk in part.getchunks(): |
|
366 | 363 | yield chunk |
|
367 | 364 | self.ui.debug('end of bundle\n') |
|
368 | 365 | yield '\0\0' |
|
369 | 366 | |
|
370 | 367 | def _paramchunk(self): |
|
371 | 368 | """return an encoded version of all stream parameters""" |
|
372 | 369 | blocks = [] |
|
373 | 370 | for par, value in self._params: |
|
374 | 371 | par = urllib.quote(par) |
|
375 | 372 | if value is not None: |
|
376 | 373 | value = urllib.quote(value) |
|
377 | 374 | par = '%s=%s' % (par, value) |
|
378 | 375 | blocks.append(par) |
|
379 | 376 | return ' '.join(blocks) |
|
380 | 377 | |
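A minimal assembly sketch for this class (the quiet ui stub is hypothetical scaffolding for the example; bundlepart is defined further down in this file):

    class quietui(object):
        def debug(self, msg):
            pass                             # bundle20 only needs ui.debug here

    bundler = bundle20(quietui())
    bundler.addparam('evolution')            # lower case: advisory stream parameter
    bundler.addpart(bundlepart('test:empty'))
    raw = ''.join(bundler.getchunks())       # 'HG20' + params + parts + '\x00\x00'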
|
381 | 378 | class unpackermixin(object): |
|
382 | 379 | """A mixin to extract bytes and struct data from a stream""" |
|
383 | 380 | |
|
384 | 381 | def __init__(self, fp): |
|
385 | 382 | self._fp = fp |
|
386 | 383 | |
|
387 | 384 | def _unpack(self, format): |
|
388 | 385 | """unpack this struct format from the stream""" |
|
389 | 386 | data = self._readexact(struct.calcsize(format)) |
|
390 | 387 | return _unpack(format, data) |
|
391 | 388 | |
|
392 | 389 | def _readexact(self, size): |
|
393 | 390 | """read exactly <size> bytes from the stream""" |
|
394 | 391 | return changegroup.readexactly(self._fp, size) |
|
395 | 392 | |
|
396 | 393 | |
|
397 | 394 | class unbundle20(unpackermixin): |
|
398 | 395 | """interpret a bundle2 stream |
|
399 | 396 | |
|
400 | 397 | This class is fed with a binary stream and yields parts through its |
|
401 | 398 | `iterparts` methods.""" |
|
402 | 399 | |
|
403 | 400 | def __init__(self, ui, fp, header=None): |
|
404 | 401 | """If header is specified, we do not read it out of the stream.""" |
|
405 | 402 | self.ui = ui |
|
406 | 403 | super(unbundle20, self).__init__(fp) |
|
407 | 404 | if header is None: |
|
408 | 405 | header = self._readexact(4) |
|
409 | 406 | magic, version = header[0:2], header[2:4] |
|
410 | 407 | if magic != 'HG': |
|
411 | 408 | raise util.Abort(_('not a Mercurial bundle')) |
|
412 | 409 | if version != '20': |
|
413 | 410 | raise util.Abort(_('unknown bundle version %s') % version) |
|
414 | 411 | self.ui.debug('start processing of %s stream\n' % header) |
|
415 | 412 | |
|
416 | 413 | @util.propertycache |
|
417 | 414 | def params(self): |
|
418 | 415 | """dictionary of stream level parameters""" |
|
419 | 416 | self.ui.debug('reading bundle2 stream parameters\n') |
|
420 | 417 | params = {} |
|
421 | 418 | paramssize = self._unpack(_fstreamparamsize)[0] |
|
422 | 419 | if paramssize: |
|
423 | 420 | for p in self._readexact(paramssize).split(' '): |
|
424 | 421 | p = p.split('=', 1) |
|
425 | 422 | p = [urllib.unquote(i) for i in p] |
|
426 | 423 | if len(p) < 2: |
|
427 | 424 | p.append(None) |
|
428 | 425 | self._processparam(*p) |
|
429 | 426 | params[p[0]] = p[1] |
|
430 | 427 | return params |
|
431 | 428 | |
|
432 | 429 | def _processparam(self, name, value): |
|
433 | 430 | """process a parameter, applying its effect if needed |
|
434 | 431 | |
|
435 | 432 | Parameters starting with a lower case letter are advisory and will be |

436 | 433 | ignored when unknown. Those starting with an upper case letter are |

437 | 434 | mandatory, and this function will raise a KeyError when they are unknown. |
|
438 | 435 | |
|
439 | 436 | Note: no options are currently supported. Any input will either be |

440 | 437 | ignored or will fail. |
|
441 | 438 | """ |
|
442 | 439 | if not name: |
|
443 | 440 | raise ValueError('empty parameter name') |
|
444 | 441 | if name[0] not in string.letters: |
|
445 | 442 | raise ValueError('non letter first character: %r' % name) |
|
446 | 443 | # Some logic will later be added here to try to process the option |

447 | 444 | # against a dict of known parameters. |
|
448 | 445 | if name[0].islower(): |
|
449 | 446 | self.ui.debug("ignoring unknown parameter %r\n" % name) |
|
450 | 447 | else: |
|
451 | 448 | raise KeyError(name) |
|
452 | 449 | |
|
453 | 450 | |
|
454 | 451 | def iterparts(self): |
|
455 | 452 | """yield all parts contained in the stream""" |
|
456 | 453 | # make sure params have been loaded |
|
457 | 454 | self.params |
|
458 | 455 | self.ui.debug('start extraction of bundle2 parts\n') |
|
459 | 456 | headerblock = self._readpartheader() |
|
460 | 457 | while headerblock is not None: |
|
461 | 458 | part = unbundlepart(self.ui, headerblock, self._fp) |
|
462 | 459 | yield part |
|
463 | 460 | headerblock = self._readpartheader() |
|
464 | 461 | self.ui.debug('end of bundle2 stream\n') |
|
465 | 462 | |
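On the consuming side, the expected pattern looks roughly like this (a sketch; `ui` and `fp` are assumed to be a Mercurial ui object and a file-like object positioned at the magic string):

    unbundler = unbundle20(ui, fp)
    for part in unbundler.iterparts():
        ui.debug('got part %r\n' % part.type)
        part.read()   # consume the payload so the stream stays aligned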
|
466 | 463 | def _readpartheader(self): |
|
467 | 464 | """read a part header size and return the bytes blob |
|
468 | 465 | |
|
469 | 466 | returns None if empty""" |
|
470 | 467 | headersize = self._unpack(_fpartheadersize)[0] |
|
471 | 468 | self.ui.debug('part header size: %i\n' % headersize) |
|
472 | 469 | if headersize: |
|
473 | 470 | return self._readexact(headersize) |
|
474 | 471 | return None |
|
475 | 472 | |
|
476 | 473 | |
|
477 | 474 | class bundlepart(object): |
|
478 | 475 | """A bundle2 part contains application level payload |
|
479 | 476 | |
|
480 | 477 | The part `type` is used to route the part to the application level |
|
481 | 478 | handler. |
|
482 | 479 | """ |
|
483 | 480 | |
|
484 | 481 | def __init__(self, parttype, mandatoryparams=(), advisoryparams=(), |
|
485 | 482 | data=''): |
|
486 | 483 | self.id = None |
|
487 | 484 | self.type = parttype |
|
488 | 485 | self.data = data |
|
489 | 486 | self.mandatoryparams = mandatoryparams |
|
490 | 487 | self.advisoryparams = advisoryparams |
|
491 | 488 | |
|
492 | 489 | def getchunks(self): |
|
493 | 490 | #### header |
|
494 | 491 | ## parttype |
|
495 | 492 | header = [_pack(_fparttypesize, len(self.type)), |
|
496 | 493 | self.type, _pack(_fpartid, self.id), |
|
497 | 494 | ] |
|
498 | 495 | ## parameters |
|
499 | 496 | # count |
|
500 | 497 | manpar = self.mandatoryparams |
|
501 | 498 | advpar = self.advisoryparams |
|
502 | 499 | header.append(_pack(_fpartparamcount, len(manpar), len(advpar))) |
|
503 | 500 | # size |
|
504 | 501 | parsizes = [] |
|
505 | 502 | for key, value in manpar: |
|
506 | 503 | parsizes.append(len(key)) |
|
507 | 504 | parsizes.append(len(value)) |
|
508 | 505 | for key, value in advpar: |
|
509 | 506 | parsizes.append(len(key)) |
|
510 | 507 | parsizes.append(len(value)) |
|
511 | 508 | paramsizes = _pack(_makefpartparamsizes(len(parsizes) / 2), *parsizes) |
|
512 | 509 | header.append(paramsizes) |
|
513 | 510 | # key, value |
|
514 | 511 | for key, value in manpar: |
|
515 | 512 | header.append(key) |
|
516 | 513 | header.append(value) |
|
517 | 514 | for key, value in advpar: |
|
518 | 515 | header.append(key) |
|
519 | 516 | header.append(value) |
|
520 | 517 | ## finalize header |
|
521 | 518 | headerchunk = ''.join(header) |
|
522 | 519 | yield _pack(_fpartheadersize, len(headerchunk)) |
|
523 | 520 | yield headerchunk |
|
524 | 521 | ## payload |
|
525 | 522 | for chunk in self._payloadchunks(): |
|
526 | 523 | yield _pack(_fpayloadsize, len(chunk)) |
|
527 | 524 | yield chunk |
|
528 | 525 | # end of payload |
|
529 | 526 | yield _pack(_fpayloadsize, 0) |
|
530 | 527 | |
|
531 | 528 | def _payloadchunks(self): |
|
532 | 529 | """yield chunks of the part payload |
|
533 | 530 | |
|
534 | 531 | Exists to handle the different methods to provide data to a part.""" |
|
535 | 532 | # we only support fixed size data now. |
|
536 | 533 | # This will be improved in the future. |
|
537 | 534 | if util.safehasattr(self.data, 'next'): |
|
538 | 535 | buff = util.chunkbuffer(self.data) |
|
539 | 536 | chunk = buff.read(preferedchunksize) |
|
540 | 537 | while chunk: |
|
541 | 538 | yield chunk |
|
542 | 539 | chunk = buff.read(preferedchunksize) |
|
543 | 540 | elif len(self.data): |
|
544 | 541 | yield self.data |
|
545 | 542 | |
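Framed by hand, a single-chunk payload therefore looks like this on the wire (a sketch of the format, not the module's code):

    import struct
    payload = 'some bytes'
    framed = (struct.pack('>I', len(payload)) + payload
              + struct.pack('>I', 0))   # zero-size chunk ends the payload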
|
546 | 543 | class unbundlepart(unpackermixin): |
|
547 | 544 | """a bundle part read from a bundle""" |
|
548 | 545 | |
|
549 | 546 | def __init__(self, ui, header, fp): |
|
550 | 547 | super(unbundlepart, self).__init__(fp) |
|
551 | 548 | self.ui = ui |
|
552 | 549 | # unbundle state attr |
|
553 | 550 | self._headerdata = header |
|
554 | 551 | self._headeroffset = 0 |
|
555 | 552 | self._initialized = False |
|
556 | 553 | self.consumed = False |
|
557 | 554 | # part data |
|
558 | 555 | self.id = None |
|
559 | 556 | self.type = None |
|
560 | 557 | self.mandatoryparams = None |
|
561 | 558 | self.advisoryparams = None |
|
562 | 559 | self._payloadstream = None |
|
563 | 560 | self._readheader() |
|
564 | 561 | |
|
565 | 562 | def _fromheader(self, size): |
|
566 | 563 | """return the next <size> bytes from the header""" |
|
567 | 564 | offset = self._headeroffset |
|
568 | 565 | data = self._headerdata[offset:(offset + size)] |
|
569 | 566 | self._headeroffset = offset + size |
|
570 | 567 | return data |
|
571 | 568 | |
|
572 | 569 | def _unpackheader(self, format): |
|
573 | 570 | """read given format from header |
|
574 | 571 | |
|
575 | 572 | This automatically computes the size of the format to read.""" |
|
576 | 573 | data = self._fromheader(struct.calcsize(format)) |
|
577 | 574 | return _unpack(format, data) |
|
578 | 575 | |
|
579 | 576 | def _readheader(self): |
|
580 | 577 | """read the header and setup the object""" |
|
581 | 578 | typesize = self._unpackheader(_fparttypesize)[0] |
|
582 | 579 | self.type = self._fromheader(typesize) |
|
583 | 580 | self.ui.debug('part type: "%s"\n' % self.type) |
|
584 | 581 | self.id = self._unpackheader(_fpartid)[0] |
|
585 | 582 | self.ui.debug('part id: "%s"\n' % self.id) |
|
586 | 583 | ## reading parameters |
|
587 | 584 | # param count |
|
588 | 585 | mancount, advcount = self._unpackheader(_fpartparamcount) |
|
589 | 586 | self.ui.debug('part parameters: %i\n' % (mancount + advcount)) |
|
590 | 587 | # param size |
|
591 | 588 | fparamsizes = _makefpartparamsizes(mancount + advcount) |
|
592 | 589 | paramsizes = self._unpackheader(fparamsizes) |
|
593 | 590 | # make it a list of couples again |
|
594 | 591 | paramsizes = zip(paramsizes[::2], paramsizes[1::2]) |
|
595 | 592 | # split mandatory from advisory |
|
596 | 593 | mansizes = paramsizes[:mancount] |
|
597 | 594 | advsizes = paramsizes[mancount:] |
|
598 | 595 | # retrieve param values |
|
599 | 596 | manparams = [] |
|
600 | 597 | for key, value in mansizes: |
|
601 | 598 | manparams.append((self._fromheader(key), self._fromheader(value))) |
|
602 | 599 | advparams = [] |
|
603 | 600 | for key, value in advsizes: |
|
604 | 601 | advparams.append((self._fromheader(key), self._fromheader(value))) |
|
605 | 602 | self.mandatoryparams = manparams |
|
606 | 603 | self.advisoryparams = advparams |
|
607 | 604 | ## part payload |
|
608 | 605 | def payloadchunks(): |
|
609 | 606 | payloadsize = self._unpack(_fpayloadsize)[0] |
|
610 | 607 | self.ui.debug('payload chunk size: %i\n' % payloadsize) |
|
611 | 608 | while payloadsize: |
|
612 | 609 | yield self._readexact(payloadsize) |
|
613 | 610 | payloadsize = self._unpack(_fpayloadsize)[0] |
|
614 | 611 | self.ui.debug('payload chunk size: %i\n' % payloadsize) |
|
615 | 612 | self._payloadstream = util.chunkbuffer(payloadchunks()) |
|
616 | 613 | # we have read the header data, record it |
|
617 | 614 | self._initialized = True |
|
618 | 615 | |
|
619 | 616 | def read(self, size=None): |
|
620 | 617 | """read payload data""" |
|
621 | 618 | if not self._initialized: |
|
622 | 619 | self._readheader() |
|
623 | 620 | if size is None: |
|
624 | 621 | data = self._payloadstream.read() |
|
625 | 622 | else: |
|
626 | 623 | data = self._payloadstream.read(size) |
|
627 | 624 | if size is None or len(data) < size: |
|
628 | 625 | self.consumed = True |
|
629 | 626 | return data |
|
630 | 627 | |
|
631 | 628 | |
|
632 | 629 | @parthandler('changegroup') |
|
633 | 630 | def handlechangegroup(op, inpart): |
|
634 | 631 | """apply a changegroup part on the repo |
|
635 | 632 | |
|
636 | 633 | This is a very early implementation that will be massively reworked before |

637 | 634 | being inflicted on any end-user. |
|
638 | 635 | """ |
|
639 | 636 | # Make sure we trigger a transaction creation |
|
640 | 637 | # |
|
641 | 638 | # The addchangegroup function will get a transaction object by itself, but |
|
642 | 639 | # we need to make sure we trigger the creation of a transaction object used |
|
643 | 640 | # for the whole processing scope. |
|
644 | 641 | op.gettransaction() |
|
645 | 642 | cg = changegroup.unbundle10(inpart, 'UN') |
|
646 | 643 | ret = changegroup.addchangegroup(op.repo, cg, 'bundle2', 'bundle2') |
|
647 | 644 | op.records.add('changegroup', {'return': ret}) |
|
648 | 645 | if op.reply is not None: |
|
649 | 646 | # This is definitely not the final form of this |

650 | 647 | # return. But one needs to start somewhere. |
|
651 | 648 | part = bundlepart('reply:changegroup', (), |
|
652 | 649 | [('in-reply-to', str(inpart.id)), |
|
653 | 650 | ('return', '%i' % ret)]) |
|
654 | 651 | op.reply.addpart(part) |
|
655 | 652 | assert not inpart.read() |
|
656 | 653 | |
|
657 | 654 | @parthandler('reply:changegroup') |
|
658 | 655 | def handlechangegroup(op, inpart): |
|
659 | 656 | p = dict(inpart.advisoryparams) |
|
660 | 657 | ret = int(p['return']) |
|
661 | 658 | op.records.add('changegroup', {'return': ret}, int(p['in-reply-to'])) |
|
662 | 659 | |
|
663 | 660 | @parthandler('check:heads') |
|
664 | 661 | def handlechangegroup(op, inpart): |
|
665 | 662 | """check that the heads of the repo did not change |
|
666 | 663 | |
|
667 | 664 | This is used to detect a push race when using unbundle. |
|
668 | 665 | This replaces the "heads" argument of unbundle.""" |
|
669 | 666 | h = inpart.read(20) |
|
670 | 667 | heads = [] |
|
671 | 668 | while len(h) == 20: |
|
672 | 669 | heads.append(h) |
|
673 | 670 | h = inpart.read(20) |
|
674 | 671 | assert not h |
|
675 | 672 | if heads != op.repo.heads(): |
|
676 | 673 | raise exchange.PushRaced() |
|
674 | ||
|
675 | @parthandler('replycaps') | |
|
676 | def handlereplycaps(op, inpart): | |
|
677 | """Notify that a reply bundle should be created | |
|
678 | ||
|
679 | Will convey bundle capabilities at some point too.""" | |
|
680 | if op.reply is None: | |
|
681 | op.reply = bundle20(op.ui) |
@@ -1,705 +1,706 b'' | |||
|
1 | 1 | # exchange.py - utility to exchange data between repos. |
|
2 | 2 | # |
|
3 | 3 | # Copyright 2005-2007 Matt Mackall <mpm@selenic.com> |
|
4 | 4 | # |
|
5 | 5 | # This software may be used and distributed according to the terms of the |
|
6 | 6 | # GNU General Public License version 2 or any later version. |
|
7 | 7 | |
|
8 | 8 | from i18n import _ |
|
9 | 9 | from node import hex, nullid |
|
10 | 10 | import errno |
|
11 | 11 | import util, scmutil, changegroup, base85 |
|
12 | 12 | import discovery, phases, obsolete, bookmarks, bundle2 |
|
13 | 13 | |
|
14 | 14 | def readbundle(ui, fh, fname, vfs=None): |
|
15 | 15 | header = changegroup.readexactly(fh, 4) |
|
16 | 16 | |
|
17 | 17 | alg = None |
|
18 | 18 | if not fname: |
|
19 | 19 | fname = "stream" |
|
20 | 20 | if not header.startswith('HG') and header.startswith('\0'): |
|
21 | 21 | fh = changegroup.headerlessfixup(fh, header) |
|
22 | 22 | header = "HG10" |
|
23 | 23 | alg = 'UN' |
|
24 | 24 | elif vfs: |
|
25 | 25 | fname = vfs.join(fname) |
|
26 | 26 | |
|
27 | 27 | magic, version = header[0:2], header[2:4] |
|
28 | 28 | |
|
29 | 29 | if magic != 'HG': |
|
30 | 30 | raise util.Abort(_('%s: not a Mercurial bundle') % fname) |
|
31 | 31 | if version == '10': |
|
32 | 32 | if alg is None: |
|
33 | 33 | alg = changegroup.readexactly(fh, 2) |
|
34 | 34 | return changegroup.unbundle10(fh, alg) |
|
35 | 35 | elif version == '20': |
|
36 | 36 | return bundle2.unbundle20(ui, fh, header=magic + version) |
|
37 | 37 | else: |
|
38 | 38 | raise util.Abort(_('%s: unknown bundle version %s') % (fname, version)) |
|
39 | 39 | |
|
40 | 40 | |
|
41 | 41 | class pushoperation(object): |
|
42 | 42 | """An object that represents a single push operation |

43 | 43 | |

44 | 44 | Its purpose is to carry push related state and very common operations. |

45 | 45 | |

46 | 46 | A new one should be created at the beginning of each push and discarded |
|
47 | 47 | afterward. |
|
48 | 48 | """ |
|
49 | 49 | |
|
50 | 50 | def __init__(self, repo, remote, force=False, revs=None, newbranch=False): |
|
51 | 51 | # repo we push from |
|
52 | 52 | self.repo = repo |
|
53 | 53 | self.ui = repo.ui |
|
54 | 54 | # repo we push to |
|
55 | 55 | self.remote = remote |
|
56 | 56 | # force option provided |
|
57 | 57 | self.force = force |
|
58 | 58 | # revs to be pushed (None is "all") |
|
59 | 59 | self.revs = revs |
|
60 | 60 | # allow push of new branch |
|
61 | 61 | self.newbranch = newbranch |
|
62 | 62 | # did a local lock get acquired? |
|
63 | 63 | self.locallocked = None |
|
64 | 64 | # Integer version of the push result |
|
65 | 65 | # - None means nothing to push |
|
66 | 66 | # - 0 means HTTP error |
|
67 | 67 | # - 1 means we pushed and remote head count is unchanged *or* |
|
68 | 68 | # we have outgoing changesets but refused to push |
|
69 | 69 | # - other values as described by addchangegroup() |
|
70 | 70 | self.ret = None |
|
71 | 71 | # discover.outgoing object (contains common and outgoing data) |
|
72 | 72 | self.outgoing = None |
|
73 | 73 | # all remote heads before the push |
|
74 | 74 | self.remoteheads = None |
|
75 | 75 | # testable as a boolean indicating if any nodes are missing locally. |
|
76 | 76 | self.incoming = None |
|
77 | 77 | # set of all heads common after changeset bundle push |
|
78 | 78 | self.commonheads = None |
|
79 | 79 | |
|
80 | 80 | def push(repo, remote, force=False, revs=None, newbranch=False): |
|
81 | 81 | '''Push outgoing changesets (limited by revs) from a local |
|
82 | 82 | repository to remote. Return an integer: |
|
83 | 83 | - None means nothing to push |
|
84 | 84 | - 0 means HTTP error |
|
85 | 85 | - 1 means we pushed and remote head count is unchanged *or* |
|
86 | 86 | we have outgoing changesets but refused to push |
|
87 | 87 | - other values as described by addchangegroup() |
|
88 | 88 | ''' |
|
89 | 89 | pushop = pushoperation(repo, remote, force, revs, newbranch) |
|
90 | 90 | if pushop.remote.local(): |
|
91 | 91 | missing = (set(pushop.repo.requirements) |
|
92 | 92 | - pushop.remote.local().supported) |
|
93 | 93 | if missing: |
|
94 | 94 | msg = _("required features are not" |
|
95 | 95 | " supported in the destination:" |
|
96 | 96 | " %s") % (', '.join(sorted(missing))) |
|
97 | 97 | raise util.Abort(msg) |
|
98 | 98 | |
|
99 | 99 | # there are two ways to push to remote repo: |
|
100 | 100 | # |
|
101 | 101 | # addchangegroup assumes local user can lock remote |
|
102 | 102 | # repo (local filesystem, old ssh servers). |
|
103 | 103 | # |
|
104 | 104 | # unbundle assumes local user cannot lock remote repo (new ssh |
|
105 | 105 | # servers, http servers). |
|
106 | 106 | |
|
107 | 107 | if not pushop.remote.canpush(): |
|
108 | 108 | raise util.Abort(_("destination does not support push")) |
|
109 | 109 | # get local lock as we might write phase data |
|
110 | 110 | locallock = None |
|
111 | 111 | try: |
|
112 | 112 | locallock = pushop.repo.lock() |
|
113 | 113 | pushop.locallocked = True |
|
114 | 114 | except IOError, err: |
|
115 | 115 | pushop.locallocked = False |
|
116 | 116 | if err.errno != errno.EACCES: |
|
117 | 117 | raise |
|
118 | 118 | # source repo cannot be locked. |
|
119 | 119 | # We do not abort the push, but just disable the local phase |
|
120 | 120 | # synchronisation. |
|
121 | 121 | msg = 'cannot lock source repository: %s\n' % err |
|
122 | 122 | pushop.ui.debug(msg) |
|
123 | 123 | try: |
|
124 | 124 | pushop.repo.checkpush(pushop) |
|
125 | 125 | lock = None |
|
126 | 126 | unbundle = pushop.remote.capable('unbundle') |
|
127 | 127 | if not unbundle: |
|
128 | 128 | lock = pushop.remote.lock() |
|
129 | 129 | try: |
|
130 | 130 | _pushdiscovery(pushop) |
|
131 | 131 | if _pushcheckoutgoing(pushop): |
|
132 | 132 | pushop.repo.prepushoutgoinghooks(pushop.repo, |
|
133 | 133 | pushop.remote, |
|
134 | 134 | pushop.outgoing) |
|
135 | 135 | if pushop.remote.capable('bundle2'): |
|
136 | 136 | _pushbundle2(pushop) |
|
137 | 137 | else: |
|
138 | 138 | _pushchangeset(pushop) |
|
139 | 139 | _pushcomputecommonheads(pushop) |
|
140 | 140 | _pushsyncphase(pushop) |
|
141 | 141 | _pushobsolete(pushop) |
|
142 | 142 | finally: |
|
143 | 143 | if lock is not None: |
|
144 | 144 | lock.release() |
|
145 | 145 | finally: |
|
146 | 146 | if locallock is not None: |
|
147 | 147 | locallock.release() |
|
148 | 148 | |
|
149 | 149 | _pushbookmark(pushop) |
|
150 | 150 | return pushop.ret |
|
151 | 151 | |
|
152 | 152 | def _pushdiscovery(pushop): |
|
153 | 153 | # discovery |
|
154 | 154 | unfi = pushop.repo.unfiltered() |
|
155 | 155 | fci = discovery.findcommonincoming |
|
156 | 156 | commoninc = fci(unfi, pushop.remote, force=pushop.force) |
|
157 | 157 | common, inc, remoteheads = commoninc |
|
158 | 158 | fco = discovery.findcommonoutgoing |
|
159 | 159 | outgoing = fco(unfi, pushop.remote, onlyheads=pushop.revs, |
|
160 | 160 | commoninc=commoninc, force=pushop.force) |
|
161 | 161 | pushop.outgoing = outgoing |
|
162 | 162 | pushop.remoteheads = remoteheads |
|
163 | 163 | pushop.incoming = inc |
|
164 | 164 | |
|
165 | 165 | def _pushcheckoutgoing(pushop): |
|
166 | 166 | outgoing = pushop.outgoing |
|
167 | 167 | unfi = pushop.repo.unfiltered() |
|
168 | 168 | if not outgoing.missing: |
|
169 | 169 | # nothing to push |
|
170 | 170 | scmutil.nochangesfound(unfi.ui, unfi, outgoing.excluded) |
|
171 | 171 | return False |
|
172 | 172 | # something to push |
|
173 | 173 | if not pushop.force: |
|
174 | 174 | # if repo.obsstore == False --> no obsolete |
|
175 | 175 | # then, save the iteration |
|
176 | 176 | if unfi.obsstore: |
|
177 | 177 | # these messages are here for 80 char limit reasons |
|
178 | 178 | mso = _("push includes obsolete changeset: %s!") |
|
179 | 179 | mst = "push includes %s changeset: %s!" |
|
180 | 180 | # plain versions for i18n tool to detect them |
|
181 | 181 | _("push includes unstable changeset: %s!") |
|
182 | 182 | _("push includes bumped changeset: %s!") |
|
183 | 183 | _("push includes divergent changeset: %s!") |
|
184 | 184 | # If we are about to push, and there is at least one |

185 | 185 | # obsolete or unstable changeset in missing, then at |

186 | 186 | # least one of the missing heads will be obsolete or |

187 | 187 | # unstable. So checking heads only is ok |
|
188 | 188 | for node in outgoing.missingheads: |
|
189 | 189 | ctx = unfi[node] |
|
190 | 190 | if ctx.obsolete(): |
|
191 | 191 | raise util.Abort(mso % ctx) |
|
192 | 192 | elif ctx.troubled(): |
|
193 | 193 | raise util.Abort(_(mst) |
|
194 | 194 | % (ctx.troubles()[0], |
|
195 | 195 | ctx)) |
|
196 | 196 | newbm = pushop.ui.configlist('bookmarks', 'pushing') |
|
197 | 197 | discovery.checkheads(unfi, pushop.remote, outgoing, |
|
198 | 198 | pushop.remoteheads, |
|
199 | 199 | pushop.newbranch, |
|
200 | 200 | bool(pushop.incoming), |
|
201 | 201 | newbm) |
|
202 | 202 | return True |
|
203 | 203 | |
|
204 | 204 | def _pushbundle2(pushop): |
|
205 | 205 | """push data to the remote using bundle2 |
|
206 | 206 | |
|
207 | 207 | The only currently supported type of data is changegroup but this will |
|
208 | 208 | evolve in the future.""" |
|
209 | 209 | # Send known heads to the server for race detection. |
|
210 | 210 | bundler = bundle2.bundle20(pushop.ui) |
|
211 | bundler.addpart(bundle2.bundlepart('replycaps')) | |
|
211 | 212 | if not pushop.force: |
|
212 | 213 | part = bundle2.bundlepart('CHECK:HEADS', data=iter(pushop.remoteheads)) |
|
213 | 214 | bundler.addpart(part) |
|
214 | 215 | # add the changegroup bundle |
|
215 | 216 | cg = changegroup.getlocalbundle(pushop.repo, 'push', pushop.outgoing) |
|
216 | 217 | cgpart = bundle2.bundlepart('CHANGEGROUP', data=cg.getchunks()) |
|
217 | 218 | bundler.addpart(cgpart) |
|
218 | 219 | stream = util.chunkbuffer(bundler.getchunks()) |
|
219 | 220 | reply = pushop.remote.unbundle(stream, ['force'], 'push') |
|
220 | 221 | try: |
|
221 | 222 | op = bundle2.processbundle(pushop.repo, reply) |
|
222 | 223 | except KeyError, exc: |
|
223 | 224 | raise util.Abort('missing support for %s' % exc) |
|
224 | 225 | cgreplies = op.records.getreplies(cgpart.id) |
|
225 | 226 | assert len(cgreplies['changegroup']) == 1 |
|
226 | 227 | pushop.ret = cgreplies['changegroup'][0]['return'] |
|
227 | 228 | |
|
228 | 229 | def _pushchangeset(pushop): |
|
229 | 230 | """Make the actual push of changeset bundle to remote repo""" |
|
230 | 231 | outgoing = pushop.outgoing |
|
231 | 232 | unbundle = pushop.remote.capable('unbundle') |
|
232 | 233 | # TODO: get bundlecaps from remote |
|
233 | 234 | bundlecaps = None |
|
234 | 235 | # create a changegroup from local |
|
235 | 236 | if pushop.revs is None and not (outgoing.excluded |
|
236 | 237 | or pushop.repo.changelog.filteredrevs): |
|
237 | 238 | # push everything, |
|
238 | 239 | # use the fast path, no race possible on push |
|
239 | 240 | bundler = changegroup.bundle10(pushop.repo, bundlecaps) |
|
240 | 241 | cg = changegroup.getsubset(pushop.repo, |
|
241 | 242 | outgoing, |
|
242 | 243 | bundler, |
|
243 | 244 | 'push', |
|
244 | 245 | fastpath=True) |
|
245 | 246 | else: |
|
246 | 247 | cg = changegroup.getlocalbundle(pushop.repo, 'push', outgoing, |
|
247 | 248 | bundlecaps) |
|
248 | 249 | |
|
249 | 250 | # apply changegroup to remote |
|
250 | 251 | if unbundle: |
|
251 | 252 | # local repo finds heads on server, finds out what |
|
252 | 253 | # revs it must push. once revs transferred, if server |
|
253 | 254 | # finds it has different heads (someone else won |
|
254 | 255 | # commit/push race), server aborts. |
|
255 | 256 | if pushop.force: |
|
256 | 257 | remoteheads = ['force'] |
|
257 | 258 | else: |
|
258 | 259 | remoteheads = pushop.remoteheads |
|
259 | 260 | # ssh: return remote's addchangegroup() |
|
260 | 261 | # http: return remote's addchangegroup() or 0 for error |
|
261 | 262 | pushop.ret = pushop.remote.unbundle(cg, remoteheads, |
|
262 | 263 | 'push') |
|
263 | 264 | else: |
|
264 | 265 | # we return an integer indicating remote head count |
|
265 | 266 | # change |
|
266 | 267 | pushop.ret = pushop.remote.addchangegroup(cg, 'push', pushop.repo.url()) |
|
267 | 268 | |
|
268 | 269 | def _pushcomputecommonheads(pushop): |
|
269 | 270 | unfi = pushop.repo.unfiltered() |
|
270 | 271 | if pushop.ret: |
|
271 | 272 | # push succeeded, synchronize the target of the push |
|
272 | 273 | cheads = pushop.outgoing.missingheads |
|
273 | 274 | elif pushop.revs is None: |
|
274 | 275 | # All-out push failed; synchronize all common |
|
275 | 276 | cheads = pushop.outgoing.commonheads |
|
276 | 277 | else: |
|
277 | 278 | # I want cheads = heads(::missingheads and ::commonheads) |
|
278 | 279 | # (missingheads is revs with secret changeset filtered out) |
|
279 | 280 | # |
|
280 | 281 | # This can be expressed as: |
|
281 | 282 | # cheads = ( (missingheads and ::commonheads) |
|
282 | 283 | # + (commonheads and ::missingheads))" |
|
283 | 284 | # ) |
|
284 | 285 | # |
|
285 | 286 | # while trying to push we already computed the following: |
|
286 | 287 | # common = (::commonheads) |
|
287 | 288 | # missing = ((commonheads::missingheads) - commonheads) |
|
288 | 289 | # |
|
289 | 290 | # We can pick: |
|
290 | 291 | # * missingheads part of common (::commonheads) |
|
291 | 292 | common = set(pushop.outgoing.common) |
|
292 | 293 | nm = pushop.repo.changelog.nodemap |
|
293 | 294 | cheads = [node for node in pushop.revs if nm[node] in common] |
|
294 | 295 | # and |
|
295 | 296 | # * commonheads parents on missing |
|
296 | 297 | revset = unfi.set('%ln and parents(roots(%ln))', |
|
297 | 298 | pushop.outgoing.commonheads, |
|
298 | 299 | pushop.outgoing.missing) |
|
299 | 300 | cheads.extend(c.node() for c in revset) |
|
300 | 301 | pushop.commonheads = cheads |
|
301 | 302 | |
|
302 | 303 | def _pushsyncphase(pushop): |
|
303 | 304 | """synchronise phase information locally and remotely""" |
|
304 | 305 | unfi = pushop.repo.unfiltered() |
|
305 | 306 | cheads = pushop.commonheads |
|
306 | 307 | if pushop.ret: |
|
307 | 308 | # push succeeded, synchronize the target of the push |
|
308 | 309 | cheads = pushop.outgoing.missingheads |
|
309 | 310 | elif pushop.revs is None: |
|
310 | 311 | # All-out push failed; synchronize all common |
|
311 | 312 | cheads = pushop.outgoing.commonheads |
|
312 | 313 | else: |
|
313 | 314 | # I want cheads = heads(::missingheads and ::commonheads) |
|
314 | 315 | # (missingheads is revs with secret changeset filtered out) |
|
315 | 316 | # |
|
316 | 317 | # This can be expressed as: |
|
317 | 318 | # cheads = ( (missingheads and ::commonheads) |
|
318 | 319 | # + (commonheads and ::missingheads))" |
|
319 | 320 | # ) |
|
320 | 321 | # |
|
321 | 322 | # while trying to push we already computed the following: |
|
322 | 323 | # common = (::commonheads) |
|
323 | 324 | # missing = ((commonheads::missingheads) - commonheads) |
|
324 | 325 | # |
|
325 | 326 | # We can pick: |
|
326 | 327 | # * missingheads part of common (::commonheads) |
|
327 | 328 | common = set(pushop.outgoing.common) |
|
328 | 329 | nm = pushop.repo.changelog.nodemap |
|
329 | 330 | cheads = [node for node in pushop.revs if nm[node] in common] |
|
330 | 331 | # and |
|
331 | 332 | # * commonheads parents on missing |
|
332 | 333 | revset = unfi.set('%ln and parents(roots(%ln))', |
|
333 | 334 | pushop.outgoing.commonheads, |
|
334 | 335 | pushop.outgoing.missing) |
|
335 | 336 | cheads.extend(c.node() for c in revset) |
|
336 | 337 | pushop.commonheads = cheads |
|
337 | 338 | # even when we don't push, exchanging phase data is useful |
|
338 | 339 | remotephases = pushop.remote.listkeys('phases') |
|
339 | 340 | if (pushop.ui.configbool('ui', '_usedassubrepo', False) |
|
340 | 341 | and remotephases # server supports phases |
|
341 | 342 | and pushop.ret is None # nothing was pushed |
|
342 | 343 | and remotephases.get('publishing', False)): |
|
343 | 344 | # When: |
|
344 | 345 | # - this is a subrepo push |
|
345 | 346 | # - and remote support phase |
|
346 | 347 | # - and no changeset was pushed |
|
347 | 348 | # - and remote is publishing |
|
348 | 349 | # We may be in issue 3871 case! |
|
349 | 350 | # We drop the possible phase synchronisation done by |
|
350 | 351 | # courtesy to publish changesets possibly locally draft |
|
351 | 352 | # on the remote. |
|
352 | 353 | remotephases = {'publishing': 'True'} |
|
353 | 354 | if not remotephases: # old server or public only reply from non-publishing |
|
354 | 355 | _localphasemove(pushop, cheads) |
|
355 | 356 | # don't push any phase data as there is nothing to push |
|
356 | 357 | else: |
|
357 | 358 | ana = phases.analyzeremotephases(pushop.repo, cheads, |
|
358 | 359 | remotephases) |
|
359 | 360 | pheads, droots = ana |
|
360 | 361 | ### Apply remote phase on local |
|
361 | 362 | if remotephases.get('publishing', False): |
|
362 | 363 | _localphasemove(pushop, cheads) |
|
363 | 364 | else: # publish = False |
|
364 | 365 | _localphasemove(pushop, pheads) |
|
365 | 366 | _localphasemove(pushop, cheads, phases.draft) |
|
366 | 367 | ### Apply local phase on remote |
|
367 | 368 | |
|
368 | 369 | # Get the list of all revs draft on remote by public here. |
|
369 | 370 | # XXX Beware that revset break if droots is not strictly |
|
370 | 371 | # XXX root we may want to ensure it is but it is costly |
|
371 | 372 | outdated = unfi.set('heads((%ln::%ln) and public())', |
|
372 | 373 | droots, cheads) |
|
373 | 374 | for newremotehead in outdated: |
|
374 | 375 | r = pushop.remote.pushkey('phases', |
|
375 | 376 | newremotehead.hex(), |
|
376 | 377 | str(phases.draft), |
|
377 | 378 | str(phases.public)) |
|
378 | 379 | if not r: |
|
379 | 380 | pushop.ui.warn(_('updating %s to public failed!\n') |
|
380 | 381 | % newremotehead) |
|
381 | 382 | |
|
382 | 383 | def _localphasemove(pushop, nodes, phase=phases.public): |
|
383 | 384 | """move <nodes> to <phase> in the local source repo""" |
|
384 | 385 | if pushop.locallocked: |
|
385 | 386 | phases.advanceboundary(pushop.repo, phase, nodes) |
|
386 | 387 | else: |
|
387 | 388 | # repo is not locked, do not change any phases! |
|
388 | 389 | # Informs the user that phases should have been moved when |
|
389 | 390 | # applicable. |
|
390 | 391 | actualmoves = [n for n in nodes if phase < pushop.repo[n].phase()] |
|
391 | 392 | phasestr = phases.phasenames[phase] |
|
392 | 393 | if actualmoves: |
|
393 | 394 | pushop.ui.status(_('cannot lock source repo, skipping ' |
|
394 | 395 | 'local %s phase update\n') % phasestr) |
|
395 | 396 | |
|
396 | 397 | def _pushobsolete(pushop): |
|
397 | 398 | """utility function to push obsolete markers to a remote""" |
|
398 | 399 | pushop.ui.debug('try to push obsolete markers to remote\n') |
|
399 | 400 | repo = pushop.repo |
|
400 | 401 | remote = pushop.remote |
|
401 | 402 | if (obsolete._enabled and repo.obsstore and |
|
402 | 403 | 'obsolete' in remote.listkeys('namespaces')): |
|
403 | 404 | rslts = [] |
|
404 | 405 | remotedata = repo.listkeys('obsolete') |
|
405 | 406 | for key in sorted(remotedata, reverse=True): |
|
406 | 407 | # reverse sort to ensure we end with dump0 |
|
407 | 408 | data = remotedata[key] |
|
408 | 409 | rslts.append(remote.pushkey('obsolete', key, '', data)) |
|
409 | 410 | if [r for r in rslts if not r]: |
|
410 | 411 | msg = _('failed to push some obsolete markers!\n') |
|
411 | 412 | repo.ui.warn(msg) |
|
412 | 413 | |
|
413 | 414 | def _pushbookmark(pushop): |
|
414 | 415 | """Update bookmark position on remote""" |
|
415 | 416 | ui = pushop.ui |
|
416 | 417 | repo = pushop.repo.unfiltered() |
|
417 | 418 | remote = pushop.remote |
|
418 | 419 | ui.debug("checking for updated bookmarks\n") |
|
419 | 420 | revnums = map(repo.changelog.rev, pushop.revs or []) |
|
420 | 421 | ancestors = [a for a in repo.changelog.ancestors(revnums, inclusive=True)] |
|
421 | 422 | (addsrc, adddst, advsrc, advdst, diverge, differ, invalid |
|
422 | 423 | ) = bookmarks.compare(repo, repo._bookmarks, remote.listkeys('bookmarks'), |
|
423 | 424 | srchex=hex) |
|
424 | 425 | |
|
425 | 426 | for b, scid, dcid in advsrc: |
|
426 | 427 | if ancestors and repo[scid].rev() not in ancestors: |
|
427 | 428 | continue |
|
428 | 429 | if remote.pushkey('bookmarks', b, dcid, scid): |
|
429 | 430 | ui.status(_("updating bookmark %s\n") % b) |
|
430 | 431 | else: |
|
431 | 432 | ui.warn(_('updating bookmark %s failed!\n') % b) |
|
432 | 433 | |
|
433 | 434 | class pulloperation(object): |
|
434 | 435 | """An object that represents a single pull operation |

435 | 436 | |

436 | 437 | Its purpose is to carry pull related state and very common operations. |

437 | 438 | |

438 | 439 | A new one should be created at the beginning of each pull and discarded |
|
439 | 440 | afterward. |
|
440 | 441 | """ |
|
441 | 442 | |
|
442 | 443 | def __init__(self, repo, remote, heads=None, force=False): |
|
443 | 444 | # repo we pull into |
|
444 | 445 | self.repo = repo |
|
445 | 446 | # repo we pull from |
|
446 | 447 | self.remote = remote |
|
447 | 448 | # revision we try to pull (None is "all") |
|
448 | 449 | self.heads = heads |
|
449 | 450 | # do we force pull? |
|
450 | 451 | self.force = force |
|
451 | 452 | # the name of the pull transaction |
|
452 | 453 | self._trname = 'pull\n' + util.hidepassword(remote.url()) |
|
453 | 454 | # hold the transaction once created |
|
454 | 455 | self._tr = None |
|
455 | 456 | # set of common changesets between local and remote before pull |

456 | 457 | self.common = None |

457 | 458 | # set of pulled heads |

458 | 459 | self.rheads = None |

459 | 460 | # list of missing changesets to fetch remotely |
|
460 | 461 | self.fetch = None |
|
461 | 462 | # result of changegroup pulling (used as return code by pull) |
|
462 | 463 | self.cgresult = None |
|
463 | 464 | # list of step remaining todo (related to future bundle2 usage) |
|
464 | 465 | self.todosteps = set(['changegroup', 'phases', 'obsmarkers']) |
|
465 | 466 | |
|
466 | 467 | @util.propertycache |
|
467 | 468 | def pulledsubset(self): |
|
468 | 469 | """heads of the set of changesets targeted by the pull""" |
|
469 | 470 | # compute target subset |
|
470 | 471 | if self.heads is None: |
|
471 | 472 | # We pulled everything possible |
|
472 | 473 | # sync on everything common |
|
473 | 474 | c = set(self.common) |
|
474 | 475 | ret = list(self.common) |
|
475 | 476 | for n in self.rheads: |
|
476 | 477 | if n not in c: |
|
477 | 478 | ret.append(n) |
|
478 | 479 | return ret |
|
479 | 480 | else: |
|
480 | 481 | # We pulled a specific subset |
|
481 | 482 | # sync on this subset |
|
482 | 483 | return self.heads |
|
483 | 484 | |
|
484 | 485 | def gettransaction(self): |
|
485 | 486 | """get appropriate pull transaction, creating it if needed""" |
|
486 | 487 | if self._tr is None: |
|
487 | 488 | self._tr = self.repo.transaction(self._trname) |
|
488 | 489 | return self._tr |
|
489 | 490 | |
|
490 | 491 | def closetransaction(self): |
|
491 | 492 | """close transaction if created""" |
|
492 | 493 | if self._tr is not None: |
|
493 | 494 | self._tr.close() |
|
494 | 495 | |
|
495 | 496 | def releasetransaction(self): |
|
496 | 497 | """release transaction if created""" |
|
497 | 498 | if self._tr is not None: |
|
498 | 499 | self._tr.release() |
|
499 | 500 | |
|
500 | 501 | def pull(repo, remote, heads=None, force=False): |
|
501 | 502 | pullop = pulloperation(repo, remote, heads, force) |
|
502 | 503 | if pullop.remote.local(): |
|
503 | 504 | missing = set(pullop.remote.requirements) - pullop.repo.supported |
|
504 | 505 | if missing: |
|
505 | 506 | msg = _("required features are not" |
|
506 | 507 | " supported in the destination:" |
|
507 | 508 | " %s") % (', '.join(sorted(missing))) |
|
508 | 509 | raise util.Abort(msg) |
|
509 | 510 | |
|
510 | 511 | lock = pullop.repo.lock() |
|
511 | 512 | try: |
|
512 | 513 | _pulldiscovery(pullop) |
|
513 | 514 | if pullop.remote.capable('bundle2'): |
|
514 | 515 | _pullbundle2(pullop) |
|
515 | 516 | if 'changegroup' in pullop.todosteps: |
|
516 | 517 | _pullchangeset(pullop) |
|
517 | 518 | if 'phases' in pullop.todosteps: |
|
518 | 519 | _pullphase(pullop) |
|
519 | 520 | if 'obsmarkers' in pullop.todosteps: |
|
520 | 521 | _pullobsolete(pullop) |
|
521 | 522 | pullop.closetransaction() |
|
522 | 523 | finally: |
|
523 | 524 | pullop.releasetransaction() |
|
524 | 525 | lock.release() |
|
525 | 526 | |
|
526 | 527 | return pullop.cgresult |
|
527 | 528 | |
|
528 | 529 | def _pulldiscovery(pullop): |
|
529 | 530 | """discovery phase for the pull |
|
530 | 531 | |
|
531 | 532 | Currently handles changeset discovery only; will change to handle all |

532 | 533 | discovery at some point.""" |
|
533 | 534 | tmp = discovery.findcommonincoming(pullop.repo.unfiltered(), |
|
534 | 535 | pullop.remote, |
|
535 | 536 | heads=pullop.heads, |
|
536 | 537 | force=pullop.force) |
|
537 | 538 | pullop.common, pullop.fetch, pullop.rheads = tmp |
|
538 | 539 | |
|
539 | 540 | def _pullbundle2(pullop): |
|
540 | 541 | """pull data using bundle2 |
|
541 | 542 | |
|
542 | 543 | For now, the only supported data is the changegroup.""" |
|
543 | 544 | kwargs = {'bundlecaps': set(['HG20'])} |
|
544 | 545 | # pulling changegroup |
|
545 | 546 | pullop.todosteps.remove('changegroup') |
|
546 | 547 | if not pullop.fetch: |
|
547 | 548 | pullop.repo.ui.status(_("no changes found\n")) |
|
548 | 549 | pullop.cgresult = 0 |
|
549 | 550 | else: |
|
550 | 551 | kwargs['common'] = pullop.common |
|
551 | 552 | kwargs['heads'] = pullop.heads or pullop.rheads |
|
552 | 553 | if pullop.heads is None and list(pullop.common) == [nullid]: |
|
553 | 554 | pullop.repo.ui.status(_("requesting all changes\n")) |
|
554 | 555 | if kwargs.keys() == ['format']: |
|
555 | 556 | return # nothing to pull |
|
556 | 557 | bundle = pullop.remote.getbundle('pull', **kwargs) |
|
557 | 558 | try: |
|
558 | 559 | op = bundle2.processbundle(pullop.repo, bundle, pullop.gettransaction) |
|
559 | 560 | except KeyError, exc: |
|
560 | 561 | raise util.Abort('missing support for %s' % exc) |
|
561 | 562 | assert len(op.records['changegroup']) == 1 |
|
562 | 563 | pullop.cgresult = op.records['changegroup'][0]['return'] |
|
563 | 564 | |
|
564 | 565 | def _pullchangeset(pullop): |
|
565 | 566 | """pull changeset from unbundle into the local repo""" |
|
566 | 567 | # We delay the opening of the transaction as late as possible so we |

567 | 568 | # don't open a transaction for nothing, which would break future useful |

568 | 569 | # rollback calls |
|
569 | 570 | pullop.todosteps.remove('changegroup') |
|
570 | 571 | if not pullop.fetch: |
|
571 | 572 | pullop.repo.ui.status(_("no changes found\n")) |
|
572 | 573 | pullop.cgresult = 0 |
|
573 | 574 | return |
|
574 | 575 | pullop.gettransaction() |
|
575 | 576 | if pullop.heads is None and list(pullop.common) == [nullid]: |
|
576 | 577 | pullop.repo.ui.status(_("requesting all changes\n")) |
|
577 | 578 | elif pullop.heads is None and pullop.remote.capable('changegroupsubset'): |
|
578 | 579 | # issue1320, avoid a race if remote changed after discovery |
|
579 | 580 | pullop.heads = pullop.rheads |
|
580 | 581 | |
|
581 | 582 | if pullop.remote.capable('getbundle'): |
|
582 | 583 | # TODO: get bundlecaps from remote |
|
583 | 584 | cg = pullop.remote.getbundle('pull', common=pullop.common, |
|
584 | 585 | heads=pullop.heads or pullop.rheads) |
|
585 | 586 | elif pullop.heads is None: |
|
586 | 587 | cg = pullop.remote.changegroup(pullop.fetch, 'pull') |
|
587 | 588 | elif not pullop.remote.capable('changegroupsubset'): |
|
588 | 589 | raise util.Abort(_("partial pull cannot be done because " |
|
589 | 590 | "other repository doesn't support " |
|
590 | 591 | "changegroupsubset.")) |
|
591 | 592 | else: |
|
592 | 593 | cg = pullop.remote.changegroupsubset(pullop.fetch, pullop.heads, 'pull') |
|
593 | 594 | pullop.cgresult = changegroup.addchangegroup(pullop.repo, cg, 'pull', |
|
594 | 595 | pullop.remote.url()) |
|
595 | 596 | |
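The capability fallback here is the crux of the function: prefer getbundle, fall back to a full changegroup when everything is being pulled, and only then require changegroupsubset. A standalone sketch of the same decision order (choose_fetch_method and the RuntimeError are illustrative, not Mercurial API):

    def choose_fetch_method(remote_caps, heads):
        # mirrors the if/elif chain above: getbundle first, a full-repo
        # changegroup when no specific heads were requested, and
        # changegroupsubset as the last resort for partial pulls
        if 'getbundle' in remote_caps:
            return 'getbundle'
        if heads is None:
            return 'changegroup'
        if 'changegroupsubset' not in remote_caps:
            raise RuntimeError('partial pull needs changegroupsubset')
        return 'changegroupsubset'

    assert choose_fetch_method({'getbundle'}, None) == 'getbundle'
    assert choose_fetch_method(set(), None) == 'changegroup'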
|
596 | 597 | def _pullphase(pullop): |
|
597 | 598 | # Get remote phases data from remote |
|
598 | 599 | pullop.todosteps.remove('phases') |
|
599 | 600 | remotephases = pullop.remote.listkeys('phases') |
|
600 | 601 | publishing = bool(remotephases.get('publishing', False)) |
|
601 | 602 | if remotephases and not publishing: |
|
602 | 603 | # remote is new and non-publishing 
|
603 | 604 | pheads, _dr = phases.analyzeremotephases(pullop.repo, |
|
604 | 605 | pullop.pulledsubset, |
|
605 | 606 | remotephases) |
|
606 | 607 | phases.advanceboundary(pullop.repo, phases.public, pheads) |
|
607 | 608 | phases.advanceboundary(pullop.repo, phases.draft, |
|
608 | 609 | pullop.pulledsubset) |
|
609 | 610 | else: |
|
610 | 611 | # Remote is old or publishing; all common changesets 

611 | 612 | # should be seen as public 
|
612 | 613 | phases.advanceboundary(pullop.repo, phases.public, |
|
613 | 614 | pullop.pulledsubset) |
|
614 | 615 | |
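Reduced to a pure function, the phase decision above is small enough to state inline (pulled_phase is a hypothetical helper for illustration, not Mercurial API):

    def pulled_phase(remote_is_publishing, public_on_remote):
        # old servers expose no phase data and are treated as publishing,
        # so everything pulled becomes public; against a non-publishing
        # server only changesets the remote reports as public are
        # advanced to public, the rest lands in draft at most
        if remote_is_publishing or public_on_remote:
            return 'public'
        return 'draft'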
|
615 | 616 | def _pullobsolete(pullop): |
|
616 | 617 | """utility function to pull obsolete markers from a remote |
|
617 | 618 | |
|
618 | 619 | `gettransaction` is a function that returns the pull transaction, creating 

619 | 620 | one if necessary. We return the transaction to inform the calling code that 

620 | 621 | a new transaction has been created (when applicable). 
|
621 | 622 | |
|
622 | 623 | Exists mostly to allow overriding for experimentation purposes."""
|
623 | 624 | pullop.todosteps.remove('obsmarkers') |
|
624 | 625 | tr = None |
|
625 | 626 | if obsolete._enabled: |
|
626 | 627 | pullop.repo.ui.debug('fetching remote obsolete markers\n') |
|
627 | 628 | remoteobs = pullop.remote.listkeys('obsolete') |
|
628 | 629 | if 'dump0' in remoteobs: |
|
629 | 630 | tr = pullop.gettransaction() |
|
630 | 631 | for key in sorted(remoteobs, reverse=True): |
|
631 | 632 | if key.startswith('dump'): |
|
632 | 633 | data = base85.b85decode(remoteobs[key]) |
|
633 | 634 | pullop.repo.obsstore.mergemarkers(tr, data) |
|
634 | 635 | pullop.repo.invalidatevolatilesets() |
|
635 | 636 | return tr |
|
636 | 637 | |
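The markers travel through the pushkey namespace 'obsolete' as base85-encoded blobs under keys named 'dump0', 'dump1', and so on, since listkeys values must be text-safe. A round-trip sketch, assuming Python's base64.b85 functions (3.4+) share the RFC 1924 alphabet with mercurial.base85:

    import base64

    markers = b'\x00binary obsolescence marker data'
    wire = base64.b85encode(markers)          # text-safe for listkeys transport
    assert base64.b85decode(wire) == markers  # what mergemarkers receives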
|
637 | 638 | def getbundle(repo, source, heads=None, common=None, bundlecaps=None): |
|
638 | 639 | """return a full bundle (with potentially multiple kind of parts) |
|
639 | 640 | |
|
640 | 641 | Could be a bundle HG10 or a bundle HG20 depending on the bundlecaps 

641 | 642 | passed. For now, the bundle can only contain a changegroup, but this will 

642 | 643 | change when more part types become available for bundle2. 
|
643 | 644 | |
|
644 | 645 | This is different from changegroup.getbundle, which only returns an HG10 

645 | 646 | changegroup bundle. They may eventually get reunited when we have a 

646 | 647 | clearer idea of the API we want for querying different data. 
|
647 | 648 | |
|
648 | 649 | The implementation is at a very early stage and will get a massive rework 

649 | 650 | when the bundle API is refined. 
|
650 | 651 | """ |
|
651 | 652 | # build bundle here. |
|
652 | 653 | cg = changegroup.getbundle(repo, source, heads=heads, |
|
653 | 654 | common=common, bundlecaps=bundlecaps) |
|
654 | 655 | if bundlecaps is None or 'HG20' not in bundlecaps: |
|
655 | 656 | return cg |
|
656 | 657 | # very crude first implementation, |
|
657 | 658 | # the bundle API will change and the generation will be done lazily. |
|
658 | 659 | bundler = bundle2.bundle20(repo.ui) |
|
659 | 660 | part = bundle2.bundlepart('changegroup', data=cg.getchunks()) |
|
660 | 661 | bundler.addpart(part) |
|
661 | 662 | return util.chunkbuffer(bundler.getchunks()) |
|
662 | 663 | |
|
663 | 664 | class PushRaced(RuntimeError): |
|
664 | 665 | """An exception raised during unbundling that indicate a push race""" |
|
665 | 666 | |
|
666 | 667 | def check_heads(repo, their_heads, context): |
|
667 | 668 | """check if the heads of a repo have been modified |
|
668 | 669 | |
|
669 | 670 | Used by peer for unbundling. |
|
670 | 671 | """ |
|
671 | 672 | heads = repo.heads() |
|
672 | 673 | heads_hash = util.sha1(''.join(sorted(heads))).digest() |
|
673 | 674 | if not (their_heads == ['force'] or their_heads == heads or |
|
674 | 675 | their_heads == ['hashed', heads_hash]): |
|
675 | 676 | # someone else committed/pushed/unbundled while we |
|
676 | 677 | # were transferring data |
|
677 | 678 | raise PushRaced('repository changed while %s - ' |
|
678 | 679 | 'please try again' % context) |
|
679 | 680 | |
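The 'hashed' form compared above lets a peer commit to a view of the heads without shipping the full list back. A standalone sketch of the digest, with hashlib.sha1 standing in for util.sha1:

    import hashlib

    def hash_heads(heads):
        # heads are binary node ids; sorting first makes the digest
        # independent of enumeration order on either side
        return hashlib.sha1(b''.join(sorted(heads))).digest()

    heads = [b'\x11' * 20, b'\x22' * 20]
    assert hash_heads(heads) == hash_heads(list(reversed(heads)))
    their_heads = ['hashed', hash_heads(heads)]  # what check_heads receives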
|
680 | 681 | def unbundle(repo, cg, heads, source, url): |
|
681 | 682 | """Apply a bundle to a repo. |
|
682 | 683 | |
|
683 | 684 | This function makes sure the repo is locked during the application and has 

684 | 685 | a mechanism to check that no push race occurred between the creation of the 

685 | 686 | bundle and its application. 
|
686 | 687 | |
|
687 | 688 | If the push was raced, a PushRaced exception is raised."""
|
688 | 689 | r = 0 |
|
689 | 690 | # need a transaction when processing a bundle2 stream |
|
690 | 691 | tr = None |
|
691 | 692 | lock = repo.lock() |
|
692 | 693 | try: |
|
693 | 694 | check_heads(repo, heads, 'uploading changes') |
|
694 | 695 | # push can proceed |
|
695 | 696 | if util.safehasattr(cg, 'params'): |
|
696 | 697 | tr = repo.transaction('unbundle') |
|
697 | 698 | r = bundle2.processbundle(repo, cg, lambda: tr).reply |
|
698 | 699 | tr.close() |
|
699 | 700 | else: |
|
700 | 701 | r = changegroup.addchangegroup(repo, cg, source, url) |
|
701 | 702 | finally: |
|
702 | 703 | if tr is not None: |
|
703 | 704 | tr.release() |
|
704 | 705 | lock.release() |
|
705 | 706 | return r |
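Note how the dispatch above distinguishes a bundle2 stream from a plain changegroup purely by duck typing: only an unbundle20 object exposes stream-level params. The same test as a one-liner (is_bundle2 is an illustrative name):

    def is_bundle2(stream):
        # mirrors util.safehasattr(cg, 'params') in unbundle() above
        return hasattr(stream, 'params')
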
@@ -1,785 +1,801 b'' | |||
|
1 | 1 | |
|
2 | 2 | Create an extension to test bundle2 API |
|
3 | 3 | |
|
4 | 4 | $ cat > bundle2.py << EOF |
|
5 | 5 | > """A small extension to test bundle2 implementation |
|
6 | 6 | > |
|
7 | 7 | > The current bundle2 implementation is far too limited to be used in any core 

8 | 8 | > code. We still need to be able to test it while it grows up. 
|
9 | 9 | > """ |
|
10 | 10 | > |
|
11 | 11 | > import sys |
|
12 | 12 | > from mercurial import cmdutil |
|
13 | 13 | > from mercurial import util |
|
14 | 14 | > from mercurial import bundle2 |
|
15 | 15 | > from mercurial import scmutil |
|
16 | 16 | > from mercurial import discovery |
|
17 | 17 | > from mercurial import changegroup |
|
18 | 18 | > cmdtable = {} |
|
19 | 19 | > command = cmdutil.command(cmdtable) |
|
20 | 20 | > |
|
21 | 21 | > ELEPHANTSSONG = """Patali Dirapata, Cromda Cromda Ripalo, Pata Pata, Ko Ko Ko |
|
22 | 22 | > Bokoro Dipoulito, Rondi Rondi Pepino, Pata Pata, Ko Ko Ko |
|
23 | 23 | > Emana Karassoli, Loucra Loucra Ponponto, Pata Pata, Ko Ko Ko.""" |
|
24 | 24 | > assert len(ELEPHANTSSONG) == 178 # future tests say 178 bytes, trust it. 
|
25 | 25 | > |
|
26 | 26 | > @bundle2.parthandler('test:song') |
|
27 | 27 | > def songhandler(op, part): |
|
28 | 28 | > """handle a "test:song" bundle2 part, printing the lyrics on stdin""" |
|
29 | 29 | > op.ui.write('The choir starts singing:\n') |
|
30 | 30 | > verses = 0 |
|
31 | 31 | > for line in part.read().split('\n'): |
|
32 | 32 | > op.ui.write(' %s\n' % line) |
|
33 | 33 | > verses += 1 |
|
34 | 34 | > op.records.add('song', {'verses': verses}) |
|
35 | 35 | > |
|
36 | 36 | > @bundle2.parthandler('test:ping') |
|
37 | 37 | > def pinghandler(op, part): |
|
38 | 38 | > op.ui.write('received ping request (id %i)\n' % part.id) |
|
39 | 39 | > if op.reply is not None: |
|
40 | 40 | > rpart = bundle2.bundlepart('test:pong', |
|
41 | 41 | > [('in-reply-to', str(part.id))]) |
|
42 | 42 | > op.reply.addpart(rpart) |
|
43 | 43 | > |
|
44 | 44 | > @command('bundle2', |
|
45 | 45 | > [('', 'param', [], 'stream level parameter'), |
|
46 | 46 | > ('', 'unknown', False, 'include an unknown mandatory part in the bundle'), |
|
47 | 47 | > ('', 'parts', False, 'include some arbitrary parts to the bundle'), |
|
48 | > ('', 'reply', False, 'produce a reply bundle'), | |
|
48 | 49 | > ('r', 'rev', [], 'include those changesets in the bundle'),],
|
49 | 50 | > '[OUTPUTFILE]') |
|
50 | 51 | > def cmdbundle2(ui, repo, path=None, **opts): |
|
51 | 52 | > """write a bundle2 container on standard ouput""" |
|
52 | 53 | > bundler = bundle2.bundle20(ui) |
|
53 | 54 | > for p in opts['param']: |
|
54 | 55 | > p = p.split('=', 1) |
|
55 | 56 | > try: |
|
56 | 57 | > bundler.addparam(*p) |
|
57 | 58 | > except ValueError, exc: |
|
58 | 59 | > raise util.Abort('%s' % exc) |
|
59 | 60 | > |
|
61 | > if opts['reply']: | |
|
62 | > bundler.addpart(bundle2.bundlepart('replycaps')) | |
|
63 | > | |
|
60 | 64 | > revs = opts['rev'] |
|
61 | 65 | > if 'rev' in opts: |
|
62 | 66 | > revs = scmutil.revrange(repo, opts['rev']) |
|
63 | 67 | > if revs: |
|
64 | 68 | > # very crude version of a changegroup part creation |
|
65 | 69 | > bundled = repo.revs('%ld::%ld', revs, revs) |
|
66 | 70 | > headmissing = [c.node() for c in repo.set('heads(%ld)', revs)] |
|
67 | 71 | > headcommon = [c.node() for c in repo.set('parents(%ld) - %ld', revs, revs)] |
|
68 | 72 | > outgoing = discovery.outgoing(repo.changelog, headcommon, headmissing) |
|
69 | 73 | > cg = changegroup.getlocalbundle(repo, 'test:bundle2', outgoing, None) |
|
70 | 74 | > part = bundle2.bundlepart('changegroup', data=cg.getchunks()) |
|
71 | 75 | > bundler.addpart(part) |
|
72 | 76 | > |
|
73 | 77 | > if opts['parts']: |
|
74 | 78 | > part = bundle2.bundlepart('test:empty') |
|
75 | 79 | > bundler.addpart(part) |
|
76 | 80 | > # add a second one to make sure we handle multiple parts |
|
77 | 81 | > part = bundle2.bundlepart('test:empty') |
|
78 | 82 | > bundler.addpart(part) |
|
79 | 83 | > part = bundle2.bundlepart('test:song', data=ELEPHANTSSONG) |
|
80 | 84 | > bundler.addpart(part) |
|
81 | 85 | > part = bundle2.bundlepart('test:math', |
|
82 | 86 | > [('pi', '3.14'), ('e', '2.72')], |
|
83 | 87 | > [('cooking', 'raw')], |
|
84 | 88 | > '42') |
|
85 | 89 | > bundler.addpart(part) |
|
86 | 90 | > if opts['unknown']: |
|
87 | 91 | > part = bundle2.bundlepart('test:UNKNOWN', |
|
88 | 92 | > data='some random content') |
|
89 | 93 | > bundler.addpart(part) |
|
90 | 94 | > if opts['parts']: |
|
91 | 95 | > part = bundle2.bundlepart('test:ping') |
|
92 | 96 | > bundler.addpart(part) |
|
93 | 97 | > |
|
94 | 98 | > if path is None: |
|
95 | 99 | > file = sys.stdout |
|
96 | 100 | > else: |
|
97 | 101 | > file = open(path, 'w') |
|
98 | 102 | > |
|
99 | 103 | > for chunk in bundler.getchunks(): |
|
100 | 104 | > file.write(chunk) |
|
101 | 105 | > |
|
102 | 106 | > @command('unbundle2', [], '') |
|
103 | 107 | > def cmdunbundle2(ui, repo, replypath=None): |
|
104 | 108 | > """process a bundle2 stream from stdin on the current repo""" |
|
105 | 109 | > try: |
|
106 | 110 | > tr = None |
|
107 | 111 | > lock = repo.lock() |
|
108 | 112 | > tr = repo.transaction('processbundle') |
|
109 | 113 | > try: |
|
110 | 114 | > unbundler = bundle2.unbundle20(ui, sys.stdin) |
|
111 | 115 | > op = bundle2.processbundle(repo, unbundler, lambda: tr) |
|
112 | 116 | > tr.close() |
|
113 | 117 | > except KeyError, exc: |
|
114 | 118 | > raise util.Abort('missing support for %s' % exc) |
|
115 | 119 | > finally: |
|
116 | 120 | > if tr is not None: |
|
117 | 121 | > tr.release() |
|
118 | 122 | > lock.release() |
|
119 | 123 | > remains = sys.stdin.read() |
|
120 | 124 | > ui.write('%i unread bytes\n' % len(remains)) |
|
121 | 125 | > if op.records['song']: |
|
122 | 126 | > totalverses = sum(r['verses'] for r in op.records['song']) |
|
123 | 127 | > ui.write('%i total verses sung\n' % totalverses) |
|
124 | 128 | > for rec in op.records['changegroup']: |
|
125 | 129 | > ui.write('addchangegroup return: %i\n' % rec['return']) |
|
126 | 130 | > if op.reply is not None and replypath is not None: |
|
127 | 131 | > file = open(replypath, 'w') |
|
128 | 132 | > for chunk in op.reply.getchunks(): |
|
129 | 133 | > file.write(chunk) |
|
130 | 134 | > |
|
131 | 135 | > @command('statbundle2', [], '') |
|
132 | 136 | > def cmdstatbundle2(ui, repo): |
|
133 | 137 | > """print statistic on the bundle2 container read from stdin""" |
|
134 | 138 | > unbundler = bundle2.unbundle20(ui, sys.stdin) |
|
135 | 139 | > try: |
|
136 | 140 | > params = unbundler.params |
|
137 | 141 | > except KeyError, exc: |
|
138 | 142 | > raise util.Abort('unknown parameters: %s' % exc) |
|
139 | 143 | > ui.write('options count: %i\n' % len(params)) |
|
140 | 144 | > for key in sorted(params): |
|
141 | 145 | > ui.write('- %s\n' % key) |
|
142 | 146 | > value = params[key] |
|
143 | 147 | > if value is not None: |
|
144 | 148 | > ui.write(' %s\n' % value) |
|
145 | 149 | > count = 0 |
|
146 | 150 | > for p in unbundler.iterparts(): |
|
147 | 151 | > count += 1 |
|
148 | 152 | > ui.write(' :%s:\n' % p.type) |
|
149 | 153 | > ui.write(' mandatory: %i\n' % len(p.mandatoryparams)) |
|
150 | 154 | > ui.write(' advisory: %i\n' % len(p.advisoryparams)) |
|
151 | 155 | > ui.write(' payload: %i bytes\n' % len(p.read())) |
|
152 | 156 | > ui.write('parts count: %i\n' % count) |
|
153 | 157 | > EOF |
|
154 | 158 | $ cat >> $HGRCPATH << EOF |
|
155 | 159 | > [extensions] |
|
156 | 160 | > bundle2=$TESTTMP/bundle2.py |
|
157 | 161 | > [server] |
|
158 | 162 | > bundle2=True |
|
159 | 163 | > [ui] |
|
160 | 164 | > ssh=python "$TESTDIR/dummyssh" |
|
161 | 165 | > [web] |
|
162 | 166 | > push_ssl = false |
|
163 | 167 | > allow_push = * |
|
164 | 168 | > EOF |
|
165 | 169 | |
|
166 | 170 | The extension requires a repo (currently unused) |
|
167 | 171 | |
|
168 | 172 | $ hg init main |
|
169 | 173 | $ cd main |
|
170 | 174 | $ touch a |
|
171 | 175 | $ hg add a |
|
172 | 176 | $ hg commit -m 'a' |
|
173 | 177 | |
|
174 | 178 | |
|
175 | 179 | Empty bundle |
|
176 | 180 | ================= |
|
177 | 181 | |
|
178 | 182 | - no option |
|
179 | 183 | - no parts |
|
180 | 184 | |
|
181 | 185 | Test bundling |
|
182 | 186 | |
|
183 | 187 | $ hg bundle2 |
|
184 | 188 | HG20\x00\x00\x00\x00 (no-eol) (esc) |
|
185 | 189 | |
|
186 | 190 | Test unbundling |
|
187 | 191 | |
|
188 | 192 | $ hg bundle2 | hg statbundle2 |
|
189 | 193 | options count: 0 |
|
190 | 194 | parts count: 0 |
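The empty container above is just eight bytes, which decode by hand: the HG20 magic, a 16-bit stream-parameters size of zero, and a 16-bit part-header size of zero, which doubles as the end-of-stream marker. A sketch:

    import struct

    raw = b'HG20\x00\x00\x00\x00'
    magic = raw[:4]
    (paramssize,) = struct.unpack('>H', raw[4:6])   # 0: no stream parameters
    (headersize,) = struct.unpack('>H', raw[6:8])   # 0: end-of-stream marker
    assert (magic, paramssize, headersize) == (b'HG20', 0, 0)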
|
191 | 195 | |
|
192 | 196 | Test that old-style bundles are detected and refused 
|
193 | 197 | |
|
194 | 198 | $ hg bundle --all ../bundle.hg |
|
195 | 199 | 1 changesets found |
|
196 | 200 | $ hg statbundle2 < ../bundle.hg |
|
197 | 201 | abort: unknown bundle version 10 |
|
198 | 202 | [255] |
|
199 | 203 | |
|
200 | 204 | Test parameters |
|
201 | 205 | ================= |
|
202 | 206 | |
|
203 | 207 | - some options |
|
204 | 208 | - no parts |
|
205 | 209 | |
|
206 | 210 | advisory parameters, no value |
|
207 | 211 | ------------------------------- |
|
208 | 212 | |
|
209 | 213 | Simplest possible parameter form 
|
210 | 214 | |
|
211 | 215 | Test generation with a simple option 
|
212 | 216 | |
|
213 | 217 | $ hg bundle2 --param 'caution' |
|
214 | 218 | HG20\x00\x07caution\x00\x00 (no-eol) (esc) |
|
215 | 219 | |
|
216 | 220 | Test unbundling |
|
217 | 221 | |
|
218 | 222 | $ hg bundle2 --param 'caution' | hg statbundle2 |
|
219 | 223 | options count: 1 |
|
220 | 224 | - caution |
|
221 | 225 | parts count: 0 |
|
222 | 226 | |
|
223 | 227 | Test generation with multiple options 
|
224 | 228 | |
|
225 | 229 | $ hg bundle2 --param 'caution' --param 'meal' |
|
226 | 230 | HG20\x00\x0ccaution meal\x00\x00 (no-eol) (esc) |
|
227 | 231 | |
|
228 | 232 | Test unbundling |
|
229 | 233 | |
|
230 | 234 | $ hg bundle2 --param 'caution' --param 'meal' | hg statbundle2 |
|
231 | 235 | options count: 2 |
|
232 | 236 | - caution |
|
233 | 237 | - meal |
|
234 | 238 | parts count: 0 |
|
235 | 239 | |
|
236 | 240 | advisory parameters, with value |
|
237 | 241 | ------------------------------- |
|
238 | 242 | |
|
239 | 243 | Test generation |
|
240 | 244 | |
|
241 | 245 | $ hg bundle2 --param 'caution' --param 'meal=vegan' --param 'elephants' |
|
242 | 246 | HG20\x00\x1ccaution meal=vegan elephants\x00\x00 (no-eol) (esc) |
|
243 | 247 | |
|
244 | 248 | Test unbundling |
|
245 | 249 | |
|
246 | 250 | $ hg bundle2 --param 'caution' --param 'meal=vegan' --param 'elephants' | hg statbundle2 |
|
247 | 251 | options count: 3 |
|
248 | 252 | - caution |
|
249 | 253 | - elephants |
|
250 | 254 | - meal |
|
251 | 255 | vegan |
|
252 | 256 | parts count: 0 |
|
253 | 257 | |
|
254 | 258 | parameter with special char in value |
|
255 | 259 | --------------------------------------------------- |
|
256 | 260 | |
|
257 | 261 | Test generation |
|
258 | 262 | |
|
259 | 263 | $ hg bundle2 --param 'e|! 7/=babar%#==tutu' --param simple |
|
260 | 264 | HG20\x00)e%7C%21%207/=babar%25%23%3D%3Dtutu simple\x00\x00 (no-eol) (esc) |
|
261 | 265 | |
|
262 | 266 | Test unbundling |
|
263 | 267 | |
|
264 | 268 | $ hg bundle2 --param 'e|! 7/=babar%#==tutu' --param simple | hg statbundle2 |
|
265 | 269 | options count: 2 |
|
266 | 270 | - e|! 7/ |
|
267 | 271 | babar%#==tutu |
|
268 | 272 | - simple |
|
269 | 273 | parts count: 0 |
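The escaping above is plain urlquoting of both name and value, joined on the first '=' (unambiguous, since quoting removes any raw '='). A round-trip sketch assuming urllib's default quoting rules, which match the output shown:

    try:
        from urllib import quote, unquote           # Python 2
    except ImportError:
        from urllib.parse import quote, unquote     # Python 3

    name, value = 'e|! 7/', 'babar%#==tutu'
    wire = '%s=%s' % (quote(name), quote(value))
    assert wire == 'e%7C%21%207/=babar%25%23%3D%3Dtutu'
    rawname, rawvalue = wire.split('=', 1)          # split on the separator only
    assert (unquote(rawname), unquote(rawvalue)) == (name, value)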
|
270 | 274 | |
|
271 | 275 | Test unknown mandatory option |
|
272 | 276 | --------------------------------------------------- |
|
273 | 277 | |
|
274 | 278 | $ hg bundle2 --param 'Gravity' | hg statbundle2 |
|
275 | 279 | abort: unknown parameters: 'Gravity' |
|
276 | 280 | [255] |
|
277 | 281 | |
|
278 | 282 | Test debug output |
|
279 | 283 | --------------------------------------------------- |
|
280 | 284 | |
|
281 | 285 | bundling debug |
|
282 | 286 | |
|
283 | 287 | $ hg bundle2 --debug --param 'e|! 7/=babar%#==tutu' --param simple ../out.hg2 |
|
284 | 288 | start emission of HG20 stream |
|
285 | 289 | bundle parameter: e%7C%21%207/=babar%25%23%3D%3Dtutu simple |
|
286 | 290 | start of parts |
|
287 | 291 | end of bundle |
|
288 | 292 | |
|
289 | 293 | file content is ok |
|
290 | 294 | |
|
291 | 295 | $ cat ../out.hg2 |
|
292 | 296 | HG20\x00)e%7C%21%207/=babar%25%23%3D%3Dtutu simple\x00\x00 (no-eol) (esc) |
|
293 | 297 | |
|
294 | 298 | unbundling debug |
|
295 | 299 | |
|
296 | 300 | $ hg statbundle2 --debug < ../out.hg2 |
|
297 | 301 | start processing of HG20 stream |
|
298 | 302 | reading bundle2 stream parameters |
|
299 | 303 | ignoring unknown parameter 'e|! 7/' |
|
300 | 304 | ignoring unknown parameter 'simple' |
|
301 | 305 | options count: 2 |
|
302 | 306 | - e|! 7/ |
|
303 | 307 | babar%#==tutu |
|
304 | 308 | - simple |
|
305 | 309 | start extraction of bundle2 parts |
|
306 | 310 | part header size: 0 |
|
307 | 311 | end of bundle2 stream |
|
308 | 312 | parts count: 0 |
|
309 | 313 | |
|
310 | 314 | |
|
311 | 315 | Test buggy input |
|
312 | 316 | --------------------------------------------------- |
|
313 | 317 | |
|
314 | 318 | empty parameter name |
|
315 | 319 | |
|
316 | 320 | $ hg bundle2 --param '' --quiet |
|
317 | 321 | abort: empty parameter name |
|
318 | 322 | [255] |
|
319 | 323 | |
|
320 | 324 | bad parameter name |
|
321 | 325 | |
|
322 | 326 | $ hg bundle2 --param 42babar |
|
323 | 327 | abort: non letter first character: '42babar' |
|
324 | 328 | [255] |
|
325 | 329 | |
|
326 | 330 | |
|
327 | 331 | Test part |
|
328 | 332 | ================= |
|
329 | 333 | |
|
330 | 334 | $ hg bundle2 --parts ../parts.hg2 --debug |
|
331 | 335 | start emission of HG20 stream |
|
332 | 336 | bundle parameter: |
|
333 | 337 | start of parts |
|
334 | 338 | bundle part: "test:empty" |
|
335 | 339 | bundle part: "test:empty" |
|
336 | 340 | bundle part: "test:song" |
|
337 | 341 | bundle part: "test:math" |
|
338 | 342 | bundle part: "test:ping" |
|
339 | 343 | end of bundle |
|
340 | 344 | |
|
341 | 345 | $ cat ../parts.hg2 |
|
342 | 346 | HG20\x00\x00\x00\x11 (esc) |
|
343 | 347 | test:empty\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x11 (esc) |
|
344 | 348 | test:empty\x00\x00\x00\x01\x00\x00\x00\x00\x00\x00\x00\x10 test:song\x00\x00\x00\x02\x00\x00\x00\x00\x00\xb2Patali Dirapata, Cromda Cromda Ripalo, Pata Pata, Ko Ko Ko (esc) |
|
345 | 349 | Bokoro Dipoulito, Rondi Rondi Pepino, Pata Pata, Ko Ko Ko |
|
346 | 350 | Emana Karassoli, Loucra Loucra Ponponto, Pata Pata, Ko Ko Ko.\x00\x00\x00\x00\x00+ test:math\x00\x00\x00\x03\x02\x01\x02\x04\x01\x04\x07\x03pi3.14e2.72cookingraw\x00\x00\x00\x0242\x00\x00\x00\x00\x00\x10 test:ping\x00\x00\x00\x04\x00\x00\x00\x00\x00\x00\x00\x00 (no-eol) (esc) |
|
347 | 351 | |
|
348 | 352 | |
|
349 | 353 | $ hg statbundle2 < ../parts.hg2 |
|
350 | 354 | options count: 0 |
|
351 | 355 | :test:empty: |
|
352 | 356 | mandatory: 0 |
|
353 | 357 | advisory: 0 |
|
354 | 358 | payload: 0 bytes |
|
355 | 359 | :test:empty: |
|
356 | 360 | mandatory: 0 |
|
357 | 361 | advisory: 0 |
|
358 | 362 | payload: 0 bytes |
|
359 | 363 | :test:song: |
|
360 | 364 | mandatory: 0 |
|
361 | 365 | advisory: 0 |
|
362 | 366 | payload: 178 bytes |
|
363 | 367 | :test:math: |
|
364 | 368 | mandatory: 2 |
|
365 | 369 | advisory: 1 |
|
366 | 370 | payload: 2 bytes |
|
367 | 371 | :test:ping: |
|
368 | 372 | mandatory: 0 |
|
369 | 373 | advisory: 0 |
|
370 | 374 | payload: 0 bytes |
|
371 | 375 | parts count: 5 |
|
372 | 376 | |
|
373 | 377 | $ hg statbundle2 --debug < ../parts.hg2 |
|
374 | 378 | start processing of HG20 stream |
|
375 | 379 | reading bundle2 stream parameters |
|
376 | 380 | options count: 0 |
|
377 | 381 | start extraction of bundle2 parts |
|
378 | 382 | part header size: 17 |
|
379 | 383 | part type: "test:empty" |
|
380 | 384 | part id: "0" |
|
381 | 385 | part parameters: 0 |
|
382 | 386 | :test:empty: |
|
383 | 387 | mandatory: 0 |
|
384 | 388 | advisory: 0 |
|
385 | 389 | payload chunk size: 0 |
|
386 | 390 | payload: 0 bytes |
|
387 | 391 | part header size: 17 |
|
388 | 392 | part type: "test:empty" |
|
389 | 393 | part id: "1" |
|
390 | 394 | part parameters: 0 |
|
391 | 395 | :test:empty: |
|
392 | 396 | mandatory: 0 |
|
393 | 397 | advisory: 0 |
|
394 | 398 | payload chunk size: 0 |
|
395 | 399 | payload: 0 bytes |
|
396 | 400 | part header size: 16 |
|
397 | 401 | part type: "test:song" |
|
398 | 402 | part id: "2" |
|
399 | 403 | part parameters: 0 |
|
400 | 404 | :test:song: |
|
401 | 405 | mandatory: 0 |
|
402 | 406 | advisory: 0 |
|
403 | 407 | payload chunk size: 178 |
|
404 | 408 | payload chunk size: 0 |
|
405 | 409 | payload: 178 bytes |
|
406 | 410 | part header size: 43 |
|
407 | 411 | part type: "test:math" |
|
408 | 412 | part id: "3" |
|
409 | 413 | part parameters: 3 |
|
410 | 414 | :test:math: |
|
411 | 415 | mandatory: 2 |
|
412 | 416 | advisory: 1 |
|
413 | 417 | payload chunk size: 2 |
|
414 | 418 | payload chunk size: 0 |
|
415 | 419 | payload: 2 bytes |
|
416 | 420 | part header size: 16 |
|
417 | 421 | part type: "test:ping" |
|
418 | 422 | part id: "4" |
|
419 | 423 | part parameters: 0 |
|
420 | 424 | :test:ping: |
|
421 | 425 | mandatory: 0 |
|
422 | 426 | advisory: 0 |
|
423 | 427 | payload chunk size: 0 |
|
424 | 428 | payload: 0 bytes |
|
425 | 429 | part header size: 0 |
|
426 | 430 | end of bundle2 stream |
|
427 | 431 | parts count: 5 |
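The dump and the debug trace make the part-header layout visible: a one-byte type length, the type string, a 32-bit part id, then one byte each for the mandatory and advisory parameter counts. A hand-rolled decoder for a parameterless header (a sketch, not the real Mercurial parser):

    import struct

    def parse_part_header(header):
        (typelen,) = struct.unpack('>B', header[0:1])
        typename = header[1:1 + typelen]
        pos = 1 + typelen
        (partid,) = struct.unpack('>I', header[pos:pos + 4])
        mandatory, advisory = struct.unpack('>BB', header[pos + 4:pos + 6])
        return typename, partid, mandatory, advisory

    # the first "test:empty" header from parts.hg2: 17 bytes, part id 0
    # (the leading \n is the type length, 10, rendered as a newline above)
    hdr = b'\ntest:empty\x00\x00\x00\x00\x00\x00'
    assert parse_part_header(hdr) == (b'test:empty', 0, 0, 0)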
|
428 | 432 | |
|
429 | 433 | Test actual unbundling of test part |
|
430 | 434 | ======================================= |
|
431 | 435 | |
|
432 | 436 | Process the bundle |
|
433 | 437 | |
|
434 | 438 | $ hg unbundle2 --debug < ../parts.hg2 |
|
435 | 439 | start processing of HG20 stream |
|
436 | 440 | reading bundle2 stream parameters |
|
437 | 441 | start extraction of bundle2 parts |
|
438 | 442 | part header size: 17 |
|
439 | 443 | part type: "test:empty" |
|
440 | 444 | part id: "0" |
|
441 | 445 | part parameters: 0 |
|
442 | 446 | ignoring unknown advisory part 'test:empty' |
|
443 | 447 | payload chunk size: 0 |
|
444 | 448 | part header size: 17 |
|
445 | 449 | part type: "test:empty" |
|
446 | 450 | part id: "1" |
|
447 | 451 | part parameters: 0 |
|
448 | 452 | ignoring unknown advisory part 'test:empty' |
|
449 | 453 | payload chunk size: 0 |
|
450 | 454 | part header size: 16 |
|
451 | 455 | part type: "test:song" |
|
452 | 456 | part id: "2" |
|
453 | 457 | part parameters: 0 |
|
454 | 458 | found a handler for part 'test:song' |
|
455 | 459 | The choir starts singing: |
|
456 | 460 | payload chunk size: 178 |
|
457 | 461 | payload chunk size: 0 |
|
458 | 462 | Patali Dirapata, Cromda Cromda Ripalo, Pata Pata, Ko Ko Ko |
|
459 | 463 | Bokoro Dipoulito, Rondi Rondi Pepino, Pata Pata, Ko Ko Ko |
|
460 | 464 | Emana Karassoli, Loucra Loucra Ponponto, Pata Pata, Ko Ko Ko. |
|
461 | 465 | part header size: 43 |
|
462 | 466 | part type: "test:math" |
|
463 | 467 | part id: "3" |
|
464 | 468 | part parameters: 3 |
|
465 | 469 | ignoring unknown advisory part 'test:math' |
|
466 | 470 | payload chunk size: 2 |
|
467 | 471 | payload chunk size: 0 |
|
468 | 472 | part header size: 16 |
|
469 | 473 | part type: "test:ping" |
|
470 | 474 | part id: "4" |
|
471 | 475 | part parameters: 0 |
|
472 | 476 | found a handler for part 'test:ping' |
|
473 | 477 | received ping request (id 4) |
|
474 | 478 | payload chunk size: 0 |
|
475 | 479 | part header size: 0 |
|
476 | 480 | end of bundle2 stream |
|
477 | 481 | 0 unread bytes |
|
478 | 482 | 3 total verses sung |
|
479 | 483 | |
|
480 | 484 | Unbundle with an unknown mandatory part |
|
481 | 485 | (should abort) |
|
482 | 486 | |
|
483 | 487 | $ hg bundle2 --parts --unknown ../unknown.hg2 |
|
484 | 488 | |
|
485 | 489 | $ hg unbundle2 < ../unknown.hg2 |
|
486 | 490 | The choir starts singing: |
|
487 | 491 | Patali Dirapata, Cromda Cromda Ripalo, Pata Pata, Ko Ko Ko |
|
488 | 492 | Bokoro Dipoulito, Rondi Rondi Pepino, Pata Pata, Ko Ko Ko |
|
489 | 493 | Emana Karassoli, Loucra Loucra Ponponto, Pata Pata, Ko Ko Ko. |
|
490 | 494 | 0 unread bytes |
|
491 | 495 | abort: missing support for 'test:unknown' |
|
492 | 496 | [255] |
|
493 | 497 | |
|
494 | 498 | unbundle with a reply |
|
495 | 499 | |
|
496 | $ hg unbundle2 ../reply.hg2 < ../parts.hg2 | |
|
500 | $ hg bundle2 --parts --reply ../parts-reply.hg2 | |
|
501 | $ hg unbundle2 ../reply.hg2 < ../parts-reply.hg2 | |
|
497 | 502 | The choir starts singing: |
|
498 | 503 | Patali Dirapata, Cromda Cromda Ripalo, Pata Pata, Ko Ko Ko |
|
499 | 504 | Bokoro Dipoulito, Rondi Rondi Pepino, Pata Pata, Ko Ko Ko |
|
500 | 505 | Emana Karassoli, Loucra Loucra Ponponto, Pata Pata, Ko Ko Ko. |
|
501 | received ping request (id 4) | 

506 | received ping request (id 5) | 
|
502 | 507 | 0 unread bytes |
|
503 | 508 | 3 total verses sung |
|
504 | 509 | |
|
505 | 510 | The reply is a bundle |
|
506 | 511 | |
|
507 | 512 | $ cat ../reply.hg2 |
|
508 | HG20\x00\x00\x00\x1e test:pong\x00\x00\x00\x00\x01\x00\x0b\x01in-reply-to4\x00\x00\x00\x00\x00\x00 (no-eol) (esc) | 

513 | HG20\x00\x00\x00\x1e test:pong\x00\x00\x00\x00\x01\x00\x0b\x01in-reply-to5\x00\x00\x00\x00\x00\x00 (no-eol) (esc) | 
|
509 | 514 | |
|
510 | 515 | The reply is valid |
|
511 | 516 | |
|
512 | 517 | $ hg statbundle2 < ../reply.hg2 |
|
513 | 518 | options count: 0 |
|
514 | 519 | :test:pong: |
|
515 | 520 | mandatory: 1 |
|
516 | 521 | advisory: 0 |
|
517 | 522 | payload: 0 bytes |
|
518 | 523 | parts count: 1 |
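The same layout extends to parameters: after the two counts come one (name size, value size) byte pair per parameter, then the concatenated names and values. Decoding the 30-byte pong header above by hand (bytes copied from the dump; the leading \t is the type length, 9, rendered as whitespace):

    import struct

    hdr = b'\ttest:pong\x00\x00\x00\x00\x01\x00\x0b\x01in-reply-to5'
    (typelen,) = struct.unpack('>B', hdr[0:1])
    pos = 1 + typelen
    (partid,) = struct.unpack('>I', hdr[pos:pos + 4])
    mandatory, advisory = struct.unpack('>BB', hdr[pos + 4:pos + 6])
    namesize, valuesize = struct.unpack('>BB', hdr[pos + 6:pos + 8])
    pos += 8
    name = hdr[pos:pos + namesize]
    value = hdr[pos + namesize:pos + namesize + valuesize]
    assert (hdr[1:1 + typelen], partid) == (b'test:pong', 0)
    assert (mandatory, advisory) == (1, 0)
    assert (name, value) == (b'in-reply-to', b'5')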
|
519 | 524 | |
|
520 | 525 | Support for changegroup |
|
521 | 526 | =================================== |
|
522 | 527 | |
|
523 | 528 | $ hg unbundle $TESTDIR/bundles/rebase.hg |
|
524 | 529 | adding changesets |
|
525 | 530 | adding manifests |
|
526 | 531 | adding file changes |
|
527 | 532 | added 8 changesets with 7 changes to 7 files (+3 heads) |
|
528 | 533 | (run 'hg heads' to see heads, 'hg merge' to merge) |
|
529 | 534 | |
|
530 | 535 | $ hg log -G |
|
531 | 536 | o changeset: 8:02de42196ebe |
|
532 | 537 | | tag: tip |
|
533 | 538 | | parent: 6:24b6387c8c8c |
|
534 | 539 | | user: Nicolas Dumazet <nicdumz.commits@gmail.com> |
|
535 | 540 | | date: Sat Apr 30 15:24:48 2011 +0200 |
|
536 | 541 | | summary: H |
|
537 | 542 | | |
|
538 | 543 | | o changeset: 7:eea13746799a |
|
539 | 544 | |/| parent: 6:24b6387c8c8c |
|
540 | 545 | | | parent: 5:9520eea781bc |
|
541 | 546 | | | user: Nicolas Dumazet <nicdumz.commits@gmail.com> |
|
542 | 547 | | | date: Sat Apr 30 15:24:48 2011 +0200 |
|
543 | 548 | | | summary: G |
|
544 | 549 | | | |
|
545 | 550 | o | changeset: 6:24b6387c8c8c |
|
546 | 551 | | | parent: 1:cd010b8cd998 |
|
547 | 552 | | | user: Nicolas Dumazet <nicdumz.commits@gmail.com> |
|
548 | 553 | | | date: Sat Apr 30 15:24:48 2011 +0200 |
|
549 | 554 | | | summary: F |
|
550 | 555 | | | |
|
551 | 556 | | o changeset: 5:9520eea781bc |
|
552 | 557 | |/ parent: 1:cd010b8cd998 |
|
553 | 558 | | user: Nicolas Dumazet <nicdumz.commits@gmail.com> |
|
554 | 559 | | date: Sat Apr 30 15:24:48 2011 +0200 |
|
555 | 560 | | summary: E |
|
556 | 561 | | |
|
557 | 562 | | o changeset: 4:32af7686d403 |
|
558 | 563 | | | user: Nicolas Dumazet <nicdumz.commits@gmail.com> |
|
559 | 564 | | | date: Sat Apr 30 15:24:48 2011 +0200 |
|
560 | 565 | | | summary: D |
|
561 | 566 | | | |
|
562 | 567 | | o changeset: 3:5fddd98957c8 |
|
563 | 568 | | | user: Nicolas Dumazet <nicdumz.commits@gmail.com> |
|
564 | 569 | | | date: Sat Apr 30 15:24:48 2011 +0200 |
|
565 | 570 | | | summary: C |
|
566 | 571 | | | |
|
567 | 572 | | o changeset: 2:42ccdea3bb16 |
|
568 | 573 | |/ user: Nicolas Dumazet <nicdumz.commits@gmail.com> |
|
569 | 574 | | date: Sat Apr 30 15:24:48 2011 +0200 |
|
570 | 575 | | summary: B |
|
571 | 576 | | |
|
572 | 577 | o changeset: 1:cd010b8cd998 |
|
573 | 578 | parent: -1:000000000000 |
|
574 | 579 | user: Nicolas Dumazet <nicdumz.commits@gmail.com> |
|
575 | 580 | date: Sat Apr 30 15:24:48 2011 +0200 |
|
576 | 581 | summary: A |
|
577 | 582 | |
|
578 | 583 | @ changeset: 0:3903775176ed |
|
579 | 584 | user: test |
|
580 | 585 | date: Thu Jan 01 00:00:00 1970 +0000 |
|
581 | 586 | summary: a |
|
582 | 587 | |
|
583 | 588 | |
|
584 | 589 | $ hg bundle2 --debug --rev '8+7+5+4' ../rev.hg2 |
|
585 | 590 | 4 changesets found |
|
586 | 591 | list of changesets: |
|
587 | 592 | 32af7686d403cf45b5d95f2d70cebea587ac806a |
|
588 | 593 | 9520eea781bcca16c1e15acc0ba14335a0e8e5ba |
|
589 | 594 | eea13746799a9e0bfd88f29d3c2e9dc9389f524f |
|
590 | 595 | 02de42196ebee42ef284b6780a87cdc96e8eaab6 |
|
591 | 596 | start emission of HG20 stream |
|
592 | 597 | bundle parameter: |
|
593 | 598 | start of parts |
|
594 | 599 | bundle part: "changegroup" |
|
595 | 600 | bundling: 1/4 changesets (25.00%) |
|
596 | 601 | bundling: 2/4 changesets (50.00%) |
|
597 | 602 | bundling: 3/4 changesets (75.00%) |
|
598 | 603 | bundling: 4/4 changesets (100.00%) |
|
599 | 604 | bundling: 1/4 manifests (25.00%) |
|
600 | 605 | bundling: 2/4 manifests (50.00%) |
|
601 | 606 | bundling: 3/4 manifests (75.00%) |
|
602 | 607 | bundling: 4/4 manifests (100.00%) |
|
603 | 608 | bundling: D 1/3 files (33.33%) |
|
604 | 609 | bundling: E 2/3 files (66.67%) |
|
605 | 610 | bundling: H 3/3 files (100.00%) |
|
606 | 611 | end of bundle |
|
607 | 612 | |
|
608 | 613 | $ cat ../rev.hg2 |
|
609 | 614 | HG20\x00\x00\x00\x12\x0bchangegroup\x00\x00\x00\x00\x00\x00\x00\x00\x06\x13\x00\x00\x00\xa42\xafv\x86\xd4\x03\xcfE\xb5\xd9_-p\xce\xbe\xa5\x87\xac\x80j_\xdd\xd9\x89W\xc8\xa5JMCm\xfe\x1d\xa9\xd8\x7f!\xa1\xb9{\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x002\xafv\x86\xd4\x03\xcfE\xb5\xd9_-p\xce\xbe\xa5\x87\xac\x80j\x00\x00\x00\x00\x00\x00\x00)\x00\x00\x00)6e1f4c47ecb533ffd0c8e52cdc88afb6cd39e20c (esc) |
|
610 | 615 | \x00\x00\x00f\x00\x00\x00h\x00\x00\x00\x02D (esc) |
|
611 | 616 | \x00\x00\x00i\x00\x00\x00j\x00\x00\x00\x01D\x00\x00\x00\xa4\x95 \xee\xa7\x81\xbc\xca\x16\xc1\xe1Z\xcc\x0b\xa1C5\xa0\xe8\xe5\xba\xcd\x01\x0b\x8c\xd9\x98\xf3\x98\x1aZ\x81\x15\xf9O\x8d\xa4\xabP`\x89\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x95 \xee\xa7\x81\xbc\xca\x16\xc1\xe1Z\xcc\x0b\xa1C5\xa0\xe8\xe5\xba\x00\x00\x00\x00\x00\x00\x00)\x00\x00\x00)4dece9c826f69490507b98c6383a3009b295837d (esc) |
|
612 | 617 | \x00\x00\x00f\x00\x00\x00h\x00\x00\x00\x02E (esc) |
|
613 | 618 | \x00\x00\x00i\x00\x00\x00j\x00\x00\x00\x01E\x00\x00\x00\xa2\xee\xa17Fy\x9a\x9e\x0b\xfd\x88\xf2\x9d<.\x9d\xc98\x9fRO$\xb68|\x8c\x8c\xae7\x17\x88\x80\xf3\xfa\x95\xde\xd3\xcb\x1c\xf7\x85\x95 \xee\xa7\x81\xbc\xca\x16\xc1\xe1Z\xcc\x0b\xa1C5\xa0\xe8\xe5\xba\xee\xa17Fy\x9a\x9e\x0b\xfd\x88\xf2\x9d<.\x9d\xc98\x9fRO\x00\x00\x00\x00\x00\x00\x00)\x00\x00\x00)365b93d57fdf4814e2b5911d6bacff2b12014441 (esc) |
|
614 | 619 | \x00\x00\x00f\x00\x00\x00h\x00\x00\x00\x00\x00\x00\x00i\x00\x00\x00j\x00\x00\x00\x01G\x00\x00\x00\xa4\x02\xdeB\x19n\xbe\xe4.\xf2\x84\xb6x (esc) |
|
615 | 620 | \x87\xcd\xc9n\x8e\xaa\xb6$\xb68|\x8c\x8c\xae7\x17\x88\x80\xf3\xfa\x95\xde\xd3\xcb\x1c\xf7\x85\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x02\xdeB\x19n\xbe\xe4.\xf2\x84\xb6x (esc) |
|
616 | 621 | \x87\xcd\xc9n\x8e\xaa\xb6\x00\x00\x00\x00\x00\x00\x00)\x00\x00\x00)8bee48edc7318541fc0013ee41b089276a8c24bf (esc) |
|
617 | 622 | \x00\x00\x00f\x00\x00\x00f\x00\x00\x00\x02H (esc) |
|
618 | 623 | \x00\x00\x00g\x00\x00\x00h\x00\x00\x00\x01H\x00\x00\x00\x00\x00\x00\x00\x8bn\x1fLG\xec\xb53\xff\xd0\xc8\xe5,\xdc\x88\xaf\xb6\xcd9\xe2\x0cf\xa5\xa0\x18\x17\xfd\xf5#\x9c'8\x02\xb5\xb7a\x8d\x05\x1c\x89\xe4\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x002\xafv\x86\xd4\x03\xcfE\xb5\xd9_-p\xce\xbe\xa5\x87\xac\x80j\x00\x00\x00\x81\x00\x00\x00\x81\x00\x00\x00+D\x00c3f1ca2924c16a19b0656a84900e504e5b0aec2d (esc) |
|
619 | 624 | \x00\x00\x00\x8bM\xec\xe9\xc8&\xf6\x94\x90P{\x98\xc68:0 \xb2\x95\x83}\x00}\x8c\x9d\x88\x84\x13%\xf5\xc6\xb0cq\xb3[N\x8a+\x1a\x83\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x95 \xee\xa7\x81\xbc\xca\x16\xc1\xe1Z\xcc\x0b\xa1C5\xa0\xe8\xe5\xba\x00\x00\x00+\x00\x00\x00\xac\x00\x00\x00+E\x009c6fd0350a6c0d0c49d4a9c5017cf07043f54e58 (esc) |
|
620 | 625 | \x00\x00\x00\x8b6[\x93\xd5\x7f\xdfH\x14\xe2\xb5\x91\x1dk\xac\xff+\x12\x01DA(\xa5\x84\xc6^\xf1!\xf8\x9e\xb6j\xb7\xd0\xbc\x15=\x80\x99\xe7\xceM\xec\xe9\xc8&\xf6\x94\x90P{\x98\xc68:0 \xb2\x95\x83}\xee\xa17Fy\x9a\x9e\x0b\xfd\x88\xf2\x9d<.\x9d\xc98\x9fRO\x00\x00\x00V\x00\x00\x00V\x00\x00\x00+F\x0022bfcfd62a21a3287edbd4d656218d0f525ed76a (esc) |
|
621 | 626 | \x00\x00\x00\x97\x8b\xeeH\xed\xc71\x85A\xfc\x00\x13\xeeA\xb0\x89'j\x8c$\xbf(\xa5\x84\xc6^\xf1!\xf8\x9e\xb6j\xb7\xd0\xbc\x15=\x80\x99\xe7\xce\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x02\xdeB\x19n\xbe\xe4.\xf2\x84\xb6x (esc) |
|
622 | 627 | \x87\xcd\xc9n\x8e\xaa\xb6\x00\x00\x00+\x00\x00\x00V\x00\x00\x00\x00\x00\x00\x00\x81\x00\x00\x00\x81\x00\x00\x00+H\x008500189e74a9e0475e822093bc7db0d631aeb0b4 (esc) |
|
623 | 628 | \x00\x00\x00\x00\x00\x00\x00\x05D\x00\x00\x00b\xc3\xf1\xca)$\xc1j\x19\xb0ej\x84\x90\x0ePN[ (esc) |
|
624 | 629 | \xec-\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x002\xafv\x86\xd4\x03\xcfE\xb5\xd9_-p\xce\xbe\xa5\x87\xac\x80j\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x02D (esc) |
|
625 | 630 | \x00\x00\x00\x00\x00\x00\x00\x05E\x00\x00\x00b\x9co\xd05 (esc) |
|
626 | 631 | l\r (no-eol) (esc) |
|
627 | 632 | \x0cI\xd4\xa9\xc5\x01|\xf0pC\xf5NX\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x95 \xee\xa7\x81\xbc\xca\x16\xc1\xe1Z\xcc\x0b\xa1C5\xa0\xe8\xe5\xba\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x02E (esc) |
|
628 | 633 | \x00\x00\x00\x00\x00\x00\x00\x05H\x00\x00\x00b\x85\x00\x18\x9et\xa9\xe0G^\x82 \x93\xbc}\xb0\xd61\xae\xb0\xb4\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x02\xdeB\x19n\xbe\xe4.\xf2\x84\xb6x (esc) |
|
629 | 634 | \x87\xcd\xc9n\x8e\xaa\xb6\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x02H (esc) |
|
630 | 635 | \x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00 (no-eol) (esc) |
|
631 | 636 | |
|
632 | $ hg unbundle2 ../rev-replay.hg2 < ../rev.hg2 | 

637 | $ hg unbundle2 < ../rev.hg2 | 
|
633 | 638 | adding changesets |
|
634 | 639 | adding manifests |
|
635 | 640 | adding file changes |
|
636 | 641 | added 0 changesets with 0 changes to 3 files |
|
637 | 642 | 0 unread bytes |
|
638 | 643 | addchangegroup return: 1 |
|
639 | 644 | |
|
640 | $ cat ../rev-replay.hg2 | |
|
641 | HG20\x00\x00\x00/\x11reply:changegroup\x00\x00\x00\x00\x00\x02\x0b\x01\x06\x01in-reply-to0return1\x00\x00\x00\x00\x00\x00 (no-eol) (esc) | |
|
645 | with reply | |
|
646 | ||
|
647 | $ hg bundle2 --rev '8+7+5+4' --reply ../rev-rr.hg2 | |
|
648 | $ hg unbundle2 ../rev-reply.hg2 < ../rev-rr.hg2 | |
|
649 | adding changesets | |
|
650 | adding manifests | |
|
651 | adding file changes | |
|
652 | added 0 changesets with 0 changes to 3 files | |
|
653 | 0 unread bytes | |
|
654 | addchangegroup return: 1 | |
|
655 | ||
|
656 | $ cat ../rev-reply.hg2 | |
|
657 | HG20\x00\x00\x00/\x11reply:changegroup\x00\x00\x00\x00\x00\x02\x0b\x01\x06\x01in-reply-to1return1\x00\x00\x00\x00\x00\x00 (no-eol) (esc) | |
|
642 | 658 | |
|
643 | 659 | Real world exchange |
|
644 | 660 | ===================== |
|
645 | 661 | |
|
646 | 662 | |
|
647 | 663 | clone --pull |
|
648 | 664 | |
|
649 | 665 | $ cd .. |
|
650 | 666 | $ hg clone main other --pull --rev 9520eea781bc |
|
651 | 667 | adding changesets |
|
652 | 668 | adding manifests |
|
653 | 669 | adding file changes |
|
654 | 670 | added 2 changesets with 2 changes to 2 files |
|
655 | 671 | updating to branch default |
|
656 | 672 | 2 files updated, 0 files merged, 0 files removed, 0 files unresolved |
|
657 | 673 | $ hg -R other log -G |
|
658 | 674 | @ changeset: 1:9520eea781bc |
|
659 | 675 | | tag: tip |
|
660 | 676 | | user: Nicolas Dumazet <nicdumz.commits@gmail.com> |
|
661 | 677 | | date: Sat Apr 30 15:24:48 2011 +0200 |
|
662 | 678 | | summary: E |
|
663 | 679 | | |
|
664 | 680 | o changeset: 0:cd010b8cd998 |
|
665 | 681 | user: Nicolas Dumazet <nicdumz.commits@gmail.com> |
|
666 | 682 | date: Sat Apr 30 15:24:48 2011 +0200 |
|
667 | 683 | summary: A |
|
668 | 684 | |
|
669 | 685 | |
|
670 | 686 | pull |
|
671 | 687 | |
|
672 | 688 | $ hg -R other pull -r 24b6387c8c8c |
|
673 | 689 | pulling from $TESTTMP/main (glob) |
|
674 | 690 | searching for changes |
|
675 | 691 | adding changesets |
|
676 | 692 | adding manifests |
|
677 | 693 | adding file changes |
|
678 | 694 | added 1 changesets with 1 changes to 1 files (+1 heads) |
|
679 | 695 | (run 'hg heads' to see heads, 'hg merge' to merge) |
|
680 | 696 | |
|
681 | 697 | push |
|
682 | 698 | |
|
683 | 699 | $ hg -R main push other --rev eea13746799a |
|
684 | 700 | pushing to other |
|
685 | 701 | searching for changes |
|
686 | 702 | adding changesets |
|
687 | 703 | adding manifests |
|
688 | 704 | adding file changes |
|
689 | 705 | added 1 changesets with 0 changes to 0 files (-1 heads) |
|
690 | 706 | |
|
691 | 707 | pull over ssh |
|
692 | 708 | |
|
693 | 709 | $ hg -R other pull ssh://user@dummy/main -r 02de42196ebe --traceback |
|
694 | 710 | pulling from ssh://user@dummy/main |
|
695 | 711 | searching for changes |
|
696 | 712 | adding changesets |
|
697 | 713 | adding manifests |
|
698 | 714 | adding file changes |
|
699 | 715 | added 1 changesets with 1 changes to 1 files (+1 heads) |
|
700 | 716 | (run 'hg heads' to see heads, 'hg merge' to merge) |
|
701 | 717 | |
|
702 | 718 | pull over http |
|
703 | 719 | |
|
704 | 720 | $ hg -R main serve -p $HGPORT -d --pid-file=main.pid -E main-error.log |
|
705 | 721 | $ cat main.pid >> $DAEMON_PIDS |
|
706 | 722 | |
|
707 | 723 | $ hg -R other pull http://localhost:$HGPORT/ -r 42ccdea3bb16 |
|
708 | 724 | pulling from http://localhost:$HGPORT/ |
|
709 | 725 | searching for changes |
|
710 | 726 | adding changesets |
|
711 | 727 | adding manifests |
|
712 | 728 | adding file changes |
|
713 | 729 | added 1 changesets with 1 changes to 1 files (+1 heads) |
|
714 | 730 | (run 'hg heads .' to see heads, 'hg merge' to merge) |
|
715 | 731 | $ cat main-error.log |
|
716 | 732 | |
|
717 | 733 | push over ssh |
|
718 | 734 | |
|
719 | 735 | $ hg -R main push ssh://user@dummy/other -r 5fddd98957c8 |
|
720 | 736 | pushing to ssh://user@dummy/other |
|
721 | 737 | searching for changes |
|
722 | 738 | remote: adding changesets |
|
723 | 739 | remote: adding manifests |
|
724 | 740 | remote: adding file changes |
|
725 | 741 | remote: added 1 changesets with 1 changes to 1 files |
|
726 | 742 | |
|
727 | 743 | push over http |
|
728 | 744 | |
|
729 | 745 | $ hg -R other serve -p $HGPORT2 -d --pid-file=other.pid -E other-error.log |
|
730 | 746 | $ cat other.pid >> $DAEMON_PIDS |
|
731 | 747 | |
|
732 | 748 | $ hg -R main push http://localhost:$HGPORT2/ -r 32af7686d403 |
|
733 | 749 | pushing to http://localhost:$HGPORT2/ |
|
734 | 750 | searching for changes |
|
735 | 751 | $ cat other-error.log |
|
736 | 752 | |
|
737 | 753 | Check final content. |
|
738 | 754 | |
|
739 | 755 | $ hg -R other log -G |
|
740 | 756 | o changeset: 7:32af7686d403 |
|
741 | 757 | | tag: tip |
|
742 | 758 | | user: Nicolas Dumazet <nicdumz.commits@gmail.com> |
|
743 | 759 | | date: Sat Apr 30 15:24:48 2011 +0200 |
|
744 | 760 | | summary: D |
|
745 | 761 | | |
|
746 | 762 | o changeset: 6:5fddd98957c8 |
|
747 | 763 | | user: Nicolas Dumazet <nicdumz.commits@gmail.com> |
|
748 | 764 | | date: Sat Apr 30 15:24:48 2011 +0200 |
|
749 | 765 | | summary: C |
|
750 | 766 | | |
|
751 | 767 | o changeset: 5:42ccdea3bb16 |
|
752 | 768 | | parent: 0:cd010b8cd998 |
|
753 | 769 | | user: Nicolas Dumazet <nicdumz.commits@gmail.com> |
|
754 | 770 | | date: Sat Apr 30 15:24:48 2011 +0200 |
|
755 | 771 | | summary: B |
|
756 | 772 | | |
|
757 | 773 | | o changeset: 4:02de42196ebe |
|
758 | 774 | | | parent: 2:24b6387c8c8c |
|
759 | 775 | | | user: Nicolas Dumazet <nicdumz.commits@gmail.com> |
|
760 | 776 | | | date: Sat Apr 30 15:24:48 2011 +0200 |
|
761 | 777 | | | summary: H |
|
762 | 778 | | | |
|
763 | 779 | | | o changeset: 3:eea13746799a |
|
764 | 780 | | |/| parent: 2:24b6387c8c8c |
|
765 | 781 | | | | parent: 1:9520eea781bc |
|
766 | 782 | | | | user: Nicolas Dumazet <nicdumz.commits@gmail.com> |
|
767 | 783 | | | | date: Sat Apr 30 15:24:48 2011 +0200 |
|
768 | 784 | | | | summary: G |
|
769 | 785 | | | | |
|
770 | 786 | | o | changeset: 2:24b6387c8c8c |
|
771 | 787 | |/ / parent: 0:cd010b8cd998 |
|
772 | 788 | | | user: Nicolas Dumazet <nicdumz.commits@gmail.com> |
|
773 | 789 | | | date: Sat Apr 30 15:24:48 2011 +0200 |
|
774 | 790 | | | summary: F |
|
775 | 791 | | | |
|
776 | 792 | | @ changeset: 1:9520eea781bc |
|
777 | 793 | |/ user: Nicolas Dumazet <nicdumz.commits@gmail.com> |
|
778 | 794 | | date: Sat Apr 30 15:24:48 2011 +0200 |
|
779 | 795 | | summary: E |
|
780 | 796 | | |
|
781 | 797 | o changeset: 0:cd010b8cd998 |
|
782 | 798 | user: Nicolas Dumazet <nicdumz.commits@gmail.com> |
|
783 | 799 | date: Sat Apr 30 15:24:48 2011 +0200 |
|
784 | 800 | summary: A |
|
785 | 801 |