@@ -4,10 +4,142 @@
 #
 # This software may be used and distributed according to the terms of the
 # GNU General Public License version 2 or any later version.
-"""
+"""interactive history editing
 
-Inspired by git rebase --interactive.
+With this extension installed, Mercurial gains one new command: histedit. Usage
+is as follows, assuming the following history::
+
+ @  3[tip]   7c2fd3b9020c   2009-04-27 18:04 -0500   durin42
+ |    Add delta
+ |
+ o  2   030b686bedc4   2009-04-27 18:04 -0500   durin42
+ |    Add gamma
+ |
+ o  1   c561b4e977df   2009-04-27 18:04 -0500   durin42
+ |    Add beta
+ |
+ o  0   d8d2fcd0e319   2009-04-27 18:04 -0500   durin42
+      Add alpha
+
+If you were to run ``hg histedit c561b4e977df``, you would see the following
+file open in your editor::
+
+ pick c561b4e977df Add beta
+ pick 030b686bedc4 Add gamma
+ pick 7c2fd3b9020c Add delta
+
+ # Edit history between 633536316234 and 7c2fd3b9020c
+ #
+ # Commands:
+ #  p, pick = use commit
+ #  e, edit = use commit, but stop for amending
+ #  f, fold = use commit, but fold into previous commit
+ #  d, drop = remove commit from history
+ #  m, mess = edit message without changing commit content
+ #
+ 0 files updated, 0 files merged, 0 files removed, 0 files unresolved
+
+In this file, lines beginning with ``#`` are ignored. You must specify a rule
+for each revision in your history. For example, if you had meant to add gamma
+before beta, and then wanted to add delta in the same revision as beta, you
+would reorganize the file to look like this::
+
+ pick 030b686bedc4 Add gamma
+ pick c561b4e977df Add beta
+ fold 7c2fd3b9020c Add delta
+
+ # Edit history between 633536316234 and 7c2fd3b9020c
+ #
+ # Commands:
+ #  p, pick = use commit
+ #  e, edit = use commit, but stop for amending
+ #  f, fold = use commit, but fold into previous commit
+ #  d, drop = remove commit from history
+ #  m, mess = edit message without changing commit content
+ #
+ 0 files updated, 0 files merged, 0 files removed, 0 files unresolved
+
+At which point you close the editor and ``histedit`` starts working. When you
+specify a ``fold`` operation, ``histedit`` will open an editor when it folds
+those revisions together, offering you a chance to clean up the commit message::
+
+ Add beta
+ ***
+ Add delta
+
+Edit the commit message to your liking, then close the editor. For
+this example, let's assume that the commit message was changed to
+``Add beta and delta.`` After histedit has run and had a chance to
+remove any old or temporary revisions it needed, the history looks
+like this::
+
+ @  2[tip]   989b4d060121   2009-04-27 18:04 -0500   durin42
+ |    Add beta and delta.
+ |
+ o  1   081603921c3f   2009-04-27 18:04 -0500   durin42
+ |    Add gamma
+ |
+ o  0   d8d2fcd0e319   2009-04-27 18:04 -0500   durin42
+      Add alpha
+
+Note that ``histedit`` does *not* remove any revisions (even its own temporary
+ones) until after it has completed all the editing operations, so it will
+probably perform several strip operations when it's done. For the above example,
+it had to run strip twice. Strip can be slow depending on a variety of factors,
+so you might need to be a little patient. You can choose to keep the original
+revisions by passing the ``--keep`` flag.
+
+The ``edit`` operation will drop you back to a command prompt,
+allowing you to edit files freely, or even use ``hg record`` to commit
+some changes as a separate commit. When you're done, any remaining
+uncommitted changes will be committed as well. When done, run ``hg
+histedit --continue`` to finish this step. You'll be prompted for a
+new commit message, but the default commit message will be the
+original message for the ``edit`` ed revision.
+
+The ``message`` operation will give you a chance to revise a commit
+message without changing the contents. It's a shortcut for doing
+``edit`` immediately followed by ``hg histedit --continue``.
+
+If ``histedit`` encounters a conflict when moving a revision (while
+handling ``pick`` or ``fold``), it'll stop in a similar manner to
+``edit`` with the difference that it won't prompt you for a commit
+message when done. If you decide at this point that you don't like how
+much work it will be to rearrange history, or that you made a mistake,
+you can use ``hg histedit --abort`` to abandon the new changes you
+have made and return to the state before you attempted to edit your
+history.
+
+If we clone the example repository above and add three more changes, such that
+we have the following history::
+
+ @  6[tip]   038383181893   2009-04-27 18:04 -0500   stefan
+ |    Add theta
+ |
+ o  5   140988835471   2009-04-27 18:04 -0500   stefan
+ |    Add eta
+ |
+ o  4   122930637314   2009-04-27 18:04 -0500   stefan
+ |    Add zeta
+ |
+ o  3   836302820282   2009-04-27 18:04 -0500   stefan
+ |    Add epsilon
+ |
+ o  2   989b4d060121   2009-04-27 18:04 -0500   durin42
+ |    Add beta and delta.
+ |
+ o  1   081603921c3f   2009-04-27 18:04 -0500   durin42
+ |    Add gamma
+ |
+ o  0   d8d2fcd0e319   2009-04-27 18:04 -0500   durin42
+      Add alpha
+
+If you run ``hg histedit --outgoing`` on the clone then it is the same
+as running ``hg histedit 836302820282``. If you plan to push to a
+repository that Mercurial does not detect to be related to the source
+repo, you can add a ``--force`` option.
 """
+
 try:
     import cPickle as pickle
 except ImportError:
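The rule file described in the docstring above is simple to process mechanically. A minimal sketch of a parser for it (an illustrative helper, not the extension's actual code; the function name and action table are assumptions):

```python
# Sketch: turn a histedit rule buffer into (action, hash) pairs.
# Lines starting with '#' and blank lines are ignored, matching the
# format shown in the docstring above.
ACTIONS = {'p': 'pick', 'pick': 'pick', 'e': 'edit', 'edit': 'edit',
           'f': 'fold', 'fold': 'fold', 'd': 'drop', 'drop': 'drop',
           'm': 'mess', 'mess': 'mess'}

def parserules(text):
    """Return a list of (action, hash) tuples from an edited rule file."""
    rules = []
    for line in text.splitlines():
        line = line.strip()
        if not line or line.startswith('#'):
            continue  # comments and blank lines carry no rule
        verb, rest = line.split(None, 1)
        ha = rest.split(None, 1)[0]  # hash; the summary after it is ignored
        if verb not in ACTIONS:
            raise ValueError('unknown action: %s' % verb)
        rules.append((ACTIONS[verb], ha))
    return rules
```

Reordering lines in the buffer reorders history, and changing a verb changes what happens to that revision, which is all the editor interaction amounts to.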
@@ -243,7 +375,7 @@ actiontable = {'p': pick,
                'mess': message,
                }
 def histedit(ui, repo, *parent, **opts):
-    """hg histedit <parent>
+    """interactively edit changeset history
     """
     # TODO only abort if we try and histedit mq patches, not just
     # blanket if mq patches are applied somewhere
@@ -307,7 +439,11 @@ def histedit(ui, repo, *parent, **opts):
             new = repo.commit(text=message, user=oldctx.user(),
                               date=oldctx.date(), extra=oldctx.extra())
 
-        if action in ('f', 'fold'):
+        # If we're resuming a fold and we have new changes, mark the
+        # replacements and finish the fold. If not, it's more like a
+        # drop of the changesets that disappeared, and we can skip
+        # this step.
+        if action in ('f', 'fold') and (new or newchildren):
             if new:
                 tmpnodes.append(new)
             else:
@@ -396,13 +532,11 @@ def histedit(ui, repo, *parent, **opts):
         (parentctx, created_, replaced_, tmpnodes_) = actiontable[action](
             ui, repo, parentctx, ha, opts)
 
-        hexshort = lambda x: node.hex(x)[:12]
-
         if replaced_:
             clen, rlen = len(created_), len(replaced_)
             if clen == rlen == 1:
                 ui.debug('histedit: exact replacement of %s with %s\n' % (
-                    hexshort(replaced_[0]), hexshort(created_[0])))
+                    node.short(replaced_[0]), node.short(created_[0])))
 
                 replacemap[replaced_[0]] = created_[0]
             elif clen > rlen:
@@ -412,7 +546,7 @@ def histedit(ui, repo, *parent, **opts):
                 # TODO synthesize patch names for created patches
                 replacemap[replaced_[0]] = created_[-1]
                 ui.debug('histedit: created many, assuming %s replaced by %s' %
-                         (hexshort(replaced_[0]), hexshort(created_[-1])))
+                         (node.short(replaced_[0]), node.short(created_[-1])))
             elif rlen > clen:
                 if not created_:
                     # This must be a drop. Try and put our metadata on
@@ -420,7 +554,7 @@ def histedit(ui, repo, *parent, **opts):
                     assert rlen == 1
                     r = replaced_[0]
                     ui.debug('histedit: %s seems replaced with nothing, '
-                             'finding a parent\n' % (hexshort(r)))
+                             'finding a parent\n' % (node.short(r)))
                     pctx = repo[r].parents()[0]
                     if pctx.node() in replacemap:
                         ui.debug('histedit: parent is already replaced\n')
@@ -428,12 +562,12 @@ def histedit(ui, repo, *parent, **opts):
                     else:
                         replacemap[r] = pctx.node()
                     ui.debug('histedit: %s best replaced by %s\n' % (
-                        hexshort(r), hexshort(replacemap[r])))
+                        node.short(r), node.short(replacemap[r])))
                 else:
                     assert len(created_) == 1
                     for r in replaced_:
                         ui.debug('histedit: %s replaced by %s\n' % (
-                            hexshort(r), hexshort(created_[0])))
+                            node.short(r), node.short(created_[0])))
                         replacemap[r] = created_[0]
             else:
                 assert False, (
@@ -456,8 +590,8 @@ def histedit(ui, repo, *parent, **opts):
                 return
             while new in replacemap:
                 new = replacemap[new]
-            ui.note(_('histedit: %s to %s\n') % (hexshort(old),
-                                                 hexshort(new)))
+            ui.note(_('histedit: %s to %s\n') % (node.short(old),
+                                                 node.short(new)))
             octx = repo[old]
             marks = octx.bookmarks()
             if marks:
@@ -559,6 +693,6 @@ cmdtable = {
                       'force outgoing even for unrelated repositories')),
           ('r', 'rev', [], _('first revision to be edited')),
          ],
-         __doc__,
+         _("[PARENT]"),
          ),
 }
@@ -48,8 +48,8 @@ class basestore(object):
         '''Put source file into the store under <filename>/<hash>.'''
         raise NotImplementedError('abstract method')
 
-    def exists(self, hash):
-        '''Check to see if the store contains the given hash.'''
+    def exists(self, hashes):
+        '''Check to see if the store contains the given hashes.'''
         raise NotImplementedError('abstract method')
 
     def get(self, files):
@@ -340,7 +340,11 @@ def uploadlfiles(ui, rsrc, rdst, files):
     store = basestore._openstore(rsrc, rdst, put=True)
 
     at = 0
-    files = filter(lambda h: not store.exists(h), files)
+    ui.debug("sending statlfile command for %d largefiles\n" % len(files))
+    retval = store.exists(files)
+    files = filter(lambda h: not retval[h], files)
+    ui.debug("%d largefiles need to be uploaded\n" % len(files))
+
     for hash in files:
         ui.progress(_('uploading largefiles'), at, unit='largefile',
                     total=len(files))
@@ -7,6 +7,7 @@ import os
 import urllib2
 
 from mercurial import error, httprepo, util, wireproto
+from mercurial.wireproto import batchable, future
 from mercurial.i18n import _
 
 import lfutil
@@ -119,15 +120,19 @@ def wirereposetup(ui, repo):
                                   length))
             return (length, stream)
 
+        @batchable
         def statlfile(self, sha):
+            f = future()
+            result = {'sha': sha}
+            yield result, f
             try:
-                return int(self._call("statlfile", sha=sha))
+                yield int(f.value)
             except (ValueError, urllib2.HTTPError):
                 # If the server returns anything but an integer followed by a
                 # newline, newline, it's not speaking our language; if we get
                 # an HTTP error, we can't be sure the largefile is present;
                 # either way, consider it missing.
-                return 2
+                yield 2
 
     repo.__class__ = lfileswirerepository
 
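The hunk above converts `statlfile` to wireproto's batchable-generator protocol: the method first yields its encoded arguments together with a `future`, and, once the batch round-trip has filled `f.value` in, yields the decoded result. A standalone sketch of that protocol (a simplified stand-in, not Mercurial's actual `wireproto` classes; `demobatch` and its `answers` dict are hypothetical):

```python
class future(object):
    """A slot that a batched call fills in later."""
    value = None

def batchable(f):
    """Mark a generator as batchable (stand-in for wireproto's decorator)."""
    f.batchable = True
    return f

class demobatch(object):
    """Hypothetical local stand-in for a remote batch.

    It collects every call's (arguments, future) pair, answers them all
    in one simulated round-trip, then resumes each generator so it can
    decode its own reply.
    """
    def __init__(self, answers):
        self.answers = answers      # simulated server replies, keyed by sha
        self.pending = []

    def call(self, method, **args):
        gen = method(**args)
        params, f = next(gen)       # first yield: request + future
        self.pending.append((params, f, gen))
        return f

    def submit(self):
        results = []
        for params, f, gen in self.pending:
            f.value = self.answers[params['sha']]
            results.append(next(gen))   # second yield: decoded result
        return results

@batchable
def statlfile(sha):
    f = future()
    yield {'sha': sha}, f           # hand the request and future to the batch
    try:
        yield int(f.value)
    except ValueError:
        yield 2                     # garbage reply: report the file as missing

batch = demobatch({'aa': '0', 'bb': 'bogus'})
batch.call(statlfile, sha='aa')
batch.call(statlfile, sha='bb')
print(batch.submit())               # one "round-trip" answers both calls
```

The point of the rewrite in the patch is exactly this shape: many `statlfile` checks can share a single network round-trip instead of paying one per file.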
@@ -10,6 +10,7 @@ import urllib2
 
 from mercurial import util
 from mercurial.i18n import _
+from mercurial.wireproto import remotebatch
 
 import lfutil
 import basestore
@@ -20,8 +21,6 @@ class remotestore(basestore.basestore):
         super(remotestore, self).__init__(ui, repo, url)
 
     def put(self, source, hash):
-        if self._verify(hash):
-            return
         if self.sendfile(source, hash):
             raise util.Abort(
                 _('remotestore: could not put %s to remote store %s')
@@ -29,8 +28,8 @@ class remotestore(basestore.basestore):
         self.ui.debug(
             _('remotestore: put %s to remote store %s') % (source, self.url))
 
-    def exists(self, hash):
-        return self._verify(hash)
+    def exists(self, hashes):
+        return self._verify(hashes)
 
     def sendfile(self, filename, hash):
         self.ui.debug('remotestore: sendfile(%s, %s)\n' % (filename, hash))
@@ -74,8 +73,8 @@ class remotestore(basestore.basestore):
             infile = lfutil.limitreader(infile, length)
         return lfutil.copyandhash(lfutil.blockstream(infile), tmpfile)
 
-    def _verify(self, hash):
-        return not self._stat(hash)
+    def _verify(self, hashes):
+        return self._stat(hashes)
 
     def _verifyfile(self, cctx, cset, contents, standin, verified):
         filename = lfutil.splitstandin(standin)
@@ -104,3 +103,8 @@ class remotestore(basestore.basestore):
         else:
             raise RuntimeError('verify failed: unexpected response from '
                                'statlfile (%r)' % stat)
+
+    def batch(self):
+        '''Support for remote batching.'''
+        return remotebatch(self)
+
@@ -25,5 +25,13 @@ class wirestore(remotestore.remotestore)
     def _get(self, hash):
         return self.remote.getlfile(hash)
 
-    def _stat(self, hash):
-        return self.remote.statlfile(hash)
+    def _stat(self, hashes):
+        batch = self.remote.batch()
+        futures = {}
+        for hash in hashes:
+            futures[hash] = batch.statlfile(hash)
+        batch.submit()
+        retval = {}
+        for hash in hashes:
+            retval[hash] = not futures[hash].value
+        return retval
@@ -2061,17 +2061,22 @@ def debugobsolete(ui, repo, precursor=No
         succs = tuple(bin(succ) for succ in successors)
         l = repo.lock()
         try:
-            repo.obsstore.create(bin(precursor), succs, 0, metadata)
+            tr = repo.transaction('debugobsolete')
+            try:
+                repo.obsstore.create(tr, bin(precursor), succs, 0, metadata)
+                tr.close()
+            finally:
+                tr.release()
         finally:
             l.release()
     else:
-        for m in repo.obsstore:
-            ui.write(hex(m[0]))
-            for repl in m[1]:
+        for m in obsolete.allmarkers(repo):
+            ui.write(hex(m.precnode()))
+            for repl in m.succnodes():
                 ui.write(' ')
                 ui.write(hex(repl))
-            ui.write(' %X ' % m[2])
-            ui.write(m[3])
+            ui.write(' %X ' % m._data[2])
+            ui.write(m.metadata())
             ui.write('\n')
 
 @command('debugpushkey', [], _('REPO NAMESPACE [KEY OLD NEW]'))
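The `tr.close()` inside `try` paired with `tr.release()` in `finally`, as used above, is the standard Mercurial transaction idiom: `release()` after a successful `close()` is a no-op, while `release()` on an unclosed transaction rolls the journaled work back. A toy sketch of why that pairing is safe (not Mercurial's actual `transaction` class; the journal here is purely illustrative):

```python
class transaction(object):
    """Toy model of the close()/release() idiom.

    close() commits and discards the undo journal; release() undoes
    whatever is still journaled, so it only rolls back work that was
    never committed.
    """
    def __init__(self):
        self.entries = []       # (file, offset) journal for rollback
        self.closed = False

    def add(self, name, offset):
        self.entries.append((name, offset))

    def close(self):
        # commit: nothing left to undo
        self.closed = True
        self.entries = []

    def release(self):
        # abort, unless close() already committed
        if not self.closed:
            for name, offset in reversed(self.entries):
                print('would truncate %s back to offset %d' % (name, offset))
            self.entries = []

tr = transaction()
try:
    tr.add('obsstore', 0)       # journal before mutating
    tr.close()                  # success: release() below is a no-op
finally:
    tr.release()                # on an exception, this would roll back
```

Because the error path and the success path both funnel through `release()`, the caller never has to branch on whether the work succeeded.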
@@ -197,10 +197,7 @@ class localrepository(repo.repository):
 
     @storecache('obsstore')
     def obsstore(self):
-        store = obsolete.obsstore()
-        data = self.sopener.tryread('obsstore')
-        if data:
-            store.loadmarkers(data)
+        store = obsolete.obsstore(self.sopener)
         return store
 
     @storecache('00changelog.i')
@@ -994,16 +991,6 @@ class localrepository(repo.repository):
         self.store.write()
         if '_phasecache' in vars(self):
             self._phasecache.write()
-        if 'obsstore' in vars(self) and self.obsstore._new:
-            # XXX: transaction logic should be used here. But for
-            # now rewriting the whole file is good enough.
-            f = self.sopener('obsstore', 'wb', atomictemp=True)
-            try:
-                self.obsstore.flushmarkers(f)
-                f.close()
-            except: # re-raises
-                f.discard()
-                raise
         for k, ce in self._filecache.items():
             if k == 'dirstate':
                 continue
@@ -1622,6 +1609,10 @@ class localrepository(repo.repository):
         return r
 
     def pull(self, remote, heads=None, force=False):
+        # don't open transaction for nothing or you break future useful
+        # rollback call
+        tr = None
+        trname = 'pull\n' + util.hidepassword(remote.url())
         lock = self.lock()
         try:
             tmp = discovery.findcommonincoming(self, remote, heads=heads,
@@ -1632,6 +1623,7 @@ class localrepository(repo.repository):
                 added = []
                 result = 0
             else:
+                tr = self.transaction(trname)
                 if heads is None and list(common) == [nullid]:
                     self.ui.status(_("requesting all changes\n"))
                 elif heads is None and remote.capable('changegroupsubset'):
@@ -1680,9 +1672,15 @@ class localrepository(repo.repository):
 
             remoteobs = remote.listkeys('obsolete')
             if 'dump' in remoteobs:
+                if tr is None:
+                    tr = self.transaction(trname)
                 data = base85.b85decode(remoteobs['dump'])
-                self.obsstore.mergemarkers(data)
+                self.obsstore.mergemarkers(tr, data)
+            if tr is not None:
+                tr.close()
         finally:
+            if tr is not None:
+                tr.release()
             lock.release()
 
         return result
@@ -1823,9 +1821,8 @@ class localrepository(repo.repository):
                         self.ui.warn(_('updating %s to public failed!\n')
                                      % newremotehead)
             if 'obsolete' in self.listkeys('namespaces') and self.obsstore:
-                data = self.obsstore._writemarkers()
-                r = remote.pushkey('obsolete', 'dump', '',
-                                   base85.b85encode(data))
+                data = self.listkeys('obsolete')['dump']
+                r = remote.pushkey('obsolete', 'dump', '', data)
                 if not r:
                     self.ui.warn(_('failed to push obsolete markers!\n'))
             finally:
@@ -156,12 +156,16 @@ class obsstore(object):
     - successors: new -> set(old)
     """
 
-    def __init__(self):
+    def __init__(self, sopener):
         self._all = []
         # new markers to serialize
-        self._new = []
         self.precursors = {}
         self.successors = {}
+        self.sopener = sopener
+        data = sopener.tryread('obsstore')
+        if data:
+            for marker in _readmarkers(data):
+                self._load(marker)
 
     def __iter__(self):
         return iter(self._all)
@@ -169,7 +173,7 @@ class obsstore(object):
     def __nonzero__(self):
         return bool(self._all)
 
-    def create(self, prec, succs=(), flag=0, metadata=None):
+    def create(self, transaction, prec, succs=(), flag=0, metadata=None):
         """obsolete: add a new obsolete marker
 
         * ensuring it is hashable
@@ -184,33 +188,33 @@ class obsstore(object):
             if len(succ) != 20:
                 raise ValueError(succ)
         marker = (str(prec), tuple(succs), int(flag), encodemeta(metadata))
-        self.add(marker)
+        self.add(transaction, marker)
 
-    def add(self, marker):
-        """Add a new marker to the store
-
-        This marker still needs to be written to disk"""
-        self._new.append(marker)
-        self._load(marker)
-
-    def loadmarkers(self, data):
-        """Load all markers in data, mark them as known."""
-        for marker in _readmarkers(data):
+    def add(self, transaction, marker):
+        """Add a new marker to the store"""
+        if marker not in self._all:
+            f = self.sopener('obsstore', 'ab')
+            try:
+                offset = f.tell()
+                transaction.add('obsstore', offset)
+                if offset == 0:
+                    # new file add version header
+                    f.write(_pack('>B', _fmversion))
+                _writemarkers(f.write, [marker])
+            finally:
+                # XXX: f.close() == filecache invalidation == obsstore rebuilt.
+                # call 'filecacheentry.refresh()' here
+                f.close()
             self._load(marker)
 
-    def mergemarkers(self, data):
-        other = set(_readmarkers(data))
+    def mergemarkers(self, transaction, data):
+        other = _readmarkers(data)
         local = set(self._all)
-        new = other - local
+        new = [m for m in other if m not in local]
         for marker in new:
-            self.add(marker)
-
-    def flushmarkers(self, stream):
-        """Write all markers to a stream
-
-        After this operation, "new" markers are considered "known"."""
-        self._writemarkers(stream)
-        self._new[:] = []
+            # XXX: N marker == N x (open, write, close)
+            # we should write them all at once
+            self.add(transaction, marker)
 
     def _load(self, marker):
         self._all.append(marker)
@@ -219,32 +223,25 @@ class obsstore(object):
         for suc in sucs:
             self.successors.setdefault(suc, set()).add(marker)
 
-    def _writemarkers(self, stream=None):
-        # Kept separate from flushmarkers(), it will be reused for
-        # markers exchange.
-        if stream is None:
-            final = []
-            w = final.append
-        else:
-            w = stream.write
-        w(_pack('>B', _fmversion))
-        for marker in self._all:
-            pre, sucs, flags, metadata = marker
-            nbsuc = len(sucs)
-            format = _fmfixed + (_fmnode * nbsuc)
-            data = [nbsuc, len(metadata), flags, pre]
-            data.extend(sucs)
-            w(_pack(format, *data))
-            w(metadata)
-        if stream is None:
-            return ''.join(final)
+def _writemarkers(write, markers):
+    # Kept separate from flushmarkers(), it will be reused for
+    # markers exchange.
+    for marker in markers:
+        pre, sucs, flags, metadata = marker
+        nbsuc = len(sucs)
+        format = _fmfixed + (_fmnode * nbsuc)
+        data = [nbsuc, len(metadata), flags, pre]
+        data.extend(sucs)
+        write(_pack(format, *data))
+        write(metadata)
 
 def listmarkers(repo):
     """List markers over pushkey"""
     if not repo.obsstore:
         return {}
-    data = repo.obsstore._writemarkers()
-    return {'dump': base85.b85encode(data)}
+    data = [_pack('>B', _fmversion)]
+    _writemarkers(data.append, repo.obsstore)
+    return {'dump': base85.b85encode(''.join(data))}
 
 def pushmarker(repo, key, old, new):
     """Push markers over pushkey"""
@@ -257,8 +254,13 @@ def pushmarker(repo, key, old, new):
     data = base85.b85decode(new)
     lock = repo.lock()
     try:
-        repo.obsstore.mergemarkers(data)
-        return 1
+        tr = repo.transaction('pushkey: obsolete markers')
+        try:
+            repo.obsstore.mergemarkers(tr, data)
+            tr.close()
+            return 1
+        finally:
+            tr.release()
     finally:
         lock.release()
 
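The new `obsstore.add()` above appends each marker to a single on-disk file, writes a one-byte version header only when the file is still empty, and journals the pre-append offset with the transaction so rollback can truncate back to it. A self-contained sketch of that append-only layout (the file name, record framing, and version byte here are illustrative, not Mercurial's actual obsstore format):

```python
import os
import struct
import tempfile

_version = 0

def appendrecord(path, payload):
    """Append one length-prefixed record; return the pre-append offset.

    The returned offset is what a transaction would journal: undoing
    this append is just truncating the file back to that offset.
    """
    with open(path, 'ab') as f:
        offset = f.tell()
        if offset == 0:
            # new file: write the format version header first
            f.write(struct.pack('>B', _version))
        f.write(struct.pack('>I', len(payload)))
        f.write(payload)
    return offset

path = os.path.join(tempfile.mkdtemp(), 'store')
appendrecord(path, b'marker-one')
off = appendrecord(path, b'marker-two')
# rolling back the second append == truncating the file back to `off`
```

Appending under a journaled offset is what lets the patch drop the old "rewrite the whole file on every write" path flagged by the XXX comment it removes.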
@@ -75,35 +75,6 @@ def hash(text, p1, p2):
     s.update(text)
     return s.digest()
 
-def compress(text):
-    """ generate a possibly-compressed representation of text """
-    if not text:
-        return ("", text)
-    l = len(text)
-    bin = None
-    if l < 44:
-        pass
-    elif l > 1000000:
-        # zlib makes an internal copy, thus doubling memory usage for
-        # large files, so lets do this in pieces
-        z = zlib.compressobj()
-        p = []
-        pos = 0
-        while pos < l:
-            pos2 = pos + 2**20
-            p.append(z.compress(text[pos:pos2]))
-            pos = pos2
-        p.append(z.flush())
-        if sum(map(len, p)) < l:
-            bin = "".join(p)
-    else:
-        bin = _compress(text)
-    if bin is None or len(bin) > l:
-        if text[0] == '\0':
-            return ("", text)
-        return ('u', text)
-    return ("", bin)
-
 def decompress(bin):
     """ decompress the given input """
     if not bin:
@@ -1017,6 +988,35 @@ class revlog(object):
             dfh.close()
             ifh.close()
 
+    def compress(self, text):
+        """ generate a possibly-compressed representation of text """
+        if not text:
+            return ("", text)
+        l = len(text)
+        bin = None
+        if l < 44:
+            pass
+        elif l > 1000000:
+            # zlib makes an internal copy, thus doubling memory usage for
+            # large files, so lets do this in pieces
+            z = zlib.compressobj()
+            p = []
+            pos = 0
+            while pos < l:
+                pos2 = pos + 2**20
+                p.append(z.compress(text[pos:pos2]))
+                pos = pos2
+            p.append(z.flush())
+            if sum(map(len, p)) < l:
+                bin = "".join(p)
+        else:
+            bin = _compress(text)
+        if bin is None or len(bin) > l:
+            if text[0] == '\0':
+                return ("", text)
+            return ('u', text)
+        return ("", bin)
+
     def _addrevision(self, node, text, transaction, link, p1, p2,
                      cachedelta, ifh, dfh):
         """internal function to add revisions to the log
@@ -1049,7 +1049,7 @@ class revlog(object):
             t = buildtext()
             ptext = self.revision(self.node(rev))
             delta = mdiff.textdiff(ptext, t)
-            data = compress(delta)
+            data = self.compress(delta)
             l = len(data[1]) + len(data[0])
            if basecache[0] == rev:
                 chainbase = basecache[1]
@@ -1094,7 +1094,7 @@ class revlog(object):
         textlen = len(text)
         if d is None or dist > textlen * 2:
             text = buildtext()
-            data = compress(text)
+            data = self.compress(text)
             l = len(data[1]) + len(data[0])
             base = chainbase = curr
 
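Both call sites above feed either a delta against a parent or the full text into `self.compress()`; the surrounding `_addrevision` code picks between them with the `d is None or dist > textlen * 2` cutoff, so that once a delta chain costs more than twice the full text, a fresh snapshot is stored and read cost stays bounded. A minimal sketch of that decision rule; the names and the way I account chain cost here are illustrative, not revlog's API:

```python
def choose_storage(fulltext, delta, chaindist):
    """Return ('delta', data) or ('full', data), following the shape of
    the `d is None or dist > textlen * 2` cutoff in _addrevision.
    chaindist is the accumulated size of the existing delta chain."""
    textlen = len(fulltext)
    if delta is None or chaindist + len(delta) > textlen * 2:
        return ('full', fulltext)   # snapshot: cap read-time chain cost
    return ('delta', delta)         # delta: cheap to store, longer chain
```

A short delta against a long text stays a delta, but once the chain it extends has grown past twice the text size, a snapshot is cheaper to reconstruct.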
@@ -108,3 +108,78 @@ post-fold manifest
   f
 
   $ cd ..
+
+folding and creating no new change doesn't break:
+  $ mkdir fold-to-empty-test
+  $ cd fold-to-empty-test
+  $ hg init
+  $ printf "1\n2\n3\n" > file
+  $ hg add file
+  $ hg commit -m '1+2+3'
+  $ echo 4 >> file
+  $ hg commit -m '+4'
+  $ echo 5 >> file
+  $ hg commit -m '+5'
+  $ echo 6 >> file
+  $ hg commit -m '+6'
+  $ hg log --graph
+  @  changeset:   3:251d831eeec5
+  |  tag:         tip
+  |  user:        test
+  |  date:        Thu Jan 01 00:00:00 1970 +0000
+  |  summary:     +6
+  |
+  o  changeset:   2:888f9082bf99
+  |  user:        test
+  |  date:        Thu Jan 01 00:00:00 1970 +0000
+  |  summary:     +5
+  |
+  o  changeset:   1:617f94f13c0f
+  |  user:        test
+  |  date:        Thu Jan 01 00:00:00 1970 +0000
+  |  summary:     +4
+  |
+  o  changeset:   0:0189ba417d34
+     user:        test
+     date:        Thu Jan 01 00:00:00 1970 +0000
+     summary:     1+2+3
+
+
+  $ cat > editor.py <<EOF
+  > import re, sys
+  > rules = sys.argv[1]
+  > data = open(rules).read()
+  > data = re.sub(r'pick ([0-9a-f]{12} 2 \+5)', r'drop \1', data)
+  > data = re.sub(r'pick ([0-9a-f]{12} 2 \+6)', r'fold \1', data)
+  > open(rules, 'w').write(data)
+  > EOF
+
+  $ HGEDITOR='python editor.py' hg histedit 1
+  1 files updated, 0 files merged, 0 files removed, 0 files unresolved
+  patching file file
+  Hunk #1 FAILED at 2
+  1 out of 1 hunks FAILED -- saving rejects to file file.rej
+  abort: Fix up the change and run hg histedit --continue
+  [255]
+There were conflicts, but we'll continue without resolving. This
+should effectively drop the changes from +6.
+  $ hg status
+  ? editor.py
+  ? file.rej
+  $ hg histedit --continue
+  0 files updated, 0 files merged, 0 files removed, 0 files unresolved
+  saved backup bundle to $TESTTMP/*-backup.hg (glob)
+  $ hg log --graph
+  @  changeset:   1:617f94f13c0f
+  |  tag:         tip
+  |  user:        test
+  |  date:        Thu Jan 01 00:00:00 1970 +0000
+  |  summary:     +4
+  |
+  o  changeset:   0:0189ba417d34
+     user:        test
+     date:        Thu Jan 01 00:00:00 1970 +0000
+     summary:     1+2+3
+
+
+  $ cd ..
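The editor.py trick above, pointing HGEDITOR at a script that rewrites the rules file in place, is how the test drives histedit non-interactively. The same rewrite, isolated as a runnable function so the two substitutions are easy to check on their own (the sample rules lines below are synthetic, not the hashes from the test):

```python
import re


def rewrite_rules(text):
    """Apply the same transformation the test's editor.py performs on the
    histedit rules file: selected 'pick' lines become 'drop' and 'fold'."""
    text = re.sub(r'pick ([0-9a-f]{12} 2 \+5)', r'drop \1', text)
    text = re.sub(r'pick ([0-9a-f]{12} 2 \+6)', r'fold \1', text)
    return text
```

Because histedit re-invokes `$HGEDITOR <rulesfile>` and reads the file back, any program that edits the file and exits 0 works as the "editor", which is what makes histedit scriptable in the test suite.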
@@ -158,9 +158,28 @@ Try to pull markers
   added 6 changesets with 6 changes to 6 files (+3 heads)
   (run 'hg heads' to see heads, 'hg merge' to merge)
   $ hg debugobsolete
+  245bde4270cd1072a27757984f9cda8ba26f08ca cdbce2fbb16313928851e97e0d85413f3f7eb77f 0 {'date': '56 12', 'user': 'test'}
+  cdbce2fbb16313928851e97e0d85413f3f7eb77f ca819180edb99ed25ceafb3e9584ac287e240b00 0 {'date': '1337 0', 'user': 'test'}
   ca819180edb99ed25ceafb3e9584ac287e240b00 1337133713371337133713371337133713371337 0 {'date': '1338 0', 'user': 'test'}
+  1337133713371337133713371337133713371337 5601fb93a350734d935195fee37f4054c529ff39 0 {'date': '1339 0', 'user': 'test'}
+
+Rollback//Transaction support
+
+  $ hg debugobsolete -d '1340 0' aaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaa bbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbb
+  $ hg debugobsolete
+  245bde4270cd1072a27757984f9cda8ba26f08ca cdbce2fbb16313928851e97e0d85413f3f7eb77f 0 {'date': '56 12', 'user': 'test'}
   cdbce2fbb16313928851e97e0d85413f3f7eb77f ca819180edb99ed25ceafb3e9584ac287e240b00 0 {'date': '1337 0', 'user': 'test'}
+  ca819180edb99ed25ceafb3e9584ac287e240b00 1337133713371337133713371337133713371337 0 {'date': '1338 0', 'user': 'test'}
+  1337133713371337133713371337133713371337 5601fb93a350734d935195fee37f4054c529ff39 0 {'date': '1339 0', 'user': 'test'}
+  aaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaa bbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbb 0 {'date': '1340 0', 'user': 'test'}
+  $ hg rollback -n
+  repository tip rolled back to revision 5 (undo debugobsolete)
+  $ hg rollback
+  repository tip rolled back to revision 5 (undo debugobsolete)
+  $ hg debugobsolete
   245bde4270cd1072a27757984f9cda8ba26f08ca cdbce2fbb16313928851e97e0d85413f3f7eb77f 0 {'date': '56 12', 'user': 'test'}
+  cdbce2fbb16313928851e97e0d85413f3f7eb77f ca819180edb99ed25ceafb3e9584ac287e240b00 0 {'date': '1337 0', 'user': 'test'}
+  ca819180edb99ed25ceafb3e9584ac287e240b00 1337133713371337133713371337133713371337 0 {'date': '1338 0', 'user': 'test'}
   1337133713371337133713371337133713371337 5601fb93a350734d935195fee37f4054c529ff39 0 {'date': '1339 0', 'user': 'test'}
 
   $ cd ..
@@ -176,9 +195,9 @@ Try to pull markers
   adding file changes
   added 6 changesets with 6 changes to 6 files (+3 heads)
   $ hg -R tmpd debugobsolete
-  ca819180edb99ed25ceafb3e9584ac287e240b00 1337133713371337133713371337133713371337 0 {'date': '1338 0', 'user': 'test'}
+  245bde4270cd1072a27757984f9cda8ba26f08ca cdbce2fbb16313928851e97e0d85413f3f7eb77f 0 {'date': '56 12', 'user': 'test'}
   cdbce2fbb16313928851e97e0d85413f3f7eb77f ca819180edb99ed25ceafb3e9584ac287e240b00 0 {'date': '1337 0', 'user': 'test'}
-  245bde4270cd1072a27757984f9cda8ba26f08ca cdbce2fbb16313928851e97e0d85413f3f7eb77f 0 {'date': '56 12', 'user': 'test'}
+  ca819180edb99ed25ceafb3e9584ac287e240b00 1337133713371337133713371337133713371337 0 {'date': '1338 0', 'user': 'test'}
   1337133713371337133713371337133713371337 5601fb93a350734d935195fee37f4054c529ff39 0 {'date': '1339 0', 'user': 'test'}
 
 
@@ -200,9 +219,9 @@ On pull
   (run 'hg heads' to see heads, 'hg merge' to merge)
   $ hg debugobsolete
   2448244824482448244824482448244824482448 1339133913391339133913391339133913391339 0 {'date': '1339 0', 'user': 'test'}
-  ca819180edb99ed25ceafb3e9584ac287e240b00 1337133713371337133713371337133713371337 0 {'date': '1338 0', 'user': 'test'}
+  245bde4270cd1072a27757984f9cda8ba26f08ca cdbce2fbb16313928851e97e0d85413f3f7eb77f 0 {'date': '56 12', 'user': 'test'}
   cdbce2fbb16313928851e97e0d85413f3f7eb77f ca819180edb99ed25ceafb3e9584ac287e240b00 0 {'date': '1337 0', 'user': 'test'}
-  245bde4270cd1072a27757984f9cda8ba26f08ca cdbce2fbb16313928851e97e0d85413f3f7eb77f 0 {'date': '56 12', 'user': 'test'}
+  ca819180edb99ed25ceafb3e9584ac287e240b00 1337133713371337133713371337133713371337 0 {'date': '1338 0', 'user': 'test'}
   1337133713371337133713371337133713371337 5601fb93a350734d935195fee37f4054c529ff39 0 {'date': '1339 0', 'user': 'test'}
 
 On push
@@ -213,8 +232,8 @@ On push
   no changes found
   [1]
   $ hg -R ../tmpc debugobsolete
-  ca819180edb99ed25ceafb3e9584ac287e240b00 1337133713371337133713371337133713371337 0 {'date': '1338 0', 'user': 'test'}
+  245bde4270cd1072a27757984f9cda8ba26f08ca cdbce2fbb16313928851e97e0d85413f3f7eb77f 0 {'date': '56 12', 'user': 'test'}
   cdbce2fbb16313928851e97e0d85413f3f7eb77f ca819180edb99ed25ceafb3e9584ac287e240b00 0 {'date': '1337 0', 'user': 'test'}
-  245bde4270cd1072a27757984f9cda8ba26f08ca cdbce2fbb16313928851e97e0d85413f3f7eb77f 0 {'date': '56 12', 'user': 'test'}
+  ca819180edb99ed25ceafb3e9584ac287e240b00 1337133713371337133713371337133713371337 0 {'date': '1338 0', 'user': 'test'}
   1337133713371337133713371337133713371337 5601fb93a350734d935195fee37f4054c529ff39 0 {'date': '1339 0', 'user': 'test'}
   2448244824482448244824482448244824482448 1339133913391339133913391339133913391339 0 {'date': '1339 0', 'user': 'test'}