revset: replace extpredicate by revsetpredicate of registrar...
FUJIWARA Katsunori
r28394:dcb4209b default
@@ -1,132 +1,134 @@
1 # Copyright 2009-2010 Gregory P. Ward
1 # Copyright 2009-2010 Gregory P. Ward
2 # Copyright 2009-2010 Intelerad Medical Systems Incorporated
2 # Copyright 2009-2010 Intelerad Medical Systems Incorporated
3 # Copyright 2010-2011 Fog Creek Software
3 # Copyright 2010-2011 Fog Creek Software
4 # Copyright 2010-2011 Unity Technologies
4 # Copyright 2010-2011 Unity Technologies
5 #
5 #
6 # This software may be used and distributed according to the terms of the
6 # This software may be used and distributed according to the terms of the
7 # GNU General Public License version 2 or any later version.
7 # GNU General Public License version 2 or any later version.
8
8
9 '''track large binary files
9 '''track large binary files
10
10
11 Large binary files tend to be not very compressible, not very
11 Large binary files tend to be not very compressible, not very
12 diffable, and not at all mergeable. Such files are not handled
12 diffable, and not at all mergeable. Such files are not handled
13 efficiently by Mercurial's storage format (revlog), which is based on
13 efficiently by Mercurial's storage format (revlog), which is based on
14 compressed binary deltas; storing large binary files as regular
14 compressed binary deltas; storing large binary files as regular
15 Mercurial files wastes bandwidth and disk space and increases
15 Mercurial files wastes bandwidth and disk space and increases
16 Mercurial's memory usage. The largefiles extension addresses these
16 Mercurial's memory usage. The largefiles extension addresses these
17 problems by adding a centralized client-server layer on top of
17 problems by adding a centralized client-server layer on top of
18 Mercurial: largefiles live in a *central store* out on the network
18 Mercurial: largefiles live in a *central store* out on the network
19 somewhere, and you only fetch the revisions that you need when you
19 somewhere, and you only fetch the revisions that you need when you
20 need them.
20 need them.
21
21
22 largefiles works by maintaining a "standin file" in .hglf/ for each
22 largefiles works by maintaining a "standin file" in .hglf/ for each
23 largefile. The standins are small (41 bytes: an SHA-1 hash plus
23 largefile. The standins are small (41 bytes: an SHA-1 hash plus
24 newline) and are tracked by Mercurial. Largefile revisions are
24 newline) and are tracked by Mercurial. Largefile revisions are
25 identified by the SHA-1 hash of their contents, which is written to
25 identified by the SHA-1 hash of their contents, which is written to
26 the standin. largefiles uses that revision ID to get/put largefile
26 the standin. largefiles uses that revision ID to get/put largefile
27 revisions from/to the central store. This saves both disk space and
27 revisions from/to the central store. This saves both disk space and
28 bandwidth, since you don't need to retrieve all historical revisions
28 bandwidth, since you don't need to retrieve all historical revisions
29 of large files when you clone or pull.
29 of large files when you clone or pull.
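A standin's 41-byte content is simply the SHA-1 hex digest of the largefile
followed by a newline. A minimal sketch of that identity, using plain
hashlib rather than the extension's own helpers::

    import hashlib

    def standin_content(data):
        # 40 hex characters plus a trailing newline -> 41 bytes
        return hashlib.sha1(data).hexdigest() + '\n'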
30
30
31 To start a new repository or add new large binary files, just add
31 To start a new repository or add new large binary files, just add
32 --large to your :hg:`add` command. For example::
32 --large to your :hg:`add` command. For example::
33
33
34 $ dd if=/dev/urandom of=randomdata count=2000
34 $ dd if=/dev/urandom of=randomdata count=2000
35 $ hg add --large randomdata
35 $ hg add --large randomdata
36 $ hg commit -m 'add randomdata as a largefile'
36 $ hg commit -m 'add randomdata as a largefile'
37
37
38 When you push a changeset that adds/modifies largefiles to a remote
38 When you push a changeset that adds/modifies largefiles to a remote
39 repository, its largefile revisions will be uploaded along with it.
39 repository, its largefile revisions will be uploaded along with it.
40 Note that the remote Mercurial must also have the largefiles extension
40 Note that the remote Mercurial must also have the largefiles extension
41 enabled for this to work.
41 enabled for this to work.
42
42
43 When you pull a changeset that affects largefiles from a remote
43 When you pull a changeset that affects largefiles from a remote
44 repository, the largefiles for the changeset will by default not be
44 repository, the largefiles for the changeset will by default not be
45 pulled down. However, when you update to such a revision, any
45 pulled down. However, when you update to such a revision, any
46 largefiles needed by that revision are downloaded and cached (if
46 largefiles needed by that revision are downloaded and cached (if
47 they have never been downloaded before). One way to pull largefiles
47 they have never been downloaded before). One way to pull largefiles
48 when pulling is thus to use --update, which will update your working
48 when pulling is thus to use --update, which will update your working
49 copy to the latest pulled revision (and thereby downloading any new
49 copy to the latest pulled revision (and thereby downloading any new
50 largefiles).
50 largefiles).
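For example, to pull and immediately update, thereby fetching any largefiles
the updated-to revision needs::

    $ hg pull --update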
51
51
52 If you want to pull largefiles you don't need for update yet, then
52 If you want to pull largefiles you don't need for update yet, then
53 you can use pull with the `--lfrev` option or the :hg:`lfpull` command.
53 you can use pull with the `--lfrev` option or the :hg:`lfpull` command.
54
54
55 If you know you are pulling from a non-default location and want to
55 If you know you are pulling from a non-default location and want to
56 download all the largefiles that correspond to the new changesets at
56 download all the largefiles that correspond to the new changesets at
57 the same time, then you can pull with `--lfrev "pulled()"`.
57 the same time, then you can pull with `--lfrev "pulled()"`.
58
58
59 If you just want to ensure that you will have the largefiles needed to
59 If you just want to ensure that you will have the largefiles needed to
60 merge or rebase with new heads that you are pulling, then you can pull
60 merge or rebase with new heads that you are pulling, then you can pull
61 with `--lfrev "head(pulled())"` flag to pre-emptively download any largefiles
61 with `--lfrev "head(pulled())"` flag to pre-emptively download any largefiles
62 that are new in the heads you are pulling.
62 that are new in the heads you are pulling.
63
63
64 Keep in mind that network access may now be required to update to
64 Keep in mind that network access may now be required to update to
65 changesets that you have not previously updated to. The nature of the
65 changesets that you have not previously updated to. The nature of the
66 largefiles extension means that updating is no longer guaranteed to
66 largefiles extension means that updating is no longer guaranteed to
67 be a local-only operation.
67 be a local-only operation.
68
68
69 If you already have large files tracked by Mercurial without the
69 If you already have large files tracked by Mercurial without the
70 largefiles extension, you will need to convert your repository in
70 largefiles extension, you will need to convert your repository in
71 order to benefit from largefiles. This is done with the
71 order to benefit from largefiles. This is done with the
72 :hg:`lfconvert` command::
72 :hg:`lfconvert` command::
73
73
74 $ hg lfconvert --size 10 oldrepo newrepo
74 $ hg lfconvert --size 10 oldrepo newrepo
75
75
76 In repositories that already have largefiles in them, any new file
76 In repositories that already have largefiles in them, any new file
77 over 10MB will automatically be added as a largefile. To change this
77 over 10MB will automatically be added as a largefile. To change this
78 threshold, set ``largefiles.minsize`` in your Mercurial config file
78 threshold, set ``largefiles.minsize`` in your Mercurial config file
79 to the minimum size in megabytes to track as a largefile, or use the
79 to the minimum size in megabytes to track as a largefile, or use the
80 --lfsize option to the add command (also in megabytes)::
80 --lfsize option to the add command (also in megabytes)::
81
81
82 [largefiles]
82 [largefiles]
83 minsize = 2
83 minsize = 2
84
84
85 $ hg add --lfsize 2
85 $ hg add --lfsize 2
86
86
87 The ``largefiles.patterns`` config option allows you to specify a list
87 The ``largefiles.patterns`` config option allows you to specify a list
88 of filename patterns (see :hg:`help patterns`) that should always be
88 of filename patterns (see :hg:`help patterns`) that should always be
89 tracked as largefiles::
89 tracked as largefiles::
90
90
91 [largefiles]
91 [largefiles]
92 patterns =
92 patterns =
93 *.jpg
93 *.jpg
94 re:.*\.(png|bmp)$
94 re:.*\.(png|bmp)$
95 library.zip
95 library.zip
96 content/audio/*
96 content/audio/*
97
97
98 Files that match one of these patterns will be added as largefiles
98 Files that match one of these patterns will be added as largefiles
99 regardless of their size.
99 regardless of their size.
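For repositories that already contain largefiles, the two settings amount to
a simple per-file rule at add time. A rough sketch follows; real matching
uses Mercurial's pattern engine (which also understands prefixes such as
``re:``), so treat this only as an approximation::

    import fnmatch

    def would_be_largefile(path, size_bytes, minsize_mb, patterns):
        # size threshold in megabytes, or any configured pattern match
        if minsize_mb and size_bytes >= minsize_mb * 1024 * 1024:
            return True
        return any(fnmatch.fnmatch(path, pat) for pat in patterns)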
100
100
101 The ``largefiles.minsize`` and ``largefiles.patterns`` config options
101 The ``largefiles.minsize`` and ``largefiles.patterns`` config options
102 will be ignored for any repositories not already containing a
102 will be ignored for any repositories not already containing a
103 largefile. To add the first largefile to a repository, you must
103 largefile. To add the first largefile to a repository, you must
104 explicitly do so with the --large flag passed to the :hg:`add`
104 explicitly do so with the --large flag passed to the :hg:`add`
105 command.
105 command.
106 '''
106 '''
107
107
108 from mercurial import hg, localrepo
108 from mercurial import hg, localrepo
109
109
110 import lfcommands
110 import lfcommands
111 import proto
111 import proto
112 import reposetup
112 import reposetup
113 import uisetup as uisetupmod
113 import uisetup as uisetupmod
114 import overrides
114
115
115 # Note for extension authors: ONLY specify testedwith = 'internal' for
116 # Note for extension authors: ONLY specify testedwith = 'internal' for
116 # extensions which SHIP WITH MERCURIAL. Non-mainline extensions should
117 # extensions which SHIP WITH MERCURIAL. Non-mainline extensions should
117 # be specifying the version(s) of Mercurial they are tested with, or
118 # be specifying the version(s) of Mercurial they are tested with, or
118 # leave the attribute unspecified.
119 # leave the attribute unspecified.
119 testedwith = 'internal'
120 testedwith = 'internal'
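# For a third-party extension, the note above would instead be satisfied by
# listing the versions actually tested (the values here are placeholders
# only):
#
#     testedwith = '3.7.3 3.8'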
120
121
121 reposetup = reposetup.reposetup
122 reposetup = reposetup.reposetup
122
123
123 def featuresetup(ui, supported):
124 def featuresetup(ui, supported):
124 # don't die on seeing a repo with the largefiles requirement
125 # don't die on seeing a repo with the largefiles requirement
125 supported |= set(['largefiles'])
126 supported |= set(['largefiles'])
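# Hedged aside: once a repository actually contains a largefile it gains a
# 'largefiles' entry in .hg/requires; featuresetup() above whitelists that
# requirement so core Mercurial will keep opening the repo, i.e. roughly:
#
#     'largefiles' in repo.requirements    # True for a largefiles repo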
126
127
127 def uisetup(ui):
128 def uisetup(ui):
128 localrepo.localrepository.featuresetupfuncs.add(featuresetup)
129 localrepo.localrepository.featuresetupfuncs.add(featuresetup)
129 hg.wirepeersetupfuncs.append(proto.wirereposetup)
130 hg.wirepeersetupfuncs.append(proto.wirereposetup)
130 uisetupmod.uisetup(ui)
131 uisetupmod.uisetup(ui)
131
132
132 cmdtable = lfcommands.cmdtable
133 cmdtable = lfcommands.cmdtable
134 revsetpredicate = overrides.revsetpredicate
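# The line above re-exports the predicate table that overrides.py now builds
# with the registrar API (the subject of this changeset). A hedged sketch of
# that registration pattern, using the extension's pulled() predicate as the
# example; the body below is illustrative, not the extension's exact code:
#
#     from mercurial import registrar
#
#     revsetpredicate = registrar.revsetpredicate()
#
#     @revsetpredicate('pulled()')
#     def pulledrevsetsymbol(repo, subset, x):
#         """changesets that were just pulled (illustrative body)"""
#         firstpulled = getattr(repo, 'firstpulled', len(repo))
#         return subset.filter(lambda r: r >= firstpulled)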
@@ -1,1404 +1,1404 @@
1 # Copyright 2009-2010 Gregory P. Ward
1 # Copyright 2009-2010 Gregory P. Ward
2 # Copyright 2009-2010 Intelerad Medical Systems Incorporated
2 # Copyright 2009-2010 Intelerad Medical Systems Incorporated
3 # Copyright 2010-2011 Fog Creek Software
3 # Copyright 2010-2011 Fog Creek Software
4 # Copyright 2010-2011 Unity Technologies
4 # Copyright 2010-2011 Unity Technologies
5 #
5 #
6 # This software may be used and distributed according to the terms of the
6 # This software may be used and distributed according to the terms of the
7 # GNU General Public License version 2 or any later version.
7 # GNU General Public License version 2 or any later version.
8
8
9 '''Overridden Mercurial commands and functions for the largefiles extension'''
9 '''Overridden Mercurial commands and functions for the largefiles extension'''
10
10
11 import os
11 import os
12 import copy
12 import copy
13
13
14 from mercurial import hg, util, cmdutil, scmutil, match as match_, \
14 from mercurial import hg, util, cmdutil, scmutil, match as match_, \
15 archival, pathutil, revset, error
15 archival, pathutil, registrar, revset, error
16 from mercurial.i18n import _
16 from mercurial.i18n import _
17
17
18 import lfutil
18 import lfutil
19 import lfcommands
19 import lfcommands
20 import basestore
20 import basestore
21
21
22 # -- Utility functions: commonly/repeatedly needed functionality ---------------
22 # -- Utility functions: commonly/repeatedly needed functionality ---------------
23
23
24 def composelargefilematcher(match, manifest):
24 def composelargefilematcher(match, manifest):
25 '''create a matcher that matches only the largefiles in the original
25 '''create a matcher that matches only the largefiles in the original
26 matcher'''
26 matcher'''
27 m = copy.copy(match)
27 m = copy.copy(match)
28 lfile = lambda f: lfutil.standin(f) in manifest
28 lfile = lambda f: lfutil.standin(f) in manifest
29 m._files = filter(lfile, m._files)
29 m._files = filter(lfile, m._files)
30 m._fileroots = set(m._files)
30 m._fileroots = set(m._files)
31 m._always = False
31 m._always = False
32 origmatchfn = m.matchfn
32 origmatchfn = m.matchfn
33 m.matchfn = lambda f: lfile(f) and origmatchfn(f)
33 m.matchfn = lambda f: lfile(f) and origmatchfn(f)
34 return m
34 return m
35
35
36 def composenormalfilematcher(match, manifest, exclude=None):
36 def composenormalfilematcher(match, manifest, exclude=None):
37 excluded = set()
37 excluded = set()
38 if exclude is not None:
38 if exclude is not None:
39 excluded.update(exclude)
39 excluded.update(exclude)
40
40
41 m = copy.copy(match)
41 m = copy.copy(match)
42 notlfile = lambda f: not (lfutil.isstandin(f) or lfutil.standin(f) in
42 notlfile = lambda f: not (lfutil.isstandin(f) or lfutil.standin(f) in
43 manifest or f in excluded)
43 manifest or f in excluded)
44 m._files = filter(notlfile, m._files)
44 m._files = filter(notlfile, m._files)
45 m._fileroots = set(m._files)
45 m._fileroots = set(m._files)
46 m._always = False
46 m._always = False
47 origmatchfn = m.matchfn
47 origmatchfn = m.matchfn
48 m.matchfn = lambda f: notlfile(f) and origmatchfn(f)
48 m.matchfn = lambda f: notlfile(f) and origmatchfn(f)
49 return m
49 return m
50
50
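# A hedged usage sketch of the two composers above, mirroring how the command
# wrappers later in this file call them: one user-supplied matcher is split
# into a largefile-only view and a normal-file-only view.
#
#     manifest = repo[None].manifest()
#     lfmatcher = composelargefilematcher(matcher, manifest)
#     normalmatcher = composenormalfilematcher(matcher, manifest)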
51 def installnormalfilesmatchfn(manifest):
51 def installnormalfilesmatchfn(manifest):
52 '''installmatchfn with a matchfn that ignores all largefiles'''
52 '''installmatchfn with a matchfn that ignores all largefiles'''
53 def overridematch(ctx, pats=(), opts=None, globbed=False,
53 def overridematch(ctx, pats=(), opts=None, globbed=False,
54 default='relpath', badfn=None):
54 default='relpath', badfn=None):
55 if opts is None:
55 if opts is None:
56 opts = {}
56 opts = {}
57 match = oldmatch(ctx, pats, opts, globbed, default, badfn=badfn)
57 match = oldmatch(ctx, pats, opts, globbed, default, badfn=badfn)
58 return composenormalfilematcher(match, manifest)
58 return composenormalfilematcher(match, manifest)
59 oldmatch = installmatchfn(overridematch)
59 oldmatch = installmatchfn(overridematch)
60
60
61 def installmatchfn(f):
61 def installmatchfn(f):
62 '''monkey patch the scmutil module with a custom match function.
62 '''monkey patch the scmutil module with a custom match function.
63 Warning: it is monkey patching the _module_ on runtime! Not thread safe!'''
63 Warning: it is monkey patching the _module_ on runtime! Not thread safe!'''
64 oldmatch = scmutil.match
64 oldmatch = scmutil.match
65 setattr(f, 'oldmatch', oldmatch)
65 setattr(f, 'oldmatch', oldmatch)
66 scmutil.match = f
66 scmutil.match = f
67 return oldmatch
67 return oldmatch
68
68
69 def restorematchfn():
69 def restorematchfn():
70 '''restores scmutil.match to what it was before installmatchfn
70 '''restores scmutil.match to what it was before installmatchfn
71 was called. no-op if scmutil.match is its original function.
71 was called. no-op if scmutil.match is its original function.
72
72
73 Note that n calls to installmatchfn will require n calls to
73 Note that n calls to installmatchfn will require n calls to
74 restore the original matchfn.'''
74 restore the original matchfn.'''
75 scmutil.match = getattr(scmutil.match, 'oldmatch')
75 scmutil.match = getattr(scmutil.match, 'oldmatch')
76
76
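# installmatchfn()/restorematchfn() are meant to bracket a call into the
# original (unwrapped) command, roughly as the copy wrapper later in this
# file does; a hedged sketch of the intended pairing:
#
#     installnormalfilesmatchfn(repo[None].manifest())
#     try:
#         result = orig(ui, repo, pats, opts)
#     finally:
#         restorematchfn()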
77 def installmatchandpatsfn(f):
77 def installmatchandpatsfn(f):
78 oldmatchandpats = scmutil.matchandpats
78 oldmatchandpats = scmutil.matchandpats
79 setattr(f, 'oldmatchandpats', oldmatchandpats)
79 setattr(f, 'oldmatchandpats', oldmatchandpats)
80 scmutil.matchandpats = f
80 scmutil.matchandpats = f
81 return oldmatchandpats
81 return oldmatchandpats
82
82
83 def restorematchandpatsfn():
83 def restorematchandpatsfn():
84 '''restores scmutil.matchandpats to what it was before
84 '''restores scmutil.matchandpats to what it was before
85 installmatchandpatsfn was called. No-op if scmutil.matchandpats
85 installmatchandpatsfn was called. No-op if scmutil.matchandpats
86 is its original function.
86 is its original function.
87
87
88 Note that n calls to installmatchandpatsfn will require n calls
88 Note that n calls to installmatchandpatsfn will require n calls
89 to restore the original matchfn.'''
89 to restore the original matchfn.'''
90 scmutil.matchandpats = getattr(scmutil.matchandpats, 'oldmatchandpats',
90 scmutil.matchandpats = getattr(scmutil.matchandpats, 'oldmatchandpats',
91 scmutil.matchandpats)
91 scmutil.matchandpats)
92
92
93 def addlargefiles(ui, repo, isaddremove, matcher, **opts):
93 def addlargefiles(ui, repo, isaddremove, matcher, **opts):
94 large = opts.get('large')
94 large = opts.get('large')
95 lfsize = lfutil.getminsize(
95 lfsize = lfutil.getminsize(
96 ui, lfutil.islfilesrepo(repo), opts.get('lfsize'))
96 ui, lfutil.islfilesrepo(repo), opts.get('lfsize'))
97
97
98 lfmatcher = None
98 lfmatcher = None
99 if lfutil.islfilesrepo(repo):
99 if lfutil.islfilesrepo(repo):
100 lfpats = ui.configlist(lfutil.longname, 'patterns', default=[])
100 lfpats = ui.configlist(lfutil.longname, 'patterns', default=[])
101 if lfpats:
101 if lfpats:
102 lfmatcher = match_.match(repo.root, '', list(lfpats))
102 lfmatcher = match_.match(repo.root, '', list(lfpats))
103
103
104 lfnames = []
104 lfnames = []
105 m = matcher
105 m = matcher
106
106
107 wctx = repo[None]
107 wctx = repo[None]
108 for f in repo.walk(match_.badmatch(m, lambda x, y: None)):
108 for f in repo.walk(match_.badmatch(m, lambda x, y: None)):
109 exact = m.exact(f)
109 exact = m.exact(f)
110 lfile = lfutil.standin(f) in wctx
110 lfile = lfutil.standin(f) in wctx
111 nfile = f in wctx
111 nfile = f in wctx
112 exists = lfile or nfile
112 exists = lfile or nfile
113
113
114 # addremove in core gets fancy with the name, add doesn't
114 # addremove in core gets fancy with the name, add doesn't
115 if isaddremove:
115 if isaddremove:
116 name = m.uipath(f)
116 name = m.uipath(f)
117 else:
117 else:
118 name = m.rel(f)
118 name = m.rel(f)
119
119
120 # Don't warn the user when they attempt to add a normal tracked file.
120 # Don't warn the user when they attempt to add a normal tracked file.
121 # The normal add code will do that for us.
121 # The normal add code will do that for us.
122 if exact and exists:
122 if exact and exists:
123 if lfile:
123 if lfile:
124 ui.warn(_('%s already a largefile\n') % name)
124 ui.warn(_('%s already a largefile\n') % name)
125 continue
125 continue
126
126
127 if (exact or not exists) and not lfutil.isstandin(f):
127 if (exact or not exists) and not lfutil.isstandin(f):
128 # In case the file was removed previously, but not committed
128 # In case the file was removed previously, but not committed
129 # (issue3507)
129 # (issue3507)
130 if not repo.wvfs.exists(f):
130 if not repo.wvfs.exists(f):
131 continue
131 continue
132
132
133 abovemin = (lfsize and
133 abovemin = (lfsize and
134 repo.wvfs.lstat(f).st_size >= lfsize * 1024 * 1024)
134 repo.wvfs.lstat(f).st_size >= lfsize * 1024 * 1024)
135 if large or abovemin or (lfmatcher and lfmatcher(f)):
135 if large or abovemin or (lfmatcher and lfmatcher(f)):
136 lfnames.append(f)
136 lfnames.append(f)
137 if ui.verbose or not exact:
137 if ui.verbose or not exact:
138 ui.status(_('adding %s as a largefile\n') % name)
138 ui.status(_('adding %s as a largefile\n') % name)
139
139
140 bad = []
140 bad = []
141
141
142 # Need to lock, otherwise there could be a race condition between
142 # Need to lock, otherwise there could be a race condition between
143 # when standins are created and added to the repo.
143 # when standins are created and added to the repo.
144 with repo.wlock():
144 with repo.wlock():
145 if not opts.get('dry_run'):
145 if not opts.get('dry_run'):
146 standins = []
146 standins = []
147 lfdirstate = lfutil.openlfdirstate(ui, repo)
147 lfdirstate = lfutil.openlfdirstate(ui, repo)
148 for f in lfnames:
148 for f in lfnames:
149 standinname = lfutil.standin(f)
149 standinname = lfutil.standin(f)
150 lfutil.writestandin(repo, standinname, hash='',
150 lfutil.writestandin(repo, standinname, hash='',
151 executable=lfutil.getexecutable(repo.wjoin(f)))
151 executable=lfutil.getexecutable(repo.wjoin(f)))
152 standins.append(standinname)
152 standins.append(standinname)
153 if lfdirstate[f] == 'r':
153 if lfdirstate[f] == 'r':
154 lfdirstate.normallookup(f)
154 lfdirstate.normallookup(f)
155 else:
155 else:
156 lfdirstate.add(f)
156 lfdirstate.add(f)
157 lfdirstate.write()
157 lfdirstate.write()
158 bad += [lfutil.splitstandin(f)
158 bad += [lfutil.splitstandin(f)
159 for f in repo[None].add(standins)
159 for f in repo[None].add(standins)
160 if f in m.files()]
160 if f in m.files()]
161
161
162 added = [f for f in lfnames if f not in bad]
162 added = [f for f in lfnames if f not in bad]
163 return added, bad
163 return added, bad
164
164
165 def removelargefiles(ui, repo, isaddremove, matcher, **opts):
165 def removelargefiles(ui, repo, isaddremove, matcher, **opts):
166 after = opts.get('after')
166 after = opts.get('after')
167 m = composelargefilematcher(matcher, repo[None].manifest())
167 m = composelargefilematcher(matcher, repo[None].manifest())
168 try:
168 try:
169 repo.lfstatus = True
169 repo.lfstatus = True
170 s = repo.status(match=m, clean=not isaddremove)
170 s = repo.status(match=m, clean=not isaddremove)
171 finally:
171 finally:
172 repo.lfstatus = False
172 repo.lfstatus = False
173 manifest = repo[None].manifest()
173 manifest = repo[None].manifest()
174 modified, added, deleted, clean = [[f for f in list
174 modified, added, deleted, clean = [[f for f in list
175 if lfutil.standin(f) in manifest]
175 if lfutil.standin(f) in manifest]
176 for list in (s.modified, s.added,
176 for list in (s.modified, s.added,
177 s.deleted, s.clean)]
177 s.deleted, s.clean)]
178
178
179 def warn(files, msg):
179 def warn(files, msg):
180 for f in files:
180 for f in files:
181 ui.warn(msg % m.rel(f))
181 ui.warn(msg % m.rel(f))
182 return int(len(files) > 0)
182 return int(len(files) > 0)
183
183
184 result = 0
184 result = 0
185
185
186 if after:
186 if after:
187 remove = deleted
187 remove = deleted
188 result = warn(modified + added + clean,
188 result = warn(modified + added + clean,
189 _('not removing %s: file still exists\n'))
189 _('not removing %s: file still exists\n'))
190 else:
190 else:
191 remove = deleted + clean
191 remove = deleted + clean
192 result = warn(modified, _('not removing %s: file is modified (use -f'
192 result = warn(modified, _('not removing %s: file is modified (use -f'
193 ' to force removal)\n'))
193 ' to force removal)\n'))
194 result = warn(added, _('not removing %s: file has been marked for add'
194 result = warn(added, _('not removing %s: file has been marked for add'
195 ' (use forget to undo)\n')) or result
195 ' (use forget to undo)\n')) or result
196
196
197 # Need to lock because standin files are deleted then removed from the
197 # Need to lock because standin files are deleted then removed from the
198 # repository and we could race in-between.
198 # repository and we could race in-between.
199 with repo.wlock():
199 with repo.wlock():
200 lfdirstate = lfutil.openlfdirstate(ui, repo)
200 lfdirstate = lfutil.openlfdirstate(ui, repo)
201 for f in sorted(remove):
201 for f in sorted(remove):
202 if ui.verbose or not m.exact(f):
202 if ui.verbose or not m.exact(f):
203 # addremove in core gets fancy with the name, remove doesn't
203 # addremove in core gets fancy with the name, remove doesn't
204 if isaddremove:
204 if isaddremove:
205 name = m.uipath(f)
205 name = m.uipath(f)
206 else:
206 else:
207 name = m.rel(f)
207 name = m.rel(f)
208 ui.status(_('removing %s\n') % name)
208 ui.status(_('removing %s\n') % name)
209
209
210 if not opts.get('dry_run'):
210 if not opts.get('dry_run'):
211 if not after:
211 if not after:
212 util.unlinkpath(repo.wjoin(f), ignoremissing=True)
212 util.unlinkpath(repo.wjoin(f), ignoremissing=True)
213
213
214 if opts.get('dry_run'):
214 if opts.get('dry_run'):
215 return result
215 return result
216
216
217 remove = [lfutil.standin(f) for f in remove]
217 remove = [lfutil.standin(f) for f in remove]
218 # If this is being called by addremove, let the original addremove
218 # If this is being called by addremove, let the original addremove
219 # function handle this.
219 # function handle this.
220 if not isaddremove:
220 if not isaddremove:
221 for f in remove:
221 for f in remove:
222 util.unlinkpath(repo.wjoin(f), ignoremissing=True)
222 util.unlinkpath(repo.wjoin(f), ignoremissing=True)
223 repo[None].forget(remove)
223 repo[None].forget(remove)
224
224
225 for f in remove:
225 for f in remove:
226 lfutil.synclfdirstate(repo, lfdirstate, lfutil.splitstandin(f),
226 lfutil.synclfdirstate(repo, lfdirstate, lfutil.splitstandin(f),
227 False)
227 False)
228
228
229 lfdirstate.write()
229 lfdirstate.write()
230
230
231 return result
231 return result
232
232
233 # For overriding mercurial.hgweb.webcommands so that largefiles will
233 # For overriding mercurial.hgweb.webcommands so that largefiles will
234 # appear at their right place in the manifests.
234 # appear at their right place in the manifests.
235 def decodepath(orig, path):
235 def decodepath(orig, path):
236 return lfutil.splitstandin(path) or path
236 return lfutil.splitstandin(path) or path
237
237
238 # -- Wrappers: modify existing commands --------------------------------
238 # -- Wrappers: modify existing commands --------------------------------
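# Each wrapper below receives the original implementation as its first
# argument ('orig'). A hedged sketch of how such a wrapper is typically
# installed from the extension's uisetup (the exact wiring lives in
# uisetup.py, which is not part of this diff):
#
#     from mercurial import commands, extensions
#
#     entry = extensions.wrapcommand(commands.table, 'add', overrideadd)
#     entry[1].append(('', 'large', None, _('add as largefile')))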
239
239
240 def overrideadd(orig, ui, repo, *pats, **opts):
240 def overrideadd(orig, ui, repo, *pats, **opts):
241 if opts.get('normal') and opts.get('large'):
241 if opts.get('normal') and opts.get('large'):
242 raise error.Abort(_('--normal cannot be used with --large'))
242 raise error.Abort(_('--normal cannot be used with --large'))
243 return orig(ui, repo, *pats, **opts)
243 return orig(ui, repo, *pats, **opts)
244
244
245 def cmdutiladd(orig, ui, repo, matcher, prefix, explicitonly, **opts):
245 def cmdutiladd(orig, ui, repo, matcher, prefix, explicitonly, **opts):
246 # The --normal flag short circuits this override
246 # The --normal flag short circuits this override
247 if opts.get('normal'):
247 if opts.get('normal'):
248 return orig(ui, repo, matcher, prefix, explicitonly, **opts)
248 return orig(ui, repo, matcher, prefix, explicitonly, **opts)
249
249
250 ladded, lbad = addlargefiles(ui, repo, False, matcher, **opts)
250 ladded, lbad = addlargefiles(ui, repo, False, matcher, **opts)
251 normalmatcher = composenormalfilematcher(matcher, repo[None].manifest(),
251 normalmatcher = composenormalfilematcher(matcher, repo[None].manifest(),
252 ladded)
252 ladded)
253 bad = orig(ui, repo, normalmatcher, prefix, explicitonly, **opts)
253 bad = orig(ui, repo, normalmatcher, prefix, explicitonly, **opts)
254
254
255 bad.extend(f for f in lbad)
255 bad.extend(f for f in lbad)
256 return bad
256 return bad
257
257
258 def cmdutilremove(orig, ui, repo, matcher, prefix, after, force, subrepos):
258 def cmdutilremove(orig, ui, repo, matcher, prefix, after, force, subrepos):
259 normalmatcher = composenormalfilematcher(matcher, repo[None].manifest())
259 normalmatcher = composenormalfilematcher(matcher, repo[None].manifest())
260 result = orig(ui, repo, normalmatcher, prefix, after, force, subrepos)
260 result = orig(ui, repo, normalmatcher, prefix, after, force, subrepos)
261 return removelargefiles(ui, repo, False, matcher, after=after,
261 return removelargefiles(ui, repo, False, matcher, after=after,
262 force=force) or result
262 force=force) or result
263
263
264 def overridestatusfn(orig, repo, rev2, **opts):
264 def overridestatusfn(orig, repo, rev2, **opts):
265 try:
265 try:
266 repo._repo.lfstatus = True
266 repo._repo.lfstatus = True
267 return orig(repo, rev2, **opts)
267 return orig(repo, rev2, **opts)
268 finally:
268 finally:
269 repo._repo.lfstatus = False
269 repo._repo.lfstatus = False
270
270
271 def overridestatus(orig, ui, repo, *pats, **opts):
271 def overridestatus(orig, ui, repo, *pats, **opts):
272 try:
272 try:
273 repo.lfstatus = True
273 repo.lfstatus = True
274 return orig(ui, repo, *pats, **opts)
274 return orig(ui, repo, *pats, **opts)
275 finally:
275 finally:
276 repo.lfstatus = False
276 repo.lfstatus = False
277
277
278 def overridedirty(orig, repo, ignoreupdate=False):
278 def overridedirty(orig, repo, ignoreupdate=False):
279 try:
279 try:
280 repo._repo.lfstatus = True
280 repo._repo.lfstatus = True
281 return orig(repo, ignoreupdate)
281 return orig(repo, ignoreupdate)
282 finally:
282 finally:
283 repo._repo.lfstatus = False
283 repo._repo.lfstatus = False
284
284
285 def overridelog(orig, ui, repo, *pats, **opts):
285 def overridelog(orig, ui, repo, *pats, **opts):
286 def overridematchandpats(ctx, pats=(), opts=None, globbed=False,
286 def overridematchandpats(ctx, pats=(), opts=None, globbed=False,
287 default='relpath', badfn=None):
287 default='relpath', badfn=None):
288 """Matcher that merges root directory with .hglf, suitable for log.
288 """Matcher that merges root directory with .hglf, suitable for log.
289 It is still possible to match .hglf directly.
289 It is still possible to match .hglf directly.
290 For any listed files run log on the standin too.
290 For any listed files run log on the standin too.
291 matchfn tries both the given filename and with .hglf stripped.
291 matchfn tries both the given filename and with .hglf stripped.
292 """
292 """
293 if opts is None:
293 if opts is None:
294 opts = {}
294 opts = {}
295 matchandpats = oldmatchandpats(ctx, pats, opts, globbed, default,
295 matchandpats = oldmatchandpats(ctx, pats, opts, globbed, default,
296 badfn=badfn)
296 badfn=badfn)
297 m, p = copy.copy(matchandpats)
297 m, p = copy.copy(matchandpats)
298
298
299 if m.always():
299 if m.always():
300 # We want to match everything anyway, so there's no benefit trying
300 # We want to match everything anyway, so there's no benefit trying
301 # to add standins.
301 # to add standins.
302 return matchandpats
302 return matchandpats
303
303
304 pats = set(p)
304 pats = set(p)
305
305
306 def fixpats(pat, tostandin=lfutil.standin):
306 def fixpats(pat, tostandin=lfutil.standin):
307 if pat.startswith('set:'):
307 if pat.startswith('set:'):
308 return pat
308 return pat
309
309
310 kindpat = match_._patsplit(pat, None)
310 kindpat = match_._patsplit(pat, None)
311
311
312 if kindpat[0] is not None:
312 if kindpat[0] is not None:
313 return kindpat[0] + ':' + tostandin(kindpat[1])
313 return kindpat[0] + ':' + tostandin(kindpat[1])
314 return tostandin(kindpat[1])
314 return tostandin(kindpat[1])
315
315
316 if m._cwd:
316 if m._cwd:
317 hglf = lfutil.shortname
317 hglf = lfutil.shortname
318 back = util.pconvert(m.rel(hglf)[:-len(hglf)])
318 back = util.pconvert(m.rel(hglf)[:-len(hglf)])
319
319
320 def tostandin(f):
320 def tostandin(f):
321 # The file may already be a standin, so truncate the back
321 # The file may already be a standin, so truncate the back
322 # prefix and test before mangling it. This avoids turning
322 # prefix and test before mangling it. This avoids turning
323 # 'glob:../.hglf/foo*' into 'glob:../.hglf/../.hglf/foo*'.
323 # 'glob:../.hglf/foo*' into 'glob:../.hglf/../.hglf/foo*'.
324 if f.startswith(back) and lfutil.splitstandin(f[len(back):]):
324 if f.startswith(back) and lfutil.splitstandin(f[len(back):]):
325 return f
325 return f
326
326
327 # An absolute path is from outside the repo, so truncate the
327 # An absolute path is from outside the repo, so truncate the
328 # path to the root before building the standin. Otherwise cwd
328 # path to the root before building the standin. Otherwise cwd
329 # is somewhere in the repo, relative to root, and needs to be
329 # is somewhere in the repo, relative to root, and needs to be
330 # prepended before building the standin.
330 # prepended before building the standin.
331 if os.path.isabs(m._cwd):
331 if os.path.isabs(m._cwd):
332 f = f[len(back):]
332 f = f[len(back):]
333 else:
333 else:
334 f = m._cwd + '/' + f
334 f = m._cwd + '/' + f
335 return back + lfutil.standin(f)
335 return back + lfutil.standin(f)
336
336
337 pats.update(fixpats(f, tostandin) for f in p)
337 pats.update(fixpats(f, tostandin) for f in p)
338 else:
338 else:
339 def tostandin(f):
339 def tostandin(f):
340 if lfutil.splitstandin(f):
340 if lfutil.splitstandin(f):
341 return f
341 return f
342 return lfutil.standin(f)
342 return lfutil.standin(f)
343 pats.update(fixpats(f, tostandin) for f in p)
343 pats.update(fixpats(f, tostandin) for f in p)
344
344
345 for i in range(0, len(m._files)):
345 for i in range(0, len(m._files)):
346 # Don't add '.hglf' to m.files, since that is already covered by '.'
346 # Don't add '.hglf' to m.files, since that is already covered by '.'
347 if m._files[i] == '.':
347 if m._files[i] == '.':
348 continue
348 continue
349 standin = lfutil.standin(m._files[i])
349 standin = lfutil.standin(m._files[i])
350 # If the "standin" is a directory, append instead of replace to
350 # If the "standin" is a directory, append instead of replace to
351 # support naming a directory on the command line with only
351 # support naming a directory on the command line with only
352 # largefiles. The original directory is kept to support normal
352 # largefiles. The original directory is kept to support normal
353 # files.
353 # files.
354 if standin in repo[ctx.node()]:
354 if standin in repo[ctx.node()]:
355 m._files[i] = standin
355 m._files[i] = standin
356 elif m._files[i] not in repo[ctx.node()] \
356 elif m._files[i] not in repo[ctx.node()] \
357 and repo.wvfs.isdir(standin):
357 and repo.wvfs.isdir(standin):
358 m._files.append(standin)
358 m._files.append(standin)
359
359
360 m._fileroots = set(m._files)
360 m._fileroots = set(m._files)
361 m._always = False
361 m._always = False
362 origmatchfn = m.matchfn
362 origmatchfn = m.matchfn
363 def lfmatchfn(f):
363 def lfmatchfn(f):
364 lf = lfutil.splitstandin(f)
364 lf = lfutil.splitstandin(f)
365 if lf is not None and origmatchfn(lf):
365 if lf is not None and origmatchfn(lf):
366 return True
366 return True
367 r = origmatchfn(f)
367 r = origmatchfn(f)
368 return r
368 return r
369 m.matchfn = lfmatchfn
369 m.matchfn = lfmatchfn
370
370
371 ui.debug('updated patterns: %s\n' % sorted(pats))
371 ui.debug('updated patterns: %s\n' % sorted(pats))
372 return m, pats
372 return m, pats
373
373
374 # For hg log --patch, the match object is used in two different senses:
374 # For hg log --patch, the match object is used in two different senses:
375 # (1) to determine what revisions should be printed out, and
375 # (1) to determine what revisions should be printed out, and
376 # (2) to determine what files to print out diffs for.
376 # (2) to determine what files to print out diffs for.
377 # The magic matchandpats override should be used for case (1) but not for
377 # The magic matchandpats override should be used for case (1) but not for
378 # case (2).
378 # case (2).
379 def overridemakelogfilematcher(repo, pats, opts, badfn=None):
379 def overridemakelogfilematcher(repo, pats, opts, badfn=None):
380 wctx = repo[None]
380 wctx = repo[None]
381 match, pats = oldmatchandpats(wctx, pats, opts, badfn=badfn)
381 match, pats = oldmatchandpats(wctx, pats, opts, badfn=badfn)
382 return lambda rev: match
382 return lambda rev: match
383
383
384 oldmatchandpats = installmatchandpatsfn(overridematchandpats)
384 oldmatchandpats = installmatchandpatsfn(overridematchandpats)
385 oldmakelogfilematcher = cmdutil._makenofollowlogfilematcher
385 oldmakelogfilematcher = cmdutil._makenofollowlogfilematcher
386 setattr(cmdutil, '_makenofollowlogfilematcher', overridemakelogfilematcher)
386 setattr(cmdutil, '_makenofollowlogfilematcher', overridemakelogfilematcher)
387
387
388 try:
388 try:
389 return orig(ui, repo, *pats, **opts)
389 return orig(ui, repo, *pats, **opts)
390 finally:
390 finally:
391 restorematchandpatsfn()
391 restorematchandpatsfn()
392 setattr(cmdutil, '_makenofollowlogfilematcher', oldmakelogfilematcher)
392 setattr(cmdutil, '_makenofollowlogfilematcher', oldmakelogfilematcher)
393
393
394 def overrideverify(orig, ui, repo, *pats, **opts):
394 def overrideverify(orig, ui, repo, *pats, **opts):
395 large = opts.pop('large', False)
395 large = opts.pop('large', False)
396 all = opts.pop('lfa', False)
396 all = opts.pop('lfa', False)
397 contents = opts.pop('lfc', False)
397 contents = opts.pop('lfc', False)
398
398
399 result = orig(ui, repo, *pats, **opts)
399 result = orig(ui, repo, *pats, **opts)
400 if large or all or contents:
400 if large or all or contents:
401 result = result or lfcommands.verifylfiles(ui, repo, all, contents)
401 result = result or lfcommands.verifylfiles(ui, repo, all, contents)
402 return result
402 return result
403
403
404 def overridedebugstate(orig, ui, repo, *pats, **opts):
404 def overridedebugstate(orig, ui, repo, *pats, **opts):
405 large = opts.pop('large', False)
405 large = opts.pop('large', False)
406 if large:
406 if large:
407 class fakerepo(object):
407 class fakerepo(object):
408 dirstate = lfutil.openlfdirstate(ui, repo)
408 dirstate = lfutil.openlfdirstate(ui, repo)
409 orig(ui, fakerepo, *pats, **opts)
409 orig(ui, fakerepo, *pats, **opts)
410 else:
410 else:
411 orig(ui, repo, *pats, **opts)
411 orig(ui, repo, *pats, **opts)
412
412
413 # Before starting the manifest merge, merge.updates will call
413 # Before starting the manifest merge, merge.updates will call
414 # _checkunknownfile to check if there are any files in the merged-in
414 # _checkunknownfile to check if there are any files in the merged-in
415 # changeset that collide with unknown files in the working copy.
415 # changeset that collide with unknown files in the working copy.
416 #
416 #
417 # The largefiles are seen as unknown, so this prevents us from merging
417 # The largefiles are seen as unknown, so this prevents us from merging
418 # in a file 'foo' if we already have a largefile with the same name.
418 # in a file 'foo' if we already have a largefile with the same name.
419 #
419 #
420 # The overridden function filters the unknown files by removing any
420 # The overridden function filters the unknown files by removing any
421 # largefiles. This makes the merge proceed and we can then handle this
421 # largefiles. This makes the merge proceed and we can then handle this
422 # case further in the overridden calculateupdates function below.
422 # case further in the overridden calculateupdates function below.
423 def overridecheckunknownfile(origfn, repo, wctx, mctx, f, f2=None):
423 def overridecheckunknownfile(origfn, repo, wctx, mctx, f, f2=None):
424 if lfutil.standin(repo.dirstate.normalize(f)) in wctx:
424 if lfutil.standin(repo.dirstate.normalize(f)) in wctx:
425 return False
425 return False
426 return origfn(repo, wctx, mctx, f, f2)
426 return origfn(repo, wctx, mctx, f, f2)
427
427
428 # The manifest merge handles conflicts on the manifest level. We want
428 # The manifest merge handles conflicts on the manifest level. We want
429 # to handle changes in largefile-ness of files at this level too.
429 # to handle changes in largefile-ness of files at this level too.
430 #
430 #
431 # The strategy is to run the original calculateupdates and then process
431 # The strategy is to run the original calculateupdates and then process
432 # the action list it outputs. There are two cases we need to deal with:
432 # the action list it outputs. There are two cases we need to deal with:
433 #
433 #
434 # 1. Normal file in p1, largefile in p2. Here the largefile is
434 # 1. Normal file in p1, largefile in p2. Here the largefile is
435 # detected via its standin file, which will enter the working copy
435 # detected via its standin file, which will enter the working copy
436 # with a "get" action. It is not "merge" since the standin is all
436 # with a "get" action. It is not "merge" since the standin is all
437 # Mercurial is concerned with at this level -- the link to the
437 # Mercurial is concerned with at this level -- the link to the
438 # existing normal file is not relevant here.
438 # existing normal file is not relevant here.
439 #
439 #
440 # 2. Largefile in p1, normal file in p2. Here we get a "merge" action
440 # 2. Largefile in p1, normal file in p2. Here we get a "merge" action
441 # since the largefile will be present in the working copy and
441 # since the largefile will be present in the working copy and
442 # different from the normal file in p2. Mercurial therefore
442 # different from the normal file in p2. Mercurial therefore
443 # triggers a merge action.
443 # triggers a merge action.
444 #
444 #
445 # In both cases, we prompt the user and emit new actions to either
445 # In both cases, we prompt the user and emit new actions to either
446 # remove the standin (if the normal file was kept) or to remove the
446 # remove the standin (if the normal file was kept) or to remove the
447 # normal file and get the standin (if the largefile was kept). The
447 # normal file and get the standin (if the largefile was kept). The
448 # default prompt answer is to use the largefile version since it was
448 # default prompt answer is to use the largefile version since it was
449 # presumably changed on purpose.
449 # presumably changed on purpose.
450 #
450 #
451 # Finally, the merge.applyupdates function will then take care of
451 # Finally, the merge.applyupdates function will then take care of
452 # writing the files into the working copy and lfcommands.updatelfiles
452 # writing the files into the working copy and lfcommands.updatelfiles
453 # will update the largefiles.
453 # will update the largefiles.
454 def overridecalculateupdates(origfn, repo, p1, p2, pas, branchmerge, force,
454 def overridecalculateupdates(origfn, repo, p1, p2, pas, branchmerge, force,
455 acceptremote, *args, **kwargs):
455 acceptremote, *args, **kwargs):
456 overwrite = force and not branchmerge
456 overwrite = force and not branchmerge
457 actions, diverge, renamedelete = origfn(
457 actions, diverge, renamedelete = origfn(
458 repo, p1, p2, pas, branchmerge, force, acceptremote, *args, **kwargs)
458 repo, p1, p2, pas, branchmerge, force, acceptremote, *args, **kwargs)
459
459
460 if overwrite:
460 if overwrite:
461 return actions, diverge, renamedelete
461 return actions, diverge, renamedelete
462
462
463 # Convert to dictionary with filename as key and action as value.
463 # Convert to dictionary with filename as key and action as value.
464 lfiles = set()
464 lfiles = set()
465 for f in actions:
465 for f in actions:
466 splitstandin = lfutil.splitstandin(f)
466 splitstandin = lfutil.splitstandin(f)
467 if splitstandin in p1:
467 if splitstandin in p1:
468 lfiles.add(splitstandin)
468 lfiles.add(splitstandin)
469 elif lfutil.standin(f) in p1:
469 elif lfutil.standin(f) in p1:
470 lfiles.add(f)
470 lfiles.add(f)
471
471
472 for lfile in sorted(lfiles):
472 for lfile in sorted(lfiles):
473 standin = lfutil.standin(lfile)
473 standin = lfutil.standin(lfile)
474 (lm, largs, lmsg) = actions.get(lfile, (None, None, None))
474 (lm, largs, lmsg) = actions.get(lfile, (None, None, None))
475 (sm, sargs, smsg) = actions.get(standin, (None, None, None))
475 (sm, sargs, smsg) = actions.get(standin, (None, None, None))
476 if sm in ('g', 'dc') and lm != 'r':
476 if sm in ('g', 'dc') and lm != 'r':
477 if sm == 'dc':
477 if sm == 'dc':
478 f1, f2, fa, move, anc = sargs
478 f1, f2, fa, move, anc = sargs
479 sargs = (p2[f2].flags(), False)
479 sargs = (p2[f2].flags(), False)
480 # Case 1: normal file in the working copy, largefile in
480 # Case 1: normal file in the working copy, largefile in
481 # the second parent
481 # the second parent
482 usermsg = _('remote turned local normal file %s into a largefile\n'
482 usermsg = _('remote turned local normal file %s into a largefile\n'
483 'use (l)argefile or keep (n)ormal file?'
483 'use (l)argefile or keep (n)ormal file?'
484 '$$ &Largefile $$ &Normal file') % lfile
484 '$$ &Largefile $$ &Normal file') % lfile
485 if repo.ui.promptchoice(usermsg, 0) == 0: # pick remote largefile
485 if repo.ui.promptchoice(usermsg, 0) == 0: # pick remote largefile
486 actions[lfile] = ('r', None, 'replaced by standin')
486 actions[lfile] = ('r', None, 'replaced by standin')
487 actions[standin] = ('g', sargs, 'replaces standin')
487 actions[standin] = ('g', sargs, 'replaces standin')
488 else: # keep local normal file
488 else: # keep local normal file
489 actions[lfile] = ('k', None, 'replaces standin')
489 actions[lfile] = ('k', None, 'replaces standin')
490 if branchmerge:
490 if branchmerge:
491 actions[standin] = ('k', None, 'replaced by non-standin')
491 actions[standin] = ('k', None, 'replaced by non-standin')
492 else:
492 else:
493 actions[standin] = ('r', None, 'replaced by non-standin')
493 actions[standin] = ('r', None, 'replaced by non-standin')
494 elif lm in ('g', 'dc') and sm != 'r':
494 elif lm in ('g', 'dc') and sm != 'r':
495 if lm == 'dc':
495 if lm == 'dc':
496 f1, f2, fa, move, anc = largs
496 f1, f2, fa, move, anc = largs
497 largs = (p2[f2].flags(), False)
497 largs = (p2[f2].flags(), False)
498 # Case 2: largefile in the working copy, normal file in
498 # Case 2: largefile in the working copy, normal file in
499 # the second parent
499 # the second parent
500 usermsg = _('remote turned local largefile %s into a normal file\n'
500 usermsg = _('remote turned local largefile %s into a normal file\n'
501 'keep (l)argefile or use (n)ormal file?'
501 'keep (l)argefile or use (n)ormal file?'
502 '$$ &Largefile $$ &Normal file') % lfile
502 '$$ &Largefile $$ &Normal file') % lfile
503 if repo.ui.promptchoice(usermsg, 0) == 0: # keep local largefile
503 if repo.ui.promptchoice(usermsg, 0) == 0: # keep local largefile
504 if branchmerge:
504 if branchmerge:
505 # largefile can be restored from standin safely
505 # largefile can be restored from standin safely
506 actions[lfile] = ('k', None, 'replaced by standin')
506 actions[lfile] = ('k', None, 'replaced by standin')
507 actions[standin] = ('k', None, 'replaces standin')
507 actions[standin] = ('k', None, 'replaces standin')
508 else:
508 else:
509 # "lfile" should be marked as "removed" without
509 # "lfile" should be marked as "removed" without
510 # removal of itself
510 # removal of itself
511 actions[lfile] = ('lfmr', None,
511 actions[lfile] = ('lfmr', None,
512 'forget non-standin largefile')
512 'forget non-standin largefile')
513
513
514 # linear-merge should treat this largefile as 're-added'
514 # linear-merge should treat this largefile as 're-added'
515 actions[standin] = ('a', None, 'keep standin')
515 actions[standin] = ('a', None, 'keep standin')
516 else: # pick remote normal file
516 else: # pick remote normal file
517 actions[lfile] = ('g', largs, 'replaces standin')
517 actions[lfile] = ('g', largs, 'replaces standin')
518 actions[standin] = ('r', None, 'replaced by non-standin')
518 actions[standin] = ('r', None, 'replaced by non-standin')
519
519
520 return actions, diverge, renamedelete
520 return actions, diverge, renamedelete
521
521
522 def mergerecordupdates(orig, repo, actions, branchmerge):
522 def mergerecordupdates(orig, repo, actions, branchmerge):
523 if 'lfmr' in actions:
523 if 'lfmr' in actions:
524 lfdirstate = lfutil.openlfdirstate(repo.ui, repo)
524 lfdirstate = lfutil.openlfdirstate(repo.ui, repo)
525 for lfile, args, msg in actions['lfmr']:
525 for lfile, args, msg in actions['lfmr']:
526 # this should be executed before 'orig', to execute 'remove'
526 # this should be executed before 'orig', to execute 'remove'
527 # before all other actions
527 # before all other actions
528 repo.dirstate.remove(lfile)
528 repo.dirstate.remove(lfile)
529 # make sure lfile doesn't get synclfdirstate'd as normal
529 # make sure lfile doesn't get synclfdirstate'd as normal
530 lfdirstate.add(lfile)
530 lfdirstate.add(lfile)
531 lfdirstate.write()
531 lfdirstate.write()
532
532
533 return orig(repo, actions, branchmerge)
533 return orig(repo, actions, branchmerge)
534
534
535
535
536 # Override filemerge to prompt the user about how they wish to merge
536 # Override filemerge to prompt the user about how they wish to merge
537 # largefiles. This will handle identical edits without prompting the user.
537 # largefiles. This will handle identical edits without prompting the user.
538 def overridefilemerge(origfn, premerge, repo, mynode, orig, fcd, fco, fca,
538 def overridefilemerge(origfn, premerge, repo, mynode, orig, fcd, fco, fca,
539 labels=None):
539 labels=None):
540 if not lfutil.isstandin(orig) or fcd.isabsent() or fco.isabsent():
540 if not lfutil.isstandin(orig) or fcd.isabsent() or fco.isabsent():
541 return origfn(premerge, repo, mynode, orig, fcd, fco, fca,
541 return origfn(premerge, repo, mynode, orig, fcd, fco, fca,
542 labels=labels)
542 labels=labels)
543
543
544 ahash = fca.data().strip().lower()
544 ahash = fca.data().strip().lower()
545 dhash = fcd.data().strip().lower()
545 dhash = fcd.data().strip().lower()
546 ohash = fco.data().strip().lower()
546 ohash = fco.data().strip().lower()
547 if (ohash != ahash and
547 if (ohash != ahash and
548 ohash != dhash and
548 ohash != dhash and
549 (dhash == ahash or
549 (dhash == ahash or
550 repo.ui.promptchoice(
550 repo.ui.promptchoice(
551 _('largefile %s has a merge conflict\nancestor was %s\n'
551 _('largefile %s has a merge conflict\nancestor was %s\n'
552 'keep (l)ocal %s or\ntake (o)ther %s?'
552 'keep (l)ocal %s or\ntake (o)ther %s?'
553 '$$ &Local $$ &Other') %
553 '$$ &Local $$ &Other') %
554 (lfutil.splitstandin(orig), ahash, dhash, ohash),
554 (lfutil.splitstandin(orig), ahash, dhash, ohash),
555 0) == 1)):
555 0) == 1)):
556 repo.wwrite(fcd.path(), fco.data(), fco.flags())
556 repo.wwrite(fcd.path(), fco.data(), fco.flags())
557 return True, 0, False
557 return True, 0, False
558
558
559 def copiespathcopies(orig, ctx1, ctx2, match=None):
559 def copiespathcopies(orig, ctx1, ctx2, match=None):
560 copies = orig(ctx1, ctx2, match=match)
560 copies = orig(ctx1, ctx2, match=match)
561 updated = {}
561 updated = {}
562
562
563 for k, v in copies.iteritems():
563 for k, v in copies.iteritems():
564 updated[lfutil.splitstandin(k) or k] = lfutil.splitstandin(v) or v
564 updated[lfutil.splitstandin(k) or k] = lfutil.splitstandin(v) or v
565
565
566 return updated
566 return updated
567
567
568 # Copy first changes the matchers to match standins instead of
568 # Copy first changes the matchers to match standins instead of
569 # largefiles. Then it overrides util.copyfile in that function it
569 # largefiles. Then it overrides util.copyfile in that function it
570 # checks if the destination largefile already exists. It also keeps a
570 # checks if the destination largefile already exists. It also keeps a
571 # list of copied files so that the largefiles can be copied and the
571 # list of copied files so that the largefiles can be copied and the
572 # dirstate updated.
572 # dirstate updated.
573 def overridecopy(orig, ui, repo, pats, opts, rename=False):
573 def overridecopy(orig, ui, repo, pats, opts, rename=False):
574 # doesn't remove largefile on rename
574 # doesn't remove largefile on rename
575 if len(pats) < 2:
575 if len(pats) < 2:
576 # this isn't legal, let the original function deal with it
576 # this isn't legal, let the original function deal with it
577 return orig(ui, repo, pats, opts, rename)
577 return orig(ui, repo, pats, opts, rename)
578
578
579 # This could copy both lfiles and normal files in one command,
579 # This could copy both lfiles and normal files in one command,
580 # but we don't want to do that. First replace their matcher to
580 # but we don't want to do that. First replace their matcher to
581 # only match normal files and run it, then replace it to just
581 # only match normal files and run it, then replace it to just
582 # match largefiles and run it again.
582 # match largefiles and run it again.
583 nonormalfiles = False
583 nonormalfiles = False
584 nolfiles = False
584 nolfiles = False
585 installnormalfilesmatchfn(repo[None].manifest())
585 installnormalfilesmatchfn(repo[None].manifest())
586 try:
586 try:
587 result = orig(ui, repo, pats, opts, rename)
587 result = orig(ui, repo, pats, opts, rename)
588 except error.Abort as e:
588 except error.Abort as e:
589 if str(e) != _('no files to copy'):
589 if str(e) != _('no files to copy'):
590 raise e
590 raise e
591 else:
591 else:
592 nonormalfiles = True
592 nonormalfiles = True
593 result = 0
593 result = 0
594 finally:
594 finally:
595 restorematchfn()
595 restorematchfn()
596
596
597 # The first rename can cause our current working directory to be removed.
597 # The first rename can cause our current working directory to be removed.
598 # In that case there is nothing left to copy/rename so just quit.
598 # In that case there is nothing left to copy/rename so just quit.
599 try:
599 try:
600 repo.getcwd()
600 repo.getcwd()
601 except OSError:
601 except OSError:
602 return result
602 return result
603
603
604 def makestandin(relpath):
604 def makestandin(relpath):
605 path = pathutil.canonpath(repo.root, repo.getcwd(), relpath)
605 path = pathutil.canonpath(repo.root, repo.getcwd(), relpath)
606 return os.path.join(repo.wjoin(lfutil.standin(path)))
606 return os.path.join(repo.wjoin(lfutil.standin(path)))
607
607
608 fullpats = scmutil.expandpats(pats)
608 fullpats = scmutil.expandpats(pats)
609 dest = fullpats[-1]
609 dest = fullpats[-1]
610
610
611 if os.path.isdir(dest):
611 if os.path.isdir(dest):
612 if not os.path.isdir(makestandin(dest)):
612 if not os.path.isdir(makestandin(dest)):
613 os.makedirs(makestandin(dest))
613 os.makedirs(makestandin(dest))
614
614
615 try:
615 try:
616 # When we call orig below it creates the standins but we don't add
616 # When we call orig below it creates the standins but we don't add
617 # them to the dir state until later so lock during that time.
617 # them to the dir state until later so lock during that time.
618 wlock = repo.wlock()
618 wlock = repo.wlock()
619
619
620 manifest = repo[None].manifest()
620 manifest = repo[None].manifest()
621 def overridematch(ctx, pats=(), opts=None, globbed=False,
621 def overridematch(ctx, pats=(), opts=None, globbed=False,
622 default='relpath', badfn=None):
622 default='relpath', badfn=None):
623 if opts is None:
623 if opts is None:
624 opts = {}
624 opts = {}
625 newpats = []
625 newpats = []
626 # The patterns were previously mangled to add the standin
626 # The patterns were previously mangled to add the standin
627 # directory; we need to remove that now
627 # directory; we need to remove that now
628 for pat in pats:
628 for pat in pats:
629 if match_.patkind(pat) is None and lfutil.shortname in pat:
629 if match_.patkind(pat) is None and lfutil.shortname in pat:
630 newpats.append(pat.replace(lfutil.shortname, ''))
630 newpats.append(pat.replace(lfutil.shortname, ''))
631 else:
631 else:
632 newpats.append(pat)
632 newpats.append(pat)
633 match = oldmatch(ctx, newpats, opts, globbed, default, badfn=badfn)
633 match = oldmatch(ctx, newpats, opts, globbed, default, badfn=badfn)
634 m = copy.copy(match)
634 m = copy.copy(match)
635 lfile = lambda f: lfutil.standin(f) in manifest
635 lfile = lambda f: lfutil.standin(f) in manifest
636 m._files = [lfutil.standin(f) for f in m._files if lfile(f)]
636 m._files = [lfutil.standin(f) for f in m._files if lfile(f)]
637 m._fileroots = set(m._files)
637 m._fileroots = set(m._files)
638 origmatchfn = m.matchfn
638 origmatchfn = m.matchfn
639 m.matchfn = lambda f: (lfutil.isstandin(f) and
639 m.matchfn = lambda f: (lfutil.isstandin(f) and
640 (f in manifest) and
640 (f in manifest) and
641 origmatchfn(lfutil.splitstandin(f)) or
641 origmatchfn(lfutil.splitstandin(f)) or
642 None)
642 None)
643 return m
643 return m
644 oldmatch = installmatchfn(overridematch)
644 oldmatch = installmatchfn(overridematch)
645 listpats = []
645 listpats = []
646 for pat in pats:
646 for pat in pats:
647 if match_.patkind(pat) is not None:
647 if match_.patkind(pat) is not None:
648 listpats.append(pat)
648 listpats.append(pat)
649 else:
649 else:
650 listpats.append(makestandin(pat))
650 listpats.append(makestandin(pat))
651
651
652 try:
652 try:
653 origcopyfile = util.copyfile
653 origcopyfile = util.copyfile
654 copiedfiles = []
654 copiedfiles = []
655 def overridecopyfile(src, dest):
655 def overridecopyfile(src, dest):
656 if (lfutil.shortname in src and
656 if (lfutil.shortname in src and
657 dest.startswith(repo.wjoin(lfutil.shortname))):
657 dest.startswith(repo.wjoin(lfutil.shortname))):
658 destlfile = dest.replace(lfutil.shortname, '')
658 destlfile = dest.replace(lfutil.shortname, '')
659 if not opts['force'] and os.path.exists(destlfile):
659 if not opts['force'] and os.path.exists(destlfile):
660 raise IOError('',
660 raise IOError('',
661 _('destination largefile already exists'))
661 _('destination largefile already exists'))
662 copiedfiles.append((src, dest))
662 copiedfiles.append((src, dest))
663 origcopyfile(src, dest)
663 origcopyfile(src, dest)
664
664
665 util.copyfile = overridecopyfile
665 util.copyfile = overridecopyfile
666 result += orig(ui, repo, listpats, opts, rename)
666 result += orig(ui, repo, listpats, opts, rename)
667 finally:
667 finally:
668 util.copyfile = origcopyfile
668 util.copyfile = origcopyfile
669
669
670 lfdirstate = lfutil.openlfdirstate(ui, repo)
670 lfdirstate = lfutil.openlfdirstate(ui, repo)
671 for (src, dest) in copiedfiles:
671 for (src, dest) in copiedfiles:
672 if (lfutil.shortname in src and
672 if (lfutil.shortname in src and
673 dest.startswith(repo.wjoin(lfutil.shortname))):
673 dest.startswith(repo.wjoin(lfutil.shortname))):
674 srclfile = src.replace(repo.wjoin(lfutil.standin('')), '')
674 srclfile = src.replace(repo.wjoin(lfutil.standin('')), '')
675 destlfile = dest.replace(repo.wjoin(lfutil.standin('')), '')
675 destlfile = dest.replace(repo.wjoin(lfutil.standin('')), '')
676 destlfiledir = os.path.dirname(repo.wjoin(destlfile)) or '.'
676 destlfiledir = os.path.dirname(repo.wjoin(destlfile)) or '.'
677 if not os.path.isdir(destlfiledir):
677 if not os.path.isdir(destlfiledir):
678 os.makedirs(destlfiledir)
678 os.makedirs(destlfiledir)
679 if rename:
679 if rename:
680 os.rename(repo.wjoin(srclfile), repo.wjoin(destlfile))
680 os.rename(repo.wjoin(srclfile), repo.wjoin(destlfile))
681
681
682 # The file is gone, but this deletes any empty parent
682 # The file is gone, but this deletes any empty parent
683 # directories as a side-effect.
683 # directories as a side-effect.
684 util.unlinkpath(repo.wjoin(srclfile), True)
684 util.unlinkpath(repo.wjoin(srclfile), True)
685 lfdirstate.remove(srclfile)
685 lfdirstate.remove(srclfile)
686 else:
686 else:
687 util.copyfile(repo.wjoin(srclfile),
687 util.copyfile(repo.wjoin(srclfile),
688 repo.wjoin(destlfile))
688 repo.wjoin(destlfile))
689
689
690 lfdirstate.add(destlfile)
690 lfdirstate.add(destlfile)
691 lfdirstate.write()
691 lfdirstate.write()
692 except error.Abort as e:
692 except error.Abort as e:
693 if str(e) != _('no files to copy'):
693 if str(e) != _('no files to copy'):
694 raise e
694 raise e
695 else:
695 else:
696 nolfiles = True
696 nolfiles = True
697 finally:
697 finally:
698 restorematchfn()
698 restorematchfn()
699 wlock.release()
699 wlock.release()
700
700
701 if nolfiles and nonormalfiles:
701 if nolfiles and nonormalfiles:
702 raise error.Abort(_('no files to copy'))
702 raise error.Abort(_('no files to copy'))
703
703
704 return result
704 return result
705
705
706 # When the user calls revert, we have to be careful to not revert any
706 # When the user calls revert, we have to be careful to not revert any
707 # changes to other largefiles accidentally. This means we have to keep
707 # changes to other largefiles accidentally. This means we have to keep
708 # track of the largefiles that are being reverted so we only pull down
708 # track of the largefiles that are being reverted so we only pull down
709 # the necessary largefiles.
709 # the necessary largefiles.
710 #
710 #
711 # Standins are only updated (to match the hash of largefiles) before
711 # Standins are only updated (to match the hash of largefiles) before
712 # commits. Update the standins then run the original revert, changing
712 # commits. Update the standins then run the original revert, changing
713 # the matcher to hit standins instead of largefiles. Based on the
713 # the matcher to hit standins instead of largefiles. Based on the
714 # resulting standins update the largefiles.
714 # resulting standins update the largefiles.
715 def overriderevert(orig, ui, repo, ctx, parents, *pats, **opts):
715 def overriderevert(orig, ui, repo, ctx, parents, *pats, **opts):
716 # Because we put the standins in a bad state (by updating them)
716 # Because we put the standins in a bad state (by updating them)
717 # and then return them to a correct state we need to lock to
717 # and then return them to a correct state we need to lock to
718 # prevent others from changing them in their incorrect state.
718 # prevent others from changing them in their incorrect state.
719 with repo.wlock():
719 with repo.wlock():
720 lfdirstate = lfutil.openlfdirstate(ui, repo)
720 lfdirstate = lfutil.openlfdirstate(ui, repo)
721 s = lfutil.lfdirstatestatus(lfdirstate, repo)
721 s = lfutil.lfdirstatestatus(lfdirstate, repo)
722 lfdirstate.write()
722 lfdirstate.write()
723 for lfile in s.modified:
723 for lfile in s.modified:
724 lfutil.updatestandin(repo, lfutil.standin(lfile))
724 lfutil.updatestandin(repo, lfutil.standin(lfile))
725 for lfile in s.deleted:
725 for lfile in s.deleted:
726 if (os.path.exists(repo.wjoin(lfutil.standin(lfile)))):
726 if (os.path.exists(repo.wjoin(lfutil.standin(lfile)))):
727 os.unlink(repo.wjoin(lfutil.standin(lfile)))
727 os.unlink(repo.wjoin(lfutil.standin(lfile)))
728
728
729 oldstandins = lfutil.getstandinsstate(repo)
729 oldstandins = lfutil.getstandinsstate(repo)
730
730
731 def overridematch(mctx, pats=(), opts=None, globbed=False,
731 def overridematch(mctx, pats=(), opts=None, globbed=False,
732 default='relpath', badfn=None):
732 default='relpath', badfn=None):
733 if opts is None:
733 if opts is None:
734 opts = {}
734 opts = {}
735 match = oldmatch(mctx, pats, opts, globbed, default, badfn=badfn)
735 match = oldmatch(mctx, pats, opts, globbed, default, badfn=badfn)
736 m = copy.copy(match)
736 m = copy.copy(match)
737
737
738 # revert supports recursing into subrepos, and though largefiles
738 # revert supports recursing into subrepos, and though largefiles
739 # currently doesn't work correctly in that case, this match is
739 # currently doesn't work correctly in that case, this match is
740 # called, so the lfdirstate above may not be the correct one for
740 # called, so the lfdirstate above may not be the correct one for
741 # this invocation of match.
741 # this invocation of match.
742 lfdirstate = lfutil.openlfdirstate(mctx.repo().ui, mctx.repo(),
742 lfdirstate = lfutil.openlfdirstate(mctx.repo().ui, mctx.repo(),
743 False)
743 False)
744
744
745 def tostandin(f):
745 def tostandin(f):
746 standin = lfutil.standin(f)
746 standin = lfutil.standin(f)
747 if standin in ctx or standin in mctx:
747 if standin in ctx or standin in mctx:
748 return standin
748 return standin
749 elif standin in repo[None] or lfdirstate[f] == 'r':
749 elif standin in repo[None] or lfdirstate[f] == 'r':
750 return None
750 return None
751 return f
751 return f
752 m._files = [tostandin(f) for f in m._files]
752 m._files = [tostandin(f) for f in m._files]
753 m._files = [f for f in m._files if f is not None]
753 m._files = [f for f in m._files if f is not None]
754 m._fileroots = set(m._files)
754 m._fileroots = set(m._files)
755 origmatchfn = m.matchfn
755 origmatchfn = m.matchfn
756 def matchfn(f):
756 def matchfn(f):
757 if lfutil.isstandin(f):
757 if lfutil.isstandin(f):
758 return (origmatchfn(lfutil.splitstandin(f)) and
758 return (origmatchfn(lfutil.splitstandin(f)) and
759 (f in ctx or f in mctx))
759 (f in ctx or f in mctx))
760 return origmatchfn(f)
760 return origmatchfn(f)
761 m.matchfn = matchfn
761 m.matchfn = matchfn
762 return m
762 return m
763 oldmatch = installmatchfn(overridematch)
763 oldmatch = installmatchfn(overridematch)
764 try:
764 try:
765 orig(ui, repo, ctx, parents, *pats, **opts)
765 orig(ui, repo, ctx, parents, *pats, **opts)
766 finally:
766 finally:
767 restorematchfn()
767 restorematchfn()
768
768
769 newstandins = lfutil.getstandinsstate(repo)
769 newstandins = lfutil.getstandinsstate(repo)
770 filelist = lfutil.getlfilestoupdate(oldstandins, newstandins)
770 filelist = lfutil.getlfilestoupdate(oldstandins, newstandins)
771 # lfdirstate should be 'normallookup'-ed for updated files,
771 # lfdirstate should be 'normallookup'-ed for updated files,
772 # because reverting doesn't touch dirstate for 'normal' files
772 # because reverting doesn't touch dirstate for 'normal' files
773 # when the target revision is explicitly specified: in that case,
773 # when the target revision is explicitly specified: in that case,
774 # an 'n' state and a valid timestamp in dirstate don't guarantee
774 # an 'n' state and a valid timestamp in dirstate don't guarantee
775 # that the target (standin) file is 'clean'.
775 # that the target (standin) file is 'clean'.
776 lfcommands.updatelfiles(ui, repo, filelist, printmessage=False,
776 lfcommands.updatelfiles(ui, repo, filelist, printmessage=False,
777 normallookup=True)
777 normallookup=True)
778
778
779 # after pulling changesets, we need to take some extra care to get
779 # after pulling changesets, we need to take some extra care to get
780 # the corresponding largefiles pulled as well
780 # the corresponding largefiles pulled as well
781 def overridepull(orig, ui, repo, source=None, **opts):
781 def overridepull(orig, ui, repo, source=None, **opts):
782 revsprepull = len(repo)
782 revsprepull = len(repo)
783 if not source:
783 if not source:
784 source = 'default'
784 source = 'default'
785 repo.lfpullsource = source
785 repo.lfpullsource = source
786 result = orig(ui, repo, source, **opts)
786 result = orig(ui, repo, source, **opts)
787 revspostpull = len(repo)
787 revspostpull = len(repo)
788 lfrevs = opts.get('lfrev', [])
788 lfrevs = opts.get('lfrev', [])
789 if opts.get('all_largefiles'):
789 if opts.get('all_largefiles'):
790 lfrevs.append('pulled()')
790 lfrevs.append('pulled()')
791 if lfrevs and revspostpull > revsprepull:
791 if lfrevs and revspostpull > revsprepull:
792 numcached = 0
792 numcached = 0
793 repo.firstpulled = revsprepull # for pulled() revset expression
793 repo.firstpulled = revsprepull # for pulled() revset expression
794 try:
794 try:
795 for rev in scmutil.revrange(repo, lfrevs):
795 for rev in scmutil.revrange(repo, lfrevs):
796 ui.note(_('pulling largefiles for revision %s\n') % rev)
796 ui.note(_('pulling largefiles for revision %s\n') % rev)
797 (cached, missing) = lfcommands.cachelfiles(ui, repo, rev)
797 (cached, missing) = lfcommands.cachelfiles(ui, repo, rev)
798 numcached += len(cached)
798 numcached += len(cached)
799 finally:
799 finally:
800 del repo.firstpulled
800 del repo.firstpulled
801 ui.status(_("%d largefiles cached\n") % numcached)
801 ui.status(_("%d largefiles cached\n") % numcached)
802 return result
802 return result
803
803
804 revsetpredicate = revset.extpredicate()
804 revsetpredicate = registrar.revsetpredicate()
805
805
806 @revsetpredicate('pulled()')
806 @revsetpredicate('pulled()')
807 def pulledrevsetsymbol(repo, subset, x):
807 def pulledrevsetsymbol(repo, subset, x):
808 """Changesets that just has been pulled.
808 """Changesets that just has been pulled.
809
809
810 Only available with the largefiles extension, in pull --lfrev expressions.
810 Only available with the largefiles extension, in pull --lfrev expressions.
811
811
812 .. container:: verbose
812 .. container:: verbose
813
813
814 Some examples:
814 Some examples:
815
815
816 - pull largefiles for all new changesets::
816 - pull largefiles for all new changesets::
817
817
818 hg pull --lfrev "pulled()"
818 hg pull --lfrev "pulled()"
819
819
820 - pull largefiles for all new branch heads::
820 - pull largefiles for all new branch heads::
821
821
822 hg pull --lfrev "head(pulled()) and not closed()"
822 hg pull --lfrev "head(pulled()) and not closed()"
823
823
824 """
824 """
825
825
826 try:
826 try:
827 firstpulled = repo.firstpulled
827 firstpulled = repo.firstpulled
828 except AttributeError:
828 except AttributeError:
829 raise error.Abort(_("pulled() only available in --lfrev"))
829 raise error.Abort(_("pulled() only available in --lfrev"))
830 return revset.baseset([r for r in subset if r >= firstpulled])
830 return revset.baseset([r for r in subset if r >= firstpulled])
831
831
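# Illustration (not part of the extension): the registrar.revsetpredicate()
# object assigned above works, roughly, as a decorator that records each
# predicate function under its symbol name so the extension loader can
# register them in bulk. The sketch below mimics that registration pattern
# with a plain dict instead of Mercurial's real registrar, purely to show
# the shape of the API this changeset adopts; all names here are hypothetical.

class _toyregistrar(object):
    """Minimal stand-in for registrar.revsetpredicate (illustrative only)."""
    def __init__(self):
        self.table = {}
    def __call__(self, decl):
        # 'decl' is the predicate declaration, e.g. 'pulled()'
        name = decl.split('(', 1)[0]
        def register(func):
            self.table[name] = func
            return func
        return register

toypredicate = _toyregistrar()

@toypredicate('example()')
def examplepredicate(repo, subset, x):
    # a real predicate would filter 'subset'; this toy one returns it as-is
    return subset

assert 'example' in toypredicate.table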
832 def overrideclone(orig, ui, source, dest=None, **opts):
832 def overrideclone(orig, ui, source, dest=None, **opts):
833 d = dest
833 d = dest
834 if d is None:
834 if d is None:
835 d = hg.defaultdest(source)
835 d = hg.defaultdest(source)
836 if opts.get('all_largefiles') and not hg.islocal(d):
836 if opts.get('all_largefiles') and not hg.islocal(d):
837 raise error.Abort(_(
837 raise error.Abort(_(
838 '--all-largefiles is incompatible with non-local destination %s') %
838 '--all-largefiles is incompatible with non-local destination %s') %
839 d)
839 d)
840
840
841 return orig(ui, source, dest, **opts)
841 return orig(ui, source, dest, **opts)
842
842
843 def hgclone(orig, ui, opts, *args, **kwargs):
843 def hgclone(orig, ui, opts, *args, **kwargs):
844 result = orig(ui, opts, *args, **kwargs)
844 result = orig(ui, opts, *args, **kwargs)
845
845
846 if result is not None:
846 if result is not None:
847 sourcerepo, destrepo = result
847 sourcerepo, destrepo = result
848 repo = destrepo.local()
848 repo = destrepo.local()
849
849
850 # When cloning to a remote repo (like through SSH), no repo is available
850 # When cloning to a remote repo (like through SSH), no repo is available
851 # from the peer. Therefore the largefiles can't be downloaded and the
851 # from the peer. Therefore the largefiles can't be downloaded and the
852 # hgrc can't be updated.
852 # hgrc can't be updated.
853 if not repo:
853 if not repo:
854 return result
854 return result
855
855
856 # If largefiles is required for this repo, permanently enable it locally
856 # If largefiles is required for this repo, permanently enable it locally
857 if 'largefiles' in repo.requirements:
857 if 'largefiles' in repo.requirements:
858 fp = repo.vfs('hgrc', 'a', text=True)
858 fp = repo.vfs('hgrc', 'a', text=True)
859 try:
859 try:
860 fp.write('\n[extensions]\nlargefiles=\n')
860 fp.write('\n[extensions]\nlargefiles=\n')
861 finally:
861 finally:
862 fp.close()
862 fp.close()
863
863
864 # Caching is implicitly limited to 'rev' option, since the dest repo was
864 # Caching is implicitly limited to 'rev' option, since the dest repo was
865 # truncated at that point. The user may expect a download count with
865 # truncated at that point. The user may expect a download count with
866 # this option, so attempt the download whether or not this is a largefile repo.
866 # this option, so attempt the download whether or not this is a largefile repo.
867 if opts.get('all_largefiles'):
867 if opts.get('all_largefiles'):
868 success, missing = lfcommands.downloadlfiles(ui, repo, None)
868 success, missing = lfcommands.downloadlfiles(ui, repo, None)
869
869
870 if missing != 0:
870 if missing != 0:
871 return None
871 return None
872
872
873 return result
873 return result
874
874
875 def overriderebase(orig, ui, repo, **opts):
875 def overriderebase(orig, ui, repo, **opts):
876 if not util.safehasattr(repo, '_largefilesenabled'):
876 if not util.safehasattr(repo, '_largefilesenabled'):
877 return orig(ui, repo, **opts)
877 return orig(ui, repo, **opts)
878
878
879 resuming = opts.get('continue')
879 resuming = opts.get('continue')
880 repo._lfcommithooks.append(lfutil.automatedcommithook(resuming))
880 repo._lfcommithooks.append(lfutil.automatedcommithook(resuming))
881 repo._lfstatuswriters.append(lambda *msg, **opts: None)
881 repo._lfstatuswriters.append(lambda *msg, **opts: None)
882 try:
882 try:
883 return orig(ui, repo, **opts)
883 return orig(ui, repo, **opts)
884 finally:
884 finally:
885 repo._lfstatuswriters.pop()
885 repo._lfstatuswriters.pop()
886 repo._lfcommithooks.pop()
886 repo._lfcommithooks.pop()
887
887
888 def overridearchivecmd(orig, ui, repo, dest, **opts):
888 def overridearchivecmd(orig, ui, repo, dest, **opts):
889 repo.unfiltered().lfstatus = True
889 repo.unfiltered().lfstatus = True
890
890
891 try:
891 try:
892 return orig(ui, repo.unfiltered(), dest, **opts)
892 return orig(ui, repo.unfiltered(), dest, **opts)
893 finally:
893 finally:
894 repo.unfiltered().lfstatus = False
894 repo.unfiltered().lfstatus = False
895
895
896 def hgwebarchive(orig, web, req, tmpl):
896 def hgwebarchive(orig, web, req, tmpl):
897 web.repo.lfstatus = True
897 web.repo.lfstatus = True
898
898
899 try:
899 try:
900 return orig(web, req, tmpl)
900 return orig(web, req, tmpl)
901 finally:
901 finally:
902 web.repo.lfstatus = False
902 web.repo.lfstatus = False
903
903
904 def overridearchive(orig, repo, dest, node, kind, decode=True, matchfn=None,
904 def overridearchive(orig, repo, dest, node, kind, decode=True, matchfn=None,
905 prefix='', mtime=None, subrepos=None):
905 prefix='', mtime=None, subrepos=None):
906 # For some reason setting repo.lfstatus in hgwebarchive only changes the
906 # For some reason setting repo.lfstatus in hgwebarchive only changes the
907 # unfiltered repo's attr, so check that as well.
907 # unfiltered repo's attr, so check that as well.
908 if not repo.lfstatus and not repo.unfiltered().lfstatus:
908 if not repo.lfstatus and not repo.unfiltered().lfstatus:
909 return orig(repo, dest, node, kind, decode, matchfn, prefix, mtime,
909 return orig(repo, dest, node, kind, decode, matchfn, prefix, mtime,
910 subrepos)
910 subrepos)
911
911
912 # No need to lock because we are only reading history and
912 # No need to lock because we are only reading history and
913 # largefile caches, neither of which are modified.
913 # largefile caches, neither of which are modified.
914 if node is not None:
914 if node is not None:
915 lfcommands.cachelfiles(repo.ui, repo, node)
915 lfcommands.cachelfiles(repo.ui, repo, node)
916
916
917 if kind not in archival.archivers:
917 if kind not in archival.archivers:
918 raise error.Abort(_("unknown archive type '%s'") % kind)
918 raise error.Abort(_("unknown archive type '%s'") % kind)
919
919
920 ctx = repo[node]
920 ctx = repo[node]
921
921
922 if kind == 'files':
922 if kind == 'files':
923 if prefix:
923 if prefix:
924 raise error.Abort(
924 raise error.Abort(
925 _('cannot give prefix when archiving to files'))
925 _('cannot give prefix when archiving to files'))
926 else:
926 else:
927 prefix = archival.tidyprefix(dest, kind, prefix)
927 prefix = archival.tidyprefix(dest, kind, prefix)
928
928
929 def write(name, mode, islink, getdata):
929 def write(name, mode, islink, getdata):
930 if matchfn and not matchfn(name):
930 if matchfn and not matchfn(name):
931 return
931 return
932 data = getdata()
932 data = getdata()
933 if decode:
933 if decode:
934 data = repo.wwritedata(name, data)
934 data = repo.wwritedata(name, data)
935 archiver.addfile(prefix + name, mode, islink, data)
935 archiver.addfile(prefix + name, mode, islink, data)
936
936
937 archiver = archival.archivers[kind](dest, mtime or ctx.date()[0])
937 archiver = archival.archivers[kind](dest, mtime or ctx.date()[0])
938
938
939 if repo.ui.configbool("ui", "archivemeta", True):
939 if repo.ui.configbool("ui", "archivemeta", True):
940 write('.hg_archival.txt', 0o644, False,
940 write('.hg_archival.txt', 0o644, False,
941 lambda: archival.buildmetadata(ctx))
941 lambda: archival.buildmetadata(ctx))
942
942
943 for f in ctx:
943 for f in ctx:
944 ff = ctx.flags(f)
944 ff = ctx.flags(f)
945 getdata = ctx[f].data
945 getdata = ctx[f].data
946 if lfutil.isstandin(f):
946 if lfutil.isstandin(f):
947 if node is not None:
947 if node is not None:
948 path = lfutil.findfile(repo, getdata().strip())
948 path = lfutil.findfile(repo, getdata().strip())
949
949
950 if path is None:
950 if path is None:
951 raise error.Abort(
951 raise error.Abort(
952 _('largefile %s not found in repo store or system cache')
952 _('largefile %s not found in repo store or system cache')
953 % lfutil.splitstandin(f))
953 % lfutil.splitstandin(f))
954 else:
954 else:
955 path = lfutil.splitstandin(f)
955 path = lfutil.splitstandin(f)
956
956
957 f = lfutil.splitstandin(f)
957 f = lfutil.splitstandin(f)
958
958
959 getdata = lambda: util.readfile(path)
959 getdata = lambda: util.readfile(path)
960 write(f, 'x' in ff and 0o755 or 0o644, 'l' in ff, getdata)
960 write(f, 'x' in ff and 0o755 or 0o644, 'l' in ff, getdata)
961
961
962 if subrepos:
962 if subrepos:
963 for subpath in sorted(ctx.substate):
963 for subpath in sorted(ctx.substate):
964 sub = ctx.workingsub(subpath)
964 sub = ctx.workingsub(subpath)
965 submatch = match_.subdirmatcher(subpath, matchfn)
965 submatch = match_.subdirmatcher(subpath, matchfn)
966 sub._repo.lfstatus = True
966 sub._repo.lfstatus = True
967 sub.archive(archiver, prefix, submatch)
967 sub.archive(archiver, prefix, submatch)
968
968
969 archiver.done()
969 archiver.done()
970
970
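# Illustration (not part of the extension): the archive code above leans on
# lfutil.standin(), lfutil.isstandin() and lfutil.splitstandin() to translate
# between a largefile name and its standin path. A simplified sketch of that
# naming convention, assuming the standin prefix is '.hglf/' as described in
# the extension help; the real helpers in lfutil remain the authoritative ones.

_SHORTNAME = '.hglf'

def _standin(filename):
    return _SHORTNAME + '/' + filename

def _isstandin(filename):
    return filename.startswith(_SHORTNAME + '/')

def _splitstandin(filename):
    if _isstandin(filename):
        return filename[len(_SHORTNAME) + 1:]
    return None

assert _splitstandin(_standin('data/big.bin')) == 'data/big.bin'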
971 def hgsubrepoarchive(orig, repo, archiver, prefix, match=None):
971 def hgsubrepoarchive(orig, repo, archiver, prefix, match=None):
972 if not repo._repo.lfstatus:
972 if not repo._repo.lfstatus:
973 return orig(repo, archiver, prefix, match)
973 return orig(repo, archiver, prefix, match)
974
974
975 repo._get(repo._state + ('hg',))
975 repo._get(repo._state + ('hg',))
976 rev = repo._state[1]
976 rev = repo._state[1]
977 ctx = repo._repo[rev]
977 ctx = repo._repo[rev]
978
978
979 if ctx.node() is not None:
979 if ctx.node() is not None:
980 lfcommands.cachelfiles(repo.ui, repo._repo, ctx.node())
980 lfcommands.cachelfiles(repo.ui, repo._repo, ctx.node())
981
981
982 def write(name, mode, islink, getdata):
982 def write(name, mode, islink, getdata):
983 # At this point, the standin has been replaced with the largefile name,
983 # At this point, the standin has been replaced with the largefile name,
984 # so the normal matcher works here without the lfutil variants.
984 # so the normal matcher works here without the lfutil variants.
985 if match and not match(f):
985 if match and not match(f):
986 return
986 return
987 data = getdata()
987 data = getdata()
988
988
989 archiver.addfile(prefix + repo._path + '/' + name, mode, islink, data)
989 archiver.addfile(prefix + repo._path + '/' + name, mode, islink, data)
990
990
991 for f in ctx:
991 for f in ctx:
992 ff = ctx.flags(f)
992 ff = ctx.flags(f)
993 getdata = ctx[f].data
993 getdata = ctx[f].data
994 if lfutil.isstandin(f):
994 if lfutil.isstandin(f):
995 if ctx.node() is not None:
995 if ctx.node() is not None:
996 path = lfutil.findfile(repo._repo, getdata().strip())
996 path = lfutil.findfile(repo._repo, getdata().strip())
997
997
998 if path is None:
998 if path is None:
999 raise error.Abort(
999 raise error.Abort(
1000 _('largefile %s not found in repo store or system cache')
1000 _('largefile %s not found in repo store or system cache')
1001 % lfutil.splitstandin(f))
1001 % lfutil.splitstandin(f))
1002 else:
1002 else:
1003 path = lfutil.splitstandin(f)
1003 path = lfutil.splitstandin(f)
1004
1004
1005 f = lfutil.splitstandin(f)
1005 f = lfutil.splitstandin(f)
1006
1006
1007 getdata = lambda: util.readfile(os.path.join(prefix, path))
1007 getdata = lambda: util.readfile(os.path.join(prefix, path))
1008
1008
1009 write(f, 'x' in ff and 0o755 or 0o644, 'l' in ff, getdata)
1009 write(f, 'x' in ff and 0o755 or 0o644, 'l' in ff, getdata)
1010
1010
1011 for subpath in sorted(ctx.substate):
1011 for subpath in sorted(ctx.substate):
1012 sub = ctx.workingsub(subpath)
1012 sub = ctx.workingsub(subpath)
1013 submatch = match_.subdirmatcher(subpath, match)
1013 submatch = match_.subdirmatcher(subpath, match)
1014 sub._repo.lfstatus = True
1014 sub._repo.lfstatus = True
1015 sub.archive(archiver, prefix + repo._path + '/', submatch)
1015 sub.archive(archiver, prefix + repo._path + '/', submatch)
1016
1016
1017 # If a largefile is modified, the change is not reflected in its
1017 # If a largefile is modified, the change is not reflected in its
1018 # standin until a commit. cmdutil.bailifchanged() raises an exception
1018 # standin until a commit. cmdutil.bailifchanged() raises an exception
1019 # if the repo has uncommitted changes. Wrap it to also check if
1019 # if the repo has uncommitted changes. Wrap it to also check if
1020 # largefiles were changed. This is used by bisect, backout and fetch.
1020 # largefiles were changed. This is used by bisect, backout and fetch.
1021 def overridebailifchanged(orig, repo, *args, **kwargs):
1021 def overridebailifchanged(orig, repo, *args, **kwargs):
1022 orig(repo, *args, **kwargs)
1022 orig(repo, *args, **kwargs)
1023 repo.lfstatus = True
1023 repo.lfstatus = True
1024 s = repo.status()
1024 s = repo.status()
1025 repo.lfstatus = False
1025 repo.lfstatus = False
1026 if s.modified or s.added or s.removed or s.deleted:
1026 if s.modified or s.added or s.removed or s.deleted:
1027 raise error.Abort(_('uncommitted changes'))
1027 raise error.Abort(_('uncommitted changes'))
1028
1028
1029 def postcommitstatus(orig, repo, *args, **kwargs):
1029 def postcommitstatus(orig, repo, *args, **kwargs):
1030 repo.lfstatus = True
1030 repo.lfstatus = True
1031 try:
1031 try:
1032 return orig(repo, *args, **kwargs)
1032 return orig(repo, *args, **kwargs)
1033 finally:
1033 finally:
1034 repo.lfstatus = False
1034 repo.lfstatus = False
1035
1035
1036 def cmdutilforget(orig, ui, repo, match, prefix, explicitonly):
1036 def cmdutilforget(orig, ui, repo, match, prefix, explicitonly):
1037 normalmatcher = composenormalfilematcher(match, repo[None].manifest())
1037 normalmatcher = composenormalfilematcher(match, repo[None].manifest())
1038 bad, forgot = orig(ui, repo, normalmatcher, prefix, explicitonly)
1038 bad, forgot = orig(ui, repo, normalmatcher, prefix, explicitonly)
1039 m = composelargefilematcher(match, repo[None].manifest())
1039 m = composelargefilematcher(match, repo[None].manifest())
1040
1040
1041 try:
1041 try:
1042 repo.lfstatus = True
1042 repo.lfstatus = True
1043 s = repo.status(match=m, clean=True)
1043 s = repo.status(match=m, clean=True)
1044 finally:
1044 finally:
1045 repo.lfstatus = False
1045 repo.lfstatus = False
1046 forget = sorted(s.modified + s.added + s.deleted + s.clean)
1046 forget = sorted(s.modified + s.added + s.deleted + s.clean)
1047 forget = [f for f in forget if lfutil.standin(f) in repo[None].manifest()]
1047 forget = [f for f in forget if lfutil.standin(f) in repo[None].manifest()]
1048
1048
1049 for f in forget:
1049 for f in forget:
1050 if lfutil.standin(f) not in repo.dirstate and not \
1050 if lfutil.standin(f) not in repo.dirstate and not \
1051 repo.wvfs.isdir(lfutil.standin(f)):
1051 repo.wvfs.isdir(lfutil.standin(f)):
1052 ui.warn(_('not removing %s: file is already untracked\n')
1052 ui.warn(_('not removing %s: file is already untracked\n')
1053 % m.rel(f))
1053 % m.rel(f))
1054 bad.append(f)
1054 bad.append(f)
1055
1055
1056 for f in forget:
1056 for f in forget:
1057 if ui.verbose or not m.exact(f):
1057 if ui.verbose or not m.exact(f):
1058 ui.status(_('removing %s\n') % m.rel(f))
1058 ui.status(_('removing %s\n') % m.rel(f))
1059
1059
1060 # Need to lock because standin files are deleted then removed from the
1060 # Need to lock because standin files are deleted then removed from the
1061 # repository and we could race in-between.
1061 # repository and we could race in-between.
1062 with repo.wlock():
1062 with repo.wlock():
1063 lfdirstate = lfutil.openlfdirstate(ui, repo)
1063 lfdirstate = lfutil.openlfdirstate(ui, repo)
1064 for f in forget:
1064 for f in forget:
1065 if lfdirstate[f] == 'a':
1065 if lfdirstate[f] == 'a':
1066 lfdirstate.drop(f)
1066 lfdirstate.drop(f)
1067 else:
1067 else:
1068 lfdirstate.remove(f)
1068 lfdirstate.remove(f)
1069 lfdirstate.write()
1069 lfdirstate.write()
1070 standins = [lfutil.standin(f) for f in forget]
1070 standins = [lfutil.standin(f) for f in forget]
1071 for f in standins:
1071 for f in standins:
1072 util.unlinkpath(repo.wjoin(f), ignoremissing=True)
1072 util.unlinkpath(repo.wjoin(f), ignoremissing=True)
1073 rejected = repo[None].forget(standins)
1073 rejected = repo[None].forget(standins)
1074
1074
1075 bad.extend(f for f in rejected if f in m.files())
1075 bad.extend(f for f in rejected if f in m.files())
1076 forgot.extend(f for f in forget if f not in rejected)
1076 forgot.extend(f for f in forget if f not in rejected)
1077 return bad, forgot
1077 return bad, forgot
1078
1078
1079 def _getoutgoings(repo, other, missing, addfunc):
1079 def _getoutgoings(repo, other, missing, addfunc):
1080 """get pairs of filename and largefile hash in outgoing revisions
1080 """get pairs of filename and largefile hash in outgoing revisions
1081 in 'missing'.
1081 in 'missing'.
1082
1082
1083 largefiles already existing on 'other' repository are ignored.
1083 largefiles already existing on 'other' repository are ignored.
1084
1084
1085 'addfunc' is invoked with each unique pair of filename and
1085 'addfunc' is invoked with each unique pair of filename and
1086 largefile hash value.
1086 largefile hash value.
1087 """
1087 """
1088 knowns = set()
1088 knowns = set()
1089 lfhashes = set()
1089 lfhashes = set()
1090 def dedup(fn, lfhash):
1090 def dedup(fn, lfhash):
1091 k = (fn, lfhash)
1091 k = (fn, lfhash)
1092 if k not in knowns:
1092 if k not in knowns:
1093 knowns.add(k)
1093 knowns.add(k)
1094 lfhashes.add(lfhash)
1094 lfhashes.add(lfhash)
1095 lfutil.getlfilestoupload(repo, missing, dedup)
1095 lfutil.getlfilestoupload(repo, missing, dedup)
1096 if lfhashes:
1096 if lfhashes:
1097 lfexists = basestore._openstore(repo, other).exists(lfhashes)
1097 lfexists = basestore._openstore(repo, other).exists(lfhashes)
1098 for fn, lfhash in knowns:
1098 for fn, lfhash in knowns:
1099 if not lfexists[lfhash]: # lfhash doesn't exist on "other"
1099 if not lfexists[lfhash]: # lfhash doesn't exist on "other"
1100 addfunc(fn, lfhash)
1100 addfunc(fn, lfhash)
1101
1101
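# Illustration (not part of the extension): _getoutgoings() deduplicates
# (filename, hash) pairs before asking the remote store which hashes already
# exist, and only reports the pairs the other side is missing. A self-contained
# sketch of that dedup-and-filter pattern, using plain containers instead of
# Mercurial objects; the names below are hypothetical.

def _dedupoutgoing(pairs, remotehashes, addfunc):
    knowns = set()
    lfhashes = set()
    for fn, lfhash in pairs:
        if (fn, lfhash) not in knowns:
            knowns.add((fn, lfhash))
            lfhashes.add(lfhash)
    # only report pairs whose hash the remote side does not already have
    for fn, lfhash in knowns:
        if lfhash not in remotehashes:
            addfunc(fn, lfhash)

collected = []
_dedupoutgoing([('a.bin', 'h1'), ('a.bin', 'h1'), ('b.bin', 'h2')],
               remotehashes={'h2'},
               addfunc=lambda fn, h: collected.append((fn, h)))
assert collected == [('a.bin', 'h1')]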
1102 def outgoinghook(ui, repo, other, opts, missing):
1102 def outgoinghook(ui, repo, other, opts, missing):
1103 if opts.pop('large', None):
1103 if opts.pop('large', None):
1104 lfhashes = set()
1104 lfhashes = set()
1105 if ui.debugflag:
1105 if ui.debugflag:
1106 toupload = {}
1106 toupload = {}
1107 def addfunc(fn, lfhash):
1107 def addfunc(fn, lfhash):
1108 if fn not in toupload:
1108 if fn not in toupload:
1109 toupload[fn] = []
1109 toupload[fn] = []
1110 toupload[fn].append(lfhash)
1110 toupload[fn].append(lfhash)
1111 lfhashes.add(lfhash)
1111 lfhashes.add(lfhash)
1112 def showhashes(fn):
1112 def showhashes(fn):
1113 for lfhash in sorted(toupload[fn]):
1113 for lfhash in sorted(toupload[fn]):
1114 ui.debug(' %s\n' % (lfhash))
1114 ui.debug(' %s\n' % (lfhash))
1115 else:
1115 else:
1116 toupload = set()
1116 toupload = set()
1117 def addfunc(fn, lfhash):
1117 def addfunc(fn, lfhash):
1118 toupload.add(fn)
1118 toupload.add(fn)
1119 lfhashes.add(lfhash)
1119 lfhashes.add(lfhash)
1120 def showhashes(fn):
1120 def showhashes(fn):
1121 pass
1121 pass
1122 _getoutgoings(repo, other, missing, addfunc)
1122 _getoutgoings(repo, other, missing, addfunc)
1123
1123
1124 if not toupload:
1124 if not toupload:
1125 ui.status(_('largefiles: no files to upload\n'))
1125 ui.status(_('largefiles: no files to upload\n'))
1126 else:
1126 else:
1127 ui.status(_('largefiles to upload (%d entities):\n')
1127 ui.status(_('largefiles to upload (%d entities):\n')
1128 % (len(lfhashes)))
1128 % (len(lfhashes)))
1129 for file in sorted(toupload):
1129 for file in sorted(toupload):
1130 ui.status(lfutil.splitstandin(file) + '\n')
1130 ui.status(lfutil.splitstandin(file) + '\n')
1131 showhashes(file)
1131 showhashes(file)
1132 ui.status('\n')
1132 ui.status('\n')
1133
1133
1134 def summaryremotehook(ui, repo, opts, changes):
1134 def summaryremotehook(ui, repo, opts, changes):
1135 largeopt = opts.get('large', False)
1135 largeopt = opts.get('large', False)
1136 if changes is None:
1136 if changes is None:
1137 if largeopt:
1137 if largeopt:
1138 return (False, True) # only outgoing check is needed
1138 return (False, True) # only outgoing check is needed
1139 else:
1139 else:
1140 return (False, False)
1140 return (False, False)
1141 elif largeopt:
1141 elif largeopt:
1142 url, branch, peer, outgoing = changes[1]
1142 url, branch, peer, outgoing = changes[1]
1143 if peer is None:
1143 if peer is None:
1144 # i18n: column positioning for "hg summary"
1144 # i18n: column positioning for "hg summary"
1145 ui.status(_('largefiles: (no remote repo)\n'))
1145 ui.status(_('largefiles: (no remote repo)\n'))
1146 return
1146 return
1147
1147
1148 toupload = set()
1148 toupload = set()
1149 lfhashes = set()
1149 lfhashes = set()
1150 def addfunc(fn, lfhash):
1150 def addfunc(fn, lfhash):
1151 toupload.add(fn)
1151 toupload.add(fn)
1152 lfhashes.add(lfhash)
1152 lfhashes.add(lfhash)
1153 _getoutgoings(repo, peer, outgoing.missing, addfunc)
1153 _getoutgoings(repo, peer, outgoing.missing, addfunc)
1154
1154
1155 if not toupload:
1155 if not toupload:
1156 # i18n: column positioning for "hg summary"
1156 # i18n: column positioning for "hg summary"
1157 ui.status(_('largefiles: (no files to upload)\n'))
1157 ui.status(_('largefiles: (no files to upload)\n'))
1158 else:
1158 else:
1159 # i18n: column positioning for "hg summary"
1159 # i18n: column positioning for "hg summary"
1160 ui.status(_('largefiles: %d entities for %d files to upload\n')
1160 ui.status(_('largefiles: %d entities for %d files to upload\n')
1161 % (len(lfhashes), len(toupload)))
1161 % (len(lfhashes), len(toupload)))
1162
1162
1163 def overridesummary(orig, ui, repo, *pats, **opts):
1163 def overridesummary(orig, ui, repo, *pats, **opts):
1164 try:
1164 try:
1165 repo.lfstatus = True
1165 repo.lfstatus = True
1166 orig(ui, repo, *pats, **opts)
1166 orig(ui, repo, *pats, **opts)
1167 finally:
1167 finally:
1168 repo.lfstatus = False
1168 repo.lfstatus = False
1169
1169
1170 def scmutiladdremove(orig, repo, matcher, prefix, opts=None, dry_run=None,
1170 def scmutiladdremove(orig, repo, matcher, prefix, opts=None, dry_run=None,
1171 similarity=None):
1171 similarity=None):
1172 if opts is None:
1172 if opts is None:
1173 opts = {}
1173 opts = {}
1174 if not lfutil.islfilesrepo(repo):
1174 if not lfutil.islfilesrepo(repo):
1175 return orig(repo, matcher, prefix, opts, dry_run, similarity)
1175 return orig(repo, matcher, prefix, opts, dry_run, similarity)
1176 # Get the list of missing largefiles so we can remove them
1176 # Get the list of missing largefiles so we can remove them
1177 lfdirstate = lfutil.openlfdirstate(repo.ui, repo)
1177 lfdirstate = lfutil.openlfdirstate(repo.ui, repo)
1178 unsure, s = lfdirstate.status(match_.always(repo.root, repo.getcwd()), [],
1178 unsure, s = lfdirstate.status(match_.always(repo.root, repo.getcwd()), [],
1179 False, False, False)
1179 False, False, False)
1180
1180
1181 # Call into the normal remove code, but leave the removal of the standin
1181 # Call into the normal remove code, but leave the removal of the standin
1182 # to the original addremove. Monkey patching here makes sure
1182 # to the original addremove. Monkey patching here makes sure
1183 # we don't remove the standin in the largefiles code, preventing a very
1183 # we don't remove the standin in the largefiles code, preventing a very
1184 # confused state later.
1184 # confused state later.
1185 if s.deleted:
1185 if s.deleted:
1186 m = copy.copy(matcher)
1186 m = copy.copy(matcher)
1187
1187
1188 # The m._files and m._map attributes are not changed to the deleted list
1188 # The m._files and m._map attributes are not changed to the deleted list
1189 # because that affects the m.exact() test, which in turn governs whether
1189 # because that affects the m.exact() test, which in turn governs whether
1190 # or not the file name is printed, and how. Simply limit the original
1190 # or not the file name is printed, and how. Simply limit the original
1191 # matches to those in the deleted status list.
1191 # matches to those in the deleted status list.
1192 matchfn = m.matchfn
1192 matchfn = m.matchfn
1193 m.matchfn = lambda f: f in s.deleted and matchfn(f)
1193 m.matchfn = lambda f: f in s.deleted and matchfn(f)
1194
1194
1195 removelargefiles(repo.ui, repo, True, m, **opts)
1195 removelargefiles(repo.ui, repo, True, m, **opts)
1196 # Call into the normal add code, and any files that *should* be added as
1196 # Call into the normal add code, and any files that *should* be added as
1197 # largefiles will be
1197 # largefiles will be
1198 added, bad = addlargefiles(repo.ui, repo, True, matcher, **opts)
1198 added, bad = addlargefiles(repo.ui, repo, True, matcher, **opts)
1199 # Now that we've handled largefiles, hand off to the original addremove
1199 # Now that we've handled largefiles, hand off to the original addremove
1200 # function to take care of the rest. Make sure it doesn't do anything with
1200 # function to take care of the rest. Make sure it doesn't do anything with
1201 # largefiles by passing a matcher that will ignore them.
1201 # largefiles by passing a matcher that will ignore them.
1202 matcher = composenormalfilematcher(matcher, repo[None].manifest(), added)
1202 matcher = composenormalfilematcher(matcher, repo[None].manifest(), added)
1203 return orig(repo, matcher, prefix, opts, dry_run, similarity)
1203 return orig(repo, matcher, prefix, opts, dry_run, similarity)
1204
1204
1205 # Calling purge with --all will cause the largefiles to be deleted.
1205 # Calling purge with --all will cause the largefiles to be deleted.
1206 # Override repo.status to prevent this from happening.
1206 # Override repo.status to prevent this from happening.
1207 def overridepurge(orig, ui, repo, *dirs, **opts):
1207 def overridepurge(orig, ui, repo, *dirs, **opts):
1208 # XXX Monkey patching a repoview will not work. The assigned attribute will
1208 # XXX Monkey patching a repoview will not work. The assigned attribute will
1209 # be set on the unfiltered repo, but we will only lookup attributes in the
1209 # be set on the unfiltered repo, but we will only lookup attributes in the
1210 # unfiltered repo if the lookup in the repoview object itself fails. As the
1210 # unfiltered repo if the lookup in the repoview object itself fails. As the
1211 # monkey patched method exists on the repoview class the lookup will not
1211 # monkey patched method exists on the repoview class the lookup will not
1212 # fail. As a result, the original version will shadow the monkey patched
1212 # fail. As a result, the original version will shadow the monkey patched
1213 # one, defeating the monkey patch.
1213 # one, defeating the monkey patch.
1214 #
1214 #
1215 # As a work around we use an unfiltered repo here. We should do something
1215 # As a work around we use an unfiltered repo here. We should do something
1216 # cleaner instead.
1216 # cleaner instead.
1217 repo = repo.unfiltered()
1217 repo = repo.unfiltered()
1218 oldstatus = repo.status
1218 oldstatus = repo.status
1219 def overridestatus(node1='.', node2=None, match=None, ignored=False,
1219 def overridestatus(node1='.', node2=None, match=None, ignored=False,
1220 clean=False, unknown=False, listsubrepos=False):
1220 clean=False, unknown=False, listsubrepos=False):
1221 r = oldstatus(node1, node2, match, ignored, clean, unknown,
1221 r = oldstatus(node1, node2, match, ignored, clean, unknown,
1222 listsubrepos)
1222 listsubrepos)
1223 lfdirstate = lfutil.openlfdirstate(ui, repo)
1223 lfdirstate = lfutil.openlfdirstate(ui, repo)
1224 unknown = [f for f in r.unknown if lfdirstate[f] == '?']
1224 unknown = [f for f in r.unknown if lfdirstate[f] == '?']
1225 ignored = [f for f in r.ignored if lfdirstate[f] == '?']
1225 ignored = [f for f in r.ignored if lfdirstate[f] == '?']
1226 return scmutil.status(r.modified, r.added, r.removed, r.deleted,
1226 return scmutil.status(r.modified, r.added, r.removed, r.deleted,
1227 unknown, ignored, r.clean)
1227 unknown, ignored, r.clean)
1228 repo.status = overridestatus
1228 repo.status = overridestatus
1229 orig(ui, repo, *dirs, **opts)
1229 orig(ui, repo, *dirs, **opts)
1230 repo.status = oldstatus
1230 repo.status = oldstatus
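# Illustration (not part of the extension): why monkey patching repo.status on
# a repoview would not take effect, per the XXX note above. A repoview forwards
# attribute writes to the underlying (unfiltered) repo but only falls back to
# it on reads when its own class lacks the attribute, so a method defined on
# the view class keeps shadowing the patched one. The toy classes below
# (hypothetical names, not Mercurial's real repoview) reproduce that lookup.

class _unfilteredrepo(object):
    pass

class _repoview(object):
    def __init__(self, unfiltered):
        object.__setattr__(self, '_unfiltered', unfiltered)
    def status(self):
        return 'class method wins'
    def __getattr__(self, name):
        # only reached when the attribute is missing on the view itself
        return getattr(self._unfiltered, name)
    def __setattr__(self, name, value):
        # writes land on the unfiltered repo, as in repoview
        setattr(self._unfiltered, name, value)

view = _repoview(_unfilteredrepo())
view.status = lambda: 'patched'               # stored on the unfiltered repo...
assert view.status() == 'class method wins'   # ...but never consulted on lookup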
1231 def overriderollback(orig, ui, repo, **opts):
1231 def overriderollback(orig, ui, repo, **opts):
1232 with repo.wlock():
1232 with repo.wlock():
1233 before = repo.dirstate.parents()
1233 before = repo.dirstate.parents()
1234 orphans = set(f for f in repo.dirstate
1234 orphans = set(f for f in repo.dirstate
1235 if lfutil.isstandin(f) and repo.dirstate[f] != 'r')
1235 if lfutil.isstandin(f) and repo.dirstate[f] != 'r')
1236 result = orig(ui, repo, **opts)
1236 result = orig(ui, repo, **opts)
1237 after = repo.dirstate.parents()
1237 after = repo.dirstate.parents()
1238 if before == after:
1238 if before == after:
1239 return result # no need to restore standins
1239 return result # no need to restore standins
1240
1240
1241 pctx = repo['.']
1241 pctx = repo['.']
1242 for f in repo.dirstate:
1242 for f in repo.dirstate:
1243 if lfutil.isstandin(f):
1243 if lfutil.isstandin(f):
1244 orphans.discard(f)
1244 orphans.discard(f)
1245 if repo.dirstate[f] == 'r':
1245 if repo.dirstate[f] == 'r':
1246 repo.wvfs.unlinkpath(f, ignoremissing=True)
1246 repo.wvfs.unlinkpath(f, ignoremissing=True)
1247 elif f in pctx:
1247 elif f in pctx:
1248 fctx = pctx[f]
1248 fctx = pctx[f]
1249 repo.wwrite(f, fctx.data(), fctx.flags())
1249 repo.wwrite(f, fctx.data(), fctx.flags())
1250 else:
1250 else:
1251 # content of standin is not so important in 'a',
1251 # content of standin is not so important in 'a',
1252 # 'm' or 'n' (coming from the 2nd parent) cases
1252 # 'm' or 'n' (coming from the 2nd parent) cases
1253 lfutil.writestandin(repo, f, '', False)
1253 lfutil.writestandin(repo, f, '', False)
1254 for standin in orphans:
1254 for standin in orphans:
1255 repo.wvfs.unlinkpath(standin, ignoremissing=True)
1255 repo.wvfs.unlinkpath(standin, ignoremissing=True)
1256
1256
1257 lfdirstate = lfutil.openlfdirstate(ui, repo)
1257 lfdirstate = lfutil.openlfdirstate(ui, repo)
1258 orphans = set(lfdirstate)
1258 orphans = set(lfdirstate)
1259 lfiles = lfutil.listlfiles(repo)
1259 lfiles = lfutil.listlfiles(repo)
1260 for file in lfiles:
1260 for file in lfiles:
1261 lfutil.synclfdirstate(repo, lfdirstate, file, True)
1261 lfutil.synclfdirstate(repo, lfdirstate, file, True)
1262 orphans.discard(file)
1262 orphans.discard(file)
1263 for lfile in orphans:
1263 for lfile in orphans:
1264 lfdirstate.drop(lfile)
1264 lfdirstate.drop(lfile)
1265 lfdirstate.write()
1265 lfdirstate.write()
1266 return result
1266 return result
1267
1267
1268 def overridetransplant(orig, ui, repo, *revs, **opts):
1268 def overridetransplant(orig, ui, repo, *revs, **opts):
1269 resuming = opts.get('continue')
1269 resuming = opts.get('continue')
1270 repo._lfcommithooks.append(lfutil.automatedcommithook(resuming))
1270 repo._lfcommithooks.append(lfutil.automatedcommithook(resuming))
1271 repo._lfstatuswriters.append(lambda *msg, **opts: None)
1271 repo._lfstatuswriters.append(lambda *msg, **opts: None)
1272 try:
1272 try:
1273 result = orig(ui, repo, *revs, **opts)
1273 result = orig(ui, repo, *revs, **opts)
1274 finally:
1274 finally:
1275 repo._lfstatuswriters.pop()
1275 repo._lfstatuswriters.pop()
1276 repo._lfcommithooks.pop()
1276 repo._lfcommithooks.pop()
1277 return result
1277 return result
1278
1278
1279 def overridecat(orig, ui, repo, file1, *pats, **opts):
1279 def overridecat(orig, ui, repo, file1, *pats, **opts):
1280 ctx = scmutil.revsingle(repo, opts.get('rev'))
1280 ctx = scmutil.revsingle(repo, opts.get('rev'))
1281 err = 1
1281 err = 1
1282 notbad = set()
1282 notbad = set()
1283 m = scmutil.match(ctx, (file1,) + pats, opts)
1283 m = scmutil.match(ctx, (file1,) + pats, opts)
1284 origmatchfn = m.matchfn
1284 origmatchfn = m.matchfn
1285 def lfmatchfn(f):
1285 def lfmatchfn(f):
1286 if origmatchfn(f):
1286 if origmatchfn(f):
1287 return True
1287 return True
1288 lf = lfutil.splitstandin(f)
1288 lf = lfutil.splitstandin(f)
1289 if lf is None:
1289 if lf is None:
1290 return False
1290 return False
1291 notbad.add(lf)
1291 notbad.add(lf)
1292 return origmatchfn(lf)
1292 return origmatchfn(lf)
1293 m.matchfn = lfmatchfn
1293 m.matchfn = lfmatchfn
1294 origbadfn = m.bad
1294 origbadfn = m.bad
1295 def lfbadfn(f, msg):
1295 def lfbadfn(f, msg):
1296 if not f in notbad:
1296 if not f in notbad:
1297 origbadfn(f, msg)
1297 origbadfn(f, msg)
1298 m.bad = lfbadfn
1298 m.bad = lfbadfn
1299
1299
1300 origvisitdirfn = m.visitdir
1300 origvisitdirfn = m.visitdir
1301 def lfvisitdirfn(dir):
1301 def lfvisitdirfn(dir):
1302 if dir == lfutil.shortname:
1302 if dir == lfutil.shortname:
1303 return True
1303 return True
1304 ret = origvisitdirfn(dir)
1304 ret = origvisitdirfn(dir)
1305 if ret:
1305 if ret:
1306 return ret
1306 return ret
1307 lf = lfutil.splitstandin(dir)
1307 lf = lfutil.splitstandin(dir)
1308 if lf is None:
1308 if lf is None:
1309 return False
1309 return False
1310 return origvisitdirfn(lf)
1310 return origvisitdirfn(lf)
1311 m.visitdir = lfvisitdirfn
1311 m.visitdir = lfvisitdirfn
1312
1312
1313 for f in ctx.walk(m):
1313 for f in ctx.walk(m):
1314 fp = cmdutil.makefileobj(repo, opts.get('output'), ctx.node(),
1314 fp = cmdutil.makefileobj(repo, opts.get('output'), ctx.node(),
1315 pathname=f)
1315 pathname=f)
1316 lf = lfutil.splitstandin(f)
1316 lf = lfutil.splitstandin(f)
1317 if lf is None or origmatchfn(f):
1317 if lf is None or origmatchfn(f):
1318 # duplicating unreachable code from commands.cat
1318 # duplicating unreachable code from commands.cat
1319 data = ctx[f].data()
1319 data = ctx[f].data()
1320 if opts.get('decode'):
1320 if opts.get('decode'):
1321 data = repo.wwritedata(f, data)
1321 data = repo.wwritedata(f, data)
1322 fp.write(data)
1322 fp.write(data)
1323 else:
1323 else:
1324 hash = lfutil.readstandin(repo, lf, ctx.rev())
1324 hash = lfutil.readstandin(repo, lf, ctx.rev())
1325 if not lfutil.inusercache(repo.ui, hash):
1325 if not lfutil.inusercache(repo.ui, hash):
1326 store = basestore._openstore(repo)
1326 store = basestore._openstore(repo)
1327 success, missing = store.get([(lf, hash)])
1327 success, missing = store.get([(lf, hash)])
1328 if len(success) != 1:
1328 if len(success) != 1:
1329 raise error.Abort(
1329 raise error.Abort(
1330 _('largefile %s is not in cache and could not be '
1330 _('largefile %s is not in cache and could not be '
1331 'downloaded') % lf)
1331 'downloaded') % lf)
1332 path = lfutil.usercachepath(repo.ui, hash)
1332 path = lfutil.usercachepath(repo.ui, hash)
1333 fpin = open(path, "rb")
1333 fpin = open(path, "rb")
1334 for chunk in util.filechunkiter(fpin, 128 * 1024):
1334 for chunk in util.filechunkiter(fpin, 128 * 1024):
1335 fp.write(chunk)
1335 fp.write(chunk)
1336 fpin.close()
1336 fpin.close()
1337 fp.close()
1337 fp.close()
1338 err = 0
1338 err = 0
1339 return err
1339 return err
1340
1340
1341 def mergeupdate(orig, repo, node, branchmerge, force,
1341 def mergeupdate(orig, repo, node, branchmerge, force,
1342 *args, **kwargs):
1342 *args, **kwargs):
1343 matcher = kwargs.get('matcher', None)
1343 matcher = kwargs.get('matcher', None)
1344 # note if this is a partial update
1344 # note if this is a partial update
1345 partial = matcher and not matcher.always()
1345 partial = matcher and not matcher.always()
1346 with repo.wlock():
1346 with repo.wlock():
1347 # branch | | |
1347 # branch | | |
1348 # merge | force | partial | action
1348 # merge | force | partial | action
1349 # -------+-------+---------+--------------
1349 # -------+-------+---------+--------------
1350 # x | x | x | linear-merge
1350 # x | x | x | linear-merge
1351 # o | x | x | branch-merge
1351 # o | x | x | branch-merge
1352 # x | o | x | overwrite (as clean update)
1352 # x | o | x | overwrite (as clean update)
1353 # o | o | x | force-branch-merge (*1)
1353 # o | o | x | force-branch-merge (*1)
1354 # x | x | o | (*)
1354 # x | x | o | (*)
1355 # o | x | o | (*)
1355 # o | x | o | (*)
1356 # x | o | o | overwrite (as revert)
1356 # x | o | o | overwrite (as revert)
1357 # o | o | o | (*)
1357 # o | o | o | (*)
1358 #
1358 #
1359 # (*) don't care
1359 # (*) don't care
1360 # (*1) deprecated, but used internally (e.g: "rebase --collapse")
1360 # (*1) deprecated, but used internally (e.g: "rebase --collapse")
1361
1361
1362 lfdirstate = lfutil.openlfdirstate(repo.ui, repo)
1362 lfdirstate = lfutil.openlfdirstate(repo.ui, repo)
1363 unsure, s = lfdirstate.status(match_.always(repo.root,
1363 unsure, s = lfdirstate.status(match_.always(repo.root,
1364 repo.getcwd()),
1364 repo.getcwd()),
1365 [], False, False, False)
1365 [], False, False, False)
1366 pctx = repo['.']
1366 pctx = repo['.']
1367 for lfile in unsure + s.modified:
1367 for lfile in unsure + s.modified:
1368 lfileabs = repo.wvfs.join(lfile)
1368 lfileabs = repo.wvfs.join(lfile)
1369 if not os.path.exists(lfileabs):
1369 if not os.path.exists(lfileabs):
1370 continue
1370 continue
1371 lfhash = lfutil.hashrepofile(repo, lfile)
1371 lfhash = lfutil.hashrepofile(repo, lfile)
1372 standin = lfutil.standin(lfile)
1372 standin = lfutil.standin(lfile)
1373 lfutil.writestandin(repo, standin, lfhash,
1373 lfutil.writestandin(repo, standin, lfhash,
1374 lfutil.getexecutable(lfileabs))
1374 lfutil.getexecutable(lfileabs))
1375 if (standin in pctx and
1375 if (standin in pctx and
1376 lfhash == lfutil.readstandin(repo, lfile, '.')):
1376 lfhash == lfutil.readstandin(repo, lfile, '.')):
1377 lfdirstate.normal(lfile)
1377 lfdirstate.normal(lfile)
1378 for lfile in s.added:
1378 for lfile in s.added:
1379 lfutil.updatestandin(repo, lfutil.standin(lfile))
1379 lfutil.updatestandin(repo, lfutil.standin(lfile))
1380 lfdirstate.write()
1380 lfdirstate.write()
1381
1381
1382 oldstandins = lfutil.getstandinsstate(repo)
1382 oldstandins = lfutil.getstandinsstate(repo)
1383
1383
1384 result = orig(repo, node, branchmerge, force, *args, **kwargs)
1384 result = orig(repo, node, branchmerge, force, *args, **kwargs)
1385
1385
1386 newstandins = lfutil.getstandinsstate(repo)
1386 newstandins = lfutil.getstandinsstate(repo)
1387 filelist = lfutil.getlfilestoupdate(oldstandins, newstandins)
1387 filelist = lfutil.getlfilestoupdate(oldstandins, newstandins)
1388 if branchmerge or force or partial:
1388 if branchmerge or force or partial:
1389 filelist.extend(s.deleted + s.removed)
1389 filelist.extend(s.deleted + s.removed)
1390
1390
1391 lfcommands.updatelfiles(repo.ui, repo, filelist=filelist,
1391 lfcommands.updatelfiles(repo.ui, repo, filelist=filelist,
1392 normallookup=partial)
1392 normallookup=partial)
1393
1393
1394 return result
1394 return result
1395
1395
1396 def scmutilmarktouched(orig, repo, files, *args, **kwargs):
1396 def scmutilmarktouched(orig, repo, files, *args, **kwargs):
1397 result = orig(repo, files, *args, **kwargs)
1397 result = orig(repo, files, *args, **kwargs)
1398
1398
1399 filelist = [lfutil.splitstandin(f) for f in files if lfutil.isstandin(f)]
1399 filelist = [lfutil.splitstandin(f) for f in files if lfutil.isstandin(f)]
1400 if filelist:
1400 if filelist:
1401 lfcommands.updatelfiles(repo.ui, repo, filelist=filelist,
1401 lfcommands.updatelfiles(repo.ui, repo, filelist=filelist,
1402 printmessage=False, normallookup=True)
1402 printmessage=False, normallookup=True)
1403
1403
1404 return result
1404 return result
@@ -1,175 +1,173 b''
1 # Copyright 2009-2010 Gregory P. Ward
1 # Copyright 2009-2010 Gregory P. Ward
2 # Copyright 2009-2010 Intelerad Medical Systems Incorporated
2 # Copyright 2009-2010 Intelerad Medical Systems Incorporated
3 # Copyright 2010-2011 Fog Creek Software
3 # Copyright 2010-2011 Fog Creek Software
4 # Copyright 2010-2011 Unity Technologies
4 # Copyright 2010-2011 Unity Technologies
5 #
5 #
6 # This software may be used and distributed according to the terms of the
6 # This software may be used and distributed according to the terms of the
7 # GNU General Public License version 2 or any later version.
7 # GNU General Public License version 2 or any later version.
8
8
9 '''setup for largefiles extension: uisetup'''
9 '''setup for largefiles extension: uisetup'''
10
10
11 from mercurial import archival, cmdutil, commands, extensions, filemerge, hg, \
11 from mercurial import archival, cmdutil, commands, extensions, filemerge, hg, \
12 httppeer, merge, scmutil, sshpeer, wireproto, subrepo, copies
12 httppeer, merge, scmutil, sshpeer, wireproto, subrepo, copies
13 from mercurial.i18n import _
13 from mercurial.i18n import _
14 from mercurial.hgweb import hgweb_mod, webcommands
14 from mercurial.hgweb import hgweb_mod, webcommands
15
15
16 import overrides
16 import overrides
17 import proto
17 import proto
18
18
19 def uisetup(ui):
19 def uisetup(ui):
20 # Disable auto-status for some commands which assume that all
20 # Disable auto-status for some commands which assume that all
21 # files in the result are under Mercurial's control
21 # files in the result are under Mercurial's control
22
22
23 entry = extensions.wrapcommand(commands.table, 'add',
23 entry = extensions.wrapcommand(commands.table, 'add',
24 overrides.overrideadd)
24 overrides.overrideadd)
25 addopt = [('', 'large', None, _('add as largefile')),
25 addopt = [('', 'large', None, _('add as largefile')),
26 ('', 'normal', None, _('add as normal file')),
26 ('', 'normal', None, _('add as normal file')),
27 ('', 'lfsize', '', _('add all files above this size '
27 ('', 'lfsize', '', _('add all files above this size '
28 '(in megabytes) as largefiles '
28 '(in megabytes) as largefiles '
29 '(default: 10)'))]
29 '(default: 10)'))]
30 entry[1].extend(addopt)
30 entry[1].extend(addopt)
31
31
32 # The scmutil function is called both by the (trivial) addremove command,
32 # The scmutil function is called both by the (trivial) addremove command,
33 # and in the process of handling commit -A (issue3542)
33 # and in the process of handling commit -A (issue3542)
34 entry = extensions.wrapfunction(scmutil, 'addremove',
34 entry = extensions.wrapfunction(scmutil, 'addremove',
35 overrides.scmutiladdremove)
35 overrides.scmutiladdremove)
36 extensions.wrapfunction(cmdutil, 'add', overrides.cmdutiladd)
36 extensions.wrapfunction(cmdutil, 'add', overrides.cmdutiladd)
37 extensions.wrapfunction(cmdutil, 'remove', overrides.cmdutilremove)
37 extensions.wrapfunction(cmdutil, 'remove', overrides.cmdutilremove)
38 extensions.wrapfunction(cmdutil, 'forget', overrides.cmdutilforget)
38 extensions.wrapfunction(cmdutil, 'forget', overrides.cmdutilforget)
39
39
40 extensions.wrapfunction(copies, 'pathcopies', overrides.copiespathcopies)
40 extensions.wrapfunction(copies, 'pathcopies', overrides.copiespathcopies)
41
41
42 # Subrepos call status function
42 # Subrepos call status function
43 entry = extensions.wrapcommand(commands.table, 'status',
43 entry = extensions.wrapcommand(commands.table, 'status',
44 overrides.overridestatus)
44 overrides.overridestatus)
45 entry = extensions.wrapfunction(subrepo.hgsubrepo, 'status',
45 entry = extensions.wrapfunction(subrepo.hgsubrepo, 'status',
46 overrides.overridestatusfn)
46 overrides.overridestatusfn)
47
47
48 entry = extensions.wrapcommand(commands.table, 'log',
48 entry = extensions.wrapcommand(commands.table, 'log',
49 overrides.overridelog)
49 overrides.overridelog)
50 entry = extensions.wrapcommand(commands.table, 'rollback',
50 entry = extensions.wrapcommand(commands.table, 'rollback',
51 overrides.overriderollback)
51 overrides.overriderollback)
52 entry = extensions.wrapcommand(commands.table, 'verify',
52 entry = extensions.wrapcommand(commands.table, 'verify',
53 overrides.overrideverify)
53 overrides.overrideverify)
54
54
55 verifyopt = [('', 'large', None,
55 verifyopt = [('', 'large', None,
56 _('verify that all largefiles in current revision exist')),
56 _('verify that all largefiles in current revision exist')),
57 ('', 'lfa', None,
57 ('', 'lfa', None,
58 _('verify largefiles in all revisions, not just current')),
58 _('verify largefiles in all revisions, not just current')),
59 ('', 'lfc', None,
59 ('', 'lfc', None,
60 _('verify local largefile contents, not just existence'))]
60 _('verify local largefile contents, not just existence'))]
61 entry[1].extend(verifyopt)
61 entry[1].extend(verifyopt)
62
62
63 entry = extensions.wrapcommand(commands.table, 'debugstate',
63 entry = extensions.wrapcommand(commands.table, 'debugstate',
64 overrides.overridedebugstate)
64 overrides.overridedebugstate)
65 debugstateopt = [('', 'large', None, _('display largefiles dirstate'))]
65 debugstateopt = [('', 'large', None, _('display largefiles dirstate'))]
66 entry[1].extend(debugstateopt)
66 entry[1].extend(debugstateopt)
67
67
68 outgoing = lambda orgfunc, *arg, **kwargs: orgfunc(*arg, **kwargs)
68 outgoing = lambda orgfunc, *arg, **kwargs: orgfunc(*arg, **kwargs)
69 entry = extensions.wrapcommand(commands.table, 'outgoing', outgoing)
69 entry = extensions.wrapcommand(commands.table, 'outgoing', outgoing)
70 outgoingopt = [('', 'large', None, _('display outgoing largefiles'))]
70 outgoingopt = [('', 'large', None, _('display outgoing largefiles'))]
71 entry[1].extend(outgoingopt)
71 entry[1].extend(outgoingopt)
72 cmdutil.outgoinghooks.add('largefiles', overrides.outgoinghook)
72 cmdutil.outgoinghooks.add('largefiles', overrides.outgoinghook)
73 entry = extensions.wrapcommand(commands.table, 'summary',
73 entry = extensions.wrapcommand(commands.table, 'summary',
74 overrides.overridesummary)
74 overrides.overridesummary)
75 summaryopt = [('', 'large', None, _('display outgoing largefiles'))]
75 summaryopt = [('', 'large', None, _('display outgoing largefiles'))]
76 entry[1].extend(summaryopt)
76 entry[1].extend(summaryopt)
77 cmdutil.summaryremotehooks.add('largefiles', overrides.summaryremotehook)
77 cmdutil.summaryremotehooks.add('largefiles', overrides.summaryremotehook)
78
78
79 entry = extensions.wrapcommand(commands.table, 'pull',
79 entry = extensions.wrapcommand(commands.table, 'pull',
80 overrides.overridepull)
80 overrides.overridepull)
81 pullopt = [('', 'all-largefiles', None,
81 pullopt = [('', 'all-largefiles', None,
82 _('download all pulled versions of largefiles (DEPRECATED)')),
82 _('download all pulled versions of largefiles (DEPRECATED)')),
83 ('', 'lfrev', [],
83 ('', 'lfrev', [],
84 _('download largefiles for these revisions'), _('REV'))]
84 _('download largefiles for these revisions'), _('REV'))]
85 entry[1].extend(pullopt)
85 entry[1].extend(pullopt)
86
86
87 entry = extensions.wrapcommand(commands.table, 'clone',
87 entry = extensions.wrapcommand(commands.table, 'clone',
88 overrides.overrideclone)
88 overrides.overrideclone)
89 cloneopt = [('', 'all-largefiles', None,
89 cloneopt = [('', 'all-largefiles', None,
90 _('download all versions of all largefiles'))]
90 _('download all versions of all largefiles'))]
91 entry[1].extend(cloneopt)
91 entry[1].extend(cloneopt)
92 entry = extensions.wrapfunction(hg, 'clone', overrides.hgclone)
92 entry = extensions.wrapfunction(hg, 'clone', overrides.hgclone)
93
93
94 entry = extensions.wrapcommand(commands.table, 'cat',
94 entry = extensions.wrapcommand(commands.table, 'cat',
95 overrides.overridecat)
95 overrides.overridecat)
96 entry = extensions.wrapfunction(merge, '_checkunknownfile',
96 entry = extensions.wrapfunction(merge, '_checkunknownfile',
97 overrides.overridecheckunknownfile)
97 overrides.overridecheckunknownfile)
98 entry = extensions.wrapfunction(merge, 'calculateupdates',
98 entry = extensions.wrapfunction(merge, 'calculateupdates',
99 overrides.overridecalculateupdates)
99 overrides.overridecalculateupdates)
100 entry = extensions.wrapfunction(merge, 'recordupdates',
100 entry = extensions.wrapfunction(merge, 'recordupdates',
101 overrides.mergerecordupdates)
101 overrides.mergerecordupdates)
102 entry = extensions.wrapfunction(merge, 'update',
102 entry = extensions.wrapfunction(merge, 'update',
103 overrides.mergeupdate)
103 overrides.mergeupdate)
104 entry = extensions.wrapfunction(filemerge, '_filemerge',
104 entry = extensions.wrapfunction(filemerge, '_filemerge',
105 overrides.overridefilemerge)
105 overrides.overridefilemerge)
106 entry = extensions.wrapfunction(cmdutil, 'copy',
106 entry = extensions.wrapfunction(cmdutil, 'copy',
107 overrides.overridecopy)
107 overrides.overridecopy)
108
108
109 # Summary calls dirty on the subrepos
109 # Summary calls dirty on the subrepos
110 entry = extensions.wrapfunction(subrepo.hgsubrepo, 'dirty',
110 entry = extensions.wrapfunction(subrepo.hgsubrepo, 'dirty',
111 overrides.overridedirty)
111 overrides.overridedirty)
112
112
113 entry = extensions.wrapfunction(cmdutil, 'revert',
113 entry = extensions.wrapfunction(cmdutil, 'revert',
114 overrides.overriderevert)
114 overrides.overriderevert)
115
115
116 extensions.wrapcommand(commands.table, 'archive',
116 extensions.wrapcommand(commands.table, 'archive',
117 overrides.overridearchivecmd)
117 overrides.overridearchivecmd)
118 extensions.wrapfunction(archival, 'archive', overrides.overridearchive)
118 extensions.wrapfunction(archival, 'archive', overrides.overridearchive)
119 extensions.wrapfunction(subrepo.hgsubrepo, 'archive',
119 extensions.wrapfunction(subrepo.hgsubrepo, 'archive',
120 overrides.hgsubrepoarchive)
120 overrides.hgsubrepoarchive)
121 extensions.wrapfunction(webcommands, 'archive',
121 extensions.wrapfunction(webcommands, 'archive',
122 overrides.hgwebarchive)
122 overrides.hgwebarchive)
123 extensions.wrapfunction(cmdutil, 'bailifchanged',
123 extensions.wrapfunction(cmdutil, 'bailifchanged',
124 overrides.overridebailifchanged)
124 overrides.overridebailifchanged)
125
125
126 extensions.wrapfunction(cmdutil, 'postcommitstatus',
126 extensions.wrapfunction(cmdutil, 'postcommitstatus',
127 overrides.postcommitstatus)
127 overrides.postcommitstatus)
128 extensions.wrapfunction(scmutil, 'marktouched',
128 extensions.wrapfunction(scmutil, 'marktouched',
129 overrides.scmutilmarktouched)
129 overrides.scmutilmarktouched)
130
130
131 # create the new wireproto commands ...
131 # create the new wireproto commands ...
132 wireproto.commands['putlfile'] = (proto.putlfile, 'sha')
132 wireproto.commands['putlfile'] = (proto.putlfile, 'sha')
133 wireproto.commands['getlfile'] = (proto.getlfile, 'sha')
133 wireproto.commands['getlfile'] = (proto.getlfile, 'sha')
134 wireproto.commands['statlfile'] = (proto.statlfile, 'sha')
134 wireproto.commands['statlfile'] = (proto.statlfile, 'sha')
135
135
136 # ... and wrap some existing ones
136 # ... and wrap some existing ones
137 wireproto.commands['capabilities'] = (proto.capabilities, '')
137 wireproto.commands['capabilities'] = (proto.capabilities, '')
138 wireproto.commands['heads'] = (proto.heads, '')
138 wireproto.commands['heads'] = (proto.heads, '')
139 wireproto.commands['lheads'] = (wireproto.heads, '')
139 wireproto.commands['lheads'] = (wireproto.heads, '')
140
140
141 # make putlfile behave the same as push and {get,stat}lfile behave
141 # make putlfile behave the same as push and {get,stat}lfile behave
142 # the same as pull w.r.t. permissions checks
142 # the same as pull w.r.t. permissions checks
143 hgweb_mod.perms['putlfile'] = 'push'
143 hgweb_mod.perms['putlfile'] = 'push'
144 hgweb_mod.perms['getlfile'] = 'pull'
144 hgweb_mod.perms['getlfile'] = 'pull'
145 hgweb_mod.perms['statlfile'] = 'pull'
145 hgweb_mod.perms['statlfile'] = 'pull'
146
146
147 extensions.wrapfunction(webcommands, 'decodepath', overrides.decodepath)
147 extensions.wrapfunction(webcommands, 'decodepath', overrides.decodepath)
148
148
149 # the hello wireproto command uses wireproto.capabilities, so it won't see
149 # the hello wireproto command uses wireproto.capabilities, so it won't see
150 # our largefiles capability unless we replace the actual function as well.
150 # our largefiles capability unless we replace the actual function as well.
151 proto.capabilitiesorig = wireproto.capabilities
151 proto.capabilitiesorig = wireproto.capabilities
152 wireproto.capabilities = proto.capabilities
152 wireproto.capabilities = proto.capabilities
153
153
154 # can't do this in reposetup because it needs to have happened before
154 # can't do this in reposetup because it needs to have happened before
155 # wirerepo.__init__ is called
155 # wirerepo.__init__ is called
156 proto.ssholdcallstream = sshpeer.sshpeer._callstream
156 proto.ssholdcallstream = sshpeer.sshpeer._callstream
157 proto.httpoldcallstream = httppeer.httppeer._callstream
157 proto.httpoldcallstream = httppeer.httppeer._callstream
158 sshpeer.sshpeer._callstream = proto.sshrepocallstream
158 sshpeer.sshpeer._callstream = proto.sshrepocallstream
159 httppeer.httppeer._callstream = proto.httprepocallstream
159 httppeer.httppeer._callstream = proto.httprepocallstream
160
160
161 # override some other extensions' commands and functions as well
161 # override some other extensions' commands and functions as well
162 for name, module in extensions.extensions():
162 for name, module in extensions.extensions():
163 if name == 'purge':
163 if name == 'purge':
164 extensions.wrapcommand(getattr(module, 'cmdtable'), 'purge',
164 extensions.wrapcommand(getattr(module, 'cmdtable'), 'purge',
165 overrides.overridepurge)
165 overrides.overridepurge)
166 if name == 'rebase':
166 if name == 'rebase':
167 extensions.wrapcommand(getattr(module, 'cmdtable'), 'rebase',
167 extensions.wrapcommand(getattr(module, 'cmdtable'), 'rebase',
168 overrides.overriderebase)
168 overrides.overriderebase)
169 extensions.wrapfunction(module, 'rebase',
169 extensions.wrapfunction(module, 'rebase',
170 overrides.overriderebase)
170 overrides.overriderebase)
171 if name == 'transplant':
171 if name == 'transplant':
172 extensions.wrapcommand(getattr(module, 'cmdtable'), 'transplant',
172 extensions.wrapcommand(getattr(module, 'cmdtable'), 'transplant',
173 overrides.overridetransplant)
173 overrides.overridetransplant)
174
175 overrides.revsetpredicate.setup()
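The setup above leans on two Mercurial extension hooks used throughout the listing: extensions.wrapcommand, which wraps an entry in a command table and returns that entry so extra options can be appended to entry[1], and extensions.wrapfunction, which swaps a module-level function for a wrapper that receives the original callable as its first argument. A minimal sketch of the same pattern, with a hypothetical wrapper and flag that are not part of largefiles::

    from mercurial import commands, extensions
    from mercurial.i18n import _

    def examplestatus(orig, ui, repo, *pats, **opts):
        # run the real command first, then do extension-specific work
        result = orig(ui, repo, *pats, **opts)
        ui.note(_('example wrapper ran\n'))
        return result

    def uisetup(ui):
        # wrapcommand returns the command table entry; entry[1] is its option list
        entry = extensions.wrapcommand(commands.table, 'status', examplestatus)
        entry[1].append(('', 'example', None, _('hypothetical extra flag')))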
@@ -1,3586 +1,3585 b''
1 # mq.py - patch queues for mercurial
1 # mq.py - patch queues for mercurial
2 #
2 #
3 # Copyright 2005, 2006 Chris Mason <mason@suse.com>
3 # Copyright 2005, 2006 Chris Mason <mason@suse.com>
4 #
4 #
5 # This software may be used and distributed according to the terms of the
5 # This software may be used and distributed according to the terms of the
6 # GNU General Public License version 2 or any later version.
6 # GNU General Public License version 2 or any later version.
7
7
8 '''manage a stack of patches
8 '''manage a stack of patches
9
9
10 This extension lets you work with a stack of patches in a Mercurial
10 This extension lets you work with a stack of patches in a Mercurial
11 repository. It manages two stacks of patches - all known patches, and
11 repository. It manages two stacks of patches - all known patches, and
12 applied patches (subset of known patches).
12 applied patches (subset of known patches).
13
13
14 Known patches are represented as patch files in the .hg/patches
14 Known patches are represented as patch files in the .hg/patches
15 directory. Applied patches are both patch files and changesets.
15 directory. Applied patches are both patch files and changesets.
16
16
17 Common tasks (use :hg:`help command` for more details)::
17 Common tasks (use :hg:`help command` for more details)::
18
18
19 create new patch qnew
19 create new patch qnew
20 import existing patch qimport
20 import existing patch qimport
21
21
22 print patch series qseries
22 print patch series qseries
23 print applied patches qapplied
23 print applied patches qapplied
24
24
25 add known patch to applied stack qpush
25 add known patch to applied stack qpush
26 remove patch from applied stack qpop
26 remove patch from applied stack qpop
27 refresh contents of top applied patch qrefresh
27 refresh contents of top applied patch qrefresh
28
28
29 By default, mq will automatically use git patches when required to
29 By default, mq will automatically use git patches when required to
30 avoid losing file mode changes, copy records, binary files, or empty
30 avoid losing file mode changes, copy records, binary files, or empty
31 file creations or deletions. This behavior can be configured with::
31 file creations or deletions. This behavior can be configured with::
32
32
33 [mq]
33 [mq]
34 git = auto/keep/yes/no
34 git = auto/keep/yes/no
35
35
36 If set to 'keep', mq will obey the [diff] section configuration while
36 If set to 'keep', mq will obey the [diff] section configuration while
37 preserving existing git patches upon qrefresh. If set to 'yes' or
37 preserving existing git patches upon qrefresh. If set to 'yes' or
38 'no', mq will override the [diff] section and always generate git or
38 'no', mq will override the [diff] section and always generate git or
39 regular patches, possibly losing data in the second case.
39 regular patches, possibly losing data in the second case.
40
40
41 It may be desirable for mq changesets to be kept in the secret phase (see
41 It may be desirable for mq changesets to be kept in the secret phase (see
42 :hg:`help phases`), which can be enabled with the following setting::
42 :hg:`help phases`), which can be enabled with the following setting::
43
43
44 [mq]
44 [mq]
45 secret = True
45 secret = True
46
46
47 You will by default be managing a patch queue named "patches". You can
47 You will by default be managing a patch queue named "patches". You can
48 create other, independent patch queues with the :hg:`qqueue` command.
48 create other, independent patch queues with the :hg:`qqueue` command.
49
49
50 If the working directory contains uncommitted files, qpush, qpop and
50 If the working directory contains uncommitted files, qpush, qpop and
51 qgoto abort immediately. If -f/--force is used, the changes are
51 qgoto abort immediately. If -f/--force is used, the changes are
52 discarded. Setting::
52 discarded. Setting::
53
53
54 [mq]
54 [mq]
55 keepchanges = True
55 keepchanges = True
56
56
57 makes them behave as if --keep-changes were passed, and non-conflicting
57 makes them behave as if --keep-changes were passed, and non-conflicting
58 local changes will be tolerated and preserved. If incompatible options
58 local changes will be tolerated and preserved. If incompatible options
59 such as -f/--force or --exact are passed, this setting is ignored.
59 such as -f/--force or --exact are passed, this setting is ignored.
60
60
61 This extension used to provide a strip command. This command now lives
61 This extension used to provide a strip command. This command now lives
62 in the strip extension.
62 in the strip extension.
63 '''
63 '''
64
64
65 from mercurial.i18n import _
65 from mercurial.i18n import _
66 from mercurial.node import bin, hex, short, nullid, nullrev
66 from mercurial.node import bin, hex, short, nullid, nullrev
67 from mercurial.lock import release
67 from mercurial.lock import release
68 from mercurial import commands, cmdutil, hg, scmutil, util, revset
68 from mercurial import commands, cmdutil, hg, scmutil, util, revset
69 from mercurial import extensions, error, phases
69 from mercurial import extensions, error, phases
70 from mercurial import patch as patchmod
70 from mercurial import patch as patchmod
71 from mercurial import lock as lockmod
71 from mercurial import lock as lockmod
72 from mercurial import localrepo
72 from mercurial import localrepo
73 from mercurial import registrar
73 from mercurial import subrepo
74 from mercurial import subrepo
74 import os, re, errno, shutil
75 import os, re, errno, shutil
75
76
76 seriesopts = [('s', 'summary', None, _('print first line of patch header'))]
77 seriesopts = [('s', 'summary', None, _('print first line of patch header'))]
77
78
78 cmdtable = {}
79 cmdtable = {}
79 command = cmdutil.command(cmdtable)
80 command = cmdutil.command(cmdtable)
80 # Note for extension authors: ONLY specify testedwith = 'internal' for
81 # Note for extension authors: ONLY specify testedwith = 'internal' for
81 # extensions which SHIP WITH MERCURIAL. Non-mainline extensions should
82 # extensions which SHIP WITH MERCURIAL. Non-mainline extensions should
82 # be specifying the version(s) of Mercurial they are tested with, or
83 # be specifying the version(s) of Mercurial they are tested with, or
83 # leave the attribute unspecified.
84 # leave the attribute unspecified.
84 testedwith = 'internal'
85 testedwith = 'internal'
85
86
86 # force-load the strip extension formerly included in mq and import some utilities
87 # force-load the strip extension formerly included in mq and import some utilities
87 try:
88 try:
88 stripext = extensions.find('strip')
89 stripext = extensions.find('strip')
89 except KeyError:
90 except KeyError:
90 # note: load is lazy so we could avoid the try-except,
91 # note: load is lazy so we could avoid the try-except,
91 # but I (marmoute) prefer this explicit code.
92 # but I (marmoute) prefer this explicit code.
92 class dummyui(object):
93 class dummyui(object):
93 def debug(self, msg):
94 def debug(self, msg):
94 pass
95 pass
95 stripext = extensions.load(dummyui(), 'strip', '')
96 stripext = extensions.load(dummyui(), 'strip', '')
96
97
97 strip = stripext.strip
98 strip = stripext.strip
98 checksubstate = stripext.checksubstate
99 checksubstate = stripext.checksubstate
99 checklocalchanges = stripext.checklocalchanges
100 checklocalchanges = stripext.checklocalchanges
100
101
101
102
102 # Patch names look like unix-file names.
103 # Patch names look like unix-file names.
103 # They must be joinable with the queue directory and result in the patch path.
104 # They must be joinable with the queue directory and result in the patch path.
104 normname = util.normpath
105 normname = util.normpath
105
106
106 class statusentry(object):
107 class statusentry(object):
107 def __init__(self, node, name):
108 def __init__(self, node, name):
108 self.node, self.name = node, name
109 self.node, self.name = node, name
109 def __repr__(self):
110 def __repr__(self):
110 return hex(self.node) + ':' + self.name
111 return hex(self.node) + ':' + self.name
111
112
112 # The order of the headers in 'hg export' HG patches:
113 # The order of the headers in 'hg export' HG patches:
113 HGHEADERS = [
114 HGHEADERS = [
114 # '# HG changeset patch',
115 # '# HG changeset patch',
115 '# User ',
116 '# User ',
116 '# Date ',
117 '# Date ',
117 '# ',
118 '# ',
118 '# Branch ',
119 '# Branch ',
119 '# Node ID ',
120 '# Node ID ',
120 '# Parent ', # can occur twice for merges - but that is not relevant for mq
121 '# Parent ', # can occur twice for merges - but that is not relevant for mq
121 ]
122 ]
122 # The order of headers in plain 'mail style' patches:
123 # The order of headers in plain 'mail style' patches:
123 PLAINHEADERS = {
124 PLAINHEADERS = {
124 'from': 0,
125 'from': 0,
125 'date': 1,
126 'date': 1,
126 'subject': 2,
127 'subject': 2,
127 }
128 }
128
129
129 def inserthgheader(lines, header, value):
130 def inserthgheader(lines, header, value):
130 """Assuming lines contains a HG patch header, add a header line with value.
131 """Assuming lines contains a HG patch header, add a header line with value.
131 >>> try: inserthgheader([], '# Date ', 'z')
132 >>> try: inserthgheader([], '# Date ', 'z')
132 ... except ValueError, inst: print "oops"
133 ... except ValueError, inst: print "oops"
133 oops
134 oops
134 >>> inserthgheader(['# HG changeset patch'], '# Date ', 'z')
135 >>> inserthgheader(['# HG changeset patch'], '# Date ', 'z')
135 ['# HG changeset patch', '# Date z']
136 ['# HG changeset patch', '# Date z']
136 >>> inserthgheader(['# HG changeset patch', ''], '# Date ', 'z')
137 >>> inserthgheader(['# HG changeset patch', ''], '# Date ', 'z')
137 ['# HG changeset patch', '# Date z', '']
138 ['# HG changeset patch', '# Date z', '']
138 >>> inserthgheader(['# HG changeset patch', '# User y'], '# Date ', 'z')
139 >>> inserthgheader(['# HG changeset patch', '# User y'], '# Date ', 'z')
139 ['# HG changeset patch', '# User y', '# Date z']
140 ['# HG changeset patch', '# User y', '# Date z']
140 >>> inserthgheader(['# HG changeset patch', '# Date x', '# User y'],
141 >>> inserthgheader(['# HG changeset patch', '# Date x', '# User y'],
141 ... '# User ', 'z')
142 ... '# User ', 'z')
142 ['# HG changeset patch', '# Date x', '# User z']
143 ['# HG changeset patch', '# Date x', '# User z']
143 >>> inserthgheader(['# HG changeset patch', '# Date y'], '# Date ', 'z')
144 >>> inserthgheader(['# HG changeset patch', '# Date y'], '# Date ', 'z')
144 ['# HG changeset patch', '# Date z']
145 ['# HG changeset patch', '# Date z']
145 >>> inserthgheader(['# HG changeset patch', '', '# Date y'], '# Date ', 'z')
146 >>> inserthgheader(['# HG changeset patch', '', '# Date y'], '# Date ', 'z')
146 ['# HG changeset patch', '# Date z', '', '# Date y']
147 ['# HG changeset patch', '# Date z', '', '# Date y']
147 >>> inserthgheader(['# HG changeset patch', '# Parent y'], '# Date ', 'z')
148 >>> inserthgheader(['# HG changeset patch', '# Parent y'], '# Date ', 'z')
148 ['# HG changeset patch', '# Date z', '# Parent y']
149 ['# HG changeset patch', '# Date z', '# Parent y']
149 """
150 """
150 start = lines.index('# HG changeset patch') + 1
151 start = lines.index('# HG changeset patch') + 1
151 newindex = HGHEADERS.index(header)
152 newindex = HGHEADERS.index(header)
152 bestpos = len(lines)
153 bestpos = len(lines)
153 for i in range(start, len(lines)):
154 for i in range(start, len(lines)):
154 line = lines[i]
155 line = lines[i]
155 if not line.startswith('# '):
156 if not line.startswith('# '):
156 bestpos = min(bestpos, i)
157 bestpos = min(bestpos, i)
157 break
158 break
158 for lineindex, h in enumerate(HGHEADERS):
159 for lineindex, h in enumerate(HGHEADERS):
159 if line.startswith(h):
160 if line.startswith(h):
160 if lineindex == newindex:
161 if lineindex == newindex:
161 lines[i] = header + value
162 lines[i] = header + value
162 return lines
163 return lines
163 if lineindex > newindex:
164 if lineindex > newindex:
164 bestpos = min(bestpos, i)
165 bestpos = min(bestpos, i)
165 break # next line
166 break # next line
166 lines.insert(bestpos, header + value)
167 lines.insert(bestpos, header + value)
167 return lines
168 return lines
168
169
169 def insertplainheader(lines, header, value):
170 def insertplainheader(lines, header, value):
170 """For lines containing a plain patch header, add a header line with value.
171 """For lines containing a plain patch header, add a header line with value.
171 >>> insertplainheader([], 'Date', 'z')
172 >>> insertplainheader([], 'Date', 'z')
172 ['Date: z']
173 ['Date: z']
173 >>> insertplainheader([''], 'Date', 'z')
174 >>> insertplainheader([''], 'Date', 'z')
174 ['Date: z', '']
175 ['Date: z', '']
175 >>> insertplainheader(['x'], 'Date', 'z')
176 >>> insertplainheader(['x'], 'Date', 'z')
176 ['Date: z', '', 'x']
177 ['Date: z', '', 'x']
177 >>> insertplainheader(['From: y', 'x'], 'Date', 'z')
178 >>> insertplainheader(['From: y', 'x'], 'Date', 'z')
178 ['From: y', 'Date: z', '', 'x']
179 ['From: y', 'Date: z', '', 'x']
179 >>> insertplainheader([' date : x', ' from : y', ''], 'From', 'z')
180 >>> insertplainheader([' date : x', ' from : y', ''], 'From', 'z')
180 [' date : x', 'From: z', '']
181 [' date : x', 'From: z', '']
181 >>> insertplainheader(['', 'Date: y'], 'Date', 'z')
182 >>> insertplainheader(['', 'Date: y'], 'Date', 'z')
182 ['Date: z', '', 'Date: y']
183 ['Date: z', '', 'Date: y']
183 >>> insertplainheader(['foo: bar', 'DATE: z', 'x'], 'From', 'y')
184 >>> insertplainheader(['foo: bar', 'DATE: z', 'x'], 'From', 'y')
184 ['From: y', 'foo: bar', 'DATE: z', '', 'x']
185 ['From: y', 'foo: bar', 'DATE: z', '', 'x']
185 """
186 """
186 newprio = PLAINHEADERS[header.lower()]
187 newprio = PLAINHEADERS[header.lower()]
187 bestpos = len(lines)
188 bestpos = len(lines)
188 for i, line in enumerate(lines):
189 for i, line in enumerate(lines):
189 if ':' in line:
190 if ':' in line:
190 lheader = line.split(':', 1)[0].strip().lower()
191 lheader = line.split(':', 1)[0].strip().lower()
191 lprio = PLAINHEADERS.get(lheader, newprio + 1)
192 lprio = PLAINHEADERS.get(lheader, newprio + 1)
192 if lprio == newprio:
193 if lprio == newprio:
193 lines[i] = '%s: %s' % (header, value)
194 lines[i] = '%s: %s' % (header, value)
194 return lines
195 return lines
195 if lprio > newprio and i < bestpos:
196 if lprio > newprio and i < bestpos:
196 bestpos = i
197 bestpos = i
197 else:
198 else:
198 if line:
199 if line:
199 lines.insert(i, '')
200 lines.insert(i, '')
200 if i < bestpos:
201 if i < bestpos:
201 bestpos = i
202 bestpos = i
202 break
203 break
203 lines.insert(bestpos, '%s: %s' % (header, value))
204 lines.insert(bestpos, '%s: %s' % (header, value))
204 return lines
205 return lines
205
206
206 class patchheader(object):
207 class patchheader(object):
207 def __init__(self, pf, plainmode=False):
208 def __init__(self, pf, plainmode=False):
208 def eatdiff(lines):
209 def eatdiff(lines):
209 while lines:
210 while lines:
210 l = lines[-1]
211 l = lines[-1]
211 if (l.startswith("diff -") or
212 if (l.startswith("diff -") or
212 l.startswith("Index:") or
213 l.startswith("Index:") or
213 l.startswith("===========")):
214 l.startswith("===========")):
214 del lines[-1]
215 del lines[-1]
215 else:
216 else:
216 break
217 break
217 def eatempty(lines):
218 def eatempty(lines):
218 while lines:
219 while lines:
219 if not lines[-1].strip():
220 if not lines[-1].strip():
220 del lines[-1]
221 del lines[-1]
221 else:
222 else:
222 break
223 break
223
224
224 message = []
225 message = []
225 comments = []
226 comments = []
226 user = None
227 user = None
227 date = None
228 date = None
228 parent = None
229 parent = None
229 format = None
230 format = None
230 subject = None
231 subject = None
231 branch = None
232 branch = None
232 nodeid = None
233 nodeid = None
233 diffstart = 0
234 diffstart = 0
234
235
235 for line in file(pf):
236 for line in file(pf):
236 line = line.rstrip()
237 line = line.rstrip()
237 if (line.startswith('diff --git')
238 if (line.startswith('diff --git')
238 or (diffstart and line.startswith('+++ '))):
239 or (diffstart and line.startswith('+++ '))):
239 diffstart = 2
240 diffstart = 2
240 break
241 break
241 diffstart = 0 # reset
242 diffstart = 0 # reset
242 if line.startswith("--- "):
243 if line.startswith("--- "):
243 diffstart = 1
244 diffstart = 1
244 continue
245 continue
245 elif format == "hgpatch":
246 elif format == "hgpatch":
246 # parse values when importing the result of an hg export
247 # parse values when importing the result of an hg export
247 if line.startswith("# User "):
248 if line.startswith("# User "):
248 user = line[7:]
249 user = line[7:]
249 elif line.startswith("# Date "):
250 elif line.startswith("# Date "):
250 date = line[7:]
251 date = line[7:]
251 elif line.startswith("# Parent "):
252 elif line.startswith("# Parent "):
252 parent = line[9:].lstrip() # handle double trailing space
253 parent = line[9:].lstrip() # handle double trailing space
253 elif line.startswith("# Branch "):
254 elif line.startswith("# Branch "):
254 branch = line[9:]
255 branch = line[9:]
255 elif line.startswith("# Node ID "):
256 elif line.startswith("# Node ID "):
256 nodeid = line[10:]
257 nodeid = line[10:]
257 elif not line.startswith("# ") and line:
258 elif not line.startswith("# ") and line:
258 message.append(line)
259 message.append(line)
259 format = None
260 format = None
260 elif line == '# HG changeset patch':
261 elif line == '# HG changeset patch':
261 message = []
262 message = []
262 format = "hgpatch"
263 format = "hgpatch"
263 elif (format != "tagdone" and (line.startswith("Subject: ") or
264 elif (format != "tagdone" and (line.startswith("Subject: ") or
264 line.startswith("subject: "))):
265 line.startswith("subject: "))):
265 subject = line[9:]
266 subject = line[9:]
266 format = "tag"
267 format = "tag"
267 elif (format != "tagdone" and (line.startswith("From: ") or
268 elif (format != "tagdone" and (line.startswith("From: ") or
268 line.startswith("from: "))):
269 line.startswith("from: "))):
269 user = line[6:]
270 user = line[6:]
270 format = "tag"
271 format = "tag"
271 elif (format != "tagdone" and (line.startswith("Date: ") or
272 elif (format != "tagdone" and (line.startswith("Date: ") or
272 line.startswith("date: "))):
273 line.startswith("date: "))):
273 date = line[6:]
274 date = line[6:]
274 format = "tag"
275 format = "tag"
275 elif format == "tag" and line == "":
276 elif format == "tag" and line == "":
276 # when looking for tags (subject: from: etc) they
277 # when looking for tags (subject: from: etc) they
277 # end once you find a blank line in the source
278 # end once you find a blank line in the source
278 format = "tagdone"
279 format = "tagdone"
279 elif message or line:
280 elif message or line:
280 message.append(line)
281 message.append(line)
281 comments.append(line)
282 comments.append(line)
282
283
283 eatdiff(message)
284 eatdiff(message)
284 eatdiff(comments)
285 eatdiff(comments)
285 # Remember the exact starting line of the patch diffs before consuming
286 # Remember the exact starting line of the patch diffs before consuming
286 # empty lines, for external use by TortoiseHg and others
287 # empty lines, for external use by TortoiseHg and others
287 self.diffstartline = len(comments)
288 self.diffstartline = len(comments)
288 eatempty(message)
289 eatempty(message)
289 eatempty(comments)
290 eatempty(comments)
290
291
291 # make sure message isn't empty
292 # make sure message isn't empty
292 if format and format.startswith("tag") and subject:
293 if format and format.startswith("tag") and subject:
293 message.insert(0, subject)
294 message.insert(0, subject)
294
295
295 self.message = message
296 self.message = message
296 self.comments = comments
297 self.comments = comments
297 self.user = user
298 self.user = user
298 self.date = date
299 self.date = date
299 self.parent = parent
300 self.parent = parent
300 # nodeid and branch are for external use by TortoiseHg and others
301 # nodeid and branch are for external use by TortoiseHg and others
301 self.nodeid = nodeid
302 self.nodeid = nodeid
302 self.branch = branch
303 self.branch = branch
303 self.haspatch = diffstart > 1
304 self.haspatch = diffstart > 1
304 self.plainmode = (plainmode or
305 self.plainmode = (plainmode or
305 '# HG changeset patch' not in self.comments and
306 '# HG changeset patch' not in self.comments and
306 any(c.startswith('Date: ') or
307 any(c.startswith('Date: ') or
307 c.startswith('From: ')
308 c.startswith('From: ')
308 for c in self.comments))
309 for c in self.comments))
309
310
310 def setuser(self, user):
311 def setuser(self, user):
311 try:
312 try:
312 inserthgheader(self.comments, '# User ', user)
313 inserthgheader(self.comments, '# User ', user)
313 except ValueError:
314 except ValueError:
314 if self.plainmode:
315 if self.plainmode:
315 insertplainheader(self.comments, 'From', user)
316 insertplainheader(self.comments, 'From', user)
316 else:
317 else:
317 tmp = ['# HG changeset patch', '# User ' + user]
318 tmp = ['# HG changeset patch', '# User ' + user]
318 self.comments = tmp + self.comments
319 self.comments = tmp + self.comments
319 self.user = user
320 self.user = user
320
321
321 def setdate(self, date):
322 def setdate(self, date):
322 try:
323 try:
323 inserthgheader(self.comments, '# Date ', date)
324 inserthgheader(self.comments, '# Date ', date)
324 except ValueError:
325 except ValueError:
325 if self.plainmode:
326 if self.plainmode:
326 insertplainheader(self.comments, 'Date', date)
327 insertplainheader(self.comments, 'Date', date)
327 else:
328 else:
328 tmp = ['# HG changeset patch', '# Date ' + date]
329 tmp = ['# HG changeset patch', '# Date ' + date]
329 self.comments = tmp + self.comments
330 self.comments = tmp + self.comments
330 self.date = date
331 self.date = date
331
332
332 def setparent(self, parent):
333 def setparent(self, parent):
333 try:
334 try:
334 inserthgheader(self.comments, '# Parent ', parent)
335 inserthgheader(self.comments, '# Parent ', parent)
335 except ValueError:
336 except ValueError:
336 if not self.plainmode:
337 if not self.plainmode:
337 tmp = ['# HG changeset patch', '# Parent ' + parent]
338 tmp = ['# HG changeset patch', '# Parent ' + parent]
338 self.comments = tmp + self.comments
339 self.comments = tmp + self.comments
339 self.parent = parent
340 self.parent = parent
340
341
341 def setmessage(self, message):
342 def setmessage(self, message):
342 if self.comments:
343 if self.comments:
343 self._delmsg()
344 self._delmsg()
344 self.message = [message]
345 self.message = [message]
345 if message:
346 if message:
346 if self.plainmode and self.comments and self.comments[-1]:
347 if self.plainmode and self.comments and self.comments[-1]:
347 self.comments.append('')
348 self.comments.append('')
348 self.comments.append(message)
349 self.comments.append(message)
349
350
350 def __str__(self):
351 def __str__(self):
351 s = '\n'.join(self.comments).rstrip()
352 s = '\n'.join(self.comments).rstrip()
352 if not s:
353 if not s:
353 return ''
354 return ''
354 return s + '\n\n'
355 return s + '\n\n'
355
356
356 def _delmsg(self):
357 def _delmsg(self):
357 '''Remove the existing message, keeping the rest of the comments fields.
358 '''Remove the existing message, keeping the rest of the comments fields.
358 If the comments contain 'subject: ', the message is assumed to start with
359 If the comments contain 'subject: ', the message is assumed to start with
359 that subject and a blank line, which are removed as well.'''
360 that subject and a blank line, which are removed as well.'''
360 if self.message:
361 if self.message:
361 subj = 'subject: ' + self.message[0].lower()
362 subj = 'subject: ' + self.message[0].lower()
362 for i in xrange(len(self.comments)):
363 for i in xrange(len(self.comments)):
363 if subj == self.comments[i].lower():
364 if subj == self.comments[i].lower():
364 del self.comments[i]
365 del self.comments[i]
365 self.message = self.message[2:]
366 self.message = self.message[2:]
366 break
367 break
367 ci = 0
368 ci = 0
368 for mi in self.message:
369 for mi in self.message:
369 while mi != self.comments[ci]:
370 while mi != self.comments[ci]:
370 ci += 1
371 ci += 1
371 del self.comments[ci]
372 del self.comments[ci]
372
373
373 def newcommit(repo, phase, *args, **kwargs):
374 def newcommit(repo, phase, *args, **kwargs):
374 """helper dedicated to ensure a commit respect mq.secret setting
375 """helper dedicated to ensure a commit respect mq.secret setting
375
376
376 It should be used instead of repo.commit inside the mq source for operations
377 It should be used instead of repo.commit inside the mq source for operations
377 creating new changesets.
378 creating new changesets.
378 """
379 """
379 repo = repo.unfiltered()
380 repo = repo.unfiltered()
380 if phase is None:
381 if phase is None:
381 if repo.ui.configbool('mq', 'secret', False):
382 if repo.ui.configbool('mq', 'secret', False):
382 phase = phases.secret
383 phase = phases.secret
383 if phase is not None:
384 if phase is not None:
384 phasebackup = repo.ui.backupconfig('phases', 'new-commit')
385 phasebackup = repo.ui.backupconfig('phases', 'new-commit')
385 allowemptybackup = repo.ui.backupconfig('ui', 'allowemptycommit')
386 allowemptybackup = repo.ui.backupconfig('ui', 'allowemptycommit')
386 try:
387 try:
387 if phase is not None:
388 if phase is not None:
388 repo.ui.setconfig('phases', 'new-commit', phase, 'mq')
389 repo.ui.setconfig('phases', 'new-commit', phase, 'mq')
389 repo.ui.setconfig('ui', 'allowemptycommit', True)
390 repo.ui.setconfig('ui', 'allowemptycommit', True)
390 return repo.commit(*args, **kwargs)
391 return repo.commit(*args, **kwargs)
391 finally:
392 finally:
392 repo.ui.restoreconfig(allowemptybackup)
393 repo.ui.restoreconfig(allowemptybackup)
393 if phase is not None:
394 if phase is not None:
394 repo.ui.restoreconfig(phasebackup)
395 repo.ui.restoreconfig(phasebackup)
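newcommit is the helper the rest of mq is expected to call whenever it creates a changeset, so the mq.secret and ui.allowemptycommit handling above is applied uniformly; for instance, later in this listing it is invoked as::

    # merge marker commit in mergepatch()
    n = newcommit(repo, None, '[mq]: merge marker', force=True)
    # re-committing a merged patch in mergeone()
    n = newcommit(repo, None, ctx.description(), ctx.user(), force=True)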
395
396
396 class AbortNoCleanup(error.Abort):
397 class AbortNoCleanup(error.Abort):
397 pass
398 pass
398
399
399 class queue(object):
400 class queue(object):
400 def __init__(self, ui, baseui, path, patchdir=None):
401 def __init__(self, ui, baseui, path, patchdir=None):
401 self.basepath = path
402 self.basepath = path
402 try:
403 try:
403 fh = open(os.path.join(path, 'patches.queue'))
404 fh = open(os.path.join(path, 'patches.queue'))
404 cur = fh.read().rstrip()
405 cur = fh.read().rstrip()
405 fh.close()
406 fh.close()
406 if not cur:
407 if not cur:
407 curpath = os.path.join(path, 'patches')
408 curpath = os.path.join(path, 'patches')
408 else:
409 else:
409 curpath = os.path.join(path, 'patches-' + cur)
410 curpath = os.path.join(path, 'patches-' + cur)
410 except IOError:
411 except IOError:
411 curpath = os.path.join(path, 'patches')
412 curpath = os.path.join(path, 'patches')
412 self.path = patchdir or curpath
413 self.path = patchdir or curpath
413 self.opener = scmutil.opener(self.path)
414 self.opener = scmutil.opener(self.path)
414 self.ui = ui
415 self.ui = ui
415 self.baseui = baseui
416 self.baseui = baseui
416 self.applieddirty = False
417 self.applieddirty = False
417 self.seriesdirty = False
418 self.seriesdirty = False
418 self.added = []
419 self.added = []
419 self.seriespath = "series"
420 self.seriespath = "series"
420 self.statuspath = "status"
421 self.statuspath = "status"
421 self.guardspath = "guards"
422 self.guardspath = "guards"
422 self.activeguards = None
423 self.activeguards = None
423 self.guardsdirty = False
424 self.guardsdirty = False
424 # Handle mq.git as a bool with extended values
425 # Handle mq.git as a bool with extended values
425 try:
426 try:
426 gitmode = ui.configbool('mq', 'git', None)
427 gitmode = ui.configbool('mq', 'git', None)
427 if gitmode is None:
428 if gitmode is None:
428 raise error.ConfigError
429 raise error.ConfigError
429 if gitmode:
430 if gitmode:
430 self.gitmode = 'yes'
431 self.gitmode = 'yes'
431 else:
432 else:
432 self.gitmode = 'no'
433 self.gitmode = 'no'
433 except error.ConfigError:
434 except error.ConfigError:
434 # let's have check-config ignore the type mismatch
435 # let's have check-config ignore the type mismatch
435 self.gitmode = ui.config(r'mq', 'git', 'auto').lower()
436 self.gitmode = ui.config(r'mq', 'git', 'auto').lower()
436 # deprecated config: mq.plain
437 # deprecated config: mq.plain
437 self.plainmode = ui.configbool('mq', 'plain', False)
438 self.plainmode = ui.configbool('mq', 'plain', False)
438 self.checkapplied = True
439 self.checkapplied = True
439
440
440 @util.propertycache
441 @util.propertycache
441 def applied(self):
442 def applied(self):
442 def parselines(lines):
443 def parselines(lines):
443 for l in lines:
444 for l in lines:
444 entry = l.split(':', 1)
445 entry = l.split(':', 1)
445 if len(entry) > 1:
446 if len(entry) > 1:
446 n, name = entry
447 n, name = entry
447 yield statusentry(bin(n), name)
448 yield statusentry(bin(n), name)
448 elif l.strip():
449 elif l.strip():
449 self.ui.warn(_('malformed mq status line: %s\n') % entry)
450 self.ui.warn(_('malformed mq status line: %s\n') % entry)
450 # else we ignore empty lines
451 # else we ignore empty lines
451 try:
452 try:
452 lines = self.opener.read(self.statuspath).splitlines()
453 lines = self.opener.read(self.statuspath).splitlines()
453 return list(parselines(lines))
454 return list(parselines(lines))
454 except IOError as e:
455 except IOError as e:
455 if e.errno == errno.ENOENT:
456 if e.errno == errno.ENOENT:
456 return []
457 return []
457 raise
458 raise
458
459
459 @util.propertycache
460 @util.propertycache
460 def fullseries(self):
461 def fullseries(self):
461 try:
462 try:
462 return self.opener.read(self.seriespath).splitlines()
463 return self.opener.read(self.seriespath).splitlines()
463 except IOError as e:
464 except IOError as e:
464 if e.errno == errno.ENOENT:
465 if e.errno == errno.ENOENT:
465 return []
466 return []
466 raise
467 raise
467
468
468 @util.propertycache
469 @util.propertycache
469 def series(self):
470 def series(self):
470 self.parseseries()
471 self.parseseries()
471 return self.series
472 return self.series
472
473
473 @util.propertycache
474 @util.propertycache
474 def seriesguards(self):
475 def seriesguards(self):
475 self.parseseries()
476 self.parseseries()
476 return self.seriesguards
477 return self.seriesguards
477
478
478 def invalidate(self):
479 def invalidate(self):
479 for a in 'applied fullseries series seriesguards'.split():
480 for a in 'applied fullseries series seriesguards'.split():
480 if a in self.__dict__:
481 if a in self.__dict__:
481 delattr(self, a)
482 delattr(self, a)
482 self.applieddirty = False
483 self.applieddirty = False
483 self.seriesdirty = False
484 self.seriesdirty = False
484 self.guardsdirty = False
485 self.guardsdirty = False
485 self.activeguards = None
486 self.activeguards = None
486
487
487 def diffopts(self, opts=None, patchfn=None):
488 def diffopts(self, opts=None, patchfn=None):
488 diffopts = patchmod.diffopts(self.ui, opts)
489 diffopts = patchmod.diffopts(self.ui, opts)
489 if self.gitmode == 'auto':
490 if self.gitmode == 'auto':
490 diffopts.upgrade = True
491 diffopts.upgrade = True
491 elif self.gitmode == 'keep':
492 elif self.gitmode == 'keep':
492 pass
493 pass
493 elif self.gitmode in ('yes', 'no'):
494 elif self.gitmode in ('yes', 'no'):
494 diffopts.git = self.gitmode == 'yes'
495 diffopts.git = self.gitmode == 'yes'
495 else:
496 else:
496 raise error.Abort(_('mq.git option can be auto/keep/yes/no'
497 raise error.Abort(_('mq.git option can be auto/keep/yes/no'
497 ', got %s') % self.gitmode)
498 ', got %s') % self.gitmode)
498 if patchfn:
499 if patchfn:
499 diffopts = self.patchopts(diffopts, patchfn)
500 diffopts = self.patchopts(diffopts, patchfn)
500 return diffopts
501 return diffopts
501
502
502 def patchopts(self, diffopts, *patches):
503 def patchopts(self, diffopts, *patches):
503 """Return a copy of input diff options with git set to true if
504 """Return a copy of input diff options with git set to true if
504 referenced patch is a git patch and should be preserved as such.
505 referenced patch is a git patch and should be preserved as such.
505 """
506 """
506 diffopts = diffopts.copy()
507 diffopts = diffopts.copy()
507 if not diffopts.git and self.gitmode == 'keep':
508 if not diffopts.git and self.gitmode == 'keep':
508 for patchfn in patches:
509 for patchfn in patches:
509 patchf = self.opener(patchfn, 'r')
510 patchf = self.opener(patchfn, 'r')
510 # if the patch was a git patch, refresh it as a git patch
511 # if the patch was a git patch, refresh it as a git patch
511 for line in patchf:
512 for line in patchf:
512 if line.startswith('diff --git'):
513 if line.startswith('diff --git'):
513 diffopts.git = True
514 diffopts.git = True
514 break
515 break
515 patchf.close()
516 patchf.close()
516 return diffopts
517 return diffopts
517
518
518 def join(self, *p):
519 def join(self, *p):
519 return os.path.join(self.path, *p)
520 return os.path.join(self.path, *p)
520
521
521 def findseries(self, patch):
522 def findseries(self, patch):
522 def matchpatch(l):
523 def matchpatch(l):
523 l = l.split('#', 1)[0]
524 l = l.split('#', 1)[0]
524 return l.strip() == patch
525 return l.strip() == patch
525 for index, l in enumerate(self.fullseries):
526 for index, l in enumerate(self.fullseries):
526 if matchpatch(l):
527 if matchpatch(l):
527 return index
528 return index
528 return None
529 return None
529
530
530 guard_re = re.compile(r'\s?#([-+][^-+# \t\r\n\f][^# \t\r\n\f]*)')
531 guard_re = re.compile(r'\s?#([-+][^-+# \t\r\n\f][^# \t\r\n\f]*)')
531
532
532 def parseseries(self):
533 def parseseries(self):
533 self.series = []
534 self.series = []
534 self.seriesguards = []
535 self.seriesguards = []
535 for l in self.fullseries:
536 for l in self.fullseries:
536 h = l.find('#')
537 h = l.find('#')
537 if h == -1:
538 if h == -1:
538 patch = l
539 patch = l
539 comment = ''
540 comment = ''
540 elif h == 0:
541 elif h == 0:
541 continue
542 continue
542 else:
543 else:
543 patch = l[:h]
544 patch = l[:h]
544 comment = l[h:]
545 comment = l[h:]
545 patch = patch.strip()
546 patch = patch.strip()
546 if patch:
547 if patch:
547 if patch in self.series:
548 if patch in self.series:
548 raise error.Abort(_('%s appears more than once in %s') %
549 raise error.Abort(_('%s appears more than once in %s') %
549 (patch, self.join(self.seriespath)))
550 (patch, self.join(self.seriespath)))
550 self.series.append(patch)
551 self.series.append(patch)
551 self.seriesguards.append(self.guard_re.findall(comment))
552 self.seriesguards.append(self.guard_re.findall(comment))
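Each line of the series file that parseseries reads here is a patch name optionally followed by guard annotations matching guard_re: '#+name' makes the patch pushable only while that guard is selected, and '#-name' skips it while the guard is selected. A rough sketch of such a series file, with hypothetical patch names::

    feature-work.patch #+experimental
    bugfix.patch #-frozen
    cleanup.patch

With 'experimental' selected via qselect, pushable() below would allow feature-work.patch; selecting 'frozen' instead would skip bugfix.patch (and also feature-work.patch, whose positive guard is then unmatched).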
552
553
553 def checkguard(self, guard):
554 def checkguard(self, guard):
554 if not guard:
555 if not guard:
555 return _('guard cannot be an empty string')
556 return _('guard cannot be an empty string')
556 bad_chars = '# \t\r\n\f'
557 bad_chars = '# \t\r\n\f'
557 first = guard[0]
558 first = guard[0]
558 if first in '-+':
559 if first in '-+':
559 return (_('guard %r starts with invalid character: %r') %
560 return (_('guard %r starts with invalid character: %r') %
560 (guard, first))
561 (guard, first))
561 for c in bad_chars:
562 for c in bad_chars:
562 if c in guard:
563 if c in guard:
563 return _('invalid character in guard %r: %r') % (guard, c)
564 return _('invalid character in guard %r: %r') % (guard, c)
564
565
565 def setactive(self, guards):
566 def setactive(self, guards):
566 for guard in guards:
567 for guard in guards:
567 bad = self.checkguard(guard)
568 bad = self.checkguard(guard)
568 if bad:
569 if bad:
569 raise error.Abort(bad)
570 raise error.Abort(bad)
570 guards = sorted(set(guards))
571 guards = sorted(set(guards))
571 self.ui.debug('active guards: %s\n' % ' '.join(guards))
572 self.ui.debug('active guards: %s\n' % ' '.join(guards))
572 self.activeguards = guards
573 self.activeguards = guards
573 self.guardsdirty = True
574 self.guardsdirty = True
574
575
575 def active(self):
576 def active(self):
576 if self.activeguards is None:
577 if self.activeguards is None:
577 self.activeguards = []
578 self.activeguards = []
578 try:
579 try:
579 guards = self.opener.read(self.guardspath).split()
580 guards = self.opener.read(self.guardspath).split()
580 except IOError as err:
581 except IOError as err:
581 if err.errno != errno.ENOENT:
582 if err.errno != errno.ENOENT:
582 raise
583 raise
583 guards = []
584 guards = []
584 for i, guard in enumerate(guards):
585 for i, guard in enumerate(guards):
585 bad = self.checkguard(guard)
586 bad = self.checkguard(guard)
586 if bad:
587 if bad:
587 self.ui.warn('%s:%d: %s\n' %
588 self.ui.warn('%s:%d: %s\n' %
588 (self.join(self.guardspath), i + 1, bad))
589 (self.join(self.guardspath), i + 1, bad))
589 else:
590 else:
590 self.activeguards.append(guard)
591 self.activeguards.append(guard)
591 return self.activeguards
592 return self.activeguards
592
593
593 def setguards(self, idx, guards):
594 def setguards(self, idx, guards):
594 for g in guards:
595 for g in guards:
595 if len(g) < 2:
596 if len(g) < 2:
596 raise error.Abort(_('guard %r too short') % g)
597 raise error.Abort(_('guard %r too short') % g)
597 if g[0] not in '-+':
598 if g[0] not in '-+':
598 raise error.Abort(_('guard %r starts with invalid char') % g)
599 raise error.Abort(_('guard %r starts with invalid char') % g)
599 bad = self.checkguard(g[1:])
600 bad = self.checkguard(g[1:])
600 if bad:
601 if bad:
601 raise error.Abort(bad)
602 raise error.Abort(bad)
602 drop = self.guard_re.sub('', self.fullseries[idx])
603 drop = self.guard_re.sub('', self.fullseries[idx])
603 self.fullseries[idx] = drop + ''.join([' #' + g for g in guards])
604 self.fullseries[idx] = drop + ''.join([' #' + g for g in guards])
604 self.parseseries()
605 self.parseseries()
605 self.seriesdirty = True
606 self.seriesdirty = True
606
607
607 def pushable(self, idx):
608 def pushable(self, idx):
608 if isinstance(idx, str):
609 if isinstance(idx, str):
609 idx = self.series.index(idx)
610 idx = self.series.index(idx)
610 patchguards = self.seriesguards[idx]
611 patchguards = self.seriesguards[idx]
611 if not patchguards:
612 if not patchguards:
612 return True, None
613 return True, None
613 guards = self.active()
614 guards = self.active()
614 exactneg = [g for g in patchguards if g[0] == '-' and g[1:] in guards]
615 exactneg = [g for g in patchguards if g[0] == '-' and g[1:] in guards]
615 if exactneg:
616 if exactneg:
616 return False, repr(exactneg[0])
617 return False, repr(exactneg[0])
617 pos = [g for g in patchguards if g[0] == '+']
618 pos = [g for g in patchguards if g[0] == '+']
618 exactpos = [g for g in pos if g[1:] in guards]
619 exactpos = [g for g in pos if g[1:] in guards]
619 if pos:
620 if pos:
620 if exactpos:
621 if exactpos:
621 return True, repr(exactpos[0])
622 return True, repr(exactpos[0])
622 return False, ' '.join(map(repr, pos))
623 return False, ' '.join(map(repr, pos))
623 return True, ''
624 return True, ''
624
625
625 def explainpushable(self, idx, all_patches=False):
626 def explainpushable(self, idx, all_patches=False):
626 if all_patches:
627 if all_patches:
627 write = self.ui.write
628 write = self.ui.write
628 else:
629 else:
629 write = self.ui.warn
630 write = self.ui.warn
630
631
631 if all_patches or self.ui.verbose:
632 if all_patches or self.ui.verbose:
632 if isinstance(idx, str):
633 if isinstance(idx, str):
633 idx = self.series.index(idx)
634 idx = self.series.index(idx)
634 pushable, why = self.pushable(idx)
635 pushable, why = self.pushable(idx)
635 if all_patches and pushable:
636 if all_patches and pushable:
636 if why is None:
637 if why is None:
637 write(_('allowing %s - no guards in effect\n') %
638 write(_('allowing %s - no guards in effect\n') %
638 self.series[idx])
639 self.series[idx])
639 else:
640 else:
640 if not why:
641 if not why:
641 write(_('allowing %s - no matching negative guards\n') %
642 write(_('allowing %s - no matching negative guards\n') %
642 self.series[idx])
643 self.series[idx])
643 else:
644 else:
644 write(_('allowing %s - guarded by %s\n') %
645 write(_('allowing %s - guarded by %s\n') %
645 (self.series[idx], why))
646 (self.series[idx], why))
646 if not pushable:
647 if not pushable:
647 if why:
648 if why:
648 write(_('skipping %s - guarded by %s\n') %
649 write(_('skipping %s - guarded by %s\n') %
649 (self.series[idx], why))
650 (self.series[idx], why))
650 else:
651 else:
651 write(_('skipping %s - no matching guards\n') %
652 write(_('skipping %s - no matching guards\n') %
652 self.series[idx])
653 self.series[idx])
653
654
654 def savedirty(self):
655 def savedirty(self):
655 def writelist(items, path):
656 def writelist(items, path):
656 fp = self.opener(path, 'w')
657 fp = self.opener(path, 'w')
657 for i in items:
658 for i in items:
658 fp.write("%s\n" % i)
659 fp.write("%s\n" % i)
659 fp.close()
660 fp.close()
660 if self.applieddirty:
661 if self.applieddirty:
661 writelist(map(str, self.applied), self.statuspath)
662 writelist(map(str, self.applied), self.statuspath)
662 self.applieddirty = False
663 self.applieddirty = False
663 if self.seriesdirty:
664 if self.seriesdirty:
664 writelist(self.fullseries, self.seriespath)
665 writelist(self.fullseries, self.seriespath)
665 self.seriesdirty = False
666 self.seriesdirty = False
666 if self.guardsdirty:
667 if self.guardsdirty:
667 writelist(self.activeguards, self.guardspath)
668 writelist(self.activeguards, self.guardspath)
668 self.guardsdirty = False
669 self.guardsdirty = False
669 if self.added:
670 if self.added:
670 qrepo = self.qrepo()
671 qrepo = self.qrepo()
671 if qrepo:
672 if qrepo:
672 qrepo[None].add(f for f in self.added if f not in qrepo[None])
673 qrepo[None].add(f for f in self.added if f not in qrepo[None])
673 self.added = []
674 self.added = []
674
675
675 def removeundo(self, repo):
676 def removeundo(self, repo):
676 undo = repo.sjoin('undo')
677 undo = repo.sjoin('undo')
677 if not os.path.exists(undo):
678 if not os.path.exists(undo):
678 return
679 return
679 try:
680 try:
680 os.unlink(undo)
681 os.unlink(undo)
681 except OSError as inst:
682 except OSError as inst:
682 self.ui.warn(_('error removing undo: %s\n') % str(inst))
683 self.ui.warn(_('error removing undo: %s\n') % str(inst))
683
684
684 def backup(self, repo, files, copy=False):
685 def backup(self, repo, files, copy=False):
685 # backup local changes in --force case
686 # backup local changes in --force case
686 for f in sorted(files):
687 for f in sorted(files):
687 absf = repo.wjoin(f)
688 absf = repo.wjoin(f)
688 if os.path.lexists(absf):
689 if os.path.lexists(absf):
689 self.ui.note(_('saving current version of %s as %s\n') %
690 self.ui.note(_('saving current version of %s as %s\n') %
690 (f, scmutil.origpath(self.ui, repo, f)))
691 (f, scmutil.origpath(self.ui, repo, f)))
691
692
692 absorig = scmutil.origpath(self.ui, repo, absf)
693 absorig = scmutil.origpath(self.ui, repo, absf)
693 if copy:
694 if copy:
694 util.copyfile(absf, absorig)
695 util.copyfile(absf, absorig)
695 else:
696 else:
696 util.rename(absf, absorig)
697 util.rename(absf, absorig)
697
698
698 def printdiff(self, repo, diffopts, node1, node2=None, files=None,
699 def printdiff(self, repo, diffopts, node1, node2=None, files=None,
699 fp=None, changes=None, opts={}):
700 fp=None, changes=None, opts={}):
700 stat = opts.get('stat')
701 stat = opts.get('stat')
701 m = scmutil.match(repo[node1], files, opts)
702 m = scmutil.match(repo[node1], files, opts)
702 cmdutil.diffordiffstat(self.ui, repo, diffopts, node1, node2, m,
703 cmdutil.diffordiffstat(self.ui, repo, diffopts, node1, node2, m,
703 changes, stat, fp)
704 changes, stat, fp)
704
705
705 def mergeone(self, repo, mergeq, head, patch, rev, diffopts):
706 def mergeone(self, repo, mergeq, head, patch, rev, diffopts):
706 # first try just applying the patch
707 # first try just applying the patch
707 (err, n) = self.apply(repo, [patch], update_status=False,
708 (err, n) = self.apply(repo, [patch], update_status=False,
708 strict=True, merge=rev)
709 strict=True, merge=rev)
709
710
710 if err == 0:
711 if err == 0:
711 return (err, n)
712 return (err, n)
712
713
713 if n is None:
714 if n is None:
714 raise error.Abort(_("apply failed for patch %s") % patch)
715 raise error.Abort(_("apply failed for patch %s") % patch)
715
716
716 self.ui.warn(_("patch didn't work out, merging %s\n") % patch)
717 self.ui.warn(_("patch didn't work out, merging %s\n") % patch)
717
718
718 # apply failed, strip away that rev and merge.
719 # apply failed, strip away that rev and merge.
719 hg.clean(repo, head)
720 hg.clean(repo, head)
720 strip(self.ui, repo, [n], update=False, backup=False)
721 strip(self.ui, repo, [n], update=False, backup=False)
721
722
722 ctx = repo[rev]
723 ctx = repo[rev]
723 ret = hg.merge(repo, rev)
724 ret = hg.merge(repo, rev)
724 if ret:
725 if ret:
725 raise error.Abort(_("update returned %d") % ret)
726 raise error.Abort(_("update returned %d") % ret)
726 n = newcommit(repo, None, ctx.description(), ctx.user(), force=True)
727 n = newcommit(repo, None, ctx.description(), ctx.user(), force=True)
727 if n is None:
728 if n is None:
728 raise error.Abort(_("repo commit failed"))
729 raise error.Abort(_("repo commit failed"))
729 try:
730 try:
730 ph = patchheader(mergeq.join(patch), self.plainmode)
731 ph = patchheader(mergeq.join(patch), self.plainmode)
731 except Exception:
732 except Exception:
732 raise error.Abort(_("unable to read %s") % patch)
733 raise error.Abort(_("unable to read %s") % patch)
733
734
734 diffopts = self.patchopts(diffopts, patch)
735 diffopts = self.patchopts(diffopts, patch)
735 patchf = self.opener(patch, "w")
736 patchf = self.opener(patch, "w")
736 comments = str(ph)
737 comments = str(ph)
737 if comments:
738 if comments:
738 patchf.write(comments)
739 patchf.write(comments)
739 self.printdiff(repo, diffopts, head, n, fp=patchf)
740 self.printdiff(repo, diffopts, head, n, fp=patchf)
740 patchf.close()
741 patchf.close()
741 self.removeundo(repo)
742 self.removeundo(repo)
742 return (0, n)
743 return (0, n)
743
744
744 def qparents(self, repo, rev=None):
745 def qparents(self, repo, rev=None):
745 """return the mq handled parent or p1
746 """return the mq handled parent or p1
746
747
747 In some case where mq get himself in being the parent of a merge the
748 In some case where mq get himself in being the parent of a merge the
748 appropriate parent may be p2.
749 appropriate parent may be p2.
749 (eg: an in progress merge started with mq disabled)
750 (eg: an in progress merge started with mq disabled)
750
751
751 If no parent is managed by mq, p1 is returned.
752 If no parent is managed by mq, p1 is returned.
752 """
753 """
753 if rev is None:
754 if rev is None:
754 (p1, p2) = repo.dirstate.parents()
755 (p1, p2) = repo.dirstate.parents()
755 if p2 == nullid:
756 if p2 == nullid:
756 return p1
757 return p1
757 if not self.applied:
758 if not self.applied:
758 return None
759 return None
759 return self.applied[-1].node
760 return self.applied[-1].node
760 p1, p2 = repo.changelog.parents(rev)
761 p1, p2 = repo.changelog.parents(rev)
761 if p2 != nullid and p2 in [x.node for x in self.applied]:
762 if p2 != nullid and p2 in [x.node for x in self.applied]:
762 return p2
763 return p2
763 return p1
764 return p1
764
765
    def mergepatch(self, repo, mergeq, series, diffopts):
        if not self.applied:
            # each of the patches merged in will have two parents. This
            # can confuse the qrefresh, qdiff, and strip code because it
            # needs to know which parent is actually in the patch queue.
            # so, we insert a merge marker with only one parent. This way
            # the first patch in the queue is never a merge patch
            #
            pname = ".hg.patches.merge.marker"
            n = newcommit(repo, None, '[mq]: merge marker', force=True)
            self.removeundo(repo)
            self.applied.append(statusentry(n, pname))
            self.applieddirty = True

        head = self.qparents(repo)

        for patch in series:
            patch = mergeq.lookup(patch, strict=True)
            if not patch:
                self.ui.warn(_("patch %s does not exist\n") % patch)
                return (1, None)
            pushable, reason = self.pushable(patch)
            if not pushable:
                self.explainpushable(patch, all_patches=True)
                continue
            info = mergeq.isapplied(patch)
            if not info:
                self.ui.warn(_("patch %s is not applied\n") % patch)
                return (1, None)
            rev = info[1]
            err, head = self.mergeone(repo, mergeq, head, patch, rev, diffopts)
            if head:
                self.applied.append(statusentry(head, patch))
                self.applieddirty = True
            if err:
                return (err, head)
        self.savedirty()
        return (0, head)

    def patch(self, repo, patchfile):
        '''Apply patchfile to the working directory.
        patchfile: name of patch file'''
        files = set()
        try:
            fuzz = patchmod.patch(self.ui, repo, patchfile, strip=1,
                                  files=files, eolmode=None)
            return (True, list(files), fuzz)
        except Exception as inst:
            self.ui.note(str(inst) + '\n')
            if not self.ui.verbose:
                self.ui.warn(_("patch failed, unable to continue (try -v)\n"))
            self.ui.traceback()
            return (False, list(files), False)

    def apply(self, repo, series, list=False, update_status=True,
              strict=False, patchdir=None, merge=None, all_files=None,
              tobackup=None, keepchanges=False):
        wlock = lock = tr = None
        try:
            wlock = repo.wlock()
            lock = repo.lock()
            tr = repo.transaction("qpush")
            try:
                ret = self._apply(repo, series, list, update_status,
                                  strict, patchdir, merge, all_files=all_files,
                                  tobackup=tobackup, keepchanges=keepchanges)
                tr.close()
                self.savedirty()
                return ret
            except AbortNoCleanup:
                tr.close()
                self.savedirty()
                raise
            except: # re-raises
                try:
                    tr.abort()
                finally:
                    self.invalidate()
                raise
        finally:
            release(tr, lock, wlock)
            self.removeundo(repo)

    def _apply(self, repo, series, list=False, update_status=True,
               strict=False, patchdir=None, merge=None, all_files=None,
               tobackup=None, keepchanges=False):
        """returns (error, hash)

        error = 1 for unable to read, 2 for patch failed, 3 for patch
        fuzz. tobackup is None or a set of files to backup before they
        are modified by a patch.
        """
        # TODO unify with commands.py
        if not patchdir:
            patchdir = self.path
        err = 0
        n = None
        for patchname in series:
            pushable, reason = self.pushable(patchname)
            if not pushable:
                self.explainpushable(patchname, all_patches=True)
                continue
            self.ui.status(_("applying %s\n") % patchname)
            pf = os.path.join(patchdir, patchname)

            try:
                ph = patchheader(self.join(patchname), self.plainmode)
            except IOError:
                self.ui.warn(_("unable to read %s\n") % patchname)
                err = 1
                break

            message = ph.message
            if not message:
                # The commit message should not be translated
                message = "imported patch %s\n" % patchname
            else:
                if list:
                    # The commit message should not be translated
                    message.append("\nimported patch %s" % patchname)
                message = '\n'.join(message)

            if ph.haspatch:
                if tobackup:
                    touched = patchmod.changedfiles(self.ui, repo, pf)
                    touched = set(touched) & tobackup
                    if touched and keepchanges:
                        raise AbortNoCleanup(
                            _("conflicting local changes found"),
                            hint=_("did you forget to qrefresh?"))
                    self.backup(repo, touched, copy=True)
                    tobackup = tobackup - touched
                (patcherr, files, fuzz) = self.patch(repo, pf)
                if all_files is not None:
                    all_files.update(files)
                patcherr = not patcherr
            else:
                self.ui.warn(_("patch %s is empty\n") % patchname)
                patcherr, files, fuzz = 0, [], 0

            if merge and files:
                # Mark as removed/merged and update dirstate parent info
                removed = []
                merged = []
                for f in files:
                    if os.path.lexists(repo.wjoin(f)):
                        merged.append(f)
                    else:
                        removed.append(f)
                repo.dirstate.beginparentchange()
                for f in removed:
                    repo.dirstate.remove(f)
                for f in merged:
                    repo.dirstate.merge(f)
                p1, p2 = repo.dirstate.parents()
                repo.setparents(p1, merge)
                repo.dirstate.endparentchange()

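            # if any patch applied so far touched .hgsubstate, bring the
            # subrepos up to date with the new state before committing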
            if all_files and '.hgsubstate' in all_files:
                wctx = repo[None]
                pctx = repo['.']
                overwrite = False
                mergedsubstate = subrepo.submerge(repo, pctx, wctx, wctx,
                                                  overwrite)
                files += mergedsubstate.keys()

            match = scmutil.matchfiles(repo, files or [])
            oldtip = repo['tip']
            n = newcommit(repo, None, message, ph.user, ph.date, match=match,
                          force=True)
            if repo['tip'] == oldtip:
                raise error.Abort(_("qpush exactly duplicates child changeset"))
            if n is None:
                raise error.Abort(_("repository commit failed"))

            if update_status:
                self.applied.append(statusentry(n, patchname))

            if patcherr:
                self.ui.warn(_("patch failed, rejects left in working "
                               "directory\n"))
                err = 2
                break

            if fuzz and strict:
                self.ui.warn(_("fuzz found when applying patch, stopping\n"))
                err = 3
                break
        return (err, n)

    def _cleanup(self, patches, numrevs, keep=False):
        if not keep:
            r = self.qrepo()
            if r:
                r[None].forget(patches)
            for p in patches:
                try:
                    os.unlink(self.join(p))
                except OSError as inst:
                    if inst.errno != errno.ENOENT:
                        raise

        qfinished = []
        if numrevs:
            qfinished = self.applied[:numrevs]
            del self.applied[:numrevs]
            self.applieddirty = True

        unknown = []

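        # delete series entries from the highest index downwards so that the
        # remaining indexes stay valid while entries are removed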
        for (i, p) in sorted([(self.findseries(p), p) for p in patches],
                             reverse=True):
            if i is not None:
                del self.fullseries[i]
            else:
                unknown.append(p)

        if unknown:
            if numrevs:
                rev = dict((entry.name, entry.node) for entry in qfinished)
                for p in unknown:
                    msg = _('revision %s refers to unknown patches: %s\n')
                    self.ui.warn(msg % (short(rev[p]), p))
            else:
                msg = _('unknown patches: %s\n')
                raise error.Abort(''.join(msg % p for p in unknown))

        self.parseseries()
        self.seriesdirty = True
        return [entry.node for entry in qfinished]

    def _revpatches(self, repo, revs):
        firstrev = repo[self.applied[0].node].rev()
        patches = []
        for i, rev in enumerate(revs):

            if rev < firstrev:
                raise error.Abort(_('revision %d is not managed') % rev)

            ctx = repo[rev]
            base = self.applied[i].node
            if ctx.node() != base:
                msg = _('cannot delete revision %d above applied patches')
                raise error.Abort(msg % rev)

            patch = self.applied[i].name
            for fmt in ('[mq]: %s', 'imported patch %s'):
                if ctx.description() == fmt % patch:
                    msg = _('patch %s finalized without changeset message\n')
                    repo.ui.status(msg % patch)
                    break

            patches.append(patch)
        return patches

    def finish(self, repo, revs):
        # Manually trigger phase computation to ensure phasedefaults is
        # executed before we remove the patches.
        repo._phasecache
        patches = self._revpatches(repo, sorted(revs))
        qfinished = self._cleanup(patches, len(patches))
        if qfinished and repo.ui.configbool('mq', 'secret', False):
            # only use this logic when the secret option is added
            oldqbase = repo[qfinished[0]]
            tphase = repo.ui.config('phases', 'new-commit', phases.draft)
            if oldqbase.phase() > tphase and oldqbase.p1().phase() <= tphase:
                with repo.transaction('qfinish') as tr:
                    phases.advanceboundary(repo, tr, tphase, qfinished)

    def delete(self, repo, patches, opts):
        if not patches and not opts.get('rev'):
            raise error.Abort(_('qdelete requires at least one revision or '
                                'patch name'))

        realpatches = []
        for patch in patches:
            patch = self.lookup(patch, strict=True)
            info = self.isapplied(patch)
            if info:
                raise error.Abort(_("cannot delete applied patch %s") % patch)
            if patch not in self.series:
                raise error.Abort(_("patch %s not in series file") % patch)
            if patch not in realpatches:
                realpatches.append(patch)

        numrevs = 0
        if opts.get('rev'):
            if not self.applied:
                raise error.Abort(_('no patches applied'))
            revs = scmutil.revrange(repo, opts.get('rev'))
            revs.sort()
            revpatches = self._revpatches(repo, revs)
            realpatches += revpatches
            numrevs = len(revpatches)

        self._cleanup(realpatches, numrevs, opts.get('keep'))

    def checktoppatch(self, repo):
        '''check that working directory is at qtip'''
        if self.applied:
            top = self.applied[-1].node
            patch = self.applied[-1].name
            if repo.dirstate.p1() != top:
                raise error.Abort(_("working directory revision is not qtip"))
            return top, patch
        return None, None

    def putsubstate2changes(self, substatestate, changes):
        for files in changes[:3]:
            if '.hgsubstate' in files:
                return # already listed
        # not yet listed
        if substatestate in 'a?':
            changes[1].append('.hgsubstate')
        elif substatestate in 'r':
            changes[2].append('.hgsubstate')
        else: # modified
            changes[0].append('.hgsubstate')

    def checklocalchanges(self, repo, force=False, refresh=True):
        excsuffix = ''
        if refresh:
            excsuffix = ', qrefresh first'
            # plain versions for i18n tool to detect them
            _("local changes found, qrefresh first")
            _("local changed subrepos found, qrefresh first")
        return checklocalchanges(repo, force, excsuffix)

    _reserved = ('series', 'status', 'guards', '.', '..')
    def checkreservedname(self, name):
        if name in self._reserved:
            raise error.Abort(_('"%s" cannot be used as the name of a patch')
                              % name)
        for prefix in ('.hg', '.mq'):
            if name.startswith(prefix):
                raise error.Abort(_('patch name cannot begin with "%s"')
                                  % prefix)
        for c in ('#', ':', '\r', '\n'):
            if c in name:
                raise error.Abort(_('%r cannot be used in the name of a patch')
                                  % c)

    def checkpatchname(self, name, force=False):
        self.checkreservedname(name)
        if not force and os.path.exists(self.join(name)):
            if os.path.isdir(self.join(name)):
                raise error.Abort(_('"%s" already exists as a directory')
                                  % name)
            else:
                raise error.Abort(_('patch "%s" already exists') % name)

    def makepatchname(self, title, fallbackname):
        """Return a suitable filename for title, adding a suffix to make
        it unique in the existing list"""
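        # e.g. a title of "Fix: spaces & CAPS" becomes "fix_spaces_caps"; if
        # that name is already taken, "fix_spaces_caps__1", "__2", ... is used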
        namebase = re.sub('[\s\W_]+', '_', title.lower()).strip('_')
        if namebase:
            try:
                self.checkreservedname(namebase)
            except error.Abort:
                namebase = fallbackname
        else:
            namebase = fallbackname
        name = namebase
        i = 0
        while True:
            if name not in self.fullseries:
                try:
                    self.checkpatchname(name)
                    break
                except error.Abort:
                    pass
            i += 1
            name = '%s__%s' % (namebase, i)
        return name

    def checkkeepchanges(self, keepchanges, force):
        if force and keepchanges:
            raise error.Abort(_('cannot use both --force and --keep-changes'))

    def new(self, repo, patchfn, *pats, **opts):
        """options:
        msg: a string or a no-argument function returning a string
        """
        msg = opts.get('msg')
        edit = opts.get('edit')
        editform = opts.get('editform', 'mq.qnew')
        user = opts.get('user')
        date = opts.get('date')
        if date:
            date = util.parsedate(date)
        diffopts = self.diffopts({'git': opts.get('git')})
        if opts.get('checkname', True):
            self.checkpatchname(patchfn)
        inclsubs = checksubstate(repo)
        if inclsubs:
            substatestate = repo.dirstate['.hgsubstate']
        if opts.get('include') or opts.get('exclude') or pats:
            # detect missing files in pats
            def badfn(f, msg):
                if f != '.hgsubstate': # .hgsubstate is auto-created
                    raise error.Abort('%s: %s' % (f, msg))
            match = scmutil.match(repo[None], pats, opts, badfn=badfn)
            changes = repo.status(match=match)
        else:
            changes = self.checklocalchanges(repo, force=True)
        commitfiles = list(inclsubs)
        for files in changes[:3]:
            commitfiles.extend(files)
        match = scmutil.matchfiles(repo, commitfiles)
        if len(repo[None].parents()) > 1:
            raise error.Abort(_('cannot manage merge changesets'))
        self.checktoppatch(repo)
        insert = self.fullseriesend()
        with repo.wlock():
            try:
                # if patch file write fails, abort early
                p = self.opener(patchfn, "w")
            except IOError as e:
                raise error.Abort(_('cannot write patch "%s": %s')
                                  % (patchfn, e.strerror))
            try:
                defaultmsg = "[mq]: %s" % patchfn
                editor = cmdutil.getcommiteditor(editform=editform)
                if edit:
                    def finishdesc(desc):
                        if desc.rstrip():
                            return desc
                        else:
                            return defaultmsg
                    # i18n: this message is shown in editor with "HG: " prefix
                    extramsg = _('Leave message empty to use default message.')
                    editor = cmdutil.getcommiteditor(finishdesc=finishdesc,
                                                     extramsg=extramsg,
                                                     editform=editform)
                    commitmsg = msg
                else:
                    commitmsg = msg or defaultmsg

                n = newcommit(repo, None, commitmsg, user, date, match=match,
                              force=True, editor=editor)
                if n is None:
                    raise error.Abort(_("repo commit failed"))
                try:
                    self.fullseries[insert:insert] = [patchfn]
                    self.applied.append(statusentry(n, patchfn))
                    self.parseseries()
                    self.seriesdirty = True
                    self.applieddirty = True
                    nctx = repo[n]
                    ph = patchheader(self.join(patchfn), self.plainmode)
                    if user:
                        ph.setuser(user)
                    if date:
                        ph.setdate('%s %s' % date)
                    ph.setparent(hex(nctx.p1().node()))
                    msg = nctx.description().strip()
                    if msg == defaultmsg.strip():
                        msg = ''
                    ph.setmessage(msg)
                    p.write(str(ph))
                    if commitfiles:
                        parent = self.qparents(repo, n)
                        if inclsubs:
                            self.putsubstate2changes(substatestate, changes)
                        chunks = patchmod.diff(repo, node1=parent, node2=n,
                                               changes=changes, opts=diffopts)
                        for chunk in chunks:
                            p.write(chunk)
                    p.close()
                    r = self.qrepo()
                    if r:
                        r[None].add([patchfn])
                except: # re-raises
                    repo.rollback()
                    raise
            except Exception:
                patchpath = self.join(patchfn)
                try:
                    os.unlink(patchpath)
                except OSError:
                    self.ui.warn(_('error unlinking %s\n') % patchpath)
                raise
        self.removeundo(repo)

    def isapplied(self, patch):
        """returns (index, rev, patch)"""
        for i, a in enumerate(self.applied):
            if a.name == patch:
                return (i, a.node, a.name)
        return None

    # if the exact patch name does not exist, we try a few
    # variations. If strict is passed, we try only #1
    #
    # 1) a number (as string) to indicate an offset in the series file
    # 2) a unique substring of the patch name was given
    # 3) patchname[-+]num to indicate an offset in the series file
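    #
    # For example, with a series of "a.patch b.patch c.patch" and the default
    # strict=False:
    #   lookup('1')         -> 'b.patch'   (offset into the series file)
    #   lookup('c.pat')     -> 'c.patch'   (unique substring)
    #   lookup('c.patch-1') -> 'b.patch'   (existing name minus an offset)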
    def lookup(self, patch, strict=False):
        def partialname(s):
            if s in self.series:
                return s
            matches = [x for x in self.series if s in x]
            if len(matches) > 1:
                self.ui.warn(_('patch name "%s" is ambiguous:\n') % s)
                for m in matches:
                    self.ui.warn(' %s\n' % m)
                return None
            if matches:
                return matches[0]
            if self.series and self.applied:
                if s == 'qtip':
                    return self.series[self.seriesend(True) - 1]
                if s == 'qbase':
                    return self.series[0]
            return None

        if patch in self.series:
            return patch

        if not os.path.isfile(self.join(patch)):
            try:
                sno = int(patch)
            except (ValueError, OverflowError):
                pass
            else:
                if -len(self.series) <= sno < len(self.series):
                    return self.series[sno]

            if not strict:
                res = partialname(patch)
                if res:
                    return res
                minus = patch.rfind('-')
                if minus >= 0:
                    res = partialname(patch[:minus])
                    if res:
                        i = self.series.index(res)
                        try:
                            off = int(patch[minus + 1:] or 1)
                        except (ValueError, OverflowError):
                            pass
                        else:
                            if i - off >= 0:
                                return self.series[i - off]
                plus = patch.rfind('+')
                if plus >= 0:
                    res = partialname(patch[:plus])
                    if res:
                        i = self.series.index(res)
                        try:
                            off = int(patch[plus + 1:] or 1)
                        except (ValueError, OverflowError):
                            pass
                        else:
                            if i + off < len(self.series):
                                return self.series[i + off]
        raise error.Abort(_("patch %s not in series") % patch)

    def push(self, repo, patch=None, force=False, list=False, mergeq=None,
             all=False, move=False, exact=False, nobackup=False,
             keepchanges=False):
        self.checkkeepchanges(keepchanges, force)
        diffopts = self.diffopts()
        with repo.wlock():
            heads = []
            for hs in repo.branchmap().itervalues():
                heads.extend(hs)
            if not heads:
                heads = [nullid]
            if repo.dirstate.p1() not in heads and not exact:
                self.ui.status(_("(working directory not at a head)\n"))

            if not self.series:
                self.ui.warn(_('no patches in series\n'))
                return 0

            # Suppose our series file is: A B C and the current 'top'
            # patch is B. qpush C should be performed (moving forward)
            # qpush B is a NOP (no change) qpush A is an error (can't
            # go backwards with qpush)
            if patch:
                patch = self.lookup(patch)
                info = self.isapplied(patch)
                if info and info[0] >= len(self.applied) - 1:
                    self.ui.warn(
                        _('qpush: %s is already at the top\n') % patch)
                    return 0

                pushable, reason = self.pushable(patch)
                if pushable:
                    if self.series.index(patch) < self.seriesend():
                        raise error.Abort(
                            _("cannot push to a previous patch: %s") % patch)
                else:
                    if reason:
                        reason = _('guarded by %s') % reason
                    else:
                        reason = _('no matching guards')
                    self.ui.warn(_("cannot push '%s' - %s\n") % (patch, reason))
                    return 1
            elif all:
                patch = self.series[-1]
                if self.isapplied(patch):
                    self.ui.warn(_('all patches are currently applied\n'))
                    return 0

            # Following the above example, starting at 'top' of B:
            # qpush should be performed (pushes C), but a subsequent
            # qpush without an argument is an error (nothing to
            # apply). This allows a loop of "...while hg qpush..." to
            # work as it detects an error when done
            start = self.seriesend()
            if start == len(self.series):
                self.ui.warn(_('patch series already fully applied\n'))
                return 1
            if not force and not keepchanges:
                self.checklocalchanges(repo, refresh=self.applied)

            if exact:
                if keepchanges:
                    raise error.Abort(
                        _("cannot use --exact and --keep-changes together"))
                if move:
                    raise error.Abort(_('cannot use --exact and --move '
                                        'together'))
                if self.applied:
                    raise error.Abort(_('cannot push --exact with applied '
                                        'patches'))
                root = self.series[start]
                target = patchheader(self.join(root), self.plainmode).parent
                if not target:
                    raise error.Abort(
                        _("%s does not have a parent recorded") % root)
                if not repo[target] == repo['.']:
                    hg.update(repo, target)

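            # qpush --move: reorder the series file so that the requested
            # patch becomes the next unapplied entry and is applied first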
            if move:
                if not patch:
                    raise error.Abort(_("please specify the patch to move"))
                for fullstart, rpn in enumerate(self.fullseries):
                    # strip markers for patch guards
                    if self.guard_re.split(rpn, 1)[0] == self.series[start]:
                        break
                for i, rpn in enumerate(self.fullseries[fullstart:]):
                    # strip markers for patch guards
                    if self.guard_re.split(rpn, 1)[0] == patch:
                        break
                index = fullstart + i
                assert index < len(self.fullseries)
                fullpatch = self.fullseries[index]
                del self.fullseries[index]
                self.fullseries.insert(fullstart, fullpatch)
                self.parseseries()
                self.seriesdirty = True

            self.applieddirty = True
            if start > 0:
                self.checktoppatch(repo)
            if not patch:
                patch = self.series[start]
                end = start + 1
            else:
                end = self.series.index(patch, start) + 1

            tobackup = set()
            if (not nobackup and force) or keepchanges:
                status = self.checklocalchanges(repo, force=True)
                if keepchanges:
                    tobackup.update(status.modified + status.added +
                                    status.removed + status.deleted)
                else:
                    tobackup.update(status.modified + status.added)

            s = self.series[start:end]
            all_files = set()
            try:
                if mergeq:
                    ret = self.mergepatch(repo, mergeq, s, diffopts)
                else:
                    ret = self.apply(repo, s, list, all_files=all_files,
                                     tobackup=tobackup, keepchanges=keepchanges)
            except AbortNoCleanup:
                raise
            except: # re-raises
                self.ui.warn(_('cleaning up working directory...\n'))
                cmdutil.revert(self.ui, repo, repo['.'],
                               repo.dirstate.parents(), no_backup=True)
                # only remove unknown files that we know we touched or
                # created while patching
                for f in all_files:
                    if f not in repo.dirstate:
                        util.unlinkpath(repo.wjoin(f), ignoremissing=True)
                self.ui.warn(_('done\n'))
                raise

            if not self.applied:
                return ret[0]
            top = self.applied[-1].name
            if ret[0] and ret[0] > 1:
                msg = _("errors during apply, please fix and qrefresh %s\n")
                self.ui.write(msg % top)
            else:
                self.ui.write(_("now at: %s\n") % top)
            return ret[0]

    def pop(self, repo, patch=None, force=False, update=True, all=False,
            nobackup=False, keepchanges=False):
        self.checkkeepchanges(keepchanges, force)
        with repo.wlock():
            if patch:
                # index, rev, patch
                info = self.isapplied(patch)
                if not info:
                    patch = self.lookup(patch)
                    info = self.isapplied(patch)
                if not info:
                    raise error.Abort(_("patch %s is not applied") % patch)

            if not self.applied:
                # Allow qpop -a to work repeatedly,
                # but not qpop without an argument
                self.ui.warn(_("no patches applied\n"))
                return not all

            if all:
                start = 0
            elif patch:
                start = info[0] + 1
            else:
                start = len(self.applied) - 1

            if start >= len(self.applied):
                self.ui.warn(_("qpop: %s is already at the top\n") % patch)
                return

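            # decide whether the working directory really needs updating:
            # --no-update is overridden when a working directory parent is an
            # applied mq changeset, and an update is skipped when none of the
            # popped changesets is a working directory parent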
            if not update:
                parents = repo.dirstate.parents()
                rr = [x.node for x in self.applied]
                for p in parents:
                    if p in rr:
                        self.ui.warn(_("qpop: forcing dirstate update\n"))
                        update = True
            else:
                parents = [p.node() for p in repo[None].parents()]
                needupdate = False
                for entry in self.applied[start:]:
                    if entry.node in parents:
                        needupdate = True
                        break
                update = needupdate

            tobackup = set()
            if update:
                s = self.checklocalchanges(repo, force=force or keepchanges)
                if force:
                    if not nobackup:
                        tobackup.update(s.modified + s.added)
                elif keepchanges:
                    tobackup.update(s.modified + s.added +
                                    s.removed + s.deleted)

            self.applieddirty = True
            end = len(self.applied)
            rev = self.applied[start].node

            try:
                heads = repo.changelog.heads(rev)
            except error.LookupError:
                node = short(rev)
                raise error.Abort(_('trying to pop unknown node %s') % node)

            if heads != [self.applied[-1].node]:
                raise error.Abort(_("popping would remove a revision not "
                                    "managed by this patch queue"))
            if not repo[self.applied[-1].node].mutable():
                raise error.Abort(
                    _("popping would remove a public revision"),
                    hint=_('see "hg help phases" for details'))

            # we know there are no local changes, so we can make a simplified
            # form of hg.update.
            if update:
                qp = self.qparents(repo, rev)
                ctx = repo[qp]
                m, a, r, d = repo.status(qp, '.')[:4]
                if d:
                    raise error.Abort(_("deletions found between repo revs"))

                tobackup = set(a + m + r) & tobackup
                if keepchanges and tobackup:
                    raise error.Abort(_("local changes found, qrefresh first"))
                self.backup(repo, tobackup)
                repo.dirstate.beginparentchange()
                for f in a:
                    util.unlinkpath(repo.wjoin(f), ignoremissing=True)
                    repo.dirstate.drop(f)
                for f in m + r:
                    fctx = ctx[f]
                    repo.wwrite(f, fctx.data(), fctx.flags())
                    repo.dirstate.normal(f)
                repo.setparents(qp, nullid)
                repo.dirstate.endparentchange()
            for patch in reversed(self.applied[start:end]):
                self.ui.status(_("popping %s\n") % patch.name)
            del self.applied[start:end]
            strip(self.ui, repo, [rev], update=False, backup=False)
            for s, state in repo['.'].substate.items():
                repo['.'].sub(s).get(state)
            if self.applied:
                self.ui.write(_("now at: %s\n") % self.applied[-1].name)
            else:
                self.ui.write(_("patch queue now empty\n"))

    def diff(self, repo, pats, opts):
        top, patch = self.checktoppatch(repo)
        if not top:
            self.ui.write(_("no patches applied\n"))
            return
        qp = self.qparents(repo, top)
        if opts.get('reverse'):
            node1, node2 = None, qp
        else:
            node1, node2 = qp, None
        diffopts = self.diffopts(opts, patch)
        self.printdiff(repo, diffopts, node1, node2, files=pats, opts=opts)

    def refresh(self, repo, pats=None, **opts):
        if not self.applied:
            self.ui.write(_("no patches applied\n"))
            return 1
        msg = opts.get('msg', '').rstrip()
        edit = opts.get('edit')
        editform = opts.get('editform', 'mq.qrefresh')
        newuser = opts.get('user')
        newdate = opts.get('date')
        if newdate:
            newdate = '%d %d' % util.parsedate(newdate)
        wlock = repo.wlock()

        try:
            self.checktoppatch(repo)
            (top, patchfn) = (self.applied[-1].node, self.applied[-1].name)
            if repo.changelog.heads(top) != [top]:
                raise error.Abort(_("cannot qrefresh a revision with children"))
            if not repo[top].mutable():
                raise error.Abort(_("cannot qrefresh public revision"),
                                  hint=_('see "hg help phases" for details'))

            cparents = repo.changelog.parents(top)
            patchparent = self.qparents(repo, top)

            inclsubs = checksubstate(repo, hex(patchparent))
            if inclsubs:
                substatestate = repo.dirstate['.hgsubstate']

            ph = patchheader(self.join(patchfn), self.plainmode)
            diffopts = self.diffopts({'git': opts.get('git')}, patchfn)
            if newuser:
                ph.setuser(newuser)
            if newdate:
                ph.setdate(newdate)
            ph.setparent(hex(patchparent))

            # only commit new patch when write is complete
            patchf = self.opener(patchfn, 'w', atomictemp=True)

            # update the dirstate in place, strip off the qtip commit
            # and then commit.
            #
            # this should really read:
            #   mm, dd, aa = repo.status(top, patchparent)[:3]
            # but we do it backwards to take advantage of manifest/changelog
            # caching against the next repo.status call
            mm, aa, dd = repo.status(patchparent, top)[:3]
            changes = repo.changelog.read(top)
            man = repo.manifest.read(changes[0])
            aaa = aa[:]
            matchfn = scmutil.match(repo[None], pats, opts)
            # in short mode, we only diff the files included in the
            # patch already plus specified files
            if opts.get('short'):
                # if amending a patch, we start with existing
                # files plus specified files - unfiltered
                match = scmutil.matchfiles(repo, mm + aa + dd + matchfn.files())
                # filter with include/exclude options
                matchfn = scmutil.match(repo[None], opts=opts)
            else:
                match = scmutil.matchall(repo)
            m, a, r, d = repo.status(match=match)[:4]
            mm = set(mm)
            aa = set(aa)
            dd = set(dd)

            # we might end up with files that were added between
            # qtip and the dirstate parent, but then changed in the
            # local dirstate. in this case, we want them to only
            # show up in the added section
            for x in m:
                if x not in aa:
                    mm.add(x)
            # we might end up with files added by the local dirstate that
            # were deleted by the patch. In this case, they should only
            # show up in the changed section.
            for x in a:
                if x in dd:
                    dd.remove(x)
                    mm.add(x)
                else:
                    aa.add(x)
            # make sure any files deleted in the local dirstate
            # are not in the add or change column of the patch
            forget = []
            for x in d + r:
                if x in aa:
                    aa.remove(x)
                    forget.append(x)
                    continue
                else:
                    mm.discard(x)
                    dd.add(x)

            m = list(mm)
            r = list(dd)
            a = list(aa)

            # create 'match' that includes the files to be recommitted.
            # apply matchfn via repo.status to ensure correct case handling.
            cm, ca, cr, cd = repo.status(patchparent, match=matchfn)[:4]
            allmatches = set(cm + ca + cr + cd)
            refreshchanges = [x.intersection(allmatches) for x in (mm, aa, dd)]

            files = set(inclsubs)
            for x in refreshchanges:
                files.update(x)
            match = scmutil.matchfiles(repo, files)

            bmlist = repo[top].bookmarks()

            dsguard = None
            try:
                dsguard = cmdutil.dirstateguard(repo, 'mq.refresh')
                if diffopts.git or diffopts.upgrade:
                    copies = {}
                    for dst in a:
                        src = repo.dirstate.copied(dst)
                        # during qfold, the source file for copies may
                        # be removed. Treat this as a simple add.
                        if src is not None and src in repo.dirstate:
                            copies.setdefault(src, []).append(dst)
                        repo.dirstate.add(dst)
1715 repo.dirstate.add(dst)
1715 # remember the copies between patchparent and qtip
1716 # remember the copies between patchparent and qtip
1716 for dst in aaa:
1717 for dst in aaa:
1717 f = repo.file(dst)
1718 f = repo.file(dst)
1718 src = f.renamed(man[dst])
1719 src = f.renamed(man[dst])
1719 if src:
1720 if src:
1720 copies.setdefault(src[0], []).extend(
1721 copies.setdefault(src[0], []).extend(
1721 copies.get(dst, []))
1722 copies.get(dst, []))
1722 if dst in a:
1723 if dst in a:
1723 copies[src[0]].append(dst)
1724 copies[src[0]].append(dst)
1724 # we can't copy a file created by the patch itself
1725 # we can't copy a file created by the patch itself
1725 if dst in copies:
1726 if dst in copies:
1726 del copies[dst]
1727 del copies[dst]
1727 for src, dsts in copies.iteritems():
1728 for src, dsts in copies.iteritems():
1728 for dst in dsts:
1729 for dst in dsts:
1729 repo.dirstate.copy(src, dst)
1730 repo.dirstate.copy(src, dst)
1730 else:
1731 else:
1731 for dst in a:
1732 for dst in a:
1732 repo.dirstate.add(dst)
1733 repo.dirstate.add(dst)
1733 # Drop useless copy information
1734 # Drop useless copy information
1734 for f in list(repo.dirstate.copies()):
1735 for f in list(repo.dirstate.copies()):
1735 repo.dirstate.copy(None, f)
1736 repo.dirstate.copy(None, f)
1736 for f in r:
1737 for f in r:
1737 repo.dirstate.remove(f)
1738 repo.dirstate.remove(f)
1738 # if the patch excludes a modified file, mark that
1739 # if the patch excludes a modified file, mark that
1739 # file with mtime=0 so status can see it.
1740 # file with mtime=0 so status can see it.
1740 mm = []
1741 mm = []
1741 for i in xrange(len(m) - 1, -1, -1):
1742 for i in xrange(len(m) - 1, -1, -1):
1742 if not matchfn(m[i]):
1743 if not matchfn(m[i]):
1743 mm.append(m[i])
1744 mm.append(m[i])
1744 del m[i]
1745 del m[i]
1745 for f in m:
1746 for f in m:
1746 repo.dirstate.normal(f)
1747 repo.dirstate.normal(f)
1747 for f in mm:
1748 for f in mm:
1748 repo.dirstate.normallookup(f)
1749 repo.dirstate.normallookup(f)
1749 for f in forget:
1750 for f in forget:
1750 repo.dirstate.drop(f)
1751 repo.dirstate.drop(f)
1751
1752
1752 user = ph.user or changes[1]
1753 user = ph.user or changes[1]
1753
1754
1754 oldphase = repo[top].phase()
1755 oldphase = repo[top].phase()
1755
1756
1756 # assumes strip can roll itself back if interrupted
1757 # assumes strip can roll itself back if interrupted
1757 repo.setparents(*cparents)
1758 repo.setparents(*cparents)
1758 self.applied.pop()
1759 self.applied.pop()
1759 self.applieddirty = True
1760 self.applieddirty = True
1760 strip(self.ui, repo, [top], update=False, backup=False)
1761 strip(self.ui, repo, [top], update=False, backup=False)
1761 dsguard.close()
1762 dsguard.close()
1762 finally:
1763 finally:
1763 release(dsguard)
1764 release(dsguard)
1764
1765
1765 try:
1766 try:
1766 # might be nice to attempt to roll back strip after this
1767 # might be nice to attempt to roll back strip after this
1767
1768
1768 defaultmsg = "[mq]: %s" % patchfn
1769 defaultmsg = "[mq]: %s" % patchfn
1769 editor = cmdutil.getcommiteditor(editform=editform)
1770 editor = cmdutil.getcommiteditor(editform=editform)
1770 if edit:
1771 if edit:
1771 def finishdesc(desc):
1772 def finishdesc(desc):
1772 if desc.rstrip():
1773 if desc.rstrip():
1773 ph.setmessage(desc)
1774 ph.setmessage(desc)
1774 return desc
1775 return desc
1775 return defaultmsg
1776 return defaultmsg
1776 # i18n: this message is shown in editor with "HG: " prefix
1777 # i18n: this message is shown in editor with "HG: " prefix
1777 extramsg = _('Leave message empty to use default message.')
1778 extramsg = _('Leave message empty to use default message.')
1778 editor = cmdutil.getcommiteditor(finishdesc=finishdesc,
1779 editor = cmdutil.getcommiteditor(finishdesc=finishdesc,
1779 extramsg=extramsg,
1780 extramsg=extramsg,
1780 editform=editform)
1781 editform=editform)
1781 message = msg or "\n".join(ph.message)
1782 message = msg or "\n".join(ph.message)
1782 elif not msg:
1783 elif not msg:
1783 if not ph.message:
1784 if not ph.message:
1784 message = defaultmsg
1785 message = defaultmsg
1785 else:
1786 else:
1786 message = "\n".join(ph.message)
1787 message = "\n".join(ph.message)
1787 else:
1788 else:
1788 message = msg
1789 message = msg
1789 ph.setmessage(msg)
1790 ph.setmessage(msg)
1790
1791
1791 # Ensure we create a new changeset in the same phase than
1792 # Ensure we create a new changeset in the same phase than
1792 # the old one.
1793 # the old one.
1793 lock = tr = None
1794 lock = tr = None
1794 try:
1795 try:
1795 lock = repo.lock()
1796 lock = repo.lock()
1796 tr = repo.transaction('mq')
1797 tr = repo.transaction('mq')
1797 n = newcommit(repo, oldphase, message, user, ph.date,
1798 n = newcommit(repo, oldphase, message, user, ph.date,
1798 match=match, force=True, editor=editor)
1799 match=match, force=True, editor=editor)
1799 # only write patch after a successful commit
1800 # only write patch after a successful commit
1800 c = [list(x) for x in refreshchanges]
1801 c = [list(x) for x in refreshchanges]
1801 if inclsubs:
1802 if inclsubs:
1802 self.putsubstate2changes(substatestate, c)
1803 self.putsubstate2changes(substatestate, c)
1803 chunks = patchmod.diff(repo, patchparent,
1804 chunks = patchmod.diff(repo, patchparent,
1804 changes=c, opts=diffopts)
1805 changes=c, opts=diffopts)
1805 comments = str(ph)
1806 comments = str(ph)
1806 if comments:
1807 if comments:
1807 patchf.write(comments)
1808 patchf.write(comments)
1808 for chunk in chunks:
1809 for chunk in chunks:
1809 patchf.write(chunk)
1810 patchf.write(chunk)
1810 patchf.close()
1811 patchf.close()
1811
1812
1812 marks = repo._bookmarks
1813 marks = repo._bookmarks
1813 for bm in bmlist:
1814 for bm in bmlist:
1814 marks[bm] = n
1815 marks[bm] = n
1815 marks.recordchange(tr)
1816 marks.recordchange(tr)
1816 tr.close()
1817 tr.close()
1817
1818
1818 self.applied.append(statusentry(n, patchfn))
1819 self.applied.append(statusentry(n, patchfn))
1819 finally:
1820 finally:
1820 lockmod.release(lock, tr)
1821 lockmod.release(lock, tr)
1821 except: # re-raises
1822 except: # re-raises
1822 ctx = repo[cparents[0]]
1823 ctx = repo[cparents[0]]
1823 repo.dirstate.rebuild(ctx.node(), ctx.manifest())
1824 repo.dirstate.rebuild(ctx.node(), ctx.manifest())
1824 self.savedirty()
1825 self.savedirty()
1825 self.ui.warn(_('qrefresh interrupted while patch was popped! '
1826 self.ui.warn(_('qrefresh interrupted while patch was popped! '
1826 '(revert --all, qpush to recover)\n'))
1827 '(revert --all, qpush to recover)\n'))
1827 raise
1828 raise
1828 finally:
1829 finally:
1829 wlock.release()
1830 wlock.release()
1830 self.removeundo(repo)
1831 self.removeundo(repo)
1831
1832
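    # A rough sketch of the refresh flow above: the working-directory changes
    # are folded into the top patch by rebuilding the dirstate against the
    # patch parent, stripping the stale qtip changeset, and committing a
    # replacement whose diff is then written back to the patch file. If the
    # commit step fails, the except clause rebuilds the dirstate so the user
    # can recover with 'hg revert --all' followed by 'hg qpush'.
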
    def init(self, repo, create=False):
        if not create and os.path.isdir(self.path):
            raise error.Abort(_("patch queue directory already exists"))
        try:
            os.mkdir(self.path)
        except OSError as inst:
            if inst.errno != errno.EEXIST or not create:
                raise
        if create:
            return self.qrepo(create=True)

    def unapplied(self, repo, patch=None):
        if patch and patch not in self.series:
            raise error.Abort(_("patch %s is not in series file") % patch)
        if not patch:
            start = self.seriesend()
        else:
            start = self.series.index(patch) + 1
        unapplied = []
        for i in xrange(start, len(self.series)):
            pushable, reason = self.pushable(i)
            if pushable:
                unapplied.append((i, self.series[i]))
            self.explainpushable(i)
        return unapplied

    def qseries(self, repo, missing=None, start=0, length=None, status=None,
                summary=False):
        def displayname(pfx, patchname, state):
            if pfx:
                self.ui.write(pfx)
            if summary:
                ph = patchheader(self.join(patchname), self.plainmode)
                if ph.message:
                    msg = ph.message[0]
                else:
                    msg = ''

                if self.ui.formatted():
                    width = self.ui.termwidth() - len(pfx) - len(patchname) - 2
                    if width > 0:
                        msg = util.ellipsis(msg, width)
                    else:
                        msg = ''
                self.ui.write(patchname, label='qseries.' + state)
                self.ui.write(': ')
                self.ui.write(msg, label='qseries.message.' + state)
            else:
                self.ui.write(patchname, label='qseries.' + state)
            self.ui.write('\n')

        applied = set([p.name for p in self.applied])
        if length is None:
            length = len(self.series) - start
        if not missing:
            if self.ui.verbose:
                idxwidth = len(str(start + length - 1))
            for i in xrange(start, start + length):
                patch = self.series[i]
                if patch in applied:
                    char, state = 'A', 'applied'
                elif self.pushable(i)[0]:
                    char, state = 'U', 'unapplied'
                else:
                    char, state = 'G', 'guarded'
                pfx = ''
                if self.ui.verbose:
                    pfx = '%*d %s ' % (idxwidth, i, char)
                elif status and status != char:
                    continue
                displayname(pfx, patch, state)
        else:
            msng_list = []
            for root, dirs, files in os.walk(self.path):
                d = root[len(self.path) + 1:]
                for f in files:
                    fl = os.path.join(d, f)
                    if (fl not in self.series and
                        fl not in (self.statuspath, self.seriespath,
                                   self.guardspath)
                        and not fl.startswith('.')):
                        msng_list.append(fl)
            for x in sorted(msng_list):
                pfx = self.ui.verbose and ('D ') or ''
                displayname(pfx, x, 'missing')

    def issaveline(self, l):
        if l.name == '.hg.patches.save.line':
            return True

    def qrepo(self, create=False):
        ui = self.baseui.copy()
        if create or os.path.isdir(self.join(".hg")):
            return hg.repository(ui, path=self.path, create=create)

    def restore(self, repo, rev, delete=None, qupdate=None):
        desc = repo[rev].description().strip()
        lines = desc.splitlines()
        i = 0
        datastart = None
        series = []
        applied = []
        qpp = None
        for i, line in enumerate(lines):
            if line == 'Patch Data:':
                datastart = i + 1
            elif line.startswith('Dirstate:'):
                l = line.rstrip()
                l = l[10:].split(' ')
                qpp = [bin(x) for x in l]
            elif datastart is not None:
                l = line.rstrip()
                n, name = l.split(':', 1)
                if n:
                    applied.append(statusentry(bin(n), name))
                else:
                    series.append(l)
        if datastart is None:
            self.ui.warn(_("no saved patch data found\n"))
            return 1
        self.ui.warn(_("restoring status: %s\n") % lines[0])
        self.fullseries = series
        self.applied = applied
        self.parseseries()
        self.seriesdirty = True
        self.applieddirty = True
        heads = repo.changelog.heads()
        if delete:
            if rev not in heads:
                self.ui.warn(_("save entry has children, leaving it alone\n"))
            else:
                self.ui.warn(_("removing save entry %s\n") % short(rev))
                pp = repo.dirstate.parents()
                if rev in pp:
                    update = True
                else:
                    update = False
                strip(self.ui, repo, [rev], update=update, backup=False)
        if qpp:
            self.ui.warn(_("saved queue repository parents: %s %s\n") %
                         (short(qpp[0]), short(qpp[1])))
            if qupdate:
                self.ui.status(_("updating queue directory\n"))
                r = self.qrepo()
                if not r:
                    self.ui.warn(_("unable to load queue repository\n"))
                    return 1
                hg.clean(r, qpp[0])

    def save(self, repo, msg=None):
        if not self.applied:
            self.ui.warn(_("save: no patches applied, exiting\n"))
            return 1
        if self.issaveline(self.applied[-1]):
            self.ui.warn(_("status is already saved\n"))
            return 1

        if not msg:
            msg = _("hg patches saved state")
        else:
            msg = "hg patches: " + msg.rstrip('\r\n')
        r = self.qrepo()
        if r:
            pp = r.dirstate.parents()
            msg += "\nDirstate: %s %s" % (hex(pp[0]), hex(pp[1]))
        msg += "\n\nPatch Data:\n"
        msg += ''.join('%s\n' % x for x in self.applied)
        msg += ''.join(':%s\n' % x for x in self.fullseries)
        n = repo.commit(msg, force=True)
        if not n:
            self.ui.warn(_("repo commit failed\n"))
            return 1
        self.applied.append(statusentry(n, '.hg.patches.save.line'))
        self.applieddirty = True
        self.removeundo(repo)

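    # Illustrative example (hypothetical values) of the commit description
    # written by save() and parsed back by restore():
    #
    #   hg patches saved state
    #   Dirstate: <p1 hex> <p2 hex>
    #
    #   Patch Data:
    #   <node hex>:first.patch
    #   <node hex>:second.patch
    #   :first.patch
    #   :second.patch
    #   :pending.patch
    #
    # Lines carrying a node before the colon rebuild the applied stack; the
    # remaining lines rebuild the full series.
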
    def fullseriesend(self):
        if self.applied:
            p = self.applied[-1].name
            end = self.findseries(p)
            if end is None:
                return len(self.fullseries)
            return end + 1
        return 0

    def seriesend(self, all_patches=False):
        """If all_patches is False, return the index of the next pushable patch
        in the series, or the series length. If all_patches is True, return the
        index of the first patch past the last applied one.
        """
        end = 0
        def nextpatch(start):
            if all_patches or start >= len(self.series):
                return start
            for i in xrange(start, len(self.series)):
                p, reason = self.pushable(i)
                if p:
                    return i
                self.explainpushable(i)
            return len(self.series)
        if self.applied:
            p = self.applied[-1].name
            try:
                end = self.series.index(p)
            except ValueError:
                return 0
            return nextpatch(end + 1)
        return nextpatch(end)

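    # Worked example (hypothetical series) for seriesend(): with three patches
    # where only index 0 is applied and index 1 is guarded,
    #
    #   0 base.patch        (applied)
    #   1 experiment.patch  (guarded, not pushable)
    #   2 cleanup.patch
    #
    # seriesend() returns 2, the next *pushable* index, while
    # seriesend(all_patches=True) returns 1, the first index past the last
    # applied patch.
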
    def appliedname(self, index):
        pname = self.applied[index].name
        if not self.ui.verbose:
            p = pname
        else:
            p = str(self.series.index(pname)) + " " + pname
        return p

    def qimport(self, repo, files, patchname=None, rev=None, existing=None,
                force=None, git=False):
        def checkseries(patchname):
            if patchname in self.series:
                raise error.Abort(_('patch %s is already in the series file')
                                  % patchname)

        if rev:
            if files:
                raise error.Abort(_('option "-r" not valid when importing '
                                    'files'))
            rev = scmutil.revrange(repo, rev)
            rev.sort(reverse=True)
        elif not files:
            raise error.Abort(_('no files or revisions specified'))
        if (len(files) > 1 or len(rev) > 1) and patchname:
            raise error.Abort(_('option "-n" not valid when importing multiple '
                                'patches'))
        imported = []
        if rev:
            # If mq patches are applied, we can only import revisions
            # that form a linear path to qbase.
            # Otherwise, they should form a linear path to a head.
            heads = repo.changelog.heads(repo.changelog.node(rev.first()))
            if len(heads) > 1:
                raise error.Abort(_('revision %d is the root of more than one '
                                    'branch') % rev.last())
            if self.applied:
                base = repo.changelog.node(rev.first())
                if base in [n.node for n in self.applied]:
                    raise error.Abort(_('revision %d is already managed')
                                      % rev.first())
                if heads != [self.applied[-1].node]:
                    raise error.Abort(_('revision %d is not the parent of '
                                        'the queue') % rev.first())
                base = repo.changelog.rev(self.applied[0].node)
                lastparent = repo.changelog.parentrevs(base)[0]
            else:
                if heads != [repo.changelog.node(rev.first())]:
                    raise error.Abort(_('revision %d has unmanaged children')
                                      % rev.first())
                lastparent = None

            diffopts = self.diffopts({'git': git})
            with repo.transaction('qimport') as tr:
                for r in rev:
                    if not repo[r].mutable():
                        raise error.Abort(_('revision %d is not mutable') % r,
                                          hint=_('see "hg help phases" '
                                                 'for details'))
                    p1, p2 = repo.changelog.parentrevs(r)
                    n = repo.changelog.node(r)
                    if p2 != nullrev:
                        raise error.Abort(_('cannot import merge revision %d')
                                          % r)
                    if lastparent and lastparent != r:
                        raise error.Abort(_('revision %d is not the parent of '
                                            '%d')
                                          % (r, lastparent))
                    lastparent = p1

                    if not patchname:
                        patchname = self.makepatchname(
                            repo[r].description().split('\n', 1)[0],
                            '%d.diff' % r)
                    checkseries(patchname)
                    self.checkpatchname(patchname, force)
                    self.fullseries.insert(0, patchname)

                    patchf = self.opener(patchname, "w")
                    cmdutil.export(repo, [n], fp=patchf, opts=diffopts)
                    patchf.close()

                    se = statusentry(n, patchname)
                    self.applied.insert(0, se)

                    self.added.append(patchname)
                    imported.append(patchname)
                    patchname = None
                if rev and repo.ui.configbool('mq', 'secret', False):
                    # if we added anything with --rev, move the secret root
                    phases.retractboundary(repo, tr, phases.secret, [n])
                self.parseseries()
                self.applieddirty = True
                self.seriesdirty = True

        for i, filename in enumerate(files):
            if existing:
                if filename == '-':
                    raise error.Abort(_('-e is incompatible with import from -')
                                      )
                filename = normname(filename)
                self.checkreservedname(filename)
                if util.url(filename).islocal():
                    originpath = self.join(filename)
                    if not os.path.isfile(originpath):
                        raise error.Abort(
                            _("patch %s does not exist") % filename)

                if patchname:
                    self.checkpatchname(patchname, force)

                    self.ui.write(_('renaming %s to %s\n')
                                  % (filename, patchname))
                    util.rename(originpath, self.join(patchname))
                else:
                    patchname = filename

            else:
                if filename == '-' and not patchname:
                    raise error.Abort(_('need --name to import a patch from -'))
                elif not patchname:
                    patchname = normname(os.path.basename(filename.rstrip('/')))
                self.checkpatchname(patchname, force)
                try:
                    if filename == '-':
                        text = self.ui.fin.read()
                    else:
                        fp = hg.openpath(self.ui, filename)
                        text = fp.read()
                        fp.close()
                except (OSError, IOError):
                    raise error.Abort(_("unable to read file %s") % filename)
                patchf = self.opener(patchname, "w")
                patchf.write(text)
                patchf.close()
            if not force:
                checkseries(patchname)
            if patchname not in self.series:
                index = self.fullseriesend() + i
                self.fullseries[index:index] = [patchname]
            self.parseseries()
            self.seriesdirty = True
            self.ui.warn(_("adding %s to series file\n") % patchname)
            self.added.append(patchname)
            imported.append(patchname)
            patchname = None

        self.removeundo(repo)
        return imported

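    # Illustrative note on qimport(rev=...): revisions are processed newest
    # first (rev.sort(reverse=True)), each one must be the first parent of the
    # previously imported revision, and every generated patch is inserted at
    # the front of the series, so e.g. importing 10::12 yields a series whose
    # order is 10, 11, 12 with all three patches applied. Patch names default
    # to the first line of each description, falling back to '<rev>.diff'.
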
def fixkeepchangesopts(ui, opts):
    if (not ui.configbool('mq', 'keepchanges') or opts.get('force')
        or opts.get('exact')):
        return opts
    opts = dict(opts)
    opts['keep_changes'] = True
    return opts

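# Illustrative effect of the helper above: with "keepchanges = True" in the
# [mq] section of the configuration, commands that consult
# fixkeepchangesopts() behave as if --keep-changes were passed, unless
# --force or --exact is given, which restores the default behaviour.
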
@command("qdelete|qremove|qrm",
         [('k', 'keep', None, _('keep patch file')),
          ('r', 'rev', [],
           _('stop managing a revision (DEPRECATED)'), _('REV'))],
         _('hg qdelete [-k] [PATCH]...'))
def delete(ui, repo, *patches, **opts):
    """remove patches from queue

    The patches must not be applied, and at least one patch is required. Exact
    patch identifiers must be given. With -k/--keep, the patch files are
    preserved in the patch directory.

    To stop managing a patch and move it into permanent history,
    use the :hg:`qfinish` command."""
    q = repo.mq
    q.delete(repo, patches, opts)
    q.savedirty()
    return 0

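# Example usage (hypothetical patch name):
#
#   hg qdelete obsolete-fix.patch          # remove the patch and its file
#   hg qdelete --keep obsolete-fix.patch   # keep the file in the patch dir
#
# Both forms refuse to act on an applied patch; qpop it first, or use
# qfinish to move it into permanent history instead.
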
@command("qapplied",
         [('1', 'last', None, _('show only the preceding applied patch'))
          ] + seriesopts,
         _('hg qapplied [-1] [-s] [PATCH]'))
def applied(ui, repo, patch=None, **opts):
    """print the patches already applied

    Returns 0 on success."""

    q = repo.mq

    if patch:
        if patch not in q.series:
            raise error.Abort(_("patch %s is not in series file") % patch)
        end = q.series.index(patch) + 1
    else:
        end = q.seriesend(True)

    if opts.get('last') and not end:
        ui.write(_("no patches applied\n"))
        return 1
    elif opts.get('last') and end == 1:
        ui.write(_("only one patch applied\n"))
        return 1
    elif opts.get('last'):
        start = end - 2
        end = 1
    else:
        start = 0

    q.qseries(repo, length=end, start=start, status='A',
              summary=opts.get('summary'))


@command("qunapplied",
         [('1', 'first', None, _('show only the first patch'))] + seriesopts,
         _('hg qunapplied [-1] [-s] [PATCH]'))
def unapplied(ui, repo, patch=None, **opts):
    """print the patches not yet applied

    Returns 0 on success."""

    q = repo.mq
    if patch:
        if patch not in q.series:
            raise error.Abort(_("patch %s is not in series file") % patch)
        start = q.series.index(patch) + 1
    else:
        start = q.seriesend(True)

    if start == len(q.series) and opts.get('first'):
        ui.write(_("all patches applied\n"))
        return 1

    if opts.get('first'):
        length = 1
    else:
        length = None
    q.qseries(repo, start=start, length=length, status='U',
              summary=opts.get('summary'))

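# Illustrative session for a three-patch series with two patches applied:
#
#   $ hg qapplied
#   first.patch
#   second.patch
#   $ hg qunapplied
#   third.patch
#   $ hg qapplied -1     # the patch applied before the current top
#   first.patch
#   $ hg qunapplied -1   # the next patch that qpush would apply
#   third.patch
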
@command("qimport",
         [('e', 'existing', None, _('import file in patch directory')),
          ('n', 'name', '',
           _('name of patch file'), _('NAME')),
          ('f', 'force', None, _('overwrite existing files')),
          ('r', 'rev', [],
           _('place existing revisions under mq control'), _('REV')),
          ('g', 'git', None, _('use git extended diff format')),
          ('P', 'push', None, _('qpush after importing'))],
         _('hg qimport [-e] [-n NAME] [-f] [-g] [-P] [-r REV]... [FILE]...'))
def qimport(ui, repo, *filename, **opts):
    """import a patch or existing changeset

    The patch is inserted into the series after the last applied
    patch. If no patches have been applied, qimport prepends the patch
    to the series.

    The patch will have the same name as its source file unless you
    give it a new one with -n/--name.

    You can register an existing patch inside the patch directory with
    the -e/--existing flag.

    With -f/--force, an existing patch of the same name will be
    overwritten.

    An existing changeset may be placed under mq control with -r/--rev
    (e.g. qimport --rev . -n patch will place the current revision
    under mq control). With -g/--git, patches imported with --rev will
    use the git diff format. See the diffs help topic for information
    on why this is important for preserving rename/copy information
    and permission changes. Use :hg:`qfinish` to remove changesets
    from mq control.

    To import a patch from standard input, pass - as the patch file.
    When importing from standard input, a patch name must be specified
    using the --name flag.

    To import an existing patch while renaming it::

      hg qimport -e existing-patch -n new-name

    Returns 0 if import succeeded.
    """
    with repo.lock(): # because this may move phase
        q = repo.mq
        try:
            imported = q.qimport(
                repo, filename, patchname=opts.get('name'),
                existing=opts.get('existing'), force=opts.get('force'),
                rev=opts.get('rev'), git=opts.get('git'))
        finally:
            q.savedirty()

    if imported and opts.get('push') and not opts.get('rev'):
        return q.push(repo, imported[-1])
    return 0

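# Example usage (hypothetical names): importing a patch from standard input
# requires an explicit name,
#
#   hg export tip | hg qimport --name tip-export.patch -
#
# while "hg qimport --rev ." places the current changeset under mq control.
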
def qinit(ui, repo, create):
    """initialize a new queue repository

    This command also creates a series file for ordering patches, and
    an mq-specific .hgignore file in the queue repository, to exclude
    the status and guards files (these contain mostly transient state).

    Returns 0 if initialization succeeded."""
    q = repo.mq
    r = q.init(repo, create)
    q.savedirty()
    if r:
        if not os.path.exists(r.wjoin('.hgignore')):
            fp = r.wvfs('.hgignore', 'w')
            fp.write('^\\.hg\n')
            fp.write('^\\.mq\n')
            fp.write('syntax: glob\n')
            fp.write('status\n')
            fp.write('guards\n')
            fp.close()
        if not os.path.exists(r.wjoin('series')):
            r.wvfs('series', 'w').close()
        r[None].add(['.hgignore', 'series'])
        commands.add(ui, r)
    return 0

@command("^qinit",
         [('c', 'create-repo', None, _('create queue repository'))],
         _('hg qinit [-c]'))
def init(ui, repo, **opts):
    """init a new queue repository (DEPRECATED)

    The queue repository is unversioned by default. If
    -c/--create-repo is specified, qinit will create a separate nested
    repository for patches (qinit -c may also be run later to convert
    an unversioned patch repository into a versioned one). You can use
    qcommit to commit changes to this queue repository.

    This command is deprecated. Without -c, it's implied by other relevant
    commands. With -c, use :hg:`init --mq` instead."""
    return qinit(ui, repo, create=opts.get('create_repo'))

@command("qclone",
         [('', 'pull', None, _('use pull protocol to copy metadata')),
          ('U', 'noupdate', None,
           _('do not update the new working directories')),
          ('', 'uncompressed', None,
           _('use uncompressed transfer (fast over LAN)')),
          ('p', 'patches', '',
           _('location of source patch repository'), _('REPO')),
          ] + commands.remoteopts,
         _('hg qclone [OPTION]... SOURCE [DEST]'),
         norepo=True)
def clone(ui, source, dest=None, **opts):
    '''clone main and patch repository at same time

    If source is local, destination will have no patches applied. If
    source is remote, this command cannot check whether patches are
    applied in source, so it cannot guarantee that patches are not
    applied in destination. If you clone a remote repository, make
    sure beforehand that it has no patches applied.

    The source patch repository is looked for in <src>/.hg/patches by
    default. Use -p <url> to change it.

    The patch directory must be a nested Mercurial repository, as
    would be created by :hg:`init --mq`.

    Return 0 on success.
    '''
    def patchdir(repo):
        """compute a patch repo url from a repo object"""
        url = repo.url()
        if url.endswith('/'):
            url = url[:-1]
        return url + '/.hg/patches'

    # main repo (destination and sources)
    if dest is None:
        dest = hg.defaultdest(source)
    sr = hg.peer(ui, opts, ui.expandpath(source))

    # patches repo (source only)
    if opts.get('patches'):
        patchespath = ui.expandpath(opts.get('patches'))
    else:
        patchespath = patchdir(sr)
    try:
        hg.peer(ui, opts, patchespath)
    except error.RepoError:
        raise error.Abort(_('versioned patch repository not found'
                            ' (see init --mq)'))
    qbase, destrev = None, None
    if sr.local():
        repo = sr.local()
        if repo.mq.applied and repo[qbase].phase() != phases.secret:
            qbase = repo.mq.applied[0].node
            if not hg.islocal(dest):
                heads = set(repo.heads())
                destrev = list(heads.difference(repo.heads(qbase)))
                destrev.append(repo.changelog.parents(qbase)[0])
    elif sr.capable('lookup'):
        try:
            qbase = sr.lookup('qbase')
        except error.RepoError:
            pass

    ui.note(_('cloning main repository\n'))
    sr, dr = hg.clone(ui, opts, sr.url(), dest,
                      pull=opts.get('pull'),
                      rev=destrev,
                      update=False,
                      stream=opts.get('uncompressed'))

    ui.note(_('cloning patch repository\n'))
    hg.clone(ui, opts, opts.get('patches') or patchdir(sr), patchdir(dr),
             pull=opts.get('pull'), update=not opts.get('noupdate'),
             stream=opts.get('uncompressed'))

    if dr.local():
        repo = dr.local()
        if qbase:
            ui.note(_('stripping applied patches from destination '
                      'repository\n'))
            strip(ui, repo, [qbase], update=False, backup=None)
        if not opts.get('noupdate'):
            ui.note(_('updating destination repository\n'))
            hg.update(repo, repo.changelog.tip())

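# Example usage (hypothetical URL):
#
#   hg qclone http://example.com/project project
#
# clones the main repository into ./project and its patch repository
# (looked up in <source>/.hg/patches by default) into ./project/.hg/patches;
# when the source is local, any applied patches are stripped from the
# destination so it starts with a clean queue.
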
@command("qcommit|qci",
         commands.table["^commit|ci"][1],
         _('hg qcommit [OPTION]... [FILE]...'),
         inferrepo=True)
def commit(ui, repo, *pats, **opts):
    """commit changes in the queue repository (DEPRECATED)

    This command is deprecated; use :hg:`commit --mq` instead."""
    q = repo.mq
    r = q.qrepo()
    if not r:
        raise error.Abort('no queue repository')
    commands.commit(r.ui, r, *pats, **opts)

@command("qseries",
         [('m', 'missing', None, _('print patches not in series')),
          ] + seriesopts,
         _('hg qseries [-ms]'))
def series(ui, repo, **opts):
    """print the entire series file

    Returns 0 on success."""
    repo.mq.qseries(repo, missing=opts.get('missing'),
                    summary=opts.get('summary'))
    return 0

@command("qtop", seriesopts, _('hg qtop [-s]'))
def top(ui, repo, **opts):
    """print the name of the current patch

    Returns 0 on success."""
    q = repo.mq
    if q.applied:
        t = q.seriesend(True)
    else:
        t = 0

    if t:
        q.qseries(repo, start=t - 1, length=1, status='A',
                  summary=opts.get('summary'))
    else:
        ui.write(_("no patches applied\n"))
        return 1

@command("qnext", seriesopts, _('hg qnext [-s]'))
def next(ui, repo, **opts):
    """print the name of the next pushable patch

    Returns 0 on success."""
    q = repo.mq
    end = q.seriesend()
    if end == len(q.series):
        ui.write(_("all patches applied\n"))
        return 1
    q.qseries(repo, start=end, length=1, summary=opts.get('summary'))

@command("qprev", seriesopts, _('hg qprev [-s]'))
def prev(ui, repo, **opts):
    """print the name of the preceding applied patch

    Returns 0 on success."""
    q = repo.mq
    l = len(q.applied)
    if l == 1:
        ui.write(_("only one patch applied\n"))
        return 1
    if not l:
        ui.write(_("no patches applied\n"))
        return 1
    idx = q.series.index(q.applied[-2].name)
    q.qseries(repo, start=idx, length=1, status='A',
              summary=opts.get('summary'))

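# Illustrative output (hypothetical series) with first.patch and second.patch
# applied and third.patch still unapplied:
#
#   hg qtop   -> second.patch   (current, top-most applied patch)
#   hg qnext  -> third.patch    (next pushable patch)
#   hg qprev  -> first.patch    (patch applied just below the top)
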
2538 def setupheaderopts(ui, opts):
2539 def setupheaderopts(ui, opts):
2539 if not opts.get('user') and opts.get('currentuser'):
2540 if not opts.get('user') and opts.get('currentuser'):
2540 opts['user'] = ui.username()
2541 opts['user'] = ui.username()
2541 if not opts.get('date') and opts.get('currentdate'):
2542 if not opts.get('date') and opts.get('currentdate'):
2542 opts['date'] = "%d %d" % util.makedate()
2543 opts['date'] = "%d %d" % util.makedate()
2543
2544
@command("^qnew",
         [('e', 'edit', None, _('invoke editor on commit messages')),
          ('f', 'force', None, _('import uncommitted changes (DEPRECATED)')),
          ('g', 'git', None, _('use git extended diff format')),
          ('U', 'currentuser', None, _('add "From: <current user>" to patch')),
          ('u', 'user', '',
           _('add "From: <USER>" to patch'), _('USER')),
          ('D', 'currentdate', None, _('add "Date: <current date>" to patch')),
          ('d', 'date', '',
           _('add "Date: <DATE>" to patch'), _('DATE'))
          ] + commands.walkopts + commands.commitopts,
         _('hg qnew [-e] [-m TEXT] [-l FILE] PATCH [FILE]...'),
         inferrepo=True)
def new(ui, repo, patch, *args, **opts):
    """create a new patch

    qnew creates a new patch on top of the currently-applied patch (if
    any). The patch will be initialized with any outstanding changes
    in the working directory. You may also use -I/--include,
    -X/--exclude, and/or a list of files after the patch name to add
    only changes to matching files to the new patch, leaving the rest
    as uncommitted modifications.

    -u/--user and -d/--date can be used to set the (given) user and
    date, respectively. -U/--currentuser and -D/--currentdate set user
    to current user and date to current date.

    -e/--edit, -m/--message or -l/--logfile set the patch header as
    well as the commit message. If none is specified, the header is
    empty and the commit message is '[mq]: PATCH'.

    Use the -g/--git option to keep the patch in the git extended diff
    format. Read the diffs help topic for more information on why this
    is important for preserving permission changes and copy/rename
    information.

    Returns 0 on successful creation of a new patch.
    """
    msg = cmdutil.logmessage(ui, opts)
    q = repo.mq
    opts['msg'] = msg
    setupheaderopts(ui, opts)
    q.new(repo, patch, *args, **opts)
    q.savedirty()
    return 0

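# Illustrative qnew workflow (a sketch; the file and patch names below are
# placeholders, and mq is assumed to be enabled):
#
#   $ echo change >> file.c
#   $ hg qnew -U -m "fix frob handling" fix-frob.patch
#
# The outstanding change to file.c is captured in the new patch, and a
# "From: <current user>" header is added because of -U.
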
@command("^qrefresh",
         [('e', 'edit', None, _('invoke editor on commit messages')),
          ('g', 'git', None, _('use git extended diff format')),
          ('s', 'short', None,
           _('refresh only files already in the patch and specified files')),
          ('U', 'currentuser', None,
           _('add/update author field in patch with current user')),
          ('u', 'user', '',
           _('add/update author field in patch with given user'), _('USER')),
          ('D', 'currentdate', None,
           _('add/update date field in patch with current date')),
          ('d', 'date', '',
           _('add/update date field in patch with given date'), _('DATE'))
          ] + commands.walkopts + commands.commitopts,
         _('hg qrefresh [-I] [-X] [-e] [-m TEXT] [-l FILE] [-s] [FILE]...'),
         inferrepo=True)
def refresh(ui, repo, *pats, **opts):
    """update the current patch

    If any file patterns are provided, the refreshed patch will
    contain only the modifications that match those patterns; the
    remaining modifications will remain in the working directory.

    If -s/--short is specified, files currently included in the patch
    will be refreshed just like matched files and remain in the patch.

    If -e/--edit is specified, Mercurial will start your configured editor for
    you to enter a message. In case qrefresh fails, you will find a backup of
    your message in ``.hg/last-message.txt``.

    hg add/remove/copy/rename work as usual, though you might want to
    use git-style patches (-g/--git or [diff] git=1) to track copies
    and renames. See the diffs help topic for more information on the
    git diff format.

    Returns 0 on success.
    """
    q = repo.mq
    message = cmdutil.logmessage(ui, opts)
    setupheaderopts(ui, opts)
    with repo.wlock():
        ret = q.refresh(repo, pats, msg=message, **opts)
        q.savedirty()
        return ret

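# Illustrative qrefresh follow-up (a sketch, assuming a patch such as the
# hypothetical fix-frob.patch above is currently applied):
#
#   $ echo more >> file.c
#   $ hg qrefresh -e      # fold the new change into the top patch and edit
#                         # its message; if the edit fails, the message is
#                         # kept in .hg/last-message.txt
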
@command("^qdiff",
         commands.diffopts + commands.diffopts2 + commands.walkopts,
         _('hg qdiff [OPTION]... [FILE]...'),
         inferrepo=True)
def diff(ui, repo, *pats, **opts):
    """diff of the current patch and subsequent modifications

    Shows a diff which includes the current patch as well as any
    changes which have been made in the working directory since the
    last refresh (thus showing what the current patch would become
    after a qrefresh).

    Use :hg:`diff` if you only want to see the changes made since the
    last qrefresh, or :hg:`export qtip` if you want to see changes
    made by the current patch without including changes made since the
    qrefresh.

    Returns 0 on success.
    """
    repo.mq.diff(repo, pats, opts)
    return 0

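# Quick comparison of the three diff views mentioned in the docstring above
# (illustrative only):
#
#   $ hg qdiff            # current patch plus unrefreshed working-dir changes
#   $ hg diff             # only the changes made since the last qrefresh
#   $ hg export qtip      # only what the current patch already records
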
@command('qfold',
         [('e', 'edit', None, _('invoke editor on commit messages')),
          ('k', 'keep', None, _('keep folded patch files')),
          ] + commands.commitopts,
         _('hg qfold [-e] [-k] [-m TEXT] [-l FILE] PATCH...'))
def fold(ui, repo, *files, **opts):
    """fold the named patches into the current patch

    Patches must not yet be applied. Each patch will be successively
    applied to the current patch in the order given. If all the
    patches apply successfully, the current patch will be refreshed
    with the new cumulative patch, and the folded patches will be
    deleted. With -k/--keep, the folded patch files will not be
    removed afterwards.

    The header for each folded patch will be concatenated with the
    current patch header, separated by a line of ``* * *``.

    Returns 0 on success."""
    q = repo.mq
    if not files:
        raise error.Abort(_('qfold requires at least one patch name'))
    if not q.checktoppatch(repo)[0]:
        raise error.Abort(_('no patches applied'))
    q.checklocalchanges(repo)

    message = cmdutil.logmessage(ui, opts)

    parent = q.lookup('qtip')
    patches = []
    messages = []
    for f in files:
        p = q.lookup(f)
        if p in patches or p == parent:
            ui.warn(_('skipping already folded patch %s\n') % p)
        if q.isapplied(p):
            raise error.Abort(_('qfold cannot fold already applied patch %s')
                              % p)
        patches.append(p)

    for p in patches:
        if not message:
            ph = patchheader(q.join(p), q.plainmode)
            if ph.message:
                messages.append(ph.message)
        pf = q.join(p)
        (patchsuccess, files, fuzz) = q.patch(repo, pf)
        if not patchsuccess:
            raise error.Abort(_('error folding patch %s') % p)

    if not message:
        ph = patchheader(q.join(parent), q.plainmode)
        message = ph.message
    for msg in messages:
        if msg:
            if message:
                message.append('* * *')
            message.extend(msg)
    message = '\n'.join(message)

    diffopts = q.patchopts(q.diffopts(), *patches)
    with repo.wlock():
        q.refresh(repo, msg=message, git=diffopts.git, edit=opts.get('edit'),
                  editform='mq.qfold')
        q.delete(repo, patches, opts)
        q.savedirty()

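# Illustrative qfold invocation (a sketch; bugfix-2.patch and bugfix-3.patch
# are placeholder names for unapplied patches in the series):
#
#   $ hg qfold -e bugfix-2.patch bugfix-3.patch
#
# Both patches are applied onto the current patch, their headers are joined
# with "* * *" separators, and the folded patch files are then deleted.
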
@command("qgoto",
         [('', 'keep-changes', None,
           _('tolerate non-conflicting local changes')),
          ('f', 'force', None, _('overwrite any local changes')),
          ('', 'no-backup', None, _('do not save backup copies of files'))],
         _('hg qgoto [OPTION]... PATCH'))
def goto(ui, repo, patch, **opts):
    '''push or pop patches until named patch is at top of stack

    Returns 0 on success.'''
    opts = fixkeepchangesopts(ui, opts)
    q = repo.mq
    patch = q.lookup(patch)
    nobackup = opts.get('no_backup')
    keepchanges = opts.get('keep_changes')
    if q.isapplied(patch):
        ret = q.pop(repo, patch, force=opts.get('force'), nobackup=nobackup,
                    keepchanges=keepchanges)
    else:
        ret = q.push(repo, patch, force=opts.get('force'), nobackup=nobackup,
                     keepchanges=keepchanges)
    q.savedirty()
    return ret

@command("qguard",
         [('l', 'list', None, _('list all patches and guards')),
          ('n', 'none', None, _('drop all guards'))],
         _('hg qguard [-l] [-n] [PATCH] [-- [+GUARD]... [-GUARD]...]'))
def guard(ui, repo, *args, **opts):
    '''set or print guards for a patch

    Guards control whether a patch can be pushed. A patch with no
    guards is always pushed. A patch with a positive guard ("+foo") is
    pushed only if the :hg:`qselect` command has activated it. A patch with
    a negative guard ("-foo") is never pushed if the :hg:`qselect` command
    has activated it.

    With no arguments, print the currently active guards.
    With arguments, set guards for the named patch.

    .. note::

       Specifying negative guards now requires '--'.

    To set guards on another patch::

      hg qguard other.patch -- +2.6.17 -stable

    Returns 0 on success.
    '''
    def status(idx):
        guards = q.seriesguards[idx] or ['unguarded']
        if q.series[idx] in applied:
            state = 'applied'
        elif q.pushable(idx)[0]:
            state = 'unapplied'
        else:
            state = 'guarded'
        label = 'qguard.patch qguard.%s qseries.%s' % (state, state)
        ui.write('%s: ' % ui.label(q.series[idx], label))

        for i, guard in enumerate(guards):
            if guard.startswith('+'):
                ui.write(guard, label='qguard.positive')
            elif guard.startswith('-'):
                ui.write(guard, label='qguard.negative')
            else:
                ui.write(guard, label='qguard.unguarded')
            if i != len(guards) - 1:
                ui.write(' ')
        ui.write('\n')
    q = repo.mq
    applied = set(p.name for p in q.applied)
    patch = None
    args = list(args)
    if opts.get('list'):
        if args or opts.get('none'):
            raise error.Abort(_('cannot mix -l/--list with options or '
                                'arguments'))
        for i in xrange(len(q.series)):
            status(i)
        return
    if not args or args[0][0:1] in '-+':
        if not q.applied:
            raise error.Abort(_('no patches applied'))
        patch = q.applied[-1].name
    if patch is None and args[0][0:1] not in '-+':
        patch = args.pop(0)
    if patch is None:
        raise error.Abort(_('no patch to work with'))
    if args or opts.get('none'):
        idx = q.findseries(patch)
        if idx is None:
            raise error.Abort(_('no patch named %s') % patch)
        q.setguards(idx, args)
        q.savedirty()
    else:
        status(q.series.index(q.lookup(patch)))

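# Illustrative guard setup, mirroring the docstring examples above (the patch
# names are placeholders):
#
#   $ hg qguard foo.patch -- -stable   # never pushed while "stable" is active
#   $ hg qguard bar.patch +stable      # pushed only while "stable" is active
#   $ hg qguard -l                     # list every patch with its guards
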
@command("qheader", [], _('hg qheader [PATCH]'))
def header(ui, repo, patch=None):
    """print the header of the topmost or specified patch

    Returns 0 on success."""
    q = repo.mq

    if patch:
        patch = q.lookup(patch)
    else:
        if not q.applied:
            ui.write(_('no patches applied\n'))
            return 1
        patch = q.lookup('qtip')
    ph = patchheader(q.join(patch), q.plainmode)

    ui.write('\n'.join(ph.message) + '\n')

def lastsavename(path):
    (directory, base) = os.path.split(path)
    names = os.listdir(directory)
    namere = re.compile("%s.([0-9]+)" % base)
    maxindex = None
    maxname = None
    for f in names:
        m = namere.match(f)
        if m:
            index = int(m.group(1))
            if maxindex is None or index > maxindex:
                maxindex = index
                maxname = f
    if maxname:
        return (os.path.join(directory, maxname), maxindex)
    return (None, None)

def savename(path):
    (last, index) = lastsavename(path)
    if last is None:
        index = 0
    newpath = path + ".%d" % (index + 1)
    return newpath

@command("^qpush",
         [('', 'keep-changes', None,
           _('tolerate non-conflicting local changes')),
          ('f', 'force', None, _('apply on top of local changes')),
          ('e', 'exact', None,
           _('apply the target patch to its recorded parent')),
          ('l', 'list', None, _('list patch name in commit text')),
          ('a', 'all', None, _('apply all patches')),
          ('m', 'merge', None, _('merge from another queue (DEPRECATED)')),
          ('n', 'name', '',
           _('merge queue name (DEPRECATED)'), _('NAME')),
          ('', 'move', None,
           _('reorder patch series and apply only the patch')),
          ('', 'no-backup', None, _('do not save backup copies of files'))],
         _('hg qpush [-f] [-l] [-a] [--move] [PATCH | INDEX]'))
def push(ui, repo, patch=None, **opts):
    """push the next patch onto the stack

    By default, abort if the working directory contains uncommitted
    changes. With --keep-changes, abort only if the uncommitted files
    overlap with patched files. With -f/--force, backup and patch over
    uncommitted changes.

    Return 0 on success.
    """
    q = repo.mq
    mergeq = None

    opts = fixkeepchangesopts(ui, opts)
    if opts.get('merge'):
        if opts.get('name'):
            newpath = repo.join(opts.get('name'))
        else:
            newpath, i = lastsavename(q.path)
        if not newpath:
            ui.warn(_("no saved queues found, please use -n\n"))
            return 1
        mergeq = queue(ui, repo.baseui, repo.path, newpath)
        ui.warn(_("merging with queue at: %s\n") % mergeq.path)
    ret = q.push(repo, patch, force=opts.get('force'), list=opts.get('list'),
                 mergeq=mergeq, all=opts.get('all'), move=opts.get('move'),
                 exact=opts.get('exact'), nobackup=opts.get('no_backup'),
                 keepchanges=opts.get('keep_changes'))
    return ret

@command("^qpop",
         [('a', 'all', None, _('pop all patches')),
          ('n', 'name', '',
           _('queue name to pop (DEPRECATED)'), _('NAME')),
          ('', 'keep-changes', None,
           _('tolerate non-conflicting local changes')),
          ('f', 'force', None, _('forget any local changes to patched files')),
          ('', 'no-backup', None, _('do not save backup copies of files'))],
         _('hg qpop [-a] [-f] [PATCH | INDEX]'))
def pop(ui, repo, patch=None, **opts):
    """pop the current patch off the stack

    Without argument, pops off the top of the patch stack. If given a
    patch name, keeps popping off patches until the named patch is at
    the top of the stack.

    By default, abort if the working directory contains uncommitted
    changes. With --keep-changes, abort only if the uncommitted files
    overlap with patched files. With -f/--force, backup and discard
    changes made to such files.

    Return 0 on success.
    """
    opts = fixkeepchangesopts(ui, opts)
    localupdate = True
    if opts.get('name'):
        q = queue(ui, repo.baseui, repo.path, repo.join(opts.get('name')))
        ui.warn(_('using patch queue: %s\n') % q.path)
        localupdate = False
    else:
        q = repo.mq
    ret = q.pop(repo, patch, force=opts.get('force'), update=localupdate,
                all=opts.get('all'), nobackup=opts.get('no_backup'),
                keepchanges=opts.get('keep_changes'))
    q.savedirty()
    return ret

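# Illustrative stack manipulation with qpush, qpop and qgoto (a sketch; the
# patch names are placeholders for entries in the series):
#
#   $ hg qpush -a                        # apply every remaining patch
#   $ hg qpop                            # pop just the topmost patch
#   $ hg qpush --move later.patch        # reorder the series, apply only it
#   $ hg qgoto --keep-changes mid.patch  # push/pop until mid.patch is on top
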
@command("qrename|qmv", [], _('hg qrename PATCH1 [PATCH2]'))
def rename(ui, repo, patch, name=None, **opts):
    """rename a patch

    With one argument, renames the current patch to PATCH1.
    With two arguments, renames PATCH1 to PATCH2.

    Returns 0 on success."""
    q = repo.mq
    if not name:
        name = patch
        patch = None

    if patch:
        patch = q.lookup(patch)
    else:
        if not q.applied:
            ui.write(_('no patches applied\n'))
            return
        patch = q.lookup('qtip')
    absdest = q.join(name)
    if os.path.isdir(absdest):
        name = normname(os.path.join(name, os.path.basename(patch)))
        absdest = q.join(name)
    q.checkpatchname(name)

    ui.note(_('renaming %s to %s\n') % (patch, name))
    i = q.findseries(patch)
    guards = q.guard_re.findall(q.fullseries[i])
    q.fullseries[i] = name + ''.join([' #' + g for g in guards])
    q.parseseries()
    q.seriesdirty = True

    info = q.isapplied(patch)
    if info:
        q.applied[info[0]] = statusentry(info[1], name)
        q.applieddirty = True

    destdir = os.path.dirname(absdest)
    if not os.path.isdir(destdir):
        os.makedirs(destdir)
    util.rename(q.join(patch), absdest)
    r = q.qrepo()
    if r and patch in r.dirstate:
        wctx = r[None]
        with r.wlock():
            if r.dirstate[patch] == 'a':
                r.dirstate.drop(patch)
                r.dirstate.add(name)
            else:
                wctx.copy(patch, name)
                wctx.forget([patch])

    q.savedirty()

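# Illustrative renames (qmv is the alias registered above; patch names are
# placeholders):
#
#   $ hg qrename old-name.patch new-name.patch
#   $ hg qmv better-name.patch    # with one argument, renames the top patch
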
@command("qrestore",
         [('d', 'delete', None, _('delete save entry')),
          ('u', 'update', None, _('update queue working directory'))],
         _('hg qrestore [-d] [-u] REV'))
def restore(ui, repo, rev, **opts):
    """restore the queue state saved by a revision (DEPRECATED)

    This command is deprecated, use :hg:`rebase` instead."""
    rev = repo.lookup(rev)
    q = repo.mq
    q.restore(repo, rev, delete=opts.get('delete'),
              qupdate=opts.get('update'))
    q.savedirty()
    return 0

@command("qsave",
         [('c', 'copy', None, _('copy patch directory')),
          ('n', 'name', '',
           _('copy directory name'), _('NAME')),
          ('e', 'empty', None, _('clear queue status file')),
          ('f', 'force', None, _('force copy'))] + commands.commitopts,
         _('hg qsave [-m TEXT] [-l FILE] [-c] [-n NAME] [-e] [-f]'))
def save(ui, repo, **opts):
    """save current queue state (DEPRECATED)

    This command is deprecated, use :hg:`rebase` instead."""
    q = repo.mq
    message = cmdutil.logmessage(ui, opts)
    ret = q.save(repo, msg=message)
    if ret:
        return ret
    q.savedirty() # save to .hg/patches before copying
    if opts.get('copy'):
        path = q.path
        if opts.get('name'):
            newpath = os.path.join(q.basepath, opts.get('name'))
            if os.path.exists(newpath):
                if not os.path.isdir(newpath):
                    raise error.Abort(_('destination %s exists and is not '
                                        'a directory') % newpath)
                if not opts.get('force'):
                    raise error.Abort(_('destination %s exists, '
                                        'use -f to force') % newpath)
        else:
            newpath = savename(path)
        ui.warn(_("copy %s to %s\n") % (path, newpath))
        util.copyfiles(path, newpath)
    if opts.get('empty'):
        del q.applied[:]
        q.applieddirty = True
        q.savedirty()
    return 0


@command("qselect",
         [('n', 'none', None, _('disable all guards')),
          ('s', 'series', None, _('list all guards in series file')),
          ('', 'pop', None, _('pop to before first guarded applied patch')),
          ('', 'reapply', None, _('pop, then reapply patches'))],
         _('hg qselect [OPTION]... [GUARD]...'))
def select(ui, repo, *args, **opts):
    '''set or print guarded patches to push

    Use the :hg:`qguard` command to set or print guards on a patch, then use
    qselect to tell mq which guards to use. A patch will be pushed if
    it has no guards or any positive guards match the currently
    selected guard, but will not be pushed if any negative guards
    match the current guard. For example::

      qguard foo.patch -- -stable    (negative guard)
      qguard bar.patch    +stable    (positive guard)
      qselect stable

    This activates the "stable" guard. mq will skip foo.patch (because
    it has a negative match) but push bar.patch (because it has a
    positive match).

    With no arguments, prints the currently active guards.
    With one argument, sets the active guard.

    Use -n/--none to deactivate guards (no other arguments needed).
    When no guards are active, patches with positive guards are
    skipped and patches with negative guards are pushed.

    qselect can change the guards on applied patches. It does not pop
    guarded patches by default. Use --pop to pop back to the last
    applied patch that is not guarded. Use --reapply (which implies
    --pop) to push back to the current patch afterwards, but skip
    guarded patches.

    Use -s/--series to print a list of all guards in the series file
    (no other arguments needed). Use -v for more information.

    Returns 0 on success.'''

    q = repo.mq
    guards = q.active()
    pushable = lambda i: q.pushable(q.applied[i].name)[0]
    if args or opts.get('none'):
        old_unapplied = q.unapplied(repo)
        old_guarded = [i for i in xrange(len(q.applied)) if not pushable(i)]
        q.setactive(args)
        q.savedirty()
        if not args:
            ui.status(_('guards deactivated\n'))
        if not opts.get('pop') and not opts.get('reapply'):
            unapplied = q.unapplied(repo)
            guarded = [i for i in xrange(len(q.applied)) if not pushable(i)]
            if len(unapplied) != len(old_unapplied):
                ui.status(_('number of unguarded, unapplied patches has '
                            'changed from %d to %d\n') %
                          (len(old_unapplied), len(unapplied)))
            if len(guarded) != len(old_guarded):
                ui.status(_('number of guarded, applied patches has changed '
                            'from %d to %d\n') %
                          (len(old_guarded), len(guarded)))
    elif opts.get('series'):
        guards = {}
        noguards = 0
        for gs in q.seriesguards:
            if not gs:
                noguards += 1
            for g in gs:
                guards.setdefault(g, 0)
                guards[g] += 1
        if ui.verbose:
            guards['NONE'] = noguards
        guards = guards.items()
        guards.sort(key=lambda x: x[0][1:])
        if guards:
            ui.note(_('guards in series file:\n'))
            for guard, count in guards:
                ui.note('%2d ' % count)
                ui.write(guard, '\n')
        else:
            ui.note(_('no guards in series file\n'))
    else:
        if guards:
            ui.note(_('active guards:\n'))
            for g in guards:
                ui.write(g, '\n')
        else:
            ui.write(_('no active guards\n'))
    reapply = opts.get('reapply') and q.applied and q.applied[-1].name
    popped = False
    if opts.get('pop') or opts.get('reapply'):
        for i in xrange(len(q.applied)):
            if not pushable(i):
                ui.status(_('popping guarded patches\n'))
                popped = True
                if i == 0:
                    q.pop(repo, all=True)
                else:
                    q.pop(repo, q.applied[i - 1].name)
                break
    if popped:
        try:
            if reapply:
                ui.status(_('reapplying unguarded patches\n'))
                q.push(repo, reapply)
        finally:
            q.savedirty()

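# Illustrative guard selection, continuing the qguard sketch further above:
#
#   $ hg qselect stable             # activate the "stable" guard
#   $ hg qselect --reapply stable   # also pop guarded patches, then re-push
#   $ hg qselect -n                 # deactivate all guards again
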
@command("qfinish",
         [('a', 'applied', None, _('finish all applied changesets'))],
         _('hg qfinish [-a] [REV]...'))
def finish(ui, repo, *revrange, **opts):
    """move applied patches into repository history

    Finishes the specified revisions (corresponding to applied
    patches) by moving them out of mq control into regular repository
    history.

    Accepts a revision range or the -a/--applied option. If --applied
    is specified, all applied mq revisions are removed from mq
    control. Otherwise, the given revisions must be at the base of the
    stack of applied patches.

    This can be especially useful if your changes have been applied to
    an upstream repository, or if you are about to push your changes
    to upstream.

    Returns 0 on success.
    """
    if not opts.get('applied') and not revrange:
        raise error.Abort(_('no revisions specified'))
    elif opts.get('applied'):
        revrange = ('qbase::qtip',) + revrange

    q = repo.mq
    if not q.applied:
        ui.status(_('no patches applied\n'))
        return 0

    revs = scmutil.revrange(repo, revrange)
    if repo['.'].rev() in revs and repo[None].files():
        ui.warn(_('warning: uncommitted changes in the working directory\n'))
    # queue.finish may change phases, but it leaves the responsibility of
    # locking the repo to the caller to avoid a deadlock with wlock. This
    # command code is responsible for taking that lock.
    with repo.lock():
        q.finish(repo, revs)
        q.savedirty()
    return 0

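# Illustrative qfinish calls (a sketch; the revision arguments shown are
# mq's standard qbase revset name and the --applied shortcut):
#
#   $ hg qfinish -a        # move every applied mq patch into regular history
#   $ hg qfinish qbase     # finish only the bottom-most applied patch
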
@command("qqueue",
         [('l', 'list', False, _('list all available queues')),
          ('', 'active', False, _('print name of active queue')),
          ('c', 'create', False, _('create new queue')),
          ('', 'rename', False, _('rename active queue')),
          ('', 'delete', False, _('delete reference to queue')),
          ('', 'purge', False, _('delete queue, and remove patch dir')),
          ],
         _('[OPTION] [QUEUE]'))
def qqueue(ui, repo, name=None, **opts):
    '''manage multiple patch queues

    Supports switching between different patch queues, as well as creating
    new patch queues and deleting existing ones.

    Omitting a queue name or specifying -l/--list will show you the registered
    queues - by default the "normal" patches queue is registered. The currently
    active queue will be marked with "(active)". Specifying --active will print
    only the name of the active queue.

    To create a new queue, use -c/--create. The queue is automatically made
    active, except in the case where there are applied patches from the
    currently active queue in the repository. Then the queue will only be
    created and switching will fail.

    To delete an existing queue, use --delete. You cannot delete the currently
    active queue.

    Returns 0 on success.
    '''
    q = repo.mq
    _defaultqueue = 'patches'
    _allqueues = 'patches.queues'
    _activequeue = 'patches.queue'

    def _getcurrent():
        cur = os.path.basename(q.path)
        if cur.startswith('patches-'):
            cur = cur[8:]
        return cur

    def _noqueues():
        try:
            fh = repo.vfs(_allqueues, 'r')
            fh.close()
        except IOError:
            return True

        return False

    def _getqueues():
        current = _getcurrent()

        try:
            fh = repo.vfs(_allqueues, 'r')
            queues = [queue.strip() for queue in fh if queue.strip()]
            fh.close()
            if current not in queues:
                queues.append(current)
        except IOError:
            queues = [_defaultqueue]

        return sorted(queues)

    def _setactive(name):
        if q.applied:
            raise error.Abort(_('new queue created, but cannot make active '
                                'as patches are applied'))
        _setactivenocheck(name)

    def _setactivenocheck(name):
        fh = repo.vfs(_activequeue, 'w')
        if name != 'patches':
            fh.write(name)
        fh.close()

    def _addqueue(name):
        fh = repo.vfs(_allqueues, 'a')
        fh.write('%s\n' % (name,))
        fh.close()

    def _queuedir(name):
        if name == 'patches':
            return repo.join('patches')
        else:
            return repo.join('patches-' + name)

    def _validname(name):
        for n in name:
            if n in ':\\/.':
                return False
        return True

    def _delete(name):
        if name not in existing:
            raise error.Abort(_('cannot delete queue that does not exist'))

        current = _getcurrent()

        if name == current:
            raise error.Abort(_('cannot delete currently active queue'))

        fh = repo.vfs('patches.queues.new', 'w')
        for queue in existing:
            if queue == name:
                continue
            fh.write('%s\n' % (queue,))
        fh.close()
        util.rename(repo.join('patches.queues.new'), repo.join(_allqueues))

    if not name or opts.get('list') or opts.get('active'):
        current = _getcurrent()
        if opts.get('active'):
            ui.write('%s\n' % (current,))
            return
        for queue in _getqueues():
            ui.write('%s' % (queue,))
            if queue == current and not ui.quiet:
                ui.write(_(' (active)\n'))
            else:
                ui.write('\n')
        return

    if not _validname(name):
        raise error.Abort(
            _('invalid queue name, may not contain the characters ":\\/."'))

    existing = _getqueues()

    if opts.get('create'):
        if name in existing:
            raise error.Abort(_('queue "%s" already exists') % name)
        if _noqueues():
            _addqueue(_defaultqueue)
        _addqueue(name)
        _setactive(name)
    elif opts.get('rename'):
        current = _getcurrent()
        if name == current:
            raise error.Abort(_('can\'t rename "%s" to its current name')
                              % name)
        if name in existing:
            raise error.Abort(_('queue "%s" already exists') % name)

        olddir = _queuedir(current)
        newdir = _queuedir(name)

        if os.path.exists(newdir):
            raise error.Abort(_('non-queue directory "%s" already exists') %
                              newdir)

        fh = repo.vfs('patches.queues.new', 'w')
        for queue in existing:
            if queue == current:
                fh.write('%s\n' % (name,))
                if os.path.exists(olddir):
                    util.rename(olddir, newdir)
            else:
                fh.write('%s\n' % (queue,))
        fh.close()
        util.rename(repo.join('patches.queues.new'), repo.join(_allqueues))
        _setactivenocheck(name)
    elif opts.get('delete'):
        _delete(name)
    elif opts.get('purge'):
        if name in existing:
            _delete(name)
        qdir = _queuedir(name)
        if os.path.exists(qdir):
            shutil.rmtree(qdir)
    else:
        if name not in existing:
            raise error.Abort(_('use --create to create a new queue'))
        _setactive(name)

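# Illustrative queue management (a sketch; "feature-x" is a placeholder
# queue name, and switching requires no applied patches):
#
#   $ hg qqueue --create feature-x    # create a new queue and switch to it
#   $ hg qqueue -l                    # list queues; the active one is marked
#   $ hg qqueue patches               # switch back to the default queue
#   $ hg qqueue --delete feature-x    # drop the reference (not the patch dir)
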
def mqphasedefaults(repo, roots):
    """callback used to set mq changeset as secret when no phase data exists"""
    if repo.mq.applied:
        if repo.ui.configbool('mq', 'secret', False):
            mqphase = phases.secret
        else:
            mqphase = phases.draft
        qbase = repo[repo.mq.applied[0].node]
        roots[mqphase].add(qbase.node())
    return roots

3393 def reposetup(ui, repo):
3394 def reposetup(ui, repo):
3394 class mqrepo(repo.__class__):
3395 class mqrepo(repo.__class__):
3395 @localrepo.unfilteredpropertycache
3396 @localrepo.unfilteredpropertycache
3396 def mq(self):
3397 def mq(self):
3397 return queue(self.ui, self.baseui, self.path)
3398 return queue(self.ui, self.baseui, self.path)
3398
3399
3399 def invalidateall(self):
3400 def invalidateall(self):
3400 super(mqrepo, self).invalidateall()
3401 super(mqrepo, self).invalidateall()
3401 if localrepo.hasunfilteredcache(self, 'mq'):
3402 if localrepo.hasunfilteredcache(self, 'mq'):
3402 # recreate mq in case queue path was changed
3403 # recreate mq in case queue path was changed
3403 delattr(self.unfiltered(), 'mq')
3404 delattr(self.unfiltered(), 'mq')
3404
3405
3405 def abortifwdirpatched(self, errmsg, force=False):
3406 def abortifwdirpatched(self, errmsg, force=False):
3406 if self.mq.applied and self.mq.checkapplied and not force:
3407 if self.mq.applied and self.mq.checkapplied and not force:
3407 parents = self.dirstate.parents()
3408 parents = self.dirstate.parents()
3408 patches = [s.node for s in self.mq.applied]
3409 patches = [s.node for s in self.mq.applied]
3409 if parents[0] in patches or parents[1] in patches:
3410 if parents[0] in patches or parents[1] in patches:
3410 raise error.Abort(errmsg)
3411 raise error.Abort(errmsg)
3411
3412
3412 def commit(self, text="", user=None, date=None, match=None,
3413 def commit(self, text="", user=None, date=None, match=None,
3413 force=False, editor=False, extra={}):
3414 force=False, editor=False, extra={}):
3414 self.abortifwdirpatched(
3415 self.abortifwdirpatched(
3415 _('cannot commit over an applied mq patch'),
3416 _('cannot commit over an applied mq patch'),
3416 force)
3417 force)
3417
3418
3418 return super(mqrepo, self).commit(text, user, date, match, force,
3419 return super(mqrepo, self).commit(text, user, date, match, force,
3419 editor, extra)
3420 editor, extra)
3420
3421
3421 def checkpush(self, pushop):
3422 def checkpush(self, pushop):
3422 if self.mq.applied and self.mq.checkapplied and not pushop.force:
3423 if self.mq.applied and self.mq.checkapplied and not pushop.force:
3423 outapplied = [e.node for e in self.mq.applied]
3424 outapplied = [e.node for e in self.mq.applied]
3424 if pushop.revs:
3425 if pushop.revs:
3425 # Assume applied patches have no non-patch descendants and
3426 # Assume applied patches have no non-patch descendants and
3426 # are not on remote already. Filter out any changeset not
3427 # are not on remote already. Filter out any changeset not
3427 # being pushed.
3428 # being pushed.
3428 heads = set(pushop.revs)
3429 heads = set(pushop.revs)
3429 for node in reversed(outapplied):
3430 for node in reversed(outapplied):
3430 if node in heads:
3431 if node in heads:
3431 break
3432 break
3432 else:
3433 else:
3433 outapplied.pop()
3434 outapplied.pop()
3434 # looking for pushed and shared changeset
3435 # looking for pushed and shared changeset
3435 for node in outapplied:
3436 for node in outapplied:
3436 if self[node].phase() < phases.secret:
3437 if self[node].phase() < phases.secret:
3437 raise error.Abort(_('source has mq patches applied'))
3438 raise error.Abort(_('source has mq patches applied'))
3438 # no non-secret patches pushed
3439 # no non-secret patches pushed
3439 super(mqrepo, self).checkpush(pushop)
3440 super(mqrepo, self).checkpush(pushop)
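The loop above walks the applied patch stack from the tip downwards and drops entries until it reaches a node that is actually among the heads being pushed; only the remainder is then checked for non-secret phases. A small standalone sketch of that trimming step, with hypothetical names::

    def trimunpushed(applied, pushedheads):
        # drop applied nodes from the tip down until one of them is a head
        # included in the push; only those can reach the remote repository
        applied = list(applied)
        for node in reversed(list(applied)):
            if node in pushedheads:
                break
            applied.pop()
        return applied

    # trimunpushed(['p1', 'p2', 'p3'], {'p1'}) == ['p1']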
3440
3441
3441 def _findtags(self):
3442 def _findtags(self):
3442 '''augment tags from base class with patch tags'''
3443 '''augment tags from base class with patch tags'''
3443 result = super(mqrepo, self)._findtags()
3444 result = super(mqrepo, self)._findtags()
3444
3445
3445 q = self.mq
3446 q = self.mq
3446 if not q.applied:
3447 if not q.applied:
3447 return result
3448 return result
3448
3449
3449 mqtags = [(patch.node, patch.name) for patch in q.applied]
3450 mqtags = [(patch.node, patch.name) for patch in q.applied]
3450
3451
3451 try:
3452 try:
3452 # for now ignore filtering business
3453 # for now ignore filtering business
3453 self.unfiltered().changelog.rev(mqtags[-1][0])
3454 self.unfiltered().changelog.rev(mqtags[-1][0])
3454 except error.LookupError:
3455 except error.LookupError:
3455 self.ui.warn(_('mq status file refers to unknown node %s\n')
3456 self.ui.warn(_('mq status file refers to unknown node %s\n')
3456 % short(mqtags[-1][0]))
3457 % short(mqtags[-1][0]))
3457 return result
3458 return result
3458
3459
3459 # do not add fake tags for filtered revisions
3460 # do not add fake tags for filtered revisions
3460 included = self.changelog.hasnode
3461 included = self.changelog.hasnode
3461 mqtags = [mqt for mqt in mqtags if included(mqt[0])]
3462 mqtags = [mqt for mqt in mqtags if included(mqt[0])]
3462 if not mqtags:
3463 if not mqtags:
3463 return result
3464 return result
3464
3465
3465 mqtags.append((mqtags[-1][0], 'qtip'))
3466 mqtags.append((mqtags[-1][0], 'qtip'))
3466 mqtags.append((mqtags[0][0], 'qbase'))
3467 mqtags.append((mqtags[0][0], 'qbase'))
3467 mqtags.append((self.changelog.parents(mqtags[0][0])[0], 'qparent'))
3468 mqtags.append((self.changelog.parents(mqtags[0][0])[0], 'qparent'))
3468 tags = result[0]
3469 tags = result[0]
3469 for patch in mqtags:
3470 for patch in mqtags:
3470 if patch[1] in tags:
3471 if patch[1] in tags:
3471 self.ui.warn(_('tag %s overrides mq patch of the same '
3472 self.ui.warn(_('tag %s overrides mq patch of the same '
3472 'name\n') % patch[1])
3473 'name\n') % patch[1])
3473 else:
3474 else:
3474 tags[patch[1]] = patch[0]
3475 tags[patch[1]] = patch[0]
3475
3476
3476 return result
3477 return result
3477
3478
3478 if repo.local():
3479 if repo.local():
3479 repo.__class__ = mqrepo
3480 repo.__class__ = mqrepo
3480
3481
3481 repo._phasedefaults.append(mqphasedefaults)
3482 repo._phasedefaults.append(mqphasedefaults)
3482
3483
3483 def mqimport(orig, ui, repo, *args, **kwargs):
3484 def mqimport(orig, ui, repo, *args, **kwargs):
3484 if (util.safehasattr(repo, 'abortifwdirpatched')
3485 if (util.safehasattr(repo, 'abortifwdirpatched')
3485 and not kwargs.get('no_commit', False)):
3486 and not kwargs.get('no_commit', False)):
3486 repo.abortifwdirpatched(_('cannot import over an applied patch'),
3487 repo.abortifwdirpatched(_('cannot import over an applied patch'),
3487 kwargs.get('force'))
3488 kwargs.get('force'))
3488 return orig(ui, repo, *args, **kwargs)
3489 return orig(ui, repo, *args, **kwargs)
3489
3490
3490 def mqinit(orig, ui, *args, **kwargs):
3491 def mqinit(orig, ui, *args, **kwargs):
3491 mq = kwargs.pop('mq', None)
3492 mq = kwargs.pop('mq', None)
3492
3493
3493 if not mq:
3494 if not mq:
3494 return orig(ui, *args, **kwargs)
3495 return orig(ui, *args, **kwargs)
3495
3496
3496 if args:
3497 if args:
3497 repopath = args[0]
3498 repopath = args[0]
3498 if not hg.islocal(repopath):
3499 if not hg.islocal(repopath):
3499 raise error.Abort(_('only a local queue repository '
3500 raise error.Abort(_('only a local queue repository '
3500 'may be initialized'))
3501 'may be initialized'))
3501 else:
3502 else:
3502 repopath = cmdutil.findrepo(os.getcwd())
3503 repopath = cmdutil.findrepo(os.getcwd())
3503 if not repopath:
3504 if not repopath:
3504 raise error.Abort(_('there is no Mercurial repository here '
3505 raise error.Abort(_('there is no Mercurial repository here '
3505 '(.hg not found)'))
3506 '(.hg not found)'))
3506 repo = hg.repository(ui, repopath)
3507 repo = hg.repository(ui, repopath)
3507 return qinit(ui, repo, True)
3508 return qinit(ui, repo, True)
3508
3509
3509 def mqcommand(orig, ui, repo, *args, **kwargs):
3510 def mqcommand(orig, ui, repo, *args, **kwargs):
3510 """Add --mq option to operate on patch repository instead of main"""
3511 """Add --mq option to operate on patch repository instead of main"""
3511
3512
3512 # some commands do not like getting unknown options
3513 # some commands do not like getting unknown options
3513 mq = kwargs.pop('mq', None)
3514 mq = kwargs.pop('mq', None)
3514
3515
3515 if not mq:
3516 if not mq:
3516 return orig(ui, repo, *args, **kwargs)
3517 return orig(ui, repo, *args, **kwargs)
3517
3518
3518 q = repo.mq
3519 q = repo.mq
3519 r = q.qrepo()
3520 r = q.qrepo()
3520 if not r:
3521 if not r:
3521 raise error.Abort(_('no queue repository'))
3522 raise error.Abort(_('no queue repository'))
3522 return orig(r.ui, r, *args, **kwargs)
3523 return orig(r.ui, r, *args, **kwargs)
3523
3524
3524 def summaryhook(ui, repo):
3525 def summaryhook(ui, repo):
3525 q = repo.mq
3526 q = repo.mq
3526 m = []
3527 m = []
3527 a, u = len(q.applied), len(q.unapplied(repo))
3528 a, u = len(q.applied), len(q.unapplied(repo))
3528 if a:
3529 if a:
3529 m.append(ui.label(_("%d applied"), 'qseries.applied') % a)
3530 m.append(ui.label(_("%d applied"), 'qseries.applied') % a)
3530 if u:
3531 if u:
3531 m.append(ui.label(_("%d unapplied"), 'qseries.unapplied') % u)
3532 m.append(ui.label(_("%d unapplied"), 'qseries.unapplied') % u)
3532 if m:
3533 if m:
3533 # i18n: column positioning for "hg summary"
3534 # i18n: column positioning for "hg summary"
3534 ui.write(_("mq: %s\n") % ', '.join(m))
3535 ui.write(_("mq: %s\n") % ', '.join(m))
3535 else:
3536 else:
3536 # i18n: column positioning for "hg summary"
3537 # i18n: column positioning for "hg summary"
3537 ui.note(_("mq: (empty queue)\n"))
3538 ui.note(_("mq: (empty queue)\n"))
3538
3539
3539 revsetpredicate = revset.extpredicate()
3540 revsetpredicate = registrar.revsetpredicate()
3540
3541
3541 @revsetpredicate('mq()')
3542 @revsetpredicate('mq()')
3542 def revsetmq(repo, subset, x):
3543 def revsetmq(repo, subset, x):
3543 """Changesets managed by MQ.
3544 """Changesets managed by MQ.
3544 """
3545 """
3545 revset.getargs(x, 0, 0, _("mq takes no arguments"))
3546 revset.getargs(x, 0, 0, _("mq takes no arguments"))
3546 applied = set([repo[r.node].rev() for r in repo.mq.applied])
3547 applied = set([repo[r.node].rev() for r in repo.mq.applied])
3547 return revset.baseset([r for r in subset if r in applied])
3548 return revset.baseset([r for r in subset if r in applied])
3548
3549
3549 # tell hggettext to extract docstrings from these functions:
3550 # tell hggettext to extract docstrings from these functions:
3550 i18nfunctions = [revsetmq]
3551 i18nfunctions = [revsetmq]
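With registrar.revsetpredicate() (the replacement for revset.extpredicate() made in this revision), decorated functions accumulate on the module-level 'revsetpredicate' object and are picked up when the extension is loaded, which is why the explicit revsetpredicate.setup() call is removed below. A minimal sketch of how another extension might register a predicate the same way; the predicate name and body are hypothetical::

    from mercurial import registrar, revset

    revsetpredicate = registrar.revsetpredicate()

    @revsetpredicate('draftheads()')
    def draftheads(repo, subset, x):
        """Heads that are still in the draft phase."""
        revset.getargs(x, 0, 0, "draftheads takes no arguments")
        drafts = set(repo.revs('head() and draft()'))
        return revset.baseset([r for r in subset if r in drafts])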
3551
3552
3552 def extsetup(ui):
3553 def extsetup(ui):
3553 # Ensure mq wrappers are called first, regardless of extension load order by
3554 # Ensure mq wrappers are called first, regardless of extension load order by
3554 # NOT wrapping in uisetup() and instead deferring to init stage two here.
3555 # NOT wrapping in uisetup() and instead deferring to init stage two here.
3555 mqopt = [('', 'mq', None, _("operate on patch repository"))]
3556 mqopt = [('', 'mq', None, _("operate on patch repository"))]
3556
3557
3557 extensions.wrapcommand(commands.table, 'import', mqimport)
3558 extensions.wrapcommand(commands.table, 'import', mqimport)
3558 cmdutil.summaryhooks.add('mq', summaryhook)
3559 cmdutil.summaryhooks.add('mq', summaryhook)
3559
3560
3560 entry = extensions.wrapcommand(commands.table, 'init', mqinit)
3561 entry = extensions.wrapcommand(commands.table, 'init', mqinit)
3561 entry[1].extend(mqopt)
3562 entry[1].extend(mqopt)
3562
3563
3563 def dotable(cmdtable):
3564 def dotable(cmdtable):
3564 for cmd, entry in cmdtable.iteritems():
3565 for cmd, entry in cmdtable.iteritems():
3565 cmd = cmdutil.parsealiases(cmd)[0]
3566 cmd = cmdutil.parsealiases(cmd)[0]
3566 func = entry[0]
3567 func = entry[0]
3567 if func.norepo:
3568 if func.norepo:
3568 continue
3569 continue
3569 entry = extensions.wrapcommand(cmdtable, cmd, mqcommand)
3570 entry = extensions.wrapcommand(cmdtable, cmd, mqcommand)
3570 entry[1].extend(mqopt)
3571 entry[1].extend(mqopt)
3571
3572
3572 dotable(commands.table)
3573 dotable(commands.table)
3573
3574
3574 for extname, extmodule in extensions.extensions():
3575 for extname, extmodule in extensions.extensions():
3575 if extmodule.__file__ != __file__:
3576 if extmodule.__file__ != __file__:
3576 dotable(getattr(extmodule, 'cmdtable', {}))
3577 dotable(getattr(extmodule, 'cmdtable', {}))
3577
3578
3578 revsetpredicate.setup()
3579
3580 colortable = {'qguard.negative': 'red',
3579 colortable = {'qguard.negative': 'red',
3581 'qguard.positive': 'yellow',
3580 'qguard.positive': 'yellow',
3582 'qguard.unguarded': 'green',
3581 'qguard.unguarded': 'green',
3583 'qseries.applied': 'blue bold underline',
3582 'qseries.applied': 'blue bold underline',
3584 'qseries.guarded': 'black bold',
3583 'qseries.guarded': 'black bold',
3585 'qseries.missing': 'red bold',
3584 'qseries.missing': 'red bold',
3586 'qseries.unapplied': 'black bold'}
3585 'qseries.unapplied': 'black bold'}
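The extsetup() hook above adds the --mq flag to every command by wrapping each table entry with extensions.wrapcommand() and extending its option list. A minimal sketch of that wrapping pattern in isolation; the wrapper and the extra flag are hypothetical::

    from mercurial import commands, extensions
    from mercurial.i18n import _

    def verboselog(orig, ui, repo, *args, **kwargs):
        # delegate to the original 'log' implementation; a real wrapper
        # would inspect or adjust args/kwargs before calling through
        return orig(ui, repo, *args, **kwargs)

    def extsetup(ui):
        entry = extensions.wrapcommand(commands.table, 'log', verboselog)
        # entry[1] is the wrapped command's option list and can be
        # extended, just as the mq code extends it with its --mq option
        entry[1].append(('', 'exampleflag', None, _('example option')))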
@@ -1,1328 +1,1327 b''
1 # rebase.py - rebasing feature for mercurial
1 # rebase.py - rebasing feature for mercurial
2 #
2 #
3 # Copyright 2008 Stefano Tortarolo <stefano.tortarolo at gmail dot com>
3 # Copyright 2008 Stefano Tortarolo <stefano.tortarolo at gmail dot com>
4 #
4 #
5 # This software may be used and distributed according to the terms of the
5 # This software may be used and distributed according to the terms of the
6 # GNU General Public License version 2 or any later version.
6 # GNU General Public License version 2 or any later version.
7
7
8 '''command to move sets of revisions to a different ancestor
8 '''command to move sets of revisions to a different ancestor
9
9
10 This extension lets you rebase changesets in an existing Mercurial
10 This extension lets you rebase changesets in an existing Mercurial
11 repository.
11 repository.
12
12
13 For more information:
13 For more information:
14 https://mercurial-scm.org/wiki/RebaseExtension
14 https://mercurial-scm.org/wiki/RebaseExtension
15 '''
15 '''
16
16
17 from mercurial import hg, util, repair, merge, cmdutil, commands, bookmarks
17 from mercurial import hg, util, repair, merge, cmdutil, commands, bookmarks
18 from mercurial import extensions, patch, scmutil, phases, obsolete, error
18 from mercurial import extensions, patch, scmutil, phases, obsolete, error
19 from mercurial import copies, destutil, repoview, revset
19 from mercurial import copies, destutil, repoview, registrar, revset
20 from mercurial.commands import templateopts
20 from mercurial.commands import templateopts
21 from mercurial.node import nullrev, nullid, hex, short
21 from mercurial.node import nullrev, nullid, hex, short
22 from mercurial.lock import release
22 from mercurial.lock import release
23 from mercurial.i18n import _
23 from mercurial.i18n import _
24 import os, errno
24 import os, errno
25
25
26 # The following constants are used throughout the rebase module. The ordering of
26 # The following constants are used throughout the rebase module. The ordering of
27 # their values must be maintained.
27 # their values must be maintained.
28
28
29 # Indicates that a revision needs to be rebased
29 # Indicates that a revision needs to be rebased
30 revtodo = -1
30 revtodo = -1
31 nullmerge = -2
31 nullmerge = -2
32 revignored = -3
32 revignored = -3
33 # successor in rebase destination
33 # successor in rebase destination
34 revprecursor = -4
34 revprecursor = -4
35 # plain prune (no successor)
35 # plain prune (no successor)
36 revpruned = -5
36 revpruned = -5
37 revskipped = (revignored, revprecursor, revpruned)
37 revskipped = (revignored, revprecursor, revpruned)
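The values below nullmerge mark revisions that will never get a rebased counterpart, while revtodo and real revision numbers sit above it; later code in this module relies on that ordering (for example the 'state[rebased] > nullmerge' checks when building the collapse message and translating bookmarks). A small illustration using the constants defined above; the example state dict is made up::

    # entries greater than nullmerge either still need rebasing (revtodo)
    # or already carry the new revision number; entries at or below
    # nullmerge are sentinels for skipped revisions
    examplestate = {10: revtodo, 11: nullmerge, 12: revprecursor, 13: 42}
    kept = [r for r, v in sorted(examplestate.items()) if v > nullmerge]
    # kept == [10, 13]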
38
38
39 cmdtable = {}
39 cmdtable = {}
40 command = cmdutil.command(cmdtable)
40 command = cmdutil.command(cmdtable)
41 # Note for extension authors: ONLY specify testedwith = 'internal' for
41 # Note for extension authors: ONLY specify testedwith = 'internal' for
42 # extensions which SHIP WITH MERCURIAL. Non-mainline extensions should
42 # extensions which SHIP WITH MERCURIAL. Non-mainline extensions should
43 # be specifying the version(s) of Mercurial they are tested with, or
43 # be specifying the version(s) of Mercurial they are tested with, or
44 # leave the attribute unspecified.
44 # leave the attribute unspecified.
45 testedwith = 'internal'
45 testedwith = 'internal'
46
46
47 def _nothingtorebase():
47 def _nothingtorebase():
48 return 1
48 return 1
49
49
50 def _savegraft(ctx, extra):
50 def _savegraft(ctx, extra):
51 s = ctx.extra().get('source', None)
51 s = ctx.extra().get('source', None)
52 if s is not None:
52 if s is not None:
53 extra['source'] = s
53 extra['source'] = s
54 s = ctx.extra().get('intermediate-source', None)
54 s = ctx.extra().get('intermediate-source', None)
55 if s is not None:
55 if s is not None:
56 extra['intermediate-source'] = s
56 extra['intermediate-source'] = s
57
57
58 def _savebranch(ctx, extra):
58 def _savebranch(ctx, extra):
59 extra['branch'] = ctx.branch()
59 extra['branch'] = ctx.branch()
60
60
61 def _makeextrafn(copiers):
61 def _makeextrafn(copiers):
62 """make an extrafn out of the given copy-functions.
62 """make an extrafn out of the given copy-functions.
63
63
64 A copy function takes a context and an extra dict, and mutates the
64 A copy function takes a context and an extra dict, and mutates the
65 extra dict as needed based on the given context.
65 extra dict as needed based on the given context.
66 """
66 """
67 def extrafn(ctx, extra):
67 def extrafn(ctx, extra):
68 for c in copiers:
68 for c in copiers:
69 c(ctx, extra)
69 c(ctx, extra)
70 return extrafn
70 return extrafn
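A copier mutates the extra dict in place, and _makeextrafn simply chains a list of copiers so they all act on the same dict. A short sketch composing the _savebranch copier above with a hypothetical one::

    def _saveexamplehash(ctx, extra):
        # hypothetical copier: record the original changeset hash
        extra['example_source'] = ctx.hex()

    extrafn = _makeextrafn([_saveexamplehash, _savebranch])
    # extrafn(ctx, extra) now applies both copiers to the same extra dict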
71
71
72 def _destrebase(repo, sourceset):
72 def _destrebase(repo, sourceset):
73 """small wrapper around destmerge to pass the right extra args
73 """small wrapper around destmerge to pass the right extra args
74
74
75 Please wrap destutil.destmerge instead."""
75 Please wrap destutil.destmerge instead."""
76 return destutil.destmerge(repo, action='rebase', sourceset=sourceset,
76 return destutil.destmerge(repo, action='rebase', sourceset=sourceset,
77 onheadcheck=False)
77 onheadcheck=False)
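The docstring above asks callers to wrap destutil.destmerge rather than this helper; a minimal sketch of what such a wrapper could look like in an extension, with a hypothetical wrapper that simply falls through to the stock logic::

    from mercurial import destutil, extensions

    def _mydestmerge(orig, repo, **kwargs):
        # hypothetical wrapper: delegate to the original destination logic;
        # a real wrapper would adjust the keyword arguments or the result
        return orig(repo, **kwargs)

    def extsetup(ui):
        extensions.wrapfunction(destutil, 'destmerge', _mydestmerge)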
78
78
79 revsetpredicate = revset.extpredicate()
79 revsetpredicate = registrar.revsetpredicate()
80
80
81 @revsetpredicate('_destrebase')
81 @revsetpredicate('_destrebase')
82 def _revsetdestrebase(repo, subset, x):
82 def _revsetdestrebase(repo, subset, x):
83 # ``_rebasedefaultdest()``
83 # ``_rebasedefaultdest()``
84
84
85 # default destination for rebase.
85 # default destination for rebase.
86 # # XXX: Currently private because I expect the signature to change.
86 # # XXX: Currently private because I expect the signature to change.
87 # # XXX: - bailing out in case of ambiguity vs returning all data.
87 # # XXX: - bailing out in case of ambiguity vs returning all data.
88 # i18n: "_rebasedefaultdest" is a keyword
88 # i18n: "_rebasedefaultdest" is a keyword
89 sourceset = None
89 sourceset = None
90 if x is not None:
90 if x is not None:
91 sourceset = revset.getset(repo, revset.fullreposet(repo), x)
91 sourceset = revset.getset(repo, revset.fullreposet(repo), x)
92 return subset & revset.baseset([_destrebase(repo, sourceset)])
92 return subset & revset.baseset([_destrebase(repo, sourceset)])
93
93
94 @command('rebase',
94 @command('rebase',
95 [('s', 'source', '',
95 [('s', 'source', '',
96 _('rebase the specified changeset and descendants'), _('REV')),
96 _('rebase the specified changeset and descendants'), _('REV')),
97 ('b', 'base', '',
97 ('b', 'base', '',
98 _('rebase everything from branching point of specified changeset'),
98 _('rebase everything from branching point of specified changeset'),
99 _('REV')),
99 _('REV')),
100 ('r', 'rev', [],
100 ('r', 'rev', [],
101 _('rebase these revisions'),
101 _('rebase these revisions'),
102 _('REV')),
102 _('REV')),
103 ('d', 'dest', '',
103 ('d', 'dest', '',
104 _('rebase onto the specified changeset'), _('REV')),
104 _('rebase onto the specified changeset'), _('REV')),
105 ('', 'collapse', False, _('collapse the rebased changesets')),
105 ('', 'collapse', False, _('collapse the rebased changesets')),
106 ('m', 'message', '',
106 ('m', 'message', '',
107 _('use text as collapse commit message'), _('TEXT')),
107 _('use text as collapse commit message'), _('TEXT')),
108 ('e', 'edit', False, _('invoke editor on commit messages')),
108 ('e', 'edit', False, _('invoke editor on commit messages')),
109 ('l', 'logfile', '',
109 ('l', 'logfile', '',
110 _('read collapse commit message from file'), _('FILE')),
110 _('read collapse commit message from file'), _('FILE')),
111 ('k', 'keep', False, _('keep original changesets')),
111 ('k', 'keep', False, _('keep original changesets')),
112 ('', 'keepbranches', False, _('keep original branch names')),
112 ('', 'keepbranches', False, _('keep original branch names')),
113 ('D', 'detach', False, _('(DEPRECATED)')),
113 ('D', 'detach', False, _('(DEPRECATED)')),
114 ('i', 'interactive', False, _('(DEPRECATED)')),
114 ('i', 'interactive', False, _('(DEPRECATED)')),
115 ('t', 'tool', '', _('specify merge tool')),
115 ('t', 'tool', '', _('specify merge tool')),
116 ('c', 'continue', False, _('continue an interrupted rebase')),
116 ('c', 'continue', False, _('continue an interrupted rebase')),
117 ('a', 'abort', False, _('abort an interrupted rebase'))] +
117 ('a', 'abort', False, _('abort an interrupted rebase'))] +
118 templateopts,
118 templateopts,
119 _('[-s REV | -b REV] [-d REV] [OPTION]'))
119 _('[-s REV | -b REV] [-d REV] [OPTION]'))
120 def rebase(ui, repo, **opts):
120 def rebase(ui, repo, **opts):
121 """move changeset (and descendants) to a different branch
121 """move changeset (and descendants) to a different branch
122
122
123 Rebase uses repeated merging to graft changesets from one part of
123 Rebase uses repeated merging to graft changesets from one part of
124 history (the source) onto another (the destination). This can be
124 history (the source) onto another (the destination). This can be
125 useful for linearizing *local* changes relative to a master
125 useful for linearizing *local* changes relative to a master
126 development tree.
126 development tree.
127
127
128 Published commits cannot be rebased (see :hg:`help phases`).
128 Published commits cannot be rebased (see :hg:`help phases`).
129 To copy commits, see :hg:`help graft`.
129 To copy commits, see :hg:`help graft`.
130
130
131 If you don't specify a destination changeset (``-d/--dest``), rebase
131 If you don't specify a destination changeset (``-d/--dest``), rebase
132 will use the same logic as :hg:`merge` to pick a destination. If
132 will use the same logic as :hg:`merge` to pick a destination. If
133 the current branch contains exactly one other head, the other head
133 the current branch contains exactly one other head, the other head
134 is merged with by default. Otherwise, an explicit revision with
134 is merged with by default. Otherwise, an explicit revision with
135 which to merge must be provided. (The destination changeset is not
135 which to merge must be provided. (The destination changeset is not
136 modified by rebasing, but new changesets are added as its
136 modified by rebasing, but new changesets are added as its
137 descendants.)
137 descendants.)
138
138
139 Here are the ways to select changesets:
139 Here are the ways to select changesets:
140
140
141 1. Explicitly select them using ``--rev``.
141 1. Explicitly select them using ``--rev``.
142
142
143 2. Use ``--source`` to select a root changeset and include all of its
143 2. Use ``--source`` to select a root changeset and include all of its
144 descendants.
144 descendants.
145
145
146 3. Use ``--base`` to select a changeset; rebase will find ancestors
146 3. Use ``--base`` to select a changeset; rebase will find ancestors
147 and their descendants which are not also ancestors of the destination.
147 and their descendants which are not also ancestors of the destination.
148
148
149 4. If you do not specify any of ``--rev``, ``--source``, or ``--base``,
149 4. If you do not specify any of ``--rev``, ``--source``, or ``--base``,
150 rebase will use ``--base .`` as above.
150 rebase will use ``--base .`` as above.
151
151
152 Rebase will destroy original changesets unless you use ``--keep``.
152 Rebase will destroy original changesets unless you use ``--keep``.
153 It will also move your bookmarks (even if you do).
153 It will also move your bookmarks (even if you do).
154
154
155 Some changesets may be dropped if they do not contribute changes
155 Some changesets may be dropped if they do not contribute changes
156 (e.g. merges from the destination branch).
156 (e.g. merges from the destination branch).
157
157
158 Unlike ``merge``, rebase will do nothing if you are at the branch tip of
158 Unlike ``merge``, rebase will do nothing if you are at the branch tip of
159 a named branch with two heads. You will need to explicitly specify source
159 a named branch with two heads. You will need to explicitly specify source
160 and/or destination.
160 and/or destination.
161
161
162 If you need to use a tool to automate merge/conflict decisions, you
162 If you need to use a tool to automate merge/conflict decisions, you
163 can specify one with ``--tool``, see :hg:`help merge-tools`.
163 can specify one with ``--tool``, see :hg:`help merge-tools`.
164 As a caveat: the tool will not be used to mediate when a file was
164 As a caveat: the tool will not be used to mediate when a file was
165 deleted; there is no hook presently available for this.
165 deleted; there is no hook presently available for this.
166
166
167 If a rebase is interrupted to manually resolve a conflict, it can be
167 If a rebase is interrupted to manually resolve a conflict, it can be
168 continued with --continue/-c or aborted with --abort/-a.
168 continued with --continue/-c or aborted with --abort/-a.
169
169
170 .. container:: verbose
170 .. container:: verbose
171
171
172 Examples:
172 Examples:
173
173
174 - move "local changes" (current commit back to branching point)
174 - move "local changes" (current commit back to branching point)
175 to the current branch tip after a pull::
175 to the current branch tip after a pull::
176
176
177 hg rebase
177 hg rebase
178
178
179 - move a single changeset to the stable branch::
179 - move a single changeset to the stable branch::
180
180
181 hg rebase -r 5f493448 -d stable
181 hg rebase -r 5f493448 -d stable
182
182
183 - splice a commit and all its descendants onto another part of history::
183 - splice a commit and all its descendants onto another part of history::
184
184
185 hg rebase --source c0c3 --dest 4cf9
185 hg rebase --source c0c3 --dest 4cf9
186
186
187 - rebase everything on a branch marked by a bookmark onto the
187 - rebase everything on a branch marked by a bookmark onto the
188 default branch::
188 default branch::
189
189
190 hg rebase --base myfeature --dest default
190 hg rebase --base myfeature --dest default
191
191
192 - collapse a sequence of changes into a single commit::
192 - collapse a sequence of changes into a single commit::
193
193
194 hg rebase --collapse -r 1520:1525 -d .
194 hg rebase --collapse -r 1520:1525 -d .
195
195
196 - move a named branch while preserving its name::
196 - move a named branch while preserving its name::
197
197
198 hg rebase -r "branch(featureX)" -d 1.3 --keepbranches
198 hg rebase -r "branch(featureX)" -d 1.3 --keepbranches
199
199
200 Returns 0 on success, 1 if nothing to rebase or there are
200 Returns 0 on success, 1 if nothing to rebase or there are
201 unresolved conflicts.
201 unresolved conflicts.
202
202
203 """
203 """
204 originalwd = target = None
204 originalwd = target = None
205 activebookmark = None
205 activebookmark = None
206 external = nullrev
206 external = nullrev
207 # Mapping between the old revision id and either what is the new rebased
207 # Mapping between the old revision id and either what is the new rebased
208 # revision or what needs to be done with the old revision. The state dict
208 # revision or what needs to be done with the old revision. The state dict
209 # will contain most of the rebase progress state.
209 # will contain most of the rebase progress state.
210 state = {}
210 state = {}
211 skipped = set()
211 skipped = set()
212 targetancestors = set()
212 targetancestors = set()
213
213
214
214
215 lock = wlock = None
215 lock = wlock = None
216 try:
216 try:
217 wlock = repo.wlock()
217 wlock = repo.wlock()
218 lock = repo.lock()
218 lock = repo.lock()
219
219
220 # Validate input and define rebasing points
220 # Validate input and define rebasing points
221 destf = opts.get('dest', None)
221 destf = opts.get('dest', None)
222 srcf = opts.get('source', None)
222 srcf = opts.get('source', None)
223 basef = opts.get('base', None)
223 basef = opts.get('base', None)
224 revf = opts.get('rev', [])
224 revf = opts.get('rev', [])
225 contf = opts.get('continue')
225 contf = opts.get('continue')
226 abortf = opts.get('abort')
226 abortf = opts.get('abort')
227 collapsef = opts.get('collapse', False)
227 collapsef = opts.get('collapse', False)
228 collapsemsg = cmdutil.logmessage(ui, opts)
228 collapsemsg = cmdutil.logmessage(ui, opts)
229 date = opts.get('date', None)
229 date = opts.get('date', None)
230 e = opts.get('extrafn') # internal, used by e.g. hgsubversion
230 e = opts.get('extrafn') # internal, used by e.g. hgsubversion
231 extrafns = [_savegraft]
231 extrafns = [_savegraft]
232 if e:
232 if e:
233 extrafns = [e]
233 extrafns = [e]
234 keepf = opts.get('keep', False)
234 keepf = opts.get('keep', False)
235 keepbranchesf = opts.get('keepbranches', False)
235 keepbranchesf = opts.get('keepbranches', False)
236 # keepopen is not meant for use on the command line, but by
236 # keepopen is not meant for use on the command line, but by
237 # other extensions
237 # other extensions
238 keepopen = opts.get('keepopen', False)
238 keepopen = opts.get('keepopen', False)
239
239
240 if opts.get('interactive'):
240 if opts.get('interactive'):
241 try:
241 try:
242 if extensions.find('histedit'):
242 if extensions.find('histedit'):
243 enablehistedit = ''
243 enablehistedit = ''
244 except KeyError:
244 except KeyError:
245 enablehistedit = " --config extensions.histedit="
245 enablehistedit = " --config extensions.histedit="
246 help = "hg%s help -e histedit" % enablehistedit
246 help = "hg%s help -e histedit" % enablehistedit
247 msg = _("interactive history editing is supported by the "
247 msg = _("interactive history editing is supported by the "
248 "'histedit' extension (see \"%s\")") % help
248 "'histedit' extension (see \"%s\")") % help
249 raise error.Abort(msg)
249 raise error.Abort(msg)
250
250
251 if collapsemsg and not collapsef:
251 if collapsemsg and not collapsef:
252 raise error.Abort(
252 raise error.Abort(
253 _('message can only be specified with collapse'))
253 _('message can only be specified with collapse'))
254
254
255 if contf or abortf:
255 if contf or abortf:
256 if contf and abortf:
256 if contf and abortf:
257 raise error.Abort(_('cannot use both abort and continue'))
257 raise error.Abort(_('cannot use both abort and continue'))
258 if collapsef:
258 if collapsef:
259 raise error.Abort(
259 raise error.Abort(
260 _('cannot use collapse with continue or abort'))
260 _('cannot use collapse with continue or abort'))
261 if srcf or basef or destf:
261 if srcf or basef or destf:
262 raise error.Abort(
262 raise error.Abort(
263 _('abort and continue do not allow specifying revisions'))
263 _('abort and continue do not allow specifying revisions'))
264 if abortf and opts.get('tool', False):
264 if abortf and opts.get('tool', False):
265 ui.warn(_('tool option will be ignored\n'))
265 ui.warn(_('tool option will be ignored\n'))
266
266
267 try:
267 try:
268 (originalwd, target, state, skipped, collapsef, keepf,
268 (originalwd, target, state, skipped, collapsef, keepf,
269 keepbranchesf, external, activebookmark) = restorestatus(repo)
269 keepbranchesf, external, activebookmark) = restorestatus(repo)
270 collapsemsg = restorecollapsemsg(repo)
270 collapsemsg = restorecollapsemsg(repo)
271 except error.RepoLookupError:
271 except error.RepoLookupError:
272 if abortf:
272 if abortf:
273 clearstatus(repo)
273 clearstatus(repo)
274 clearcollapsemsg(repo)
274 clearcollapsemsg(repo)
275 repo.ui.warn(_('rebase aborted (no revision is removed,'
275 repo.ui.warn(_('rebase aborted (no revision is removed,'
276 ' only broken state is cleared)\n'))
276 ' only broken state is cleared)\n'))
277 return 0
277 return 0
278 else:
278 else:
279 msg = _('cannot continue inconsistent rebase')
279 msg = _('cannot continue inconsistent rebase')
280 hint = _('use "hg rebase --abort" to clear broken state')
280 hint = _('use "hg rebase --abort" to clear broken state')
281 raise error.Abort(msg, hint=hint)
281 raise error.Abort(msg, hint=hint)
282 if abortf:
282 if abortf:
283 return abort(repo, originalwd, target, state,
283 return abort(repo, originalwd, target, state,
284 activebookmark=activebookmark)
284 activebookmark=activebookmark)
285 else:
285 else:
286 dest, rebaseset = _definesets(ui, repo, destf, srcf, basef, revf)
286 dest, rebaseset = _definesets(ui, repo, destf, srcf, basef, revf)
287 if dest is None:
287 if dest is None:
288 return _nothingtorebase()
288 return _nothingtorebase()
289
289
290 allowunstable = obsolete.isenabled(repo, obsolete.allowunstableopt)
290 allowunstable = obsolete.isenabled(repo, obsolete.allowunstableopt)
291 if (not (keepf or allowunstable)
291 if (not (keepf or allowunstable)
292 and repo.revs('first(children(%ld) - %ld)',
292 and repo.revs('first(children(%ld) - %ld)',
293 rebaseset, rebaseset)):
293 rebaseset, rebaseset)):
294 raise error.Abort(
294 raise error.Abort(
295 _("can't remove original changesets with"
295 _("can't remove original changesets with"
296 " unrebased descendants"),
296 " unrebased descendants"),
297 hint=_('use --keep to keep original changesets'))
297 hint=_('use --keep to keep original changesets'))
298
298
299 obsoletenotrebased = {}
299 obsoletenotrebased = {}
300 if ui.configbool('experimental', 'rebaseskipobsolete'):
300 if ui.configbool('experimental', 'rebaseskipobsolete'):
301 rebasesetrevs = set(rebaseset)
301 rebasesetrevs = set(rebaseset)
302 rebaseobsrevs = _filterobsoleterevs(repo, rebasesetrevs)
302 rebaseobsrevs = _filterobsoleterevs(repo, rebasesetrevs)
303 obsoletenotrebased = _computeobsoletenotrebased(repo,
303 obsoletenotrebased = _computeobsoletenotrebased(repo,
304 rebaseobsrevs,
304 rebaseobsrevs,
305 dest)
305 dest)
306 rebaseobsskipped = set(obsoletenotrebased)
306 rebaseobsskipped = set(obsoletenotrebased)
307
307
308 # Obsolete node with successors not in dest leads to divergence
308 # Obsolete node with successors not in dest leads to divergence
309 divergenceok = ui.configbool('experimental',
309 divergenceok = ui.configbool('experimental',
310 'allowdivergence')
310 'allowdivergence')
311 divergencebasecandidates = rebaseobsrevs - rebaseobsskipped
311 divergencebasecandidates = rebaseobsrevs - rebaseobsskipped
312
312
313 if divergencebasecandidates and not divergenceok:
313 if divergencebasecandidates and not divergenceok:
314 divhashes = (str(repo[r])
314 divhashes = (str(repo[r])
315 for r in divergencebasecandidates)
315 for r in divergencebasecandidates)
316 msg = _("this rebase will cause "
316 msg = _("this rebase will cause "
317 "divergences from: %s")
317 "divergences from: %s")
318 h = _("to force the rebase please set "
318 h = _("to force the rebase please set "
319 "experimental.allowdivergence=True")
319 "experimental.allowdivergence=True")
320 raise error.Abort(msg % (",".join(divhashes),), hint=h)
320 raise error.Abort(msg % (",".join(divhashes),), hint=h)
321
321
322 # - plain prune (no successor) changesets are rebased
322 # - plain prune (no successor) changesets are rebased
323 # - split changesets are not rebased if at least one of the
323 # - split changesets are not rebased if at least one of the
324 # changeset resulting from the split is an ancestor of dest
324 # changeset resulting from the split is an ancestor of dest
325 rebaseset = rebasesetrevs - rebaseobsskipped
325 rebaseset = rebasesetrevs - rebaseobsskipped
326 if rebasesetrevs and not rebaseset:
326 if rebasesetrevs and not rebaseset:
327 msg = _('all requested changesets have equivalents '
327 msg = _('all requested changesets have equivalents '
328 'or were marked as obsolete')
328 'or were marked as obsolete')
329 hint = _('to force the rebase, set the config '
329 hint = _('to force the rebase, set the config '
330 'experimental.rebaseskipobsolete to False')
330 'experimental.rebaseskipobsolete to False')
331 raise error.Abort(msg, hint=hint)
331 raise error.Abort(msg, hint=hint)
332
332
333 result = buildstate(repo, dest, rebaseset, collapsef,
333 result = buildstate(repo, dest, rebaseset, collapsef,
334 obsoletenotrebased)
334 obsoletenotrebased)
335
335
336 if not result:
336 if not result:
337 # Empty state built, nothing to rebase
337 # Empty state built, nothing to rebase
338 ui.status(_('nothing to rebase\n'))
338 ui.status(_('nothing to rebase\n'))
339 return _nothingtorebase()
339 return _nothingtorebase()
340
340
341 root = min(rebaseset)
341 root = min(rebaseset)
342 if not keepf and not repo[root].mutable():
342 if not keepf and not repo[root].mutable():
343 raise error.Abort(_("can't rebase public changeset %s")
343 raise error.Abort(_("can't rebase public changeset %s")
344 % repo[root],
344 % repo[root],
345 hint=_('see "hg help phases" for details'))
345 hint=_('see "hg help phases" for details'))
346
346
347 originalwd, target, state = result
347 originalwd, target, state = result
348 if collapsef:
348 if collapsef:
349 targetancestors = repo.changelog.ancestors([target],
349 targetancestors = repo.changelog.ancestors([target],
350 inclusive=True)
350 inclusive=True)
351 external = externalparent(repo, state, targetancestors)
351 external = externalparent(repo, state, targetancestors)
352
352
353 if dest.closesbranch() and not keepbranchesf:
353 if dest.closesbranch() and not keepbranchesf:
354 ui.status(_('reopening closed branch head %s\n') % dest)
354 ui.status(_('reopening closed branch head %s\n') % dest)
355
355
356 if keepbranchesf:
356 if keepbranchesf:
357 # insert _savebranch at the start of extrafns so if
357 # insert _savebranch at the start of extrafns so if
358 # there's a user-provided extrafn it can clobber branch if
358 # there's a user-provided extrafn it can clobber branch if
359 # desired
359 # desired
360 extrafns.insert(0, _savebranch)
360 extrafns.insert(0, _savebranch)
361 if collapsef:
361 if collapsef:
362 branches = set()
362 branches = set()
363 for rev in state:
363 for rev in state:
364 branches.add(repo[rev].branch())
364 branches.add(repo[rev].branch())
365 if len(branches) > 1:
365 if len(branches) > 1:
366 raise error.Abort(_('cannot collapse multiple named '
366 raise error.Abort(_('cannot collapse multiple named '
367 'branches'))
367 'branches'))
368
368
369 # Rebase
369 # Rebase
370 if not targetancestors:
370 if not targetancestors:
371 targetancestors = repo.changelog.ancestors([target], inclusive=True)
371 targetancestors = repo.changelog.ancestors([target], inclusive=True)
372
372
373 # Keep track of the current bookmarks in order to reset them later
373 # Keep track of the current bookmarks in order to reset them later
374 currentbookmarks = repo._bookmarks.copy()
374 currentbookmarks = repo._bookmarks.copy()
375 activebookmark = activebookmark or repo._activebookmark
375 activebookmark = activebookmark or repo._activebookmark
376 if activebookmark:
376 if activebookmark:
377 bookmarks.deactivate(repo)
377 bookmarks.deactivate(repo)
378
378
379 extrafn = _makeextrafn(extrafns)
379 extrafn = _makeextrafn(extrafns)
380
380
381 sortedstate = sorted(state)
381 sortedstate = sorted(state)
382 total = len(sortedstate)
382 total = len(sortedstate)
383 pos = 0
383 pos = 0
384 for rev in sortedstate:
384 for rev in sortedstate:
385 ctx = repo[rev]
385 ctx = repo[rev]
386 desc = '%d:%s "%s"' % (ctx.rev(), ctx,
386 desc = '%d:%s "%s"' % (ctx.rev(), ctx,
387 ctx.description().split('\n', 1)[0])
387 ctx.description().split('\n', 1)[0])
388 names = repo.nodetags(ctx.node()) + repo.nodebookmarks(ctx.node())
388 names = repo.nodetags(ctx.node()) + repo.nodebookmarks(ctx.node())
389 if names:
389 if names:
390 desc += ' (%s)' % ' '.join(names)
390 desc += ' (%s)' % ' '.join(names)
391 pos += 1
391 pos += 1
392 if state[rev] == revtodo:
392 if state[rev] == revtodo:
393 ui.status(_('rebasing %s\n') % desc)
393 ui.status(_('rebasing %s\n') % desc)
394 ui.progress(_("rebasing"), pos, ("%d:%s" % (rev, ctx)),
394 ui.progress(_("rebasing"), pos, ("%d:%s" % (rev, ctx)),
395 _('changesets'), total)
395 _('changesets'), total)
396 p1, p2, base = defineparents(repo, rev, target, state,
396 p1, p2, base = defineparents(repo, rev, target, state,
397 targetancestors)
397 targetancestors)
398 storestatus(repo, originalwd, target, state, collapsef, keepf,
398 storestatus(repo, originalwd, target, state, collapsef, keepf,
399 keepbranchesf, external, activebookmark)
399 keepbranchesf, external, activebookmark)
400 storecollapsemsg(repo, collapsemsg)
400 storecollapsemsg(repo, collapsemsg)
401 if len(repo[None].parents()) == 2:
401 if len(repo[None].parents()) == 2:
402 repo.ui.debug('resuming interrupted rebase\n')
402 repo.ui.debug('resuming interrupted rebase\n')
403 else:
403 else:
404 try:
404 try:
405 ui.setconfig('ui', 'forcemerge', opts.get('tool', ''),
405 ui.setconfig('ui', 'forcemerge', opts.get('tool', ''),
406 'rebase')
406 'rebase')
407 stats = rebasenode(repo, rev, p1, base, state,
407 stats = rebasenode(repo, rev, p1, base, state,
408 collapsef, target)
408 collapsef, target)
409 if stats and stats[3] > 0:
409 if stats and stats[3] > 0:
410 raise error.InterventionRequired(
410 raise error.InterventionRequired(
411 _('unresolved conflicts (see hg '
411 _('unresolved conflicts (see hg '
412 'resolve, then hg rebase --continue)'))
412 'resolve, then hg rebase --continue)'))
413 finally:
413 finally:
414 ui.setconfig('ui', 'forcemerge', '', 'rebase')
414 ui.setconfig('ui', 'forcemerge', '', 'rebase')
415 if not collapsef:
415 if not collapsef:
416 merging = p2 != nullrev
416 merging = p2 != nullrev
417 editform = cmdutil.mergeeditform(merging, 'rebase')
417 editform = cmdutil.mergeeditform(merging, 'rebase')
418 editor = cmdutil.getcommiteditor(editform=editform, **opts)
418 editor = cmdutil.getcommiteditor(editform=editform, **opts)
419 newnode = concludenode(repo, rev, p1, p2, extrafn=extrafn,
419 newnode = concludenode(repo, rev, p1, p2, extrafn=extrafn,
420 editor=editor,
420 editor=editor,
421 keepbranches=keepbranchesf,
421 keepbranches=keepbranchesf,
422 date=date)
422 date=date)
423 else:
423 else:
424 # Skip commit if we are collapsing
424 # Skip commit if we are collapsing
425 repo.dirstate.beginparentchange()
425 repo.dirstate.beginparentchange()
426 repo.setparents(repo[p1].node())
426 repo.setparents(repo[p1].node())
427 repo.dirstate.endparentchange()
427 repo.dirstate.endparentchange()
428 newnode = None
428 newnode = None
429 # Update the state
429 # Update the state
430 if newnode is not None:
430 if newnode is not None:
431 state[rev] = repo[newnode].rev()
431 state[rev] = repo[newnode].rev()
432 ui.debug('rebased as %s\n' % short(newnode))
432 ui.debug('rebased as %s\n' % short(newnode))
433 else:
433 else:
434 if not collapsef:
434 if not collapsef:
435 ui.warn(_('note: rebase of %d:%s created no changes '
435 ui.warn(_('note: rebase of %d:%s created no changes '
436 'to commit\n') % (rev, ctx))
436 'to commit\n') % (rev, ctx))
437 skipped.add(rev)
437 skipped.add(rev)
438 state[rev] = p1
438 state[rev] = p1
439 ui.debug('next revision set to %s\n' % p1)
439 ui.debug('next revision set to %s\n' % p1)
440 elif state[rev] == nullmerge:
440 elif state[rev] == nullmerge:
441 ui.debug('ignoring null merge rebase of %s\n' % rev)
441 ui.debug('ignoring null merge rebase of %s\n' % rev)
442 elif state[rev] == revignored:
442 elif state[rev] == revignored:
443 ui.status(_('not rebasing ignored %s\n') % desc)
443 ui.status(_('not rebasing ignored %s\n') % desc)
444 elif state[rev] == revprecursor:
444 elif state[rev] == revprecursor:
445 targetctx = repo[obsoletenotrebased[rev]]
445 targetctx = repo[obsoletenotrebased[rev]]
446 desctarget = '%d:%s "%s"' % (targetctx.rev(), targetctx,
446 desctarget = '%d:%s "%s"' % (targetctx.rev(), targetctx,
447 targetctx.description().split('\n', 1)[0])
447 targetctx.description().split('\n', 1)[0])
448 msg = _('note: not rebasing %s, already in destination as %s\n')
448 msg = _('note: not rebasing %s, already in destination as %s\n')
449 ui.status(msg % (desc, desctarget))
449 ui.status(msg % (desc, desctarget))
450 elif state[rev] == revpruned:
450 elif state[rev] == revpruned:
451 msg = _('note: not rebasing %s, it has no successor\n')
451 msg = _('note: not rebasing %s, it has no successor\n')
452 ui.status(msg % desc)
452 ui.status(msg % desc)
453 else:
453 else:
454 ui.status(_('already rebased %s as %s\n') %
454 ui.status(_('already rebased %s as %s\n') %
455 (desc, repo[state[rev]]))
455 (desc, repo[state[rev]]))
456
456
457 ui.progress(_('rebasing'), None)
457 ui.progress(_('rebasing'), None)
458 ui.note(_('rebase merging completed\n'))
458 ui.note(_('rebase merging completed\n'))
459
459
460 if collapsef and not keepopen:
460 if collapsef and not keepopen:
461 p1, p2, _base = defineparents(repo, min(state), target,
461 p1, p2, _base = defineparents(repo, min(state), target,
462 state, targetancestors)
462 state, targetancestors)
463 editopt = opts.get('edit')
463 editopt = opts.get('edit')
464 editform = 'rebase.collapse'
464 editform = 'rebase.collapse'
465 if collapsemsg:
465 if collapsemsg:
466 commitmsg = collapsemsg
466 commitmsg = collapsemsg
467 else:
467 else:
468 commitmsg = 'Collapsed revision'
468 commitmsg = 'Collapsed revision'
469 for rebased in state:
469 for rebased in state:
470 if rebased not in skipped and state[rebased] > nullmerge:
470 if rebased not in skipped and state[rebased] > nullmerge:
471 commitmsg += '\n* %s' % repo[rebased].description()
471 commitmsg += '\n* %s' % repo[rebased].description()
472 editopt = True
472 editopt = True
473 editor = cmdutil.getcommiteditor(edit=editopt, editform=editform)
473 editor = cmdutil.getcommiteditor(edit=editopt, editform=editform)
474 newnode = concludenode(repo, rev, p1, external, commitmsg=commitmsg,
474 newnode = concludenode(repo, rev, p1, external, commitmsg=commitmsg,
475 extrafn=extrafn, editor=editor,
475 extrafn=extrafn, editor=editor,
476 keepbranches=keepbranchesf,
476 keepbranches=keepbranchesf,
477 date=date)
477 date=date)
478 if newnode is None:
478 if newnode is None:
479 newrev = target
479 newrev = target
480 else:
480 else:
481 newrev = repo[newnode].rev()
481 newrev = repo[newnode].rev()
482 for oldrev in state.iterkeys():
482 for oldrev in state.iterkeys():
483 if state[oldrev] > nullmerge:
483 if state[oldrev] > nullmerge:
484 state[oldrev] = newrev
484 state[oldrev] = newrev
485
485
486 if 'qtip' in repo.tags():
486 if 'qtip' in repo.tags():
487 updatemq(repo, state, skipped, **opts)
487 updatemq(repo, state, skipped, **opts)
488
488
489 if currentbookmarks:
489 if currentbookmarks:
490 # Nodeids are needed to reset bookmarks
490 # Nodeids are needed to reset bookmarks
491 nstate = {}
491 nstate = {}
492 for k, v in state.iteritems():
492 for k, v in state.iteritems():
493 if v > nullmerge:
493 if v > nullmerge:
494 nstate[repo[k].node()] = repo[v].node()
494 nstate[repo[k].node()] = repo[v].node()
495 # XXX this is the same as dest.node() for the non-continue path --
495 # XXX this is the same as dest.node() for the non-continue path --
496 # this should probably be cleaned up
496 # this should probably be cleaned up
497 targetnode = repo[target].node()
497 targetnode = repo[target].node()
498
498
499 # restore original working directory
499 # restore original working directory
500 # (we do this before stripping)
500 # (we do this before stripping)
501 newwd = state.get(originalwd, originalwd)
501 newwd = state.get(originalwd, originalwd)
502 if newwd < 0:
502 if newwd < 0:
503 # original directory is a parent of rebase set root or ignored
503 # original directory is a parent of rebase set root or ignored
504 newwd = originalwd
504 newwd = originalwd
505 if newwd not in [c.rev() for c in repo[None].parents()]:
505 if newwd not in [c.rev() for c in repo[None].parents()]:
506 ui.note(_("update back to initial working directory parent\n"))
506 ui.note(_("update back to initial working directory parent\n"))
507 hg.updaterepo(repo, newwd, False)
507 hg.updaterepo(repo, newwd, False)
508
508
509 if not keepf:
509 if not keepf:
510 collapsedas = None
510 collapsedas = None
511 if collapsef:
511 if collapsef:
512 collapsedas = newnode
512 collapsedas = newnode
513 clearrebased(ui, repo, state, skipped, collapsedas)
513 clearrebased(ui, repo, state, skipped, collapsedas)
514
514
515 with repo.transaction('bookmark') as tr:
515 with repo.transaction('bookmark') as tr:
516 if currentbookmarks:
516 if currentbookmarks:
517 updatebookmarks(repo, targetnode, nstate, currentbookmarks, tr)
517 updatebookmarks(repo, targetnode, nstate, currentbookmarks, tr)
518 if activebookmark not in repo._bookmarks:
518 if activebookmark not in repo._bookmarks:
519 # active bookmark was divergent one and has been deleted
519 # active bookmark was divergent one and has been deleted
520 activebookmark = None
520 activebookmark = None
521 clearstatus(repo)
521 clearstatus(repo)
522 clearcollapsemsg(repo)
522 clearcollapsemsg(repo)
523
523
524 ui.note(_("rebase completed\n"))
524 ui.note(_("rebase completed\n"))
525 util.unlinkpath(repo.sjoin('undo'), ignoremissing=True)
525 util.unlinkpath(repo.sjoin('undo'), ignoremissing=True)
526 if skipped:
526 if skipped:
527 ui.note(_("%d revisions have been skipped\n") % len(skipped))
527 ui.note(_("%d revisions have been skipped\n") % len(skipped))
528
528
529 if (activebookmark and
529 if (activebookmark and
530 repo['.'].node() == repo._bookmarks[activebookmark]):
530 repo['.'].node() == repo._bookmarks[activebookmark]):
531 bookmarks.activate(repo, activebookmark)
531 bookmarks.activate(repo, activebookmark)
532
532
533 finally:
533 finally:
534 release(lock, wlock)
534 release(lock, wlock)
535
535
536 def _definesets(ui, repo, destf=None, srcf=None, basef=None, revf=[]):
536 def _definesets(ui, repo, destf=None, srcf=None, basef=None, revf=[]):
537 """use revisions argument to define destination and rebase set
537 """use revisions argument to define destination and rebase set
538 """
538 """
539 if srcf and basef:
539 if srcf and basef:
540 raise error.Abort(_('cannot specify both a source and a base'))
540 raise error.Abort(_('cannot specify both a source and a base'))
541 if revf and basef:
541 if revf and basef:
542 raise error.Abort(_('cannot specify both a revision and a base'))
542 raise error.Abort(_('cannot specify both a revision and a base'))
543 if revf and srcf:
543 if revf and srcf:
544 raise error.Abort(_('cannot specify both a revision and a source'))
544 raise error.Abort(_('cannot specify both a revision and a source'))
545
545
546 cmdutil.checkunfinished(repo)
546 cmdutil.checkunfinished(repo)
547 cmdutil.bailifchanged(repo)
547 cmdutil.bailifchanged(repo)
548
548
549 if destf:
549 if destf:
550 dest = scmutil.revsingle(repo, destf)
550 dest = scmutil.revsingle(repo, destf)
551
551
552 if revf:
552 if revf:
553 rebaseset = scmutil.revrange(repo, revf)
553 rebaseset = scmutil.revrange(repo, revf)
554 if not rebaseset:
554 if not rebaseset:
555 ui.status(_('empty "rev" revision set - nothing to rebase\n'))
555 ui.status(_('empty "rev" revision set - nothing to rebase\n'))
556 return None, None
556 return None, None
557 elif srcf:
557 elif srcf:
558 src = scmutil.revrange(repo, [srcf])
558 src = scmutil.revrange(repo, [srcf])
559 if not src:
559 if not src:
560 ui.status(_('empty "source" revision set - nothing to rebase\n'))
560 ui.status(_('empty "source" revision set - nothing to rebase\n'))
561 return None, None
561 return None, None
562 rebaseset = repo.revs('(%ld)::', src)
562 rebaseset = repo.revs('(%ld)::', src)
563 assert rebaseset
563 assert rebaseset
564 else:
564 else:
565 base = scmutil.revrange(repo, [basef or '.'])
565 base = scmutil.revrange(repo, [basef or '.'])
566 if not base:
566 if not base:
567 ui.status(_('empty "base" revision set - '
567 ui.status(_('empty "base" revision set - '
568 "can't compute rebase set\n"))
568 "can't compute rebase set\n"))
569 return None, None
569 return None, None
570 if not destf:
570 if not destf:
571 dest = repo[_destrebase(repo, base)]
571 dest = repo[_destrebase(repo, base)]
572 destf = str(dest)
572 destf = str(dest)
573
573
574 commonanc = repo.revs('ancestor(%ld, %d)', base, dest).first()
574 commonanc = repo.revs('ancestor(%ld, %d)', base, dest).first()
575 if commonanc is not None:
575 if commonanc is not None:
576 rebaseset = repo.revs('(%d::(%ld) - %d)::',
576 rebaseset = repo.revs('(%d::(%ld) - %d)::',
577 commonanc, base, commonanc)
577 commonanc, base, commonanc)
578 else:
578 else:
579 rebaseset = []
579 rebaseset = []
580
580
581 if not rebaseset:
581 if not rebaseset:
582 # transform to list because smartsets are not comparable to
582 # transform to list because smartsets are not comparable to
583 # lists. This should be improved to honor laziness of
583 # lists. This should be improved to honor laziness of
584 # smartset.
584 # smartset.
585 if list(base) == [dest.rev()]:
585 if list(base) == [dest.rev()]:
586 if basef:
586 if basef:
587 ui.status(_('nothing to rebase - %s is both "base"'
587 ui.status(_('nothing to rebase - %s is both "base"'
588 ' and destination\n') % dest)
588 ' and destination\n') % dest)
589 else:
589 else:
590 ui.status(_('nothing to rebase - working directory '
590 ui.status(_('nothing to rebase - working directory '
591 'parent is also destination\n'))
591 'parent is also destination\n'))
592 elif not repo.revs('%ld - ::%d', base, dest):
592 elif not repo.revs('%ld - ::%d', base, dest):
593 if basef:
593 if basef:
594 ui.status(_('nothing to rebase - "base" %s is '
594 ui.status(_('nothing to rebase - "base" %s is '
595 'already an ancestor of destination '
595 'already an ancestor of destination '
596 '%s\n') %
596 '%s\n') %
597 ('+'.join(str(repo[r]) for r in base),
597 ('+'.join(str(repo[r]) for r in base),
598 dest))
598 dest))
599 else:
599 else:
600 ui.status(_('nothing to rebase - working '
600 ui.status(_('nothing to rebase - working '
601 'directory parent is already an '
601 'directory parent is already an '
602 'ancestor of destination %s\n') % dest)
602 'ancestor of destination %s\n') % dest)
603 else: # can it happen?
603 else: # can it happen?
604 ui.status(_('nothing to rebase from %s to %s\n') %
604 ui.status(_('nothing to rebase from %s to %s\n') %
605 ('+'.join(str(repo[r]) for r in base), dest))
605 ('+'.join(str(repo[r]) for r in base), dest))
606 return None, None
606 return None, None
607
607
608 if not destf:
608 if not destf:
609 dest = repo[_destrebase(repo, rebaseset)]
609 dest = repo[_destrebase(repo, rebaseset)]
610 destf = str(dest)
610 destf = str(dest)
611
611
612 return dest, rebaseset
612 return dest, rebaseset
613
613
614 def externalparent(repo, state, targetancestors):
614 def externalparent(repo, state, targetancestors):
615 """Return the revision that should be used as the second parent
615 """Return the revision that should be used as the second parent
616 when the revisions in state is collapsed on top of targetancestors.
616 when the revisions in state is collapsed on top of targetancestors.
617 Abort if there is more than one parent.
617 Abort if there is more than one parent.
618 """
618 """
619 parents = set()
619 parents = set()
620 source = min(state)
620 source = min(state)
621 for rev in state:
621 for rev in state:
622 if rev == source:
622 if rev == source:
623 continue
623 continue
624 for p in repo[rev].parents():
624 for p in repo[rev].parents():
625 if (p.rev() not in state
625 if (p.rev() not in state
626 and p.rev() not in targetancestors):
626 and p.rev() not in targetancestors):
627 parents.add(p.rev())
627 parents.add(p.rev())
628 if not parents:
628 if not parents:
629 return nullrev
629 return nullrev
630 if len(parents) == 1:
630 if len(parents) == 1:
631 return parents.pop()
631 return parents.pop()
632 raise error.Abort(_('unable to collapse on top of %s, there is more '
632 raise error.Abort(_('unable to collapse on top of %s, there is more '
633 'than one external parent: %s') %
633 'than one external parent: %s') %
634 (max(targetancestors),
634 (max(targetancestors),
635 ', '.join(str(p) for p in sorted(parents))))
635 ', '.join(str(p) for p in sorted(parents))))
636
636
637 def concludenode(repo, rev, p1, p2, commitmsg=None, editor=None, extrafn=None,
637 def concludenode(repo, rev, p1, p2, commitmsg=None, editor=None, extrafn=None,
638 keepbranches=False, date=None):
638 keepbranches=False, date=None):
639 '''Commit the wd changes with parents p1 and p2. Reuse commit info from rev
639 '''Commit the wd changes with parents p1 and p2. Reuse commit info from rev
640 but also store useful information in extra.
640 but also store useful information in extra.
641 Return node of committed revision.'''
641 Return node of committed revision.'''
642 dsguard = cmdutil.dirstateguard(repo, 'rebase')
642 dsguard = cmdutil.dirstateguard(repo, 'rebase')
643 try:
643 try:
644 repo.setparents(repo[p1].node(), repo[p2].node())
644 repo.setparents(repo[p1].node(), repo[p2].node())
645 ctx = repo[rev]
645 ctx = repo[rev]
646 if commitmsg is None:
646 if commitmsg is None:
647 commitmsg = ctx.description()
647 commitmsg = ctx.description()
648 keepbranch = keepbranches and repo[p1].branch() != ctx.branch()
648 keepbranch = keepbranches and repo[p1].branch() != ctx.branch()
649 extra = {'rebase_source': ctx.hex()}
649 extra = {'rebase_source': ctx.hex()}
650 if extrafn:
650 if extrafn:
651 extrafn(ctx, extra)
651 extrafn(ctx, extra)
652
652
653 backup = repo.ui.backupconfig('phases', 'new-commit')
653 backup = repo.ui.backupconfig('phases', 'new-commit')
654 try:
654 try:
655 targetphase = max(ctx.phase(), phases.draft)
655 targetphase = max(ctx.phase(), phases.draft)
656 repo.ui.setconfig('phases', 'new-commit', targetphase, 'rebase')
656 repo.ui.setconfig('phases', 'new-commit', targetphase, 'rebase')
657 if keepbranch:
657 if keepbranch:
658 repo.ui.setconfig('ui', 'allowemptycommit', True)
658 repo.ui.setconfig('ui', 'allowemptycommit', True)
659 # Commit might fail if unresolved files exist
659 # Commit might fail if unresolved files exist
660 if date is None:
660 if date is None:
661 date = ctx.date()
661 date = ctx.date()
662 newnode = repo.commit(text=commitmsg, user=ctx.user(),
662 newnode = repo.commit(text=commitmsg, user=ctx.user(),
663 date=date, extra=extra, editor=editor)
663 date=date, extra=extra, editor=editor)
664 finally:
664 finally:
665 repo.ui.restoreconfig(backup)
665 repo.ui.restoreconfig(backup)
666
666
667 repo.dirstate.setbranch(repo[newnode].branch())
667 repo.dirstate.setbranch(repo[newnode].branch())
668 dsguard.close()
668 dsguard.close()
669 return newnode
669 return newnode
670 finally:
670 finally:
671 release(dsguard)
671 release(dsguard)
672
672
673 def rebasenode(repo, rev, p1, base, state, collapse, target):
673 def rebasenode(repo, rev, p1, base, state, collapse, target):
674 'Rebase a single revision rev on top of p1 using base as merge ancestor'
674 'Rebase a single revision rev on top of p1 using base as merge ancestor'
675 # Merge phase
675 # Merge phase
676 # Update to target and merge it with local
676 # Update to target and merge it with local
677 if repo['.'].rev() != p1:
677 if repo['.'].rev() != p1:
678 repo.ui.debug(" update to %d:%s\n" % (p1, repo[p1]))
678 repo.ui.debug(" update to %d:%s\n" % (p1, repo[p1]))
679 merge.update(repo, p1, False, True)
679 merge.update(repo, p1, False, True)
680 else:
680 else:
681 repo.ui.debug(" already in target\n")
681 repo.ui.debug(" already in target\n")
682 repo.dirstate.write(repo.currenttransaction())
682 repo.dirstate.write(repo.currenttransaction())
683 repo.ui.debug(" merge against %d:%s\n" % (rev, repo[rev]))
683 repo.ui.debug(" merge against %d:%s\n" % (rev, repo[rev]))
684 if base is not None:
684 if base is not None:
685 repo.ui.debug(" detach base %d:%s\n" % (base, repo[base]))
685 repo.ui.debug(" detach base %d:%s\n" % (base, repo[base]))
686 # When collapsing in-place, the parent is the common ancestor, we
686 # When collapsing in-place, the parent is the common ancestor, we
687 # have to allow merging with it.
687 # have to allow merging with it.
688 stats = merge.update(repo, rev, True, True, base, collapse,
688 stats = merge.update(repo, rev, True, True, base, collapse,
689 labels=['dest', 'source'])
689 labels=['dest', 'source'])
690 if collapse:
690 if collapse:
691 copies.duplicatecopies(repo, rev, target)
691 copies.duplicatecopies(repo, rev, target)
692 else:
692 else:
693 # If we're not using --collapse, we need to
693 # If we're not using --collapse, we need to
694 # duplicate copies between the revision we're
694 # duplicate copies between the revision we're
695 # rebasing and its first parent, but *not*
695 # rebasing and its first parent, but *not*
696 # duplicate any copies that have already been
696 # duplicate any copies that have already been
697 # performed in the destination.
697 # performed in the destination.
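            # A hypothetical example: if rev renamed "a" to "b" relative to
            # its first parent, that rename has to be re-recorded on top of
            # the new parent, but if the destination already carries the same
            # rename, skiprev=target keeps it from being recorded twice.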
698 p1rev = repo[rev].p1().rev()
698 p1rev = repo[rev].p1().rev()
699 copies.duplicatecopies(repo, rev, p1rev, skiprev=target)
699 copies.duplicatecopies(repo, rev, p1rev, skiprev=target)
700 return stats
700 return stats
701
701
702 def nearestrebased(repo, rev, state):
702 def nearestrebased(repo, rev, state):
703 """return the nearest ancestors of rev in the rebase result"""
703 """return the nearest ancestors of rev in the rebase result"""
704 rebased = [r for r in state if state[r] > nullmerge]
704 rebased = [r for r in state if state[r] > nullmerge]
705 candidates = repo.revs('max(%ld and (::%d))', rebased, rev)
705 candidates = repo.revs('max(%ld and (::%d))', rebased, rev)
706 if candidates:
706 if candidates:
707 return state[candidates.first()]
707 return state[candidates.first()]
708 else:
708 else:
709 return None
709 return None
710
710
711 def defineparents(repo, rev, target, state, targetancestors):
711 def defineparents(repo, rev, target, state, targetancestors):
712 'Return the new parent relationship of the revision that will be rebased'
712 'Return the new parent relationship of the revision that will be rebased'
713 parents = repo[rev].parents()
713 parents = repo[rev].parents()
714 p1 = p2 = nullrev
714 p1 = p2 = nullrev
715
715
716 p1n = parents[0].rev()
716 p1n = parents[0].rev()
717 if p1n in targetancestors:
717 if p1n in targetancestors:
718 p1 = target
718 p1 = target
719 elif p1n in state:
719 elif p1n in state:
720 if state[p1n] == nullmerge:
720 if state[p1n] == nullmerge:
721 p1 = target
721 p1 = target
722 elif state[p1n] in revskipped:
722 elif state[p1n] in revskipped:
723 p1 = nearestrebased(repo, p1n, state)
723 p1 = nearestrebased(repo, p1n, state)
724 if p1 is None:
724 if p1 is None:
725 p1 = target
725 p1 = target
726 else:
726 else:
727 p1 = state[p1n]
727 p1 = state[p1n]
728 else: # p1n external
728 else: # p1n external
729 p1 = target
729 p1 = target
730 p2 = p1n
730 p2 = p1n
731
731
732 if len(parents) == 2 and parents[1].rev() not in targetancestors:
732 if len(parents) == 2 and parents[1].rev() not in targetancestors:
733 p2n = parents[1].rev()
733 p2n = parents[1].rev()
734 # interesting second parent
734 # interesting second parent
735 if p2n in state:
735 if p2n in state:
736 if p1 == target: # p1n in targetancestors or external
736 if p1 == target: # p1n in targetancestors or external
737 p1 = state[p2n]
737 p1 = state[p2n]
738 elif state[p2n] in revskipped:
738 elif state[p2n] in revskipped:
739 p2 = nearestrebased(repo, p2n, state)
739 p2 = nearestrebased(repo, p2n, state)
740 if p2 is None:
740 if p2 is None:
741 # no ancestors rebased yet, detach
741 # no ancestors rebased yet, detach
742 p2 = target
742 p2 = target
743 else:
743 else:
744 p2 = state[p2n]
744 p2 = state[p2n]
745 else: # p2n external
745 else: # p2n external
746 if p2 != nullrev: # p1n external too => rev is a merged revision
746 if p2 != nullrev: # p1n external too => rev is a merged revision
747 raise error.Abort(_('cannot use revision %d as base, result '
747 raise error.Abort(_('cannot use revision %d as base, result '
748 'would have 3 parents') % rev)
748 'would have 3 parents') % rev)
749 p2 = p2n
749 p2 = p2n
750 repo.ui.debug(" future parents are %d and %d\n" %
750 repo.ui.debug(" future parents are %d and %d\n" %
751 (repo[p1].rev(), repo[p2].rev()))
751 (repo[p1].rev(), repo[p2].rev()))
752
752
753 if not any(p.rev() in state for p in parents):
753 if not any(p.rev() in state for p in parents):
754 # Case (1) root changeset of a non-detaching rebase set.
754 # Case (1) root changeset of a non-detaching rebase set.
755 # Let the merge mechanism find the base itself.
755 # Let the merge mechanism find the base itself.
756 base = None
756 base = None
757 elif not repo[rev].p2():
757 elif not repo[rev].p2():
758 # Case (2) detaching the node with a single parent, use this parent
758 # Case (2) detaching the node with a single parent, use this parent
759 base = repo[rev].p1().rev()
759 base = repo[rev].p1().rev()
760 else:
760 else:
761 # Assuming there is a p1, this is the case where there also is a p2.
761 # Assuming there is a p1, this is the case where there also is a p2.
762 # We are thus rebasing a merge and need to pick the right merge base.
762 # We are thus rebasing a merge and need to pick the right merge base.
763 #
763 #
764 # Imagine we have:
764 # Imagine we have:
765 # - M: current rebase revision in this step
765 # - M: current rebase revision in this step
766 # - A: one parent of M
766 # - A: one parent of M
767 # - B: other parent of M
767 # - B: other parent of M
768 # - D: destination of this merge step (p1 var)
768 # - D: destination of this merge step (p1 var)
769 #
769 #
770 # Consider the case where D is a descendant of A or B and the other is
770 # Consider the case where D is a descendant of A or B and the other is
771 # 'outside'. In this case, the right merge base is the parent that is D's ancestor.
771 # 'outside'. In this case, the right merge base is the parent that is D's ancestor.
772 #
772 #
773 # An informal proof, assuming A is 'outside' and B is the D ancestor:
773 # An informal proof, assuming A is 'outside' and B is the D ancestor:
774 #
774 #
775 # If we pick B as the base, the merge involves:
775 # If we pick B as the base, the merge involves:
776 # - changes from B to M (actual changeset payload)
776 # - changes from B to M (actual changeset payload)
777 # - changes from B to D (induced by rebase) as D is a rebased
777 # - changes from B to D (induced by rebase) as D is a rebased
778 # version of B
778 # version of B
779 # Which exactly represent the rebase operation.
779 # Which exactly represent the rebase operation.
780 #
780 #
781 # If we pick A as the base, the merge involves:
781 # If we pick A as the base, the merge involves:
782 # - changes from A to M (actual changeset payload)
782 # - changes from A to M (actual changeset payload)
783 # - changes from A to D (which include changes between unrelated A and B
783 # - changes from A to D (which include changes between unrelated A and B
784 # plus changes induced by rebase)
784 # plus changes induced by rebase)
785 # Which does not represent anything sensible and creates a lot of
785 # Which does not represent anything sensible and creates a lot of
786 # conflicts. A is thus not the right choice - B is.
786 # conflicts. A is thus not the right choice - B is.
787 #
787 #
788 # Note: The base found in this 'proof' is only correct in the specified
788 # Note: The base found in this 'proof' is only correct in the specified
789 # case. This base does not make sense if D is not a descendant of A or B,
789 # case. This base does not make sense if D is not a descendant of A or B,
790 # or if the other parent is not 'outside' (especially not if the other
790 # or if the other parent is not 'outside' (especially not if the other
791 # parent has been rebased). The current implementation does not
791 # parent has been rebased). The current implementation does not
792 # make it feasible to consider different cases separately. In these
792 # make it feasible to consider different cases separately. In these
793 # other cases we currently just leave it to the user to correctly
793 # other cases we currently just leave it to the user to correctly
794 # resolve an impossible merge using a wrong ancestor.
794 # resolve an impossible merge using a wrong ancestor.
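        # A sketch with hypothetical changesets (children drawn below their
        # parents), for the case the proof above covers:
        #
        #   A   B
        #    \ / \
        #     M   D
        #
        # Here M is the merge being rebased, B is the parent that D descends
        # from, and A is 'outside'. Choosing B as the base replays only M's
        # own payload onto D; choosing A would also replay the unrelated A..B
        # differences and generate spurious conflicts.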
795 for p in repo[rev].parents():
795 for p in repo[rev].parents():
796 if state.get(p.rev()) == p1:
796 if state.get(p.rev()) == p1:
797 base = p.rev()
797 base = p.rev()
798 break
798 break
799 else: # fallback when base not found
799 else: # fallback when base not found
800 base = None
800 base = None
801
801
802 # Raise because this function is called incorrectly (see issue 4106)
802 # Raise because this function is called incorrectly (see issue 4106)
803 raise AssertionError('no base found to rebase on '
803 raise AssertionError('no base found to rebase on '
804 '(defineparents called wrong)')
804 '(defineparents called wrong)')
805 return p1, p2, base
805 return p1, p2, base
806
806
807 def isagitpatch(repo, patchname):
807 def isagitpatch(repo, patchname):
808 'Return true if the given patch is in git format'
808 'Return true if the given patch is in git format'
809 mqpatch = os.path.join(repo.mq.path, patchname)
809 mqpatch = os.path.join(repo.mq.path, patchname)
810 for line in patch.linereader(file(mqpatch, 'rb')):
810 for line in patch.linereader(file(mqpatch, 'rb')):
811 if line.startswith('diff --git'):
811 if line.startswith('diff --git'):
812 return True
812 return True
813 return False
813 return False
814
814
815 def updatemq(repo, state, skipped, **opts):
815 def updatemq(repo, state, skipped, **opts):
816 'Update rebased mq patches - finalize and then import them'
816 'Update rebased mq patches - finalize and then import them'
817 mqrebase = {}
817 mqrebase = {}
818 mq = repo.mq
818 mq = repo.mq
819 original_series = mq.fullseries[:]
819 original_series = mq.fullseries[:]
820 skippedpatches = set()
820 skippedpatches = set()
821
821
822 for p in mq.applied:
822 for p in mq.applied:
823 rev = repo[p.node].rev()
823 rev = repo[p.node].rev()
824 if rev in state:
824 if rev in state:
825 repo.ui.debug('revision %d is an mq patch (%s), finalize it.\n' %
825 repo.ui.debug('revision %d is an mq patch (%s), finalize it.\n' %
826 (rev, p.name))
826 (rev, p.name))
827 mqrebase[rev] = (p.name, isagitpatch(repo, p.name))
827 mqrebase[rev] = (p.name, isagitpatch(repo, p.name))
828 else:
828 else:
829 # Applied but not rebased, not sure this should happen
829 # Applied but not rebased, not sure this should happen
830 skippedpatches.add(p.name)
830 skippedpatches.add(p.name)
831
831
832 if mqrebase:
832 if mqrebase:
833 mq.finish(repo, mqrebase.keys())
833 mq.finish(repo, mqrebase.keys())
834
834
835 # We must start import from the newest revision
835 # We must start import from the newest revision
836 for rev in sorted(mqrebase, reverse=True):
836 for rev in sorted(mqrebase, reverse=True):
837 if rev not in skipped:
837 if rev not in skipped:
838 name, isgit = mqrebase[rev]
838 name, isgit = mqrebase[rev]
839 repo.ui.note(_('updating mq patch %s to %s:%s\n') %
839 repo.ui.note(_('updating mq patch %s to %s:%s\n') %
840 (name, state[rev], repo[state[rev]]))
840 (name, state[rev], repo[state[rev]]))
841 mq.qimport(repo, (), patchname=name, git=isgit,
841 mq.qimport(repo, (), patchname=name, git=isgit,
842 rev=[str(state[rev])])
842 rev=[str(state[rev])])
843 else:
843 else:
844 # Rebased and skipped
844 # Rebased and skipped
845 skippedpatches.add(mqrebase[rev][0])
845 skippedpatches.add(mqrebase[rev][0])
846
846
847 # Patches were either applied and rebased and imported in
847 # Patches were either applied and rebased and imported in
848 # order, applied and removed or unapplied. Discard the removed
848 # order, applied and removed or unapplied. Discard the removed
849 # ones while preserving the original series order and guards.
849 # ones while preserving the original series order and guards.
850 newseries = [s for s in original_series
850 newseries = [s for s in original_series
851 if mq.guard_re.split(s, 1)[0] not in skippedpatches]
851 if mq.guard_re.split(s, 1)[0] not in skippedpatches]
852 mq.fullseries[:] = newseries
852 mq.fullseries[:] = newseries
853 mq.seriesdirty = True
853 mq.seriesdirty = True
854 mq.savedirty()
854 mq.savedirty()
855
855
856 def updatebookmarks(repo, targetnode, nstate, originalbookmarks, tr):
856 def updatebookmarks(repo, targetnode, nstate, originalbookmarks, tr):
857 'Move bookmarks to their correct changesets, and delete divergent ones'
857 'Move bookmarks to their correct changesets, and delete divergent ones'
858 marks = repo._bookmarks
858 marks = repo._bookmarks
859 for k, v in originalbookmarks.iteritems():
859 for k, v in originalbookmarks.iteritems():
860 if v in nstate:
860 if v in nstate:
861 # update the bookmarks for revs that have moved
861 # update the bookmarks for revs that have moved
862 marks[k] = nstate[v]
862 marks[k] = nstate[v]
863 bookmarks.deletedivergent(repo, [targetnode], k)
863 bookmarks.deletedivergent(repo, [targetnode], k)
864 marks.recordchange(tr)
864 marks.recordchange(tr)
865
865
866 def storecollapsemsg(repo, collapsemsg):
866 def storecollapsemsg(repo, collapsemsg):
867 'Store the collapse message to allow recovery'
867 'Store the collapse message to allow recovery'
868 collapsemsg = collapsemsg or ''
868 collapsemsg = collapsemsg or ''
869 f = repo.vfs("last-message.txt", "w")
869 f = repo.vfs("last-message.txt", "w")
870 f.write("%s\n" % collapsemsg)
870 f.write("%s\n" % collapsemsg)
871 f.close()
871 f.close()
872
872
873 def clearcollapsemsg(repo):
873 def clearcollapsemsg(repo):
874 'Remove collapse message file'
874 'Remove collapse message file'
875 util.unlinkpath(repo.join("last-message.txt"), ignoremissing=True)
875 util.unlinkpath(repo.join("last-message.txt"), ignoremissing=True)
876
876
877 def restorecollapsemsg(repo):
877 def restorecollapsemsg(repo):
878 'Restore previously stored collapse message'
878 'Restore previously stored collapse message'
879 try:
879 try:
880 f = repo.vfs("last-message.txt")
880 f = repo.vfs("last-message.txt")
881 collapsemsg = f.readline().strip()
881 collapsemsg = f.readline().strip()
882 f.close()
882 f.close()
883 except IOError as err:
883 except IOError as err:
884 if err.errno != errno.ENOENT:
884 if err.errno != errno.ENOENT:
885 raise
885 raise
886 raise error.Abort(_('no rebase in progress'))
886 raise error.Abort(_('no rebase in progress'))
887 return collapsemsg
887 return collapsemsg
888
888
889 def storestatus(repo, originalwd, target, state, collapse, keep, keepbranches,
889 def storestatus(repo, originalwd, target, state, collapse, keep, keepbranches,
890 external, activebookmark):
890 external, activebookmark):
891 'Store the current status to allow recovery'
891 'Store the current status to allow recovery'
892 f = repo.vfs("rebasestate", "w")
892 f = repo.vfs("rebasestate", "w")
893 f.write(repo[originalwd].hex() + '\n')
893 f.write(repo[originalwd].hex() + '\n')
894 f.write(repo[target].hex() + '\n')
894 f.write(repo[target].hex() + '\n')
895 f.write(repo[external].hex() + '\n')
895 f.write(repo[external].hex() + '\n')
896 f.write('%d\n' % int(collapse))
896 f.write('%d\n' % int(collapse))
897 f.write('%d\n' % int(keep))
897 f.write('%d\n' % int(keep))
898 f.write('%d\n' % int(keepbranches))
898 f.write('%d\n' % int(keepbranches))
899 f.write('%s\n' % (activebookmark or ''))
899 f.write('%s\n' % (activebookmark or ''))
900 for d, v in state.iteritems():
900 for d, v in state.iteritems():
901 oldrev = repo[d].hex()
901 oldrev = repo[d].hex()
902 if v >= 0:
902 if v >= 0:
903 newrev = repo[v].hex()
903 newrev = repo[v].hex()
904 elif v == revtodo:
904 elif v == revtodo:
905 # To maintain format compatibility, we have to use nullid.
905 # To maintain format compatibility, we have to use nullid.
906 # Please do remove this special case when upgrading the format.
906 # Please do remove this special case when upgrading the format.
907 newrev = hex(nullid)
907 newrev = hex(nullid)
908 else:
908 else:
909 newrev = v
909 newrev = v
910 f.write("%s:%s\n" % (oldrev, newrev))
910 f.write("%s:%s\n" % (oldrev, newrev))
911 f.close()
911 f.close()
912 repo.ui.debug('rebase status stored\n')
912 repo.ui.debug('rebase status stored\n')
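    # A sketch of the resulting .hg/rebasestate contents (hypothetical values,
    # hashes abbreviated here; the real file stores full 40-char hex nodes):
    #
    #   1f0dee6...              originalwd
    #   9a4db21...              target
    #   0000000...              external (nullid when there is none)
    #   0                       collapse flag
    #   1                       keep flag
    #   0                       keepbranches flag
    #   mybookmark              active bookmark, or an empty line
    #   5c09d25...:8e4f7bd...   one oldrev:newrev entry per rebased revision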
913
913
914 def clearstatus(repo):
914 def clearstatus(repo):
915 'Remove the status files'
915 'Remove the status files'
916 _clearrebasesetvisibiliy(repo)
916 _clearrebasesetvisibiliy(repo)
917 util.unlinkpath(repo.join("rebasestate"), ignoremissing=True)
917 util.unlinkpath(repo.join("rebasestate"), ignoremissing=True)
918
918
919 def restorestatus(repo):
919 def restorestatus(repo):
920 'Restore a previously stored status'
920 'Restore a previously stored status'
921 keepbranches = None
921 keepbranches = None
922 target = None
922 target = None
923 collapse = False
923 collapse = False
924 external = nullrev
924 external = nullrev
925 activebookmark = None
925 activebookmark = None
926 state = {}
926 state = {}
927
927
928 try:
928 try:
929 f = repo.vfs("rebasestate")
929 f = repo.vfs("rebasestate")
930 for i, l in enumerate(f.read().splitlines()):
930 for i, l in enumerate(f.read().splitlines()):
931 if i == 0:
931 if i == 0:
932 originalwd = repo[l].rev()
932 originalwd = repo[l].rev()
933 elif i == 1:
933 elif i == 1:
934 target = repo[l].rev()
934 target = repo[l].rev()
935 elif i == 2:
935 elif i == 2:
936 external = repo[l].rev()
936 external = repo[l].rev()
937 elif i == 3:
937 elif i == 3:
938 collapse = bool(int(l))
938 collapse = bool(int(l))
939 elif i == 4:
939 elif i == 4:
940 keep = bool(int(l))
940 keep = bool(int(l))
941 elif i == 5:
941 elif i == 5:
942 keepbranches = bool(int(l))
942 keepbranches = bool(int(l))
943 elif i == 6 and not (len(l) == 81 and ':' in l):
943 elif i == 6 and not (len(l) == 81 and ':' in l):
944 # line 6 is a recent addition, so for backwards compatibility
944 # line 6 is a recent addition, so for backwards compatibility
945 # check that the line doesn't look like the oldrev:newrev lines
945 # check that the line doesn't look like the oldrev:newrev lines
946 activebookmark = l
946 activebookmark = l
947 else:
947 else:
948 oldrev, newrev = l.split(':')
948 oldrev, newrev = l.split(':')
949 if newrev in (str(nullmerge), str(revignored),
949 if newrev in (str(nullmerge), str(revignored),
950 str(revprecursor), str(revpruned)):
950 str(revprecursor), str(revpruned)):
951 state[repo[oldrev].rev()] = int(newrev)
951 state[repo[oldrev].rev()] = int(newrev)
952 elif newrev == nullid:
952 elif newrev == nullid:
953 state[repo[oldrev].rev()] = revtodo
953 state[repo[oldrev].rev()] = revtodo
954 # Legacy compat special case
954 # Legacy compat special case
955 else:
955 else:
956 state[repo[oldrev].rev()] = repo[newrev].rev()
956 state[repo[oldrev].rev()] = repo[newrev].rev()
957
957
958 except IOError as err:
958 except IOError as err:
959 if err.errno != errno.ENOENT:
959 if err.errno != errno.ENOENT:
960 raise
960 raise
961 cmdutil.wrongtooltocontinue(repo, _('rebase'))
961 cmdutil.wrongtooltocontinue(repo, _('rebase'))
962
962
963 if keepbranches is None:
963 if keepbranches is None:
964 raise error.Abort(_('.hg/rebasestate is incomplete'))
964 raise error.Abort(_('.hg/rebasestate is incomplete'))
965
965
966 skipped = set()
966 skipped = set()
967 # recompute the set of skipped revs
967 # recompute the set of skipped revs
968 if not collapse:
968 if not collapse:
969 seen = set([target])
969 seen = set([target])
970 for old, new in sorted(state.items()):
970 for old, new in sorted(state.items()):
971 if new != revtodo and new in seen:
971 if new != revtodo and new in seen:
972 skipped.add(old)
972 skipped.add(old)
973 seen.add(new)
973 seen.add(new)
974 repo.ui.debug('computed skipped revs: %s\n' %
974 repo.ui.debug('computed skipped revs: %s\n' %
975 (' '.join(str(r) for r in sorted(skipped)) or None))
975 (' '.join(str(r) for r in sorted(skipped)) or None))
976 repo.ui.debug('rebase status resumed\n')
976 repo.ui.debug('rebase status resumed\n')
977 _setrebasesetvisibility(repo, state.keys())
977 _setrebasesetvisibility(repo, state.keys())
978 return (originalwd, target, state, skipped,
978 return (originalwd, target, state, skipped,
979 collapse, keep, keepbranches, external, activebookmark)
979 collapse, keep, keepbranches, external, activebookmark)
980
980
981 def needupdate(repo, state):
981 def needupdate(repo, state):
982 '''check whether we should `update --clean` away from a merge, or if
982 '''check whether we should `update --clean` away from a merge, or if
983 somehow the working dir got forcibly updated, e.g. by older hg'''
983 somehow the working dir got forcibly updated, e.g. by older hg'''
984 parents = [p.rev() for p in repo[None].parents()]
984 parents = [p.rev() for p in repo[None].parents()]
985
985
986 # Are we in a merge state at all?
986 # Are we in a merge state at all?
987 if len(parents) < 2:
987 if len(parents) < 2:
988 return False
988 return False
989
989
990 # We should be standing on the first as-of-yet unrebased commit.
990 # We should be standing on the first as-of-yet unrebased commit.
991 firstunrebased = min([old for old, new in state.iteritems()
991 firstunrebased = min([old for old, new in state.iteritems()
992 if new == nullrev])
992 if new == nullrev])
993 if firstunrebased in parents:
993 if firstunrebased in parents:
994 return True
994 return True
995
995
996 return False
996 return False
997
997
998 def abort(repo, originalwd, target, state, activebookmark=None):
998 def abort(repo, originalwd, target, state, activebookmark=None):
999 '''Restore the repository to its original state. Additional args:
999 '''Restore the repository to its original state. Additional args:
1000
1000
1001 activebookmark: the name of the bookmark that should be active after the
1001 activebookmark: the name of the bookmark that should be active after the
1002 restore'''
1002 restore'''
1003
1003
1004 try:
1004 try:
1005 # If the first commits in the rebased set get skipped during the rebase,
1005 # If the first commits in the rebased set get skipped during the rebase,
1006 # their values within the state mapping will be the target rev id. The
1006 # their values within the state mapping will be the target rev id. The
1007 # dstates list must not contain the target rev (issue4896)
1007 # dstates list must not contain the target rev (issue4896)
1008 dstates = [s for s in state.values() if s >= 0 and s != target]
1008 dstates = [s for s in state.values() if s >= 0 and s != target]
1009 immutable = [d for d in dstates if not repo[d].mutable()]
1009 immutable = [d for d in dstates if not repo[d].mutable()]
1010 cleanup = True
1010 cleanup = True
1011 if immutable:
1011 if immutable:
1012 repo.ui.warn(_("warning: can't clean up public changesets %s\n")
1012 repo.ui.warn(_("warning: can't clean up public changesets %s\n")
1013 % ', '.join(str(repo[r]) for r in immutable),
1013 % ', '.join(str(repo[r]) for r in immutable),
1014 hint=_('see "hg help phases" for details'))
1014 hint=_('see "hg help phases" for details'))
1015 cleanup = False
1015 cleanup = False
1016
1016
1017 descendants = set()
1017 descendants = set()
1018 if dstates:
1018 if dstates:
1019 descendants = set(repo.changelog.descendants(dstates))
1019 descendants = set(repo.changelog.descendants(dstates))
1020 if descendants - set(dstates):
1020 if descendants - set(dstates):
1021 repo.ui.warn(_("warning: new changesets detected on target branch, "
1021 repo.ui.warn(_("warning: new changesets detected on target branch, "
1022 "can't strip\n"))
1022 "can't strip\n"))
1023 cleanup = False
1023 cleanup = False
1024
1024
1025 if cleanup:
1025 if cleanup:
1026 shouldupdate = False
1026 shouldupdate = False
1027 rebased = filter(lambda x: x >= 0 and x != target, state.values())
1027 rebased = filter(lambda x: x >= 0 and x != target, state.values())
1028 if rebased:
1028 if rebased:
1029 strippoints = [
1029 strippoints = [
1030 c.node() for c in repo.set('roots(%ld)', rebased)]
1030 c.node() for c in repo.set('roots(%ld)', rebased)]
1031 shouldupdate = len([
1031 shouldupdate = len([
1032 c.node() for c in repo.set('. & (%ld)', rebased)]) > 0
1032 c.node() for c in repo.set('. & (%ld)', rebased)]) > 0
1033
1033
1034 # Update away from the rebase if necessary
1034 # Update away from the rebase if necessary
1035 if shouldupdate or needupdate(repo, state):
1035 if shouldupdate or needupdate(repo, state):
1036 merge.update(repo, originalwd, False, True)
1036 merge.update(repo, originalwd, False, True)
1037
1037
1038 # Strip from the first rebased revision
1038 # Strip from the first rebased revision
1039 if rebased:
1039 if rebased:
1040 # no backup of rebased cset versions needed
1040 # no backup of rebased cset versions needed
1041 repair.strip(repo.ui, repo, strippoints)
1041 repair.strip(repo.ui, repo, strippoints)
1042
1042
1043 if activebookmark and activebookmark in repo._bookmarks:
1043 if activebookmark and activebookmark in repo._bookmarks:
1044 bookmarks.activate(repo, activebookmark)
1044 bookmarks.activate(repo, activebookmark)
1045
1045
1046 finally:
1046 finally:
1047 clearstatus(repo)
1047 clearstatus(repo)
1048 clearcollapsemsg(repo)
1048 clearcollapsemsg(repo)
1049 repo.ui.warn(_('rebase aborted\n'))
1049 repo.ui.warn(_('rebase aborted\n'))
1050 return 0
1050 return 0
1051
1051
1052 def buildstate(repo, dest, rebaseset, collapse, obsoletenotrebased):
1052 def buildstate(repo, dest, rebaseset, collapse, obsoletenotrebased):
1053 '''Define which revisions are going to be rebased and where
1053 '''Define which revisions are going to be rebased and where
1054
1054
1055 repo: repo
1055 repo: repo
1056 dest: context
1056 dest: context
1057 rebaseset: set of rev
1057 rebaseset: set of rev
1058 '''
1058 '''
1059 _setrebasesetvisibility(repo, rebaseset)
1059 _setrebasesetvisibility(repo, rebaseset)
1060
1060
1061 # This check isn't strictly necessary, since mq detects commits over an
1061 # This check isn't strictly necessary, since mq detects commits over an
1062 # applied patch. But it prevents messing up the working directory when
1062 # applied patch. But it prevents messing up the working directory when
1063 # a partially completed rebase is blocked by mq.
1063 # a partially completed rebase is blocked by mq.
1064 if 'qtip' in repo.tags() and (dest.node() in
1064 if 'qtip' in repo.tags() and (dest.node() in
1065 [s.node for s in repo.mq.applied]):
1065 [s.node for s in repo.mq.applied]):
1066 raise error.Abort(_('cannot rebase onto an applied mq patch'))
1066 raise error.Abort(_('cannot rebase onto an applied mq patch'))
1067
1067
1068 roots = list(repo.set('roots(%ld)', rebaseset))
1068 roots = list(repo.set('roots(%ld)', rebaseset))
1069 if not roots:
1069 if not roots:
1070 raise error.Abort(_('no matching revisions'))
1070 raise error.Abort(_('no matching revisions'))
1071 roots.sort()
1071 roots.sort()
1072 state = {}
1072 state = {}
1073 detachset = set()
1073 detachset = set()
1074 for root in roots:
1074 for root in roots:
1075 commonbase = root.ancestor(dest)
1075 commonbase = root.ancestor(dest)
1076 if commonbase == root:
1076 if commonbase == root:
1077 raise error.Abort(_('source is ancestor of destination'))
1077 raise error.Abort(_('source is ancestor of destination'))
1078 if commonbase == dest:
1078 if commonbase == dest:
1079 samebranch = root.branch() == dest.branch()
1079 samebranch = root.branch() == dest.branch()
1080 if not collapse and samebranch and root in dest.children():
1080 if not collapse and samebranch and root in dest.children():
1081 repo.ui.debug('source is a child of destination\n')
1081 repo.ui.debug('source is a child of destination\n')
1082 return None
1082 return None
1083
1083
1084 repo.ui.debug('rebase onto %d starting from %s\n' % (dest, root))
1084 repo.ui.debug('rebase onto %d starting from %s\n' % (dest, root))
1085 state.update(dict.fromkeys(rebaseset, revtodo))
1085 state.update(dict.fromkeys(rebaseset, revtodo))
1086 # Rebase tries to turn <dest> into a parent of <root> while
1086 # Rebase tries to turn <dest> into a parent of <root> while
1087 # preserving the number of parents of rebased changesets:
1087 # preserving the number of parents of rebased changesets:
1088 #
1088 #
1089 # - A changeset with a single parent will always be rebased as a
1089 # - A changeset with a single parent will always be rebased as a
1090 # changeset with a single parent.
1090 # changeset with a single parent.
1091 #
1091 #
1092 # - A merge will be rebased as merge unless its parents are both
1092 # - A merge will be rebased as merge unless its parents are both
1093 # ancestors of <dest> or are themselves in the rebased set and
1093 # ancestors of <dest> or are themselves in the rebased set and
1094 # pruned while rebased.
1094 # pruned while rebased.
1095 #
1095 #
1096 # If one parent of <root> is an ancestor of <dest>, the rebased
1096 # If one parent of <root> is an ancestor of <dest>, the rebased
1097 # version of this parent will be <dest>. This is always true with
1097 # version of this parent will be <dest>. This is always true with
1098 # --base option.
1098 # --base option.
1099 #
1099 #
1100 # Otherwise, we need to *replace* the original parents with
1100 # Otherwise, we need to *replace* the original parents with
1101 # <dest>. This "detaches" the rebased set from its former location
1101 # <dest>. This "detaches" the rebased set from its former location
1102 # and rebases it onto <dest>. Changes introduced by ancestors of
1102 # and rebases it onto <dest>. Changes introduced by ancestors of
1103 # <root> not common with <dest> (the detachset, marked as
1103 # <root> not common with <dest> (the detachset, marked as
1104 # nullmerge) are "removed" from the rebased changesets.
1104 # nullmerge) are "removed" from the rebased changesets.
1105 #
1105 #
1106 # - If <root> has a single parent, set it to <dest>.
1106 # - If <root> has a single parent, set it to <dest>.
1107 #
1107 #
1108 # - If <root> is a merge, we cannot decide which parent to
1108 # - If <root> is a merge, we cannot decide which parent to
1109 # replace, the rebase operation is not clearly defined.
1109 # replace, the rebase operation is not clearly defined.
1110 #
1110 #
1111 # The table below sums up this behavior:
1111 # The table below sums up this behavior:
1112 #
1112 #
1113 # +------------------+----------------------+-------------------------+
1113 # +------------------+----------------------+-------------------------+
1114 # | | one parent | merge |
1114 # | | one parent | merge |
1115 # +------------------+----------------------+-------------------------+
1115 # +------------------+----------------------+-------------------------+
1116 # | parent in | new parent is <dest> | parents in ::<dest> are |
1116 # | parent in | new parent is <dest> | parents in ::<dest> are |
1117 # | ::<dest> | | remapped to <dest> |
1117 # | ::<dest> | | remapped to <dest> |
1118 # +------------------+----------------------+-------------------------+
1118 # +------------------+----------------------+-------------------------+
1119 # | unrelated source | new parent is <dest> | ambiguous, abort |
1119 # | unrelated source | new parent is <dest> | ambiguous, abort |
1120 # +------------------+----------------------+-------------------------+
1120 # +------------------+----------------------+-------------------------+
1121 #
1121 #
1122 # The actual abort is handled by `defineparents`
1122 # The actual abort is handled by `defineparents`
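    # A concrete reading of the table with hypothetical revisions: a root
    # with a single parent that is already an ancestor of <dest> is simply
    # reparented onto <dest>, while a merge root whose parents are both
    # unrelated to <dest> is ambiguous and makes `defineparents` abort.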
1123 if len(root.parents()) <= 1:
1123 if len(root.parents()) <= 1:
1124 # ancestors of <root> not ancestors of <dest>
1124 # ancestors of <root> not ancestors of <dest>
1125 detachset.update(repo.changelog.findmissingrevs([commonbase.rev()],
1125 detachset.update(repo.changelog.findmissingrevs([commonbase.rev()],
1126 [root.rev()]))
1126 [root.rev()]))
1127 for r in detachset:
1127 for r in detachset:
1128 if r not in state:
1128 if r not in state:
1129 state[r] = nullmerge
1129 state[r] = nullmerge
1130 if len(roots) > 1:
1130 if len(roots) > 1:
1131 # If we have multiple roots, we may have "holes" in the rebase set.
1131 # If we have multiple roots, we may have "holes" in the rebase set.
1132 # Rebase roots that descend from those "holes" should not be detached as
1132 # Rebase roots that descend from those "holes" should not be detached as
1133 # other roots are. We use the special `revignored` to inform rebase that
1133 # other roots are. We use the special `revignored` to inform rebase that
1134 # the revision should be ignored but that `defineparents` should search
1134 # the revision should be ignored but that `defineparents` should search
1135 # for a rebase destination that makes sense regarding the rebased topology.
1135 # for a rebase destination that makes sense regarding the rebased topology.
1136 rebasedomain = set(repo.revs('%ld::%ld', rebaseset, rebaseset))
1136 rebasedomain = set(repo.revs('%ld::%ld', rebaseset, rebaseset))
1137 for ignored in set(rebasedomain) - set(rebaseset):
1137 for ignored in set(rebasedomain) - set(rebaseset):
1138 state[ignored] = revignored
1138 state[ignored] = revignored
1139 for r in obsoletenotrebased:
1139 for r in obsoletenotrebased:
1140 if obsoletenotrebased[r] is None:
1140 if obsoletenotrebased[r] is None:
1141 state[r] = revpruned
1141 state[r] = revpruned
1142 else:
1142 else:
1143 state[r] = revprecursor
1143 state[r] = revprecursor
1144 return repo['.'].rev(), dest.rev(), state
1144 return repo['.'].rev(), dest.rev(), state
1145
1145
1146 def clearrebased(ui, repo, state, skipped, collapsedas=None):
1146 def clearrebased(ui, repo, state, skipped, collapsedas=None):
1147 """dispose of rebased revision at the end of the rebase
1147 """dispose of rebased revision at the end of the rebase
1148
1148
1149 If `collapsedas` is not None, the rebase was a collapse whose result is the
1149 If `collapsedas` is not None, the rebase was a collapse whose result is the
1150 `collapsedas` node."""
1150 `collapsedas` node."""
1151 if obsolete.isenabled(repo, obsolete.createmarkersopt):
1151 if obsolete.isenabled(repo, obsolete.createmarkersopt):
1152 markers = []
1152 markers = []
1153 for rev, newrev in sorted(state.items()):
1153 for rev, newrev in sorted(state.items()):
1154 if newrev >= 0:
1154 if newrev >= 0:
1155 if rev in skipped:
1155 if rev in skipped:
1156 succs = ()
1156 succs = ()
1157 elif collapsedas is not None:
1157 elif collapsedas is not None:
1158 succs = (repo[collapsedas],)
1158 succs = (repo[collapsedas],)
1159 else:
1159 else:
1160 succs = (repo[newrev],)
1160 succs = (repo[newrev],)
1161 markers.append((repo[rev], succs))
1161 markers.append((repo[rev], succs))
1162 if markers:
1162 if markers:
1163 obsolete.createmarkers(repo, markers)
1163 obsolete.createmarkers(repo, markers)
1164 else:
1164 else:
1165 rebased = [rev for rev in state if state[rev] > nullmerge]
1165 rebased = [rev for rev in state if state[rev] > nullmerge]
1166 if rebased:
1166 if rebased:
1167 stripped = []
1167 stripped = []
1168 for root in repo.set('roots(%ld)', rebased):
1168 for root in repo.set('roots(%ld)', rebased):
1169 if set(repo.changelog.descendants([root.rev()])) - set(state):
1169 if set(repo.changelog.descendants([root.rev()])) - set(state):
1170 ui.warn(_("warning: new changesets detected "
1170 ui.warn(_("warning: new changesets detected "
1171 "on source branch, not stripping\n"))
1171 "on source branch, not stripping\n"))
1172 else:
1172 else:
1173 stripped.append(root.node())
1173 stripped.append(root.node())
1174 if stripped:
1174 if stripped:
1175 # backup the old csets by default
1175 # backup the old csets by default
1176 repair.strip(ui, repo, stripped, "all")
1176 repair.strip(ui, repo, stripped, "all")
1177
1177
1178
1178
1179 def pullrebase(orig, ui, repo, *args, **opts):
1179 def pullrebase(orig, ui, repo, *args, **opts):
1180 'Call rebase after pull if the latter has been invoked with --rebase'
1180 'Call rebase after pull if the latter has been invoked with --rebase'
1181 ret = None
1181 ret = None
1182 if opts.get('rebase'):
1182 if opts.get('rebase'):
1183 wlock = lock = None
1183 wlock = lock = None
1184 try:
1184 try:
1185 wlock = repo.wlock()
1185 wlock = repo.wlock()
1186 lock = repo.lock()
1186 lock = repo.lock()
1187 if opts.get('update'):
1187 if opts.get('update'):
1188 del opts['update']
1188 del opts['update']
1189 ui.debug('--update and --rebase are not compatible, ignoring '
1189 ui.debug('--update and --rebase are not compatible, ignoring '
1190 'the update flag\n')
1190 'the update flag\n')
1191
1191
1192 revsprepull = len(repo)
1192 revsprepull = len(repo)
1193 origpostincoming = commands.postincoming
1193 origpostincoming = commands.postincoming
1194 def _dummy(*args, **kwargs):
1194 def _dummy(*args, **kwargs):
1195 pass
1195 pass
1196 commands.postincoming = _dummy
1196 commands.postincoming = _dummy
1197 try:
1197 try:
1198 ret = orig(ui, repo, *args, **opts)
1198 ret = orig(ui, repo, *args, **opts)
1199 finally:
1199 finally:
1200 commands.postincoming = origpostincoming
1200 commands.postincoming = origpostincoming
1201 revspostpull = len(repo)
1201 revspostpull = len(repo)
1202 if revspostpull > revsprepull:
1202 if revspostpull > revsprepull:
1203 # --rev option from pull conflicts with rebase's own --rev,
1203 # --rev option from pull conflicts with rebase's own --rev,
1204 # so drop it
1204 # so drop it
1205 if 'rev' in opts:
1205 if 'rev' in opts:
1206 del opts['rev']
1206 del opts['rev']
1207 # positional argument from pull conflicts with rebase's own
1207 # positional argument from pull conflicts with rebase's own
1208 # --source.
1208 # --source.
1209 if 'source' in opts:
1209 if 'source' in opts:
1210 del opts['source']
1210 del opts['source']
1211 try:
1211 try:
1212 rebase(ui, repo, **opts)
1212 rebase(ui, repo, **opts)
1213 except error.NoMergeDestAbort:
1213 except error.NoMergeDestAbort:
1214 # we can maybe update instead
1214 # we can maybe update instead
1215 rev, _a, _b = destutil.destupdate(repo)
1215 rev, _a, _b = destutil.destupdate(repo)
1216 if rev == repo['.'].rev():
1216 if rev == repo['.'].rev():
1217 ui.status(_('nothing to rebase\n'))
1217 ui.status(_('nothing to rebase\n'))
1218 else:
1218 else:
1219 ui.status(_('nothing to rebase - updating instead\n'))
1219 ui.status(_('nothing to rebase - updating instead\n'))
1220 # not passing argument to get the bare update behavior
1220 # not passing argument to get the bare update behavior
1221 # with warning and trumpets
1221 # with warning and trumpets
1222 commands.update(ui, repo)
1222 commands.update(ui, repo)
1223 finally:
1223 finally:
1224 release(lock, wlock)
1224 release(lock, wlock)
1225 else:
1225 else:
1226 if opts.get('tool'):
1226 if opts.get('tool'):
1227 raise error.Abort(_('--tool can only be used with --rebase'))
1227 raise error.Abort(_('--tool can only be used with --rebase'))
1228 ret = orig(ui, repo, *args, **opts)
1228 ret = orig(ui, repo, *args, **opts)
1229
1229
1230 return ret
1230 return ret
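# A typical invocation of the wrapped command, for illustration (the --rebase
# and --tool flags are added to pull by uisetup() below):
#
#   $ hg pull --rebase              # pull, then rebase onto the branch head
#   $ hg pull --rebase --tool :merge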
1231
1231
1232 def _setrebasesetvisibility(repo, revs):
1232 def _setrebasesetvisibility(repo, revs):
1233 """store the currently rebased set on the repo object
1233 """store the currently rebased set on the repo object
1234
1234
1235 This is used by another function to prevent rebased revisions from
1235 This is used by another function to prevent rebased revisions from
1236 becoming hidden (see issue4505)"""
1236 becoming hidden (see issue4505)"""
1237 repo = repo.unfiltered()
1237 repo = repo.unfiltered()
1238 revs = set(revs)
1238 revs = set(revs)
1239 repo._rebaseset = revs
1239 repo._rebaseset = revs
1240 # invalidate cache if visibility changes
1240 # invalidate cache if visibility changes
1241 hiddens = repo.filteredrevcache.get('visible', set())
1241 hiddens = repo.filteredrevcache.get('visible', set())
1242 if revs & hiddens:
1242 if revs & hiddens:
1243 repo.invalidatevolatilesets()
1243 repo.invalidatevolatilesets()
1244
1244
1245 def _clearrebasesetvisibiliy(repo):
1245 def _clearrebasesetvisibiliy(repo):
1246 """remove rebaseset data from the repo"""
1246 """remove rebaseset data from the repo"""
1247 repo = repo.unfiltered()
1247 repo = repo.unfiltered()
1248 if '_rebaseset' in vars(repo):
1248 if '_rebaseset' in vars(repo):
1249 del repo._rebaseset
1249 del repo._rebaseset
1250
1250
1251 def _rebasedvisible(orig, repo):
1251 def _rebasedvisible(orig, repo):
1252 """ensure rebased revs stay visible (see issue4505)"""
1252 """ensure rebased revs stay visible (see issue4505)"""
1253 blockers = orig(repo)
1253 blockers = orig(repo)
1254 blockers.update(getattr(repo, '_rebaseset', ()))
1254 blockers.update(getattr(repo, '_rebaseset', ()))
1255 return blockers
1255 return blockers
1256
1256
1257 def _filterobsoleterevs(repo, revs):
1257 def _filterobsoleterevs(repo, revs):
1258 """returns a set of the obsolete revisions in revs"""
1258 """returns a set of the obsolete revisions in revs"""
1259 return set(r for r in revs if repo[r].obsolete())
1259 return set(r for r in revs if repo[r].obsolete())
1260
1260
1261 def _computeobsoletenotrebased(repo, rebaseobsrevs, dest):
1261 def _computeobsoletenotrebased(repo, rebaseobsrevs, dest):
1262 """return a mapping obsolete => successor for all obsolete nodes to be
1262 """return a mapping obsolete => successor for all obsolete nodes to be
1263 rebased that have a successor in the destination
1263 rebased that have a successor in the destination
1264
1264
1265 obsolete => None entries in the mapping indicate nodes with no successor"""
1265 obsolete => None entries in the mapping indicate nodes with no successor"""
1266 obsoletenotrebased = {}
1266 obsoletenotrebased = {}
1267
1267
1268 # Build a mapping successor => obsolete nodes for the obsolete
1268 # Build a mapping successor => obsolete nodes for the obsolete
1269 # nodes to be rebased
1269 # nodes to be rebased
1270 allsuccessors = {}
1270 allsuccessors = {}
1271 cl = repo.changelog
1271 cl = repo.changelog
1272 for r in rebaseobsrevs:
1272 for r in rebaseobsrevs:
1273 node = cl.node(r)
1273 node = cl.node(r)
1274 for s in obsolete.allsuccessors(repo.obsstore, [node]):
1274 for s in obsolete.allsuccessors(repo.obsstore, [node]):
1275 try:
1275 try:
1276 allsuccessors[cl.rev(s)] = cl.rev(node)
1276 allsuccessors[cl.rev(s)] = cl.rev(node)
1277 except LookupError:
1277 except LookupError:
1278 pass
1278 pass
1279
1279
1280 if allsuccessors:
1280 if allsuccessors:
1281 # Look for successors of obsolete nodes to be rebased among
1281 # Look for successors of obsolete nodes to be rebased among
1282 # the ancestors of dest
1282 # the ancestors of dest
1283 ancs = cl.ancestors([repo[dest].rev()],
1283 ancs = cl.ancestors([repo[dest].rev()],
1284 stoprev=min(allsuccessors),
1284 stoprev=min(allsuccessors),
1285 inclusive=True)
1285 inclusive=True)
1286 for s in allsuccessors:
1286 for s in allsuccessors:
1287 if s in ancs:
1287 if s in ancs:
1288 obsoletenotrebased[allsuccessors[s]] = s
1288 obsoletenotrebased[allsuccessors[s]] = s
1289 elif (s == allsuccessors[s] and
1289 elif (s == allsuccessors[s] and
1290 allsuccessors.values().count(s) == 1):
1290 allsuccessors.values().count(s) == 1):
1291 # plain prune
1291 # plain prune
1292 obsoletenotrebased[s] = None
1292 obsoletenotrebased[s] = None
1293
1293
1294 return obsoletenotrebased
1294 return obsoletenotrebased
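# For illustration, with hypothetical revision numbers: if obsolete rev 10 in
# the rebase set has a successor rev 15 that is already an ancestor of dest,
# the result maps {10: 15} (later treated as revprecursor); a plain prune with
# no successor shows up as {10: None} and is treated as revpruned.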
1295
1295
1296 def summaryhook(ui, repo):
1296 def summaryhook(ui, repo):
1297 if not os.path.exists(repo.join('rebasestate')):
1297 if not os.path.exists(repo.join('rebasestate')):
1298 return
1298 return
1299 try:
1299 try:
1300 state = restorestatus(repo)[2]
1300 state = restorestatus(repo)[2]
1301 except error.RepoLookupError:
1301 except error.RepoLookupError:
1302 # i18n: column positioning for "hg summary"
1302 # i18n: column positioning for "hg summary"
1303 msg = _('rebase: (use "hg rebase --abort" to clear broken state)\n')
1303 msg = _('rebase: (use "hg rebase --abort" to clear broken state)\n')
1304 ui.write(msg)
1304 ui.write(msg)
1305 return
1305 return
1306 numrebased = len([i for i in state.itervalues() if i >= 0])
1306 numrebased = len([i for i in state.itervalues() if i >= 0])
1307 # i18n: column positioning for "hg summary"
1307 # i18n: column positioning for "hg summary"
1308 ui.write(_('rebase: %s, %s (rebase --continue)\n') %
1308 ui.write(_('rebase: %s, %s (rebase --continue)\n') %
1309 (ui.label(_('%d rebased'), 'rebase.rebased') % numrebased,
1309 (ui.label(_('%d rebased'), 'rebase.rebased') % numrebased,
1310 ui.label(_('%d remaining'), 'rebase.remaining') %
1310 ui.label(_('%d remaining'), 'rebase.remaining') %
1311 (len(state) - numrebased)))
1311 (len(state) - numrebased)))
1312
1312
1313 def uisetup(ui):
1313 def uisetup(ui):
1314 # Replace pull with a decorator to provide the --rebase option
1314 # Replace pull with a decorator to provide the --rebase option
1315 entry = extensions.wrapcommand(commands.table, 'pull', pullrebase)
1315 entry = extensions.wrapcommand(commands.table, 'pull', pullrebase)
1316 entry[1].append(('', 'rebase', None,
1316 entry[1].append(('', 'rebase', None,
1317 _("rebase working directory to branch head")))
1317 _("rebase working directory to branch head")))
1318 entry[1].append(('t', 'tool', '',
1318 entry[1].append(('t', 'tool', '',
1319 _("specify merge tool for rebase")))
1319 _("specify merge tool for rebase")))
1320 cmdutil.summaryhooks.add('rebase', summaryhook)
1320 cmdutil.summaryhooks.add('rebase', summaryhook)
1321 cmdutil.unfinishedstates.append(
1321 cmdutil.unfinishedstates.append(
1322 ['rebasestate', False, False, _('rebase in progress'),
1322 ['rebasestate', False, False, _('rebase in progress'),
1323 _("use 'hg rebase --continue' or 'hg rebase --abort'")])
1323 _("use 'hg rebase --continue' or 'hg rebase --abort'")])
1324 cmdutil.afterresolvedstates.append(
1324 cmdutil.afterresolvedstates.append(
1325 ['rebasestate', _('hg rebase --continue')])
1325 ['rebasestate', _('hg rebase --continue')])
1326 # ensure rebased rev are not hidden
1326 # ensure rebased rev are not hidden
1327 extensions.wrapfunction(repoview, '_getdynamicblockers', _rebasedvisible)
1327 extensions.wrapfunction(repoview, '_getdynamicblockers', _rebasedvisible)
1328 revsetpredicate.setup()
@@ -1,724 +1,723 b''
1 # Patch transplanting extension for Mercurial
1 # Patch transplanting extension for Mercurial
2 #
2 #
3 # Copyright 2006, 2007 Brendan Cully <brendan@kublai.com>
3 # Copyright 2006, 2007 Brendan Cully <brendan@kublai.com>
4 #
4 #
5 # This software may be used and distributed according to the terms of the
5 # This software may be used and distributed according to the terms of the
6 # GNU General Public License version 2 or any later version.
6 # GNU General Public License version 2 or any later version.
7
7
8 '''command to transplant changesets from another branch
8 '''command to transplant changesets from another branch
9
9
10 This extension allows you to transplant changes to another parent revision,
10 This extension allows you to transplant changes to another parent revision,
11 possibly in another repository. The transplant is done using 'diff' patches.
11 possibly in another repository. The transplant is done using 'diff' patches.
12
12
13 Transplanted patches are recorded in .hg/transplant/transplants, as a
13 Transplanted patches are recorded in .hg/transplant/transplants, as a
14 map from a changeset hash to its hash in the source repository.
14 map from a changeset hash to its hash in the source repository.
15 '''
15 '''
16
16
17 from mercurial.i18n import _
17 from mercurial.i18n import _
18 import os, tempfile
18 import os, tempfile
19 from mercurial.node import short
19 from mercurial.node import short
20 from mercurial import bundlerepo, hg, merge, match
20 from mercurial import bundlerepo, hg, merge, match
21 from mercurial import patch, revlog, scmutil, util, error, cmdutil
21 from mercurial import patch, revlog, scmutil, util, error, cmdutil
22 from mercurial import revset, templatekw, exchange
22 from mercurial import registrar, revset, templatekw, exchange
23
23
24 class TransplantError(error.Abort):
24 class TransplantError(error.Abort):
25 pass
25 pass
26
26
27 cmdtable = {}
27 cmdtable = {}
28 command = cmdutil.command(cmdtable)
28 command = cmdutil.command(cmdtable)
29 # Note for extension authors: ONLY specify testedwith = 'internal' for
29 # Note for extension authors: ONLY specify testedwith = 'internal' for
30 # extensions which SHIP WITH MERCURIAL. Non-mainline extensions should
30 # extensions which SHIP WITH MERCURIAL. Non-mainline extensions should
31 # be specifying the version(s) of Mercurial they are tested with, or
31 # be specifying the version(s) of Mercurial they are tested with, or
32 # leave the attribute unspecified.
32 # leave the attribute unspecified.
33 testedwith = 'internal'
33 testedwith = 'internal'
34
34
35 class transplantentry(object):
35 class transplantentry(object):
36 def __init__(self, lnode, rnode):
36 def __init__(self, lnode, rnode):
37 self.lnode = lnode
37 self.lnode = lnode
38 self.rnode = rnode
38 self.rnode = rnode
39
39
40 class transplants(object):
40 class transplants(object):
41 def __init__(self, path=None, transplantfile=None, opener=None):
41 def __init__(self, path=None, transplantfile=None, opener=None):
42 self.path = path
42 self.path = path
43 self.transplantfile = transplantfile
43 self.transplantfile = transplantfile
44 self.opener = opener
44 self.opener = opener
45
45
46 if not opener:
46 if not opener:
47 self.opener = scmutil.opener(self.path)
47 self.opener = scmutil.opener(self.path)
48 self.transplants = {}
48 self.transplants = {}
49 self.dirty = False
49 self.dirty = False
50 self.read()
50 self.read()
51
51
52 def read(self):
52 def read(self):
53 abspath = os.path.join(self.path, self.transplantfile)
53 abspath = os.path.join(self.path, self.transplantfile)
54 if self.transplantfile and os.path.exists(abspath):
54 if self.transplantfile and os.path.exists(abspath):
55 for line in self.opener.read(self.transplantfile).splitlines():
55 for line in self.opener.read(self.transplantfile).splitlines():
56 lnode, rnode = map(revlog.bin, line.split(':'))
56 lnode, rnode = map(revlog.bin, line.split(':'))
57 list = self.transplants.setdefault(rnode, [])
57 list = self.transplants.setdefault(rnode, [])
58 list.append(transplantentry(lnode, rnode))
58 list.append(transplantentry(lnode, rnode))
59
59
60 def write(self):
60 def write(self):
61 if self.dirty and self.transplantfile:
61 if self.dirty and self.transplantfile:
62 if not os.path.isdir(self.path):
62 if not os.path.isdir(self.path):
63 os.mkdir(self.path)
63 os.mkdir(self.path)
64 fp = self.opener(self.transplantfile, 'w')
64 fp = self.opener(self.transplantfile, 'w')
65 for list in self.transplants.itervalues():
65 for list in self.transplants.itervalues():
66 for t in list:
66 for t in list:
67 l, r = map(revlog.hex, (t.lnode, t.rnode))
67 l, r = map(revlog.hex, (t.lnode, t.rnode))
68 fp.write(l + ':' + r + '\n')
68 fp.write(l + ':' + r + '\n')
69 fp.close()
69 fp.close()
70 self.dirty = False
70 self.dirty = False
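    # The transplants file written above is a plain text map with one entry
    # per line, e.g. (hashes abbreviated for illustration):
    #
    #   3e1f4c2...:77ab9e0...    local node : node in the source repository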
71
71
72 def get(self, rnode):
72 def get(self, rnode):
73 return self.transplants.get(rnode) or []
73 return self.transplants.get(rnode) or []
74
74
75 def set(self, lnode, rnode):
75 def set(self, lnode, rnode):
76 list = self.transplants.setdefault(rnode, [])
76 list = self.transplants.setdefault(rnode, [])
77 list.append(transplantentry(lnode, rnode))
77 list.append(transplantentry(lnode, rnode))
78 self.dirty = True
78 self.dirty = True
79
79
80 def remove(self, transplant):
80 def remove(self, transplant):
81 list = self.transplants.get(transplant.rnode)
81 list = self.transplants.get(transplant.rnode)
82 if list:
82 if list:
83 del list[list.index(transplant)]
83 del list[list.index(transplant)]
84 self.dirty = True
84 self.dirty = True
85
85
86 class transplanter(object):
86 class transplanter(object):
87 def __init__(self, ui, repo, opts):
87 def __init__(self, ui, repo, opts):
88 self.ui = ui
88 self.ui = ui
89 self.path = repo.join('transplant')
89 self.path = repo.join('transplant')
90 self.opener = scmutil.opener(self.path)
90 self.opener = scmutil.opener(self.path)
91 self.transplants = transplants(self.path, 'transplants',
91 self.transplants = transplants(self.path, 'transplants',
92 opener=self.opener)
92 opener=self.opener)
93 def getcommiteditor():
93 def getcommiteditor():
94 editform = cmdutil.mergeeditform(repo[None], 'transplant')
94 editform = cmdutil.mergeeditform(repo[None], 'transplant')
95 return cmdutil.getcommiteditor(editform=editform, **opts)
95 return cmdutil.getcommiteditor(editform=editform, **opts)
96 self.getcommiteditor = getcommiteditor
96 self.getcommiteditor = getcommiteditor
97
97
98 def applied(self, repo, node, parent):
98 def applied(self, repo, node, parent):
99 '''returns True if a node is already an ancestor of parent
99 '''returns True if a node is already an ancestor of parent
100 or is parent or has already been transplanted'''
100 or is parent or has already been transplanted'''
101 if hasnode(repo, parent):
101 if hasnode(repo, parent):
102 parentrev = repo.changelog.rev(parent)
102 parentrev = repo.changelog.rev(parent)
103 if hasnode(repo, node):
103 if hasnode(repo, node):
104 rev = repo.changelog.rev(node)
104 rev = repo.changelog.rev(node)
105 reachable = repo.changelog.ancestors([parentrev], rev,
105 reachable = repo.changelog.ancestors([parentrev], rev,
106 inclusive=True)
106 inclusive=True)
107 if rev in reachable:
107 if rev in reachable:
108 return True
108 return True
109 for t in self.transplants.get(node):
109 for t in self.transplants.get(node):
110 # it might have been stripped
110 # it might have been stripped
111 if not hasnode(repo, t.lnode):
111 if not hasnode(repo, t.lnode):
112 self.transplants.remove(t)
112 self.transplants.remove(t)
113 return False
113 return False
114 lnoderev = repo.changelog.rev(t.lnode)
114 lnoderev = repo.changelog.rev(t.lnode)
115 if lnoderev in repo.changelog.ancestors([parentrev], lnoderev,
115 if lnoderev in repo.changelog.ancestors([parentrev], lnoderev,
116 inclusive=True):
116 inclusive=True):
117 return True
117 return True
118 return False
118 return False
119
119
120 def apply(self, repo, source, revmap, merges, opts=None):
120 def apply(self, repo, source, revmap, merges, opts=None):
121 '''apply the revisions in revmap one by one in revision order'''
121 '''apply the revisions in revmap one by one in revision order'''
122 if opts is None:
122 if opts is None:
123 opts = {}
123 opts = {}
124 revs = sorted(revmap)
124 revs = sorted(revmap)
125 p1, p2 = repo.dirstate.parents()
125 p1, p2 = repo.dirstate.parents()
126 pulls = []
126 pulls = []
127 diffopts = patch.difffeatureopts(self.ui, opts)
127 diffopts = patch.difffeatureopts(self.ui, opts)
128 diffopts.git = True
128 diffopts.git = True
129
129
130 lock = tr = None
130 lock = tr = None
131 try:
131 try:
132 lock = repo.lock()
132 lock = repo.lock()
133 tr = repo.transaction('transplant')
133 tr = repo.transaction('transplant')
134 for rev in revs:
134 for rev in revs:
135 node = revmap[rev]
135 node = revmap[rev]
136 revstr = '%s:%s' % (rev, short(node))
136 revstr = '%s:%s' % (rev, short(node))
137
137
138 if self.applied(repo, node, p1):
138 if self.applied(repo, node, p1):
139 self.ui.warn(_('skipping already applied revision %s\n') %
139 self.ui.warn(_('skipping already applied revision %s\n') %
140 revstr)
140 revstr)
141 continue
141 continue
142
142
143 parents = source.changelog.parents(node)
143 parents = source.changelog.parents(node)
144 if not (opts.get('filter') or opts.get('log')):
144 if not (opts.get('filter') or opts.get('log')):
145 # If the changeset parent is the same as the
145 # If the changeset parent is the same as the
146 # wdir's parent, just pull it.
146 # wdir's parent, just pull it.
147 if parents[0] == p1:
147 if parents[0] == p1:
148 pulls.append(node)
148 pulls.append(node)
149 p1 = node
149 p1 = node
150 continue
150 continue
151 if pulls:
151 if pulls:
152 if source != repo:
152 if source != repo:
153 exchange.pull(repo, source.peer(), heads=pulls)
153 exchange.pull(repo, source.peer(), heads=pulls)
154 merge.update(repo, pulls[-1], False, False)
154 merge.update(repo, pulls[-1], False, False)
155 p1, p2 = repo.dirstate.parents()
155 p1, p2 = repo.dirstate.parents()
156 pulls = []
156 pulls = []
157
157
158 domerge = False
158 domerge = False
159 if node in merges:
159 if node in merges:
160 # pulling all the merge revs at once would mean we
160 # pulling all the merge revs at once would mean we
161 # couldn't transplant after the latest even if
161 # couldn't transplant after the latest even if
162 # transplants before them fail.
162 # transplants before them fail.
163 domerge = True
163 domerge = True
164 if not hasnode(repo, node):
164 if not hasnode(repo, node):
165 exchange.pull(repo, source.peer(), heads=[node])
165 exchange.pull(repo, source.peer(), heads=[node])
166
166
167 skipmerge = False
167 skipmerge = False
168 if parents[1] != revlog.nullid:
168 if parents[1] != revlog.nullid:
169 if not opts.get('parent'):
169 if not opts.get('parent'):
170 self.ui.note(_('skipping merge changeset %s:%s\n')
170 self.ui.note(_('skipping merge changeset %s:%s\n')
171 % (rev, short(node)))
171 % (rev, short(node)))
172 skipmerge = True
172 skipmerge = True
173 else:
173 else:
174 parent = source.lookup(opts['parent'])
174 parent = source.lookup(opts['parent'])
175 if parent not in parents:
175 if parent not in parents:
176 raise error.Abort(_('%s is not a parent of %s') %
176 raise error.Abort(_('%s is not a parent of %s') %
177 (short(parent), short(node)))
177 (short(parent), short(node)))
178 else:
178 else:
179 parent = parents[0]
179 parent = parents[0]
180
180
181 if skipmerge:
181 if skipmerge:
182 patchfile = None
182 patchfile = None
183 else:
183 else:
184 fd, patchfile = tempfile.mkstemp(prefix='hg-transplant-')
184 fd, patchfile = tempfile.mkstemp(prefix='hg-transplant-')
185 fp = os.fdopen(fd, 'w')
185 fp = os.fdopen(fd, 'w')
186 gen = patch.diff(source, parent, node, opts=diffopts)
186 gen = patch.diff(source, parent, node, opts=diffopts)
187 for chunk in gen:
187 for chunk in gen:
188 fp.write(chunk)
188 fp.write(chunk)
189 fp.close()
189 fp.close()
190
190
191 del revmap[rev]
191 del revmap[rev]
192 if patchfile or domerge:
192 if patchfile or domerge:
193 try:
193 try:
194 try:
194 try:
195 n = self.applyone(repo, node,
195 n = self.applyone(repo, node,
196 source.changelog.read(node),
196 source.changelog.read(node),
197 patchfile, merge=domerge,
197 patchfile, merge=domerge,
198 log=opts.get('log'),
198 log=opts.get('log'),
199 filter=opts.get('filter'))
199 filter=opts.get('filter'))
200 except TransplantError:
200 except TransplantError:
201 # Do not rollback, it is up to the user to
201 # Do not rollback, it is up to the user to
202 # fix the merge or cancel everything
202 # fix the merge or cancel everything
203 tr.close()
203 tr.close()
204 raise
204 raise
205 if n and domerge:
205 if n and domerge:
206 self.ui.status(_('%s merged at %s\n') % (revstr,
206 self.ui.status(_('%s merged at %s\n') % (revstr,
207 short(n)))
207 short(n)))
208 elif n:
208 elif n:
209 self.ui.status(_('%s transplanted to %s\n')
209 self.ui.status(_('%s transplanted to %s\n')
210 % (short(node),
210 % (short(node),
211 short(n)))
211 short(n)))
212 finally:
212 finally:
213 if patchfile:
213 if patchfile:
214 os.unlink(patchfile)
214 os.unlink(patchfile)
215 tr.close()
215 tr.close()
216 if pulls:
216 if pulls:
217 exchange.pull(repo, source.peer(), heads=pulls)
217 exchange.pull(repo, source.peer(), heads=pulls)
218 merge.update(repo, pulls[-1], False, False)
218 merge.update(repo, pulls[-1], False, False)
219 finally:
219 finally:
220 self.saveseries(revmap, merges)
220 self.saveseries(revmap, merges)
221 self.transplants.write()
221 self.transplants.write()
222 if tr:
222 if tr:
223 tr.release()
223 tr.release()
224 if lock:
224 if lock:
225 lock.release()
225 lock.release()
226
226
227 def filter(self, filter, node, changelog, patchfile):
227 def filter(self, filter, node, changelog, patchfile):
228 '''arbitrarily rewrite changeset before applying it'''
228 '''arbitrarily rewrite changeset before applying it'''
229
229
230 self.ui.status(_('filtering %s\n') % patchfile)
230 self.ui.status(_('filtering %s\n') % patchfile)
231 user, date, msg = (changelog[1], changelog[2], changelog[4])
231 user, date, msg = (changelog[1], changelog[2], changelog[4])
232 fd, headerfile = tempfile.mkstemp(prefix='hg-transplant-')
232 fd, headerfile = tempfile.mkstemp(prefix='hg-transplant-')
233 fp = os.fdopen(fd, 'w')
233 fp = os.fdopen(fd, 'w')
234 fp.write("# HG changeset patch\n")
234 fp.write("# HG changeset patch\n")
235 fp.write("# User %s\n" % user)
235 fp.write("# User %s\n" % user)
236 fp.write("# Date %d %d\n" % date)
236 fp.write("# Date %d %d\n" % date)
237 fp.write(msg + '\n')
237 fp.write(msg + '\n')
238 fp.close()
238 fp.close()
239
239
240 try:
240 try:
241 self.ui.system('%s %s %s' % (filter, util.shellquote(headerfile),
241 self.ui.system('%s %s %s' % (filter, util.shellquote(headerfile),
242 util.shellquote(patchfile)),
242 util.shellquote(patchfile)),
243 environ={'HGUSER': changelog[1],
243 environ={'HGUSER': changelog[1],
244 'HGREVISION': revlog.hex(node),
244 'HGREVISION': revlog.hex(node),
245 },
245 },
246 onerr=error.Abort, errprefix=_('filter failed'))
246 onerr=error.Abort, errprefix=_('filter failed'))
247 user, date, msg = self.parselog(file(headerfile))[1:4]
247 user, date, msg = self.parselog(file(headerfile))[1:4]
248 finally:
248 finally:
249 os.unlink(headerfile)
249 os.unlink(headerfile)
250
250
251 return (user, date, msg)
251 return (user, date, msg)
252
252
253 def applyone(self, repo, node, cl, patchfile, merge=False, log=False,
253 def applyone(self, repo, node, cl, patchfile, merge=False, log=False,
254 filter=None):
254 filter=None):
255 '''apply the patch in patchfile to the repository as a transplant'''
255 '''apply the patch in patchfile to the repository as a transplant'''
256 (manifest, user, (time, timezone), files, message) = cl[:5]
256 (manifest, user, (time, timezone), files, message) = cl[:5]
257 date = "%d %d" % (time, timezone)
257 date = "%d %d" % (time, timezone)
258 extra = {'transplant_source': node}
258 extra = {'transplant_source': node}
259 if filter:
259 if filter:
260 (user, date, message) = self.filter(filter, node, cl, patchfile)
260 (user, date, message) = self.filter(filter, node, cl, patchfile)
261
261
262 if log:
262 if log:
263 # we don't translate messages inserted into commits
263 # we don't translate messages inserted into commits
264 message += '\n(transplanted from %s)' % revlog.hex(node)
264 message += '\n(transplanted from %s)' % revlog.hex(node)
265
265
266 self.ui.status(_('applying %s\n') % short(node))
266 self.ui.status(_('applying %s\n') % short(node))
267 self.ui.note('%s %s\n%s\n' % (user, date, message))
267 self.ui.note('%s %s\n%s\n' % (user, date, message))
268
268
269 if not patchfile and not merge:
269 if not patchfile and not merge:
270 raise error.Abort(_('can only omit patchfile if merging'))
270 raise error.Abort(_('can only omit patchfile if merging'))
271 if patchfile:
271 if patchfile:
272 try:
272 try:
273 files = set()
273 files = set()
274 patch.patch(self.ui, repo, patchfile, files=files, eolmode=None)
274 patch.patch(self.ui, repo, patchfile, files=files, eolmode=None)
275 files = list(files)
275 files = list(files)
276 except Exception as inst:
276 except Exception as inst:
277 seriespath = os.path.join(self.path, 'series')
277 seriespath = os.path.join(self.path, 'series')
278 if os.path.exists(seriespath):
278 if os.path.exists(seriespath):
279 os.unlink(seriespath)
279 os.unlink(seriespath)
280 p1 = repo.dirstate.p1()
280 p1 = repo.dirstate.p1()
281 p2 = node
281 p2 = node
282 self.log(user, date, message, p1, p2, merge=merge)
282 self.log(user, date, message, p1, p2, merge=merge)
283 self.ui.write(str(inst) + '\n')
283 self.ui.write(str(inst) + '\n')
284 raise TransplantError(_('fix up the working directory and run '
284 raise TransplantError(_('fix up the working directory and run '
285 'hg transplant --continue'))
285 'hg transplant --continue'))
286 else:
286 else:
287 files = None
287 files = None
288 if merge:
288 if merge:
289 p1, p2 = repo.dirstate.parents()
289 p1, p2 = repo.dirstate.parents()
290 repo.setparents(p1, node)
290 repo.setparents(p1, node)
291 m = match.always(repo.root, '')
291 m = match.always(repo.root, '')
292 else:
292 else:
293 m = match.exact(repo.root, '', files)
293 m = match.exact(repo.root, '', files)
294
294
295 n = repo.commit(message, user, date, extra=extra, match=m,
295 n = repo.commit(message, user, date, extra=extra, match=m,
296 editor=self.getcommiteditor())
296 editor=self.getcommiteditor())
297 if not n:
297 if not n:
298 self.ui.warn(_('skipping emptied changeset %s\n') % short(node))
298 self.ui.warn(_('skipping emptied changeset %s\n') % short(node))
299 return None
299 return None
300 if not merge:
300 if not merge:
301 self.transplants.set(n, node)
301 self.transplants.set(n, node)
302
302
303 return n
303 return n
304
304
305 def canresume(self):
305 def canresume(self):
306 return os.path.exists(os.path.join(self.path, 'journal'))
306 return os.path.exists(os.path.join(self.path, 'journal'))
307
307
308 def resume(self, repo, source, opts):
308 def resume(self, repo, source, opts):
309 '''recover last transaction and apply remaining changesets'''
309 '''recover last transaction and apply remaining changesets'''
310 if os.path.exists(os.path.join(self.path, 'journal')):
310 if os.path.exists(os.path.join(self.path, 'journal')):
311 n, node = self.recover(repo, source, opts)
311 n, node = self.recover(repo, source, opts)
312 if n:
312 if n:
313 self.ui.status(_('%s transplanted as %s\n') % (short(node),
313 self.ui.status(_('%s transplanted as %s\n') % (short(node),
314 short(n)))
314 short(n)))
315 else:
315 else:
316 self.ui.status(_('%s skipped due to empty diff\n')
316 self.ui.status(_('%s skipped due to empty diff\n')
317 % (short(node),))
317 % (short(node),))
318 seriespath = os.path.join(self.path, 'series')
318 seriespath = os.path.join(self.path, 'series')
319 if not os.path.exists(seriespath):
319 if not os.path.exists(seriespath):
320 self.transplants.write()
320 self.transplants.write()
321 return
321 return
322 nodes, merges = self.readseries()
322 nodes, merges = self.readseries()
323 revmap = {}
323 revmap = {}
324 for n in nodes:
324 for n in nodes:
325 revmap[source.changelog.rev(n)] = n
325 revmap[source.changelog.rev(n)] = n
326 os.unlink(seriespath)
326 os.unlink(seriespath)
327
327
328 self.apply(repo, source, revmap, merges, opts)
328 self.apply(repo, source, revmap, merges, opts)
329
329
330 def recover(self, repo, source, opts):
330 def recover(self, repo, source, opts):
331 '''commit working directory using journal metadata'''
331 '''commit working directory using journal metadata'''
332 node, user, date, message, parents = self.readlog()
332 node, user, date, message, parents = self.readlog()
333 merge = False
333 merge = False
334
334
335 if not user or not date or not message or not parents[0]:
335 if not user or not date or not message or not parents[0]:
336 raise error.Abort(_('transplant log file is corrupt'))
336 raise error.Abort(_('transplant log file is corrupt'))
337
337
338 parent = parents[0]
338 parent = parents[0]
339 if len(parents) > 1:
339 if len(parents) > 1:
340 if opts.get('parent'):
340 if opts.get('parent'):
341 parent = source.lookup(opts['parent'])
341 parent = source.lookup(opts['parent'])
342 if parent not in parents:
342 if parent not in parents:
343 raise error.Abort(_('%s is not a parent of %s') %
343 raise error.Abort(_('%s is not a parent of %s') %
344 (short(parent), short(node)))
344 (short(parent), short(node)))
345 else:
345 else:
346 merge = True
346 merge = True
347
347
348 extra = {'transplant_source': node}
348 extra = {'transplant_source': node}
349 try:
349 try:
350 p1, p2 = repo.dirstate.parents()
350 p1, p2 = repo.dirstate.parents()
351 if p1 != parent:
351 if p1 != parent:
352 raise error.Abort(_('working directory not at transplant '
352 raise error.Abort(_('working directory not at transplant '
353 'parent %s') % revlog.hex(parent))
353 'parent %s') % revlog.hex(parent))
354 if merge:
354 if merge:
355 repo.setparents(p1, parents[1])
355 repo.setparents(p1, parents[1])
356 modified, added, removed, deleted = repo.status()[:4]
356 modified, added, removed, deleted = repo.status()[:4]
357 if merge or modified or added or removed or deleted:
357 if merge or modified or added or removed or deleted:
358 n = repo.commit(message, user, date, extra=extra,
358 n = repo.commit(message, user, date, extra=extra,
359 editor=self.getcommiteditor())
359 editor=self.getcommiteditor())
360 if not n:
360 if not n:
361 raise error.Abort(_('commit failed'))
361 raise error.Abort(_('commit failed'))
362 if not merge:
362 if not merge:
363 self.transplants.set(n, node)
363 self.transplants.set(n, node)
364 else:
364 else:
365 n = None
365 n = None
366 self.unlog()
366 self.unlog()
367
367
368 return n, node
368 return n, node
369 finally:
369 finally:
370 # TODO: get rid of this meaningless try/finally enclosing.
370 # TODO: get rid of this meaningless try/finally enclosing.
371 # this is kept only to reduce changes in a patch.
371 # this is kept only to reduce changes in a patch.
372 pass
372 pass
373
373
374 def readseries(self):
374 def readseries(self):
375 nodes = []
375 nodes = []
376 merges = []
376 merges = []
377 cur = nodes
377 cur = nodes
378 for line in self.opener.read('series').splitlines():
378 for line in self.opener.read('series').splitlines():
379 if line.startswith('# Merges'):
379 if line.startswith('# Merges'):
380 cur = merges
380 cur = merges
381 continue
381 continue
382 cur.append(revlog.bin(line))
382 cur.append(revlog.bin(line))
383
383
384 return (nodes, merges)
384 return (nodes, merges)
385
385
386 def saveseries(self, revmap, merges):
386 def saveseries(self, revmap, merges):
387 if not revmap:
387 if not revmap:
388 return
388 return
389
389
390 if not os.path.isdir(self.path):
390 if not os.path.isdir(self.path):
391 os.mkdir(self.path)
391 os.mkdir(self.path)
392 series = self.opener('series', 'w')
392 series = self.opener('series', 'w')
393 for rev in sorted(revmap):
393 for rev in sorted(revmap):
394 series.write(revlog.hex(revmap[rev]) + '\n')
394 series.write(revlog.hex(revmap[rev]) + '\n')
395 if merges:
395 if merges:
396 series.write('# Merges\n')
396 series.write('# Merges\n')
397 for m in merges:
397 for m in merges:
398 series.write(revlog.hex(m) + '\n')
398 series.write(revlog.hex(m) + '\n')
399 series.close()
399 series.close()
400
400
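# Illustrative 'series' layout as written by saveseries() and parsed by
# readseries() above: one 40-character hex node per line, with a '# Merges'
# marker separating ordinary transplants from merge revisions (placeholders
# shown instead of real hashes):
#
#   <hex node to transplant>
#   <hex node to transplant>
#   # Merges
#   <hex node to merge at>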
401 def parselog(self, fp):
401 def parselog(self, fp):
402 parents = []
402 parents = []
403 message = []
403 message = []
404 node = revlog.nullid
404 node = revlog.nullid
405 inmsg = False
405 inmsg = False
406 user = None
406 user = None
407 date = None
407 date = None
408 for line in fp.read().splitlines():
408 for line in fp.read().splitlines():
409 if inmsg:
409 if inmsg:
410 message.append(line)
410 message.append(line)
411 elif line.startswith('# User '):
411 elif line.startswith('# User '):
412 user = line[7:]
412 user = line[7:]
413 elif line.startswith('# Date '):
413 elif line.startswith('# Date '):
414 date = line[7:]
414 date = line[7:]
415 elif line.startswith('# Node ID '):
415 elif line.startswith('# Node ID '):
416 node = revlog.bin(line[10:])
416 node = revlog.bin(line[10:])
417 elif line.startswith('# Parent '):
417 elif line.startswith('# Parent '):
418 parents.append(revlog.bin(line[9:]))
418 parents.append(revlog.bin(line[9:]))
419 elif not line.startswith('# '):
419 elif not line.startswith('# '):
420 inmsg = True
420 inmsg = True
421 message.append(line)
421 message.append(line)
422 if None in (user, date):
422 if None in (user, date):
423 raise error.Abort(_("filter corrupted changeset (no user or date)"))
423 raise error.Abort(_("filter corrupted changeset (no user or date)"))
424 return (node, user, date, '\n'.join(message), parents)
424 return (node, user, date, '\n'.join(message), parents)
425
425
426 def log(self, user, date, message, p1, p2, merge=False):
426 def log(self, user, date, message, p1, p2, merge=False):
427 '''journal changelog metadata for later recover'''
427 '''journal changelog metadata for later recover'''
428
428
429 if not os.path.isdir(self.path):
429 if not os.path.isdir(self.path):
430 os.mkdir(self.path)
430 os.mkdir(self.path)
431 fp = self.opener('journal', 'w')
431 fp = self.opener('journal', 'w')
432 fp.write('# User %s\n' % user)
432 fp.write('# User %s\n' % user)
433 fp.write('# Date %s\n' % date)
433 fp.write('# Date %s\n' % date)
434 fp.write('# Node ID %s\n' % revlog.hex(p2))
434 fp.write('# Node ID %s\n' % revlog.hex(p2))
435 fp.write('# Parent ' + revlog.hex(p1) + '\n')
435 fp.write('# Parent ' + revlog.hex(p1) + '\n')
436 if merge:
436 if merge:
437 fp.write('# Parent ' + revlog.hex(p2) + '\n')
437 fp.write('# Parent ' + revlog.hex(p2) + '\n')
438 fp.write(message.rstrip() + '\n')
438 fp.write(message.rstrip() + '\n')
439 fp.close()
439 fp.close()
440
440
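# Illustrative journal as written by log() above and re-read by readlog() /
# parselog() when resuming (values are placeholders):
#
#   # User alice <alice@example.org>
#   # Date 0 0
#   # Node ID <hex of the changeset being transplanted>
#   # Parent <hex of the working directory parent>
#   original commit message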
441 def readlog(self):
441 def readlog(self):
442 return self.parselog(self.opener('journal'))
442 return self.parselog(self.opener('journal'))
443
443
444 def unlog(self):
444 def unlog(self):
445 '''remove changelog journal'''
445 '''remove changelog journal'''
446 absdst = os.path.join(self.path, 'journal')
446 absdst = os.path.join(self.path, 'journal')
447 if os.path.exists(absdst):
447 if os.path.exists(absdst):
448 os.unlink(absdst)
448 os.unlink(absdst)
449
449
450 def transplantfilter(self, repo, source, root):
450 def transplantfilter(self, repo, source, root):
451 def matchfn(node):
451 def matchfn(node):
452 if self.applied(repo, node, root):
452 if self.applied(repo, node, root):
453 return False
453 return False
454 if source.changelog.parents(node)[1] != revlog.nullid:
454 if source.changelog.parents(node)[1] != revlog.nullid:
455 return False
455 return False
456 extra = source.changelog.read(node)[5]
456 extra = source.changelog.read(node)[5]
457 cnode = extra.get('transplant_source')
457 cnode = extra.get('transplant_source')
458 if cnode and self.applied(repo, cnode, root):
458 if cnode and self.applied(repo, cnode, root):
459 return False
459 return False
460 return True
460 return True
461
461
462 return matchfn
462 return matchfn
463
463
464 def hasnode(repo, node):
464 def hasnode(repo, node):
465 try:
465 try:
466 return repo.changelog.rev(node) is not None
466 return repo.changelog.rev(node) is not None
467 except error.RevlogError:
467 except error.RevlogError:
468 return False
468 return False
469
469
470 def browserevs(ui, repo, nodes, opts):
470 def browserevs(ui, repo, nodes, opts):
471 '''interactively transplant changesets'''
471 '''interactively transplant changesets'''
472 displayer = cmdutil.show_changeset(ui, repo, opts)
472 displayer = cmdutil.show_changeset(ui, repo, opts)
473 transplants = []
473 transplants = []
474 merges = []
474 merges = []
475 prompt = _('apply changeset? [ynmpcq?]:'
475 prompt = _('apply changeset? [ynmpcq?]:'
476 '$$ &yes, transplant this changeset'
476 '$$ &yes, transplant this changeset'
477 '$$ &no, skip this changeset'
477 '$$ &no, skip this changeset'
478 '$$ &merge at this changeset'
478 '$$ &merge at this changeset'
479 '$$ show &patch'
479 '$$ show &patch'
480 '$$ &commit selected changesets'
480 '$$ &commit selected changesets'
481 '$$ &quit and cancel transplant'
481 '$$ &quit and cancel transplant'
482 '$$ &? (show this help)')
482 '$$ &? (show this help)')
483 for node in nodes:
483 for node in nodes:
484 displayer.show(repo[node])
484 displayer.show(repo[node])
485 action = None
485 action = None
486 while not action:
486 while not action:
487 action = 'ynmpcq?'[ui.promptchoice(prompt)]
487 action = 'ynmpcq?'[ui.promptchoice(prompt)]
488 if action == '?':
488 if action == '?':
489 for c, t in ui.extractchoices(prompt)[1]:
489 for c, t in ui.extractchoices(prompt)[1]:
490 ui.write('%s: %s\n' % (c, t))
490 ui.write('%s: %s\n' % (c, t))
491 action = None
491 action = None
492 elif action == 'p':
492 elif action == 'p':
493 parent = repo.changelog.parents(node)[0]
493 parent = repo.changelog.parents(node)[0]
494 for chunk in patch.diff(repo, parent, node):
494 for chunk in patch.diff(repo, parent, node):
495 ui.write(chunk)
495 ui.write(chunk)
496 action = None
496 action = None
497 if action == 'y':
497 if action == 'y':
498 transplants.append(node)
498 transplants.append(node)
499 elif action == 'm':
499 elif action == 'm':
500 merges.append(node)
500 merges.append(node)
501 elif action == 'c':
501 elif action == 'c':
502 break
502 break
503 elif action == 'q':
503 elif action == 'q':
504 transplants = ()
504 transplants = ()
505 merges = ()
505 merges = ()
506 break
506 break
507 displayer.close()
507 displayer.close()
508 return (transplants, merges)
508 return (transplants, merges)
509
509
510 @command('transplant',
510 @command('transplant',
511 [('s', 'source', '', _('transplant changesets from REPO'), _('REPO')),
511 [('s', 'source', '', _('transplant changesets from REPO'), _('REPO')),
512 ('b', 'branch', [], _('use this source changeset as head'), _('REV')),
512 ('b', 'branch', [], _('use this source changeset as head'), _('REV')),
513 ('a', 'all', None, _('pull all changesets up to the --branch revisions')),
513 ('a', 'all', None, _('pull all changesets up to the --branch revisions')),
514 ('p', 'prune', [], _('skip over REV'), _('REV')),
514 ('p', 'prune', [], _('skip over REV'), _('REV')),
515 ('m', 'merge', [], _('merge at REV'), _('REV')),
515 ('m', 'merge', [], _('merge at REV'), _('REV')),
516 ('', 'parent', '',
516 ('', 'parent', '',
517 _('parent to choose when transplanting merge'), _('REV')),
517 _('parent to choose when transplanting merge'), _('REV')),
518 ('e', 'edit', False, _('invoke editor on commit messages')),
518 ('e', 'edit', False, _('invoke editor on commit messages')),
519 ('', 'log', None, _('append transplant info to log message')),
519 ('', 'log', None, _('append transplant info to log message')),
520 ('c', 'continue', None, _('continue last transplant session '
520 ('c', 'continue', None, _('continue last transplant session '
521 'after fixing conflicts')),
521 'after fixing conflicts')),
522 ('', 'filter', '',
522 ('', 'filter', '',
523 _('filter changesets through command'), _('CMD'))],
523 _('filter changesets through command'), _('CMD'))],
524 _('hg transplant [-s REPO] [-b BRANCH [-a]] [-p REV] '
524 _('hg transplant [-s REPO] [-b BRANCH [-a]] [-p REV] '
525 '[-m REV] [REV]...'))
525 '[-m REV] [REV]...'))
526 def transplant(ui, repo, *revs, **opts):
526 def transplant(ui, repo, *revs, **opts):
527 '''transplant changesets from another branch
527 '''transplant changesets from another branch
528
528
529 Selected changesets will be applied on top of the current working
529 Selected changesets will be applied on top of the current working
530 directory with the log of the original changeset. The changesets
530 directory with the log of the original changeset. The changesets
531 are copied and will thus appear twice in the history with different
531 are copied and will thus appear twice in the history with different
532 identities.
532 identities.
533
533
534 Consider using the graft command if everything is inside the same
534 Consider using the graft command if everything is inside the same
535 repository - it will use merges and will usually give a better result.
535 repository - it will use merges and will usually give a better result.
536 Use the rebase extension if the changesets are unpublished and you want
536 Use the rebase extension if the changesets are unpublished and you want
537 to move them instead of copying them.
537 to move them instead of copying them.
538
538
539 If --log is specified, log messages will have a comment appended
539 If --log is specified, log messages will have a comment appended
540 of the form::
540 of the form::
541
541
542 (transplanted from CHANGESETHASH)
542 (transplanted from CHANGESETHASH)
543
543
544 You can rewrite the changelog message with the --filter option.
544 You can rewrite the changelog message with the --filter option.
545 Its argument will be invoked with the current changelog message as
545 Its argument will be invoked with the current changelog message as
546 $1 and the patch as $2.
546 $1 and the patch as $2.
547
547
548 --source/-s specifies another repository to use for selecting changesets,
548 --source/-s specifies another repository to use for selecting changesets,
549 just as if it temporarily had been pulled.
549 just as if it temporarily had been pulled.
550 If --branch/-b is specified, these revisions will be used as
550 If --branch/-b is specified, these revisions will be used as
551 heads when deciding which changesets to transplant, just as if only
551 heads when deciding which changesets to transplant, just as if only
552 these revisions had been pulled.
552 these revisions had been pulled.
553 If --all/-a is specified, all the revisions up to the heads specified
553 If --all/-a is specified, all the revisions up to the heads specified
554 with --branch will be transplanted.
554 with --branch will be transplanted.
555
555
556 Example:
556 Example:
557
557
558 - transplant all changes up to REV on top of your current revision::
558 - transplant all changes up to REV on top of your current revision::
559
559
560 hg transplant --branch REV --all
560 hg transplant --branch REV --all
561
561
562 You can optionally mark selected transplanted changesets as merge
562 You can optionally mark selected transplanted changesets as merge
563 changesets. You will not be prompted to transplant any ancestors
563 changesets. You will not be prompted to transplant any ancestors
564 of a merged transplant, and you can merge descendants of them
564 of a merged transplant, and you can merge descendants of them
565 normally instead of transplanting them.
565 normally instead of transplanting them.
566
566
567 Merge changesets may be transplanted directly by specifying the
567 Merge changesets may be transplanted directly by specifying the
568 proper parent changeset by calling :hg:`transplant --parent`.
568 proper parent changeset by calling :hg:`transplant --parent`.
569
569
570 If no merges or revisions are provided, :hg:`transplant` will
570 If no merges or revisions are provided, :hg:`transplant` will
571 start an interactive changeset browser.
571 start an interactive changeset browser.
572
572
573 If a changeset application fails, you can fix the merge by hand
573 If a changeset application fails, you can fix the merge by hand
574 and then resume where you left off by calling :hg:`transplant
574 and then resume where you left off by calling :hg:`transplant
575 --continue/-c`.
575 --continue/-c`.
576 '''
576 '''
577 with repo.wlock():
577 with repo.wlock():
578 return _dotransplant(ui, repo, *revs, **opts)
578 return _dotransplant(ui, repo, *revs, **opts)
579
579
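# Illustrative --filter usage (script name and contents hypothetical): the
# filter command described in the docstring above receives the message/header
# file as $1 and the patch as $2, with HGUSER and HGREVISION set in the
# environment, e.g.
#
#   hg transplant -s ../other-repo --filter ~/bin/tag-msg.sh tip
#
# where tag-msg.sh could append a marker to the message with
#   echo "(cherry-picked for stable)" >> "$1"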
580 def _dotransplant(ui, repo, *revs, **opts):
580 def _dotransplant(ui, repo, *revs, **opts):
581 def incwalk(repo, csets, match=util.always):
581 def incwalk(repo, csets, match=util.always):
582 for node in csets:
582 for node in csets:
583 if match(node):
583 if match(node):
584 yield node
584 yield node
585
585
586 def transplantwalk(repo, dest, heads, match=util.always):
586 def transplantwalk(repo, dest, heads, match=util.always):
587 '''Yield all nodes that are ancestors of a head but not ancestors
587 '''Yield all nodes that are ancestors of a head but not ancestors
588 of dest.
588 of dest.
589 If no heads are specified, the heads of repo will be used.'''
589 If no heads are specified, the heads of repo will be used.'''
590 if not heads:
590 if not heads:
591 heads = repo.heads()
591 heads = repo.heads()
592 ancestors = []
592 ancestors = []
593 ctx = repo[dest]
593 ctx = repo[dest]
594 for head in heads:
594 for head in heads:
595 ancestors.append(ctx.ancestor(repo[head]).node())
595 ancestors.append(ctx.ancestor(repo[head]).node())
596 for node in repo.changelog.nodesbetween(ancestors, heads)[0]:
596 for node in repo.changelog.nodesbetween(ancestors, heads)[0]:
597 if match(node):
597 if match(node):
598 yield node
598 yield node
599
599
600 def checkopts(opts, revs):
600 def checkopts(opts, revs):
601 if opts.get('continue'):
601 if opts.get('continue'):
602 if opts.get('branch') or opts.get('all') or opts.get('merge'):
602 if opts.get('branch') or opts.get('all') or opts.get('merge'):
603 raise error.Abort(_('--continue is incompatible with '
603 raise error.Abort(_('--continue is incompatible with '
604 '--branch, --all and --merge'))
604 '--branch, --all and --merge'))
605 return
605 return
606 if not (opts.get('source') or revs or
606 if not (opts.get('source') or revs or
607 opts.get('merge') or opts.get('branch')):
607 opts.get('merge') or opts.get('branch')):
608 raise error.Abort(_('no source URL, branch revision, or revision '
608 raise error.Abort(_('no source URL, branch revision, or revision '
609 'list provided'))
609 'list provided'))
610 if opts.get('all'):
610 if opts.get('all'):
611 if not opts.get('branch'):
611 if not opts.get('branch'):
612 raise error.Abort(_('--all requires a branch revision'))
612 raise error.Abort(_('--all requires a branch revision'))
613 if revs:
613 if revs:
614 raise error.Abort(_('--all is incompatible with a '
614 raise error.Abort(_('--all is incompatible with a '
615 'revision list'))
615 'revision list'))
616
616
617 checkopts(opts, revs)
617 checkopts(opts, revs)
618
618
619 if not opts.get('log'):
619 if not opts.get('log'):
620 # deprecated config: transplant.log
620 # deprecated config: transplant.log
621 opts['log'] = ui.config('transplant', 'log')
621 opts['log'] = ui.config('transplant', 'log')
622 if not opts.get('filter'):
622 if not opts.get('filter'):
623 # deprecated config: transplant.filter
623 # deprecated config: transplant.filter
624 opts['filter'] = ui.config('transplant', 'filter')
624 opts['filter'] = ui.config('transplant', 'filter')
625
625
626 tp = transplanter(ui, repo, opts)
626 tp = transplanter(ui, repo, opts)
627
627
628 p1, p2 = repo.dirstate.parents()
628 p1, p2 = repo.dirstate.parents()
629 if len(repo) > 0 and p1 == revlog.nullid:
629 if len(repo) > 0 and p1 == revlog.nullid:
630 raise error.Abort(_('no revision checked out'))
630 raise error.Abort(_('no revision checked out'))
631 if opts.get('continue'):
631 if opts.get('continue'):
632 if not tp.canresume():
632 if not tp.canresume():
633 raise error.Abort(_('no transplant to continue'))
633 raise error.Abort(_('no transplant to continue'))
634 else:
634 else:
635 cmdutil.checkunfinished(repo)
635 cmdutil.checkunfinished(repo)
636 if p2 != revlog.nullid:
636 if p2 != revlog.nullid:
637 raise error.Abort(_('outstanding uncommitted merges'))
637 raise error.Abort(_('outstanding uncommitted merges'))
638 m, a, r, d = repo.status()[:4]
638 m, a, r, d = repo.status()[:4]
639 if m or a or r or d:
639 if m or a or r or d:
640 raise error.Abort(_('outstanding local changes'))
640 raise error.Abort(_('outstanding local changes'))
641
641
642 sourcerepo = opts.get('source')
642 sourcerepo = opts.get('source')
643 if sourcerepo:
643 if sourcerepo:
644 peer = hg.peer(repo, opts, ui.expandpath(sourcerepo))
644 peer = hg.peer(repo, opts, ui.expandpath(sourcerepo))
645 heads = map(peer.lookup, opts.get('branch', ()))
645 heads = map(peer.lookup, opts.get('branch', ()))
646 target = set(heads)
646 target = set(heads)
647 for r in revs:
647 for r in revs:
648 try:
648 try:
649 target.add(peer.lookup(r))
649 target.add(peer.lookup(r))
650 except error.RepoError:
650 except error.RepoError:
651 pass
651 pass
652 source, csets, cleanupfn = bundlerepo.getremotechanges(ui, repo, peer,
652 source, csets, cleanupfn = bundlerepo.getremotechanges(ui, repo, peer,
653 onlyheads=sorted(target), force=True)
653 onlyheads=sorted(target), force=True)
654 else:
654 else:
655 source = repo
655 source = repo
656 heads = map(source.lookup, opts.get('branch', ()))
656 heads = map(source.lookup, opts.get('branch', ()))
657 cleanupfn = None
657 cleanupfn = None
658
658
659 try:
659 try:
660 if opts.get('continue'):
660 if opts.get('continue'):
661 tp.resume(repo, source, opts)
661 tp.resume(repo, source, opts)
662 return
662 return
663
663
664 tf = tp.transplantfilter(repo, source, p1)
664 tf = tp.transplantfilter(repo, source, p1)
665 if opts.get('prune'):
665 if opts.get('prune'):
666 prune = set(source.lookup(r)
666 prune = set(source.lookup(r)
667 for r in scmutil.revrange(source, opts.get('prune')))
667 for r in scmutil.revrange(source, opts.get('prune')))
668 matchfn = lambda x: tf(x) and x not in prune
668 matchfn = lambda x: tf(x) and x not in prune
669 else:
669 else:
670 matchfn = tf
670 matchfn = tf
671 merges = map(source.lookup, opts.get('merge', ()))
671 merges = map(source.lookup, opts.get('merge', ()))
672 revmap = {}
672 revmap = {}
673 if revs:
673 if revs:
674 for r in scmutil.revrange(source, revs):
674 for r in scmutil.revrange(source, revs):
675 revmap[int(r)] = source.lookup(r)
675 revmap[int(r)] = source.lookup(r)
676 elif opts.get('all') or not merges:
676 elif opts.get('all') or not merges:
677 if source != repo:
677 if source != repo:
678 alltransplants = incwalk(source, csets, match=matchfn)
678 alltransplants = incwalk(source, csets, match=matchfn)
679 else:
679 else:
680 alltransplants = transplantwalk(source, p1, heads,
680 alltransplants = transplantwalk(source, p1, heads,
681 match=matchfn)
681 match=matchfn)
682 if opts.get('all'):
682 if opts.get('all'):
683 revs = alltransplants
683 revs = alltransplants
684 else:
684 else:
685 revs, newmerges = browserevs(ui, source, alltransplants, opts)
685 revs, newmerges = browserevs(ui, source, alltransplants, opts)
686 merges.extend(newmerges)
686 merges.extend(newmerges)
687 for r in revs:
687 for r in revs:
688 revmap[source.changelog.rev(r)] = r
688 revmap[source.changelog.rev(r)] = r
689 for r in merges:
689 for r in merges:
690 revmap[source.changelog.rev(r)] = r
690 revmap[source.changelog.rev(r)] = r
691
691
692 tp.apply(repo, source, revmap, merges, opts)
692 tp.apply(repo, source, revmap, merges, opts)
693 finally:
693 finally:
694 if cleanupfn:
694 if cleanupfn:
695 cleanupfn()
695 cleanupfn()
696
696
697 revsetpredicate = revset.extpredicate()
697 revsetpredicate = registrar.revsetpredicate()
698
698
699 @revsetpredicate('transplanted([set])')
699 @revsetpredicate('transplanted([set])')
700 def revsettransplanted(repo, subset, x):
700 def revsettransplanted(repo, subset, x):
701 """Transplanted changesets in set, or all transplanted changesets.
701 """Transplanted changesets in set, or all transplanted changesets.
702 """
702 """
703 if x:
703 if x:
704 s = revset.getset(repo, subset, x)
704 s = revset.getset(repo, subset, x)
705 else:
705 else:
706 s = subset
706 s = subset
707 return revset.baseset([r for r in s if
707 return revset.baseset([r for r in s if
708 repo[r].extra().get('transplant_source')])
708 repo[r].extra().get('transplant_source')])
709
709
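# Illustrative queries enabled by the revset predicate above and the template
# keyword below (with the extension enabled; repository contents hypothetical):
#
#   hg log -r "transplanted()"                  # every transplanted changeset
#   hg log -r "transplanted(branch(default))"   # restrict to a subset
#   hg log -T "{rev} {transplanted}\n"          # show the source node, if any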
710 def kwtransplanted(repo, ctx, **args):
710 def kwtransplanted(repo, ctx, **args):
711 """:transplanted: String. The node identifier of the transplanted
711 """:transplanted: String. The node identifier of the transplanted
712 changeset if any."""
712 changeset if any."""
713 n = ctx.extra().get('transplant_source')
713 n = ctx.extra().get('transplant_source')
714 return n and revlog.hex(n) or ''
714 return n and revlog.hex(n) or ''
715
715
716 def extsetup(ui):
716 def extsetup(ui):
717 revsetpredicate.setup()
718 templatekw.keywords['transplanted'] = kwtransplanted
717 templatekw.keywords['transplanted'] = kwtransplanted
719 cmdutil.unfinishedstates.append(
718 cmdutil.unfinishedstates.append(
720 ['transplant/journal', True, False, _('transplant in progress'),
719 ['transplant/journal', True, False, _('transplant in progress'),
721 _("use 'hg transplant --continue' or 'hg update' to abort")])
720 _("use 'hg transplant --continue' or 'hg update' to abort")])
722
721
723 # tell hggettext to extract docstrings from these functions:
722 # tell hggettext to extract docstrings from these functions:
724 i18nfunctions = [revsettransplanted, kwtransplanted]
723 i18nfunctions = [revsettransplanted, kwtransplanted]
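The two-line change above, together with the dropped revsetpredicate.setup()
call in extsetup(), moves the extension from the old revset.extpredicate()
pattern to the registrar-based one: the extension instantiates
registrar.revsetpredicate(), uses it as a decorator on the predicate
function, and no longer needs an explicit setup() step. A minimal sketch of
the same pattern for a hypothetical extension (all names below are invented,
shown only to illustrate the registration flow visible in this file)::

    from mercurial import registrar, revset

    revsetpredicate = registrar.revsetpredicate()

    @revsetpredicate('evenrevs()')
    def evenrevs(repo, subset, x):
        """Changesets in the subset with an even revision number."""
        return revset.baseset([r for r in subset if r % 2 == 0])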
@@ -1,1048 +1,1050 b''
1 # dispatch.py - command dispatching for mercurial
1 # dispatch.py - command dispatching for mercurial
2 #
2 #
3 # Copyright 2005-2007 Matt Mackall <mpm@selenic.com>
3 # Copyright 2005-2007 Matt Mackall <mpm@selenic.com>
4 #
4 #
5 # This software may be used and distributed according to the terms of the
5 # This software may be used and distributed according to the terms of the
6 # GNU General Public License version 2 or any later version.
6 # GNU General Public License version 2 or any later version.
7
7
8 from __future__ import absolute_import, print_function
8 from __future__ import absolute_import, print_function
9
9
10 import atexit
10 import atexit
11 import difflib
11 import difflib
12 import errno
12 import errno
13 import os
13 import os
14 import pdb
14 import pdb
15 import re
15 import re
16 import shlex
16 import shlex
17 import signal
17 import signal
18 import socket
18 import socket
19 import sys
19 import sys
20 import time
20 import time
21 import traceback
21 import traceback
22
22
23
23
24 from .i18n import _
24 from .i18n import _
25
25
26 from . import (
26 from . import (
27 cmdutil,
27 cmdutil,
28 commands,
28 commands,
29 demandimport,
29 demandimport,
30 encoding,
30 encoding,
31 error,
31 error,
32 extensions,
32 extensions,
33 fancyopts,
33 fancyopts,
34 hg,
34 hg,
35 hook,
35 hook,
36 revset,
36 ui as uimod,
37 ui as uimod,
37 util,
38 util,
38 )
39 )
39
40
40 class request(object):
41 class request(object):
41 def __init__(self, args, ui=None, repo=None, fin=None, fout=None,
42 def __init__(self, args, ui=None, repo=None, fin=None, fout=None,
42 ferr=None):
43 ferr=None):
43 self.args = args
44 self.args = args
44 self.ui = ui
45 self.ui = ui
45 self.repo = repo
46 self.repo = repo
46
47
47 # input/output/error streams
48 # input/output/error streams
48 self.fin = fin
49 self.fin = fin
49 self.fout = fout
50 self.fout = fout
50 self.ferr = ferr
51 self.ferr = ferr
51
52
52 def run():
53 def run():
53 "run the command in sys.argv"
54 "run the command in sys.argv"
54 sys.exit((dispatch(request(sys.argv[1:])) or 0) & 255)
55 sys.exit((dispatch(request(sys.argv[1:])) or 0) & 255)
55
56
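# Note on the expression above: "dispatch(...) or 0" turns a None return into
# zero, and "& 255" keeps the result within the single byte a process exit
# status can carry, so a hypothetical return value of 256 would come out as 0
# instead of overflowing.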
56 def _getsimilar(symbols, value):
57 def _getsimilar(symbols, value):
57 sim = lambda x: difflib.SequenceMatcher(None, value, x).ratio()
58 sim = lambda x: difflib.SequenceMatcher(None, value, x).ratio()
58 # The cutoff for similarity here is pretty arbitrary. It should
59 # The cutoff for similarity here is pretty arbitrary. It should
59 # probably be investigated and tweaked.
60 # probably be investigated and tweaked.
60 return [s for s in symbols if sim(s) > 0.6]
61 return [s for s in symbols if sim(s) > 0.6]
61
62
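# Illustrative calibration of the 0.6 cutoff above (inputs hypothetical):
# difflib.SequenceMatcher(None, 'comit', 'commit').ratio() is roughly 0.91,
# so 'commit' would be offered for the typo 'comit', while unrelated names
# fall well below the threshold and are not suggested.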
62 def _reportsimilar(write, similar):
63 def _reportsimilar(write, similar):
63 if len(similar) == 1:
64 if len(similar) == 1:
64 write(_("(did you mean %s?)\n") % similar[0])
65 write(_("(did you mean %s?)\n") % similar[0])
65 elif similar:
66 elif similar:
66 ss = ", ".join(sorted(similar))
67 ss = ", ".join(sorted(similar))
67 write(_("(did you mean one of %s?)\n") % ss)
68 write(_("(did you mean one of %s?)\n") % ss)
68
69
69 def _formatparse(write, inst):
70 def _formatparse(write, inst):
70 similar = []
71 similar = []
71 if isinstance(inst, error.UnknownIdentifier):
72 if isinstance(inst, error.UnknownIdentifier):
72 # make sure to check fileset first, as revset can invoke fileset
73 # make sure to check fileset first, as revset can invoke fileset
73 similar = _getsimilar(inst.symbols, inst.function)
74 similar = _getsimilar(inst.symbols, inst.function)
74 if len(inst.args) > 1:
75 if len(inst.args) > 1:
75 write(_("hg: parse error at %s: %s\n") %
76 write(_("hg: parse error at %s: %s\n") %
76 (inst.args[1], inst.args[0]))
77 (inst.args[1], inst.args[0]))
77 if (inst.args[0][0] == ' '):
78 if (inst.args[0][0] == ' '):
78 write(_("unexpected leading whitespace\n"))
79 write(_("unexpected leading whitespace\n"))
79 else:
80 else:
80 write(_("hg: parse error: %s\n") % inst.args[0])
81 write(_("hg: parse error: %s\n") % inst.args[0])
81 _reportsimilar(write, similar)
82 _reportsimilar(write, similar)
82
83
83 def dispatch(req):
84 def dispatch(req):
84 "run the command specified in req.args"
85 "run the command specified in req.args"
85 if req.ferr:
86 if req.ferr:
86 ferr = req.ferr
87 ferr = req.ferr
87 elif req.ui:
88 elif req.ui:
88 ferr = req.ui.ferr
89 ferr = req.ui.ferr
89 else:
90 else:
90 ferr = sys.stderr
91 ferr = sys.stderr
91
92
92 try:
93 try:
93 if not req.ui:
94 if not req.ui:
94 req.ui = uimod.ui()
95 req.ui = uimod.ui()
95 if '--traceback' in req.args:
96 if '--traceback' in req.args:
96 req.ui.setconfig('ui', 'traceback', 'on', '--traceback')
97 req.ui.setconfig('ui', 'traceback', 'on', '--traceback')
97
98
98 # set ui streams from the request
99 # set ui streams from the request
99 if req.fin:
100 if req.fin:
100 req.ui.fin = req.fin
101 req.ui.fin = req.fin
101 if req.fout:
102 if req.fout:
102 req.ui.fout = req.fout
103 req.ui.fout = req.fout
103 if req.ferr:
104 if req.ferr:
104 req.ui.ferr = req.ferr
105 req.ui.ferr = req.ferr
105 except error.Abort as inst:
106 except error.Abort as inst:
106 ferr.write(_("abort: %s\n") % inst)
107 ferr.write(_("abort: %s\n") % inst)
107 if inst.hint:
108 if inst.hint:
108 ferr.write(_("(%s)\n") % inst.hint)
109 ferr.write(_("(%s)\n") % inst.hint)
109 return -1
110 return -1
110 except error.ParseError as inst:
111 except error.ParseError as inst:
111 _formatparse(ferr.write, inst)
112 _formatparse(ferr.write, inst)
112 if inst.hint:
113 if inst.hint:
113 ferr.write(_("(%s)\n") % inst.hint)
114 ferr.write(_("(%s)\n") % inst.hint)
114 return -1
115 return -1
115
116
116 msg = ' '.join(' ' in a and repr(a) or a for a in req.args)
117 msg = ' '.join(' ' in a and repr(a) or a for a in req.args)
117 starttime = time.time()
118 starttime = time.time()
118 ret = None
119 ret = None
119 try:
120 try:
120 ret = _runcatch(req)
121 ret = _runcatch(req)
121 return ret
122 return ret
122 finally:
123 finally:
123 duration = time.time() - starttime
124 duration = time.time() - starttime
124 req.ui.log("commandfinish", "%s exited %s after %0.2f seconds\n",
125 req.ui.log("commandfinish", "%s exited %s after %0.2f seconds\n",
125 msg, ret or 0, duration)
126 msg, ret or 0, duration)
126
127
127 def _runcatch(req):
128 def _runcatch(req):
128 def catchterm(*args):
129 def catchterm(*args):
129 raise error.SignalInterrupt
130 raise error.SignalInterrupt
130
131
131 ui = req.ui
132 ui = req.ui
132 try:
133 try:
133 for name in 'SIGBREAK', 'SIGHUP', 'SIGTERM':
134 for name in 'SIGBREAK', 'SIGHUP', 'SIGTERM':
134 num = getattr(signal, name, None)
135 num = getattr(signal, name, None)
135 if num:
136 if num:
136 signal.signal(num, catchterm)
137 signal.signal(num, catchterm)
137 except ValueError:
138 except ValueError:
138 pass # happens if called in a thread
139 pass # happens if called in a thread
139
140
140 try:
141 try:
141 try:
142 try:
142 debugger = 'pdb'
143 debugger = 'pdb'
143 debugtrace = {
144 debugtrace = {
144 'pdb' : pdb.set_trace
145 'pdb' : pdb.set_trace
145 }
146 }
146 debugmortem = {
147 debugmortem = {
147 'pdb' : pdb.post_mortem
148 'pdb' : pdb.post_mortem
148 }
149 }
149
150
150 # read --config before doing anything else
151 # read --config before doing anything else
151 # (e.g. to change trust settings for reading .hg/hgrc)
152 # (e.g. to change trust settings for reading .hg/hgrc)
152 cfgs = _parseconfig(req.ui, _earlygetopt(['--config'], req.args))
153 cfgs = _parseconfig(req.ui, _earlygetopt(['--config'], req.args))
153
154
154 if req.repo:
155 if req.repo:
155 # copy configs that were passed on the cmdline (--config) to
156 # copy configs that were passed on the cmdline (--config) to
156 # the repo ui
157 # the repo ui
157 for sec, name, val in cfgs:
158 for sec, name, val in cfgs:
158 req.repo.ui.setconfig(sec, name, val, source='--config')
159 req.repo.ui.setconfig(sec, name, val, source='--config')
159
160
160 # developer config: ui.debugger
161 # developer config: ui.debugger
161 debugger = ui.config("ui", "debugger")
162 debugger = ui.config("ui", "debugger")
162 debugmod = pdb
163 debugmod = pdb
163 if not debugger or ui.plain():
164 if not debugger or ui.plain():
164 # if we are in HGPLAIN mode, then disable custom debugging
165 # if we are in HGPLAIN mode, then disable custom debugging
165 debugger = 'pdb'
166 debugger = 'pdb'
166 elif '--debugger' in req.args:
167 elif '--debugger' in req.args:
167 # This import can be slow for fancy debuggers, so only
168 # This import can be slow for fancy debuggers, so only
168 # do it when absolutely necessary, i.e. when actual
169 # do it when absolutely necessary, i.e. when actual
169 # debugging has been requested
170 # debugging has been requested
170 with demandimport.deactivated():
171 with demandimport.deactivated():
171 try:
172 try:
172 debugmod = __import__(debugger)
173 debugmod = __import__(debugger)
173 except ImportError:
174 except ImportError:
174 pass # Leave debugmod = pdb
175 pass # Leave debugmod = pdb
175
176
176 debugtrace[debugger] = debugmod.set_trace
177 debugtrace[debugger] = debugmod.set_trace
177 debugmortem[debugger] = debugmod.post_mortem
178 debugmortem[debugger] = debugmod.post_mortem
178
179
179 # enter the debugger before command execution
180 # enter the debugger before command execution
180 if '--debugger' in req.args:
181 if '--debugger' in req.args:
181 ui.warn(_("entering debugger - "
182 ui.warn(_("entering debugger - "
182 "type c to continue starting hg or h for help\n"))
183 "type c to continue starting hg or h for help\n"))
183
184
184 if (debugger != 'pdb' and
185 if (debugger != 'pdb' and
185 debugtrace[debugger] == debugtrace['pdb']):
186 debugtrace[debugger] == debugtrace['pdb']):
186 ui.warn(_("%s debugger specified "
187 ui.warn(_("%s debugger specified "
187 "but its module was not found\n") % debugger)
188 "but its module was not found\n") % debugger)
188 with demandimport.deactivated():
189 with demandimport.deactivated():
189 debugtrace[debugger]()
190 debugtrace[debugger]()
190 try:
191 try:
191 return _dispatch(req)
192 return _dispatch(req)
192 finally:
193 finally:
193 ui.flush()
194 ui.flush()
194 except: # re-raises
195 except: # re-raises
195 # enter the debugger when we hit an exception
196 # enter the debugger when we hit an exception
196 if '--debugger' in req.args:
197 if '--debugger' in req.args:
197 traceback.print_exc()
198 traceback.print_exc()
198 debugmortem[debugger](sys.exc_info()[2])
199 debugmortem[debugger](sys.exc_info()[2])
199 ui.traceback()
200 ui.traceback()
200 raise
201 raise
201
202
202 # Global exception handling, alphabetically
203 # Global exception handling, alphabetically
203 # Mercurial-specific first, followed by built-in and library exceptions
204 # Mercurial-specific first, followed by built-in and library exceptions
204 except error.AmbiguousCommand as inst:
205 except error.AmbiguousCommand as inst:
205 ui.warn(_("hg: command '%s' is ambiguous:\n %s\n") %
206 ui.warn(_("hg: command '%s' is ambiguous:\n %s\n") %
206 (inst.args[0], " ".join(inst.args[1])))
207 (inst.args[0], " ".join(inst.args[1])))
207 except error.ParseError as inst:
208 except error.ParseError as inst:
208 _formatparse(ui.warn, inst)
209 _formatparse(ui.warn, inst)
209 if inst.hint:
210 if inst.hint:
210 ui.warn(_("(%s)\n") % inst.hint)
211 ui.warn(_("(%s)\n") % inst.hint)
211 return -1
212 return -1
212 except error.LockHeld as inst:
213 except error.LockHeld as inst:
213 if inst.errno == errno.ETIMEDOUT:
214 if inst.errno == errno.ETIMEDOUT:
214 reason = _('timed out waiting for lock held by %s') % inst.locker
215 reason = _('timed out waiting for lock held by %s') % inst.locker
215 else:
216 else:
216 reason = _('lock held by %s') % inst.locker
217 reason = _('lock held by %s') % inst.locker
217 ui.warn(_("abort: %s: %s\n") % (inst.desc or inst.filename, reason))
218 ui.warn(_("abort: %s: %s\n") % (inst.desc or inst.filename, reason))
218 except error.LockUnavailable as inst:
219 except error.LockUnavailable as inst:
219 ui.warn(_("abort: could not lock %s: %s\n") %
220 ui.warn(_("abort: could not lock %s: %s\n") %
220 (inst.desc or inst.filename, inst.strerror))
221 (inst.desc or inst.filename, inst.strerror))
221 except error.CommandError as inst:
222 except error.CommandError as inst:
222 if inst.args[0]:
223 if inst.args[0]:
223 ui.warn(_("hg %s: %s\n") % (inst.args[0], inst.args[1]))
224 ui.warn(_("hg %s: %s\n") % (inst.args[0], inst.args[1]))
224 commands.help_(ui, inst.args[0], full=False, command=True)
225 commands.help_(ui, inst.args[0], full=False, command=True)
225 else:
226 else:
226 ui.warn(_("hg: %s\n") % inst.args[1])
227 ui.warn(_("hg: %s\n") % inst.args[1])
227 commands.help_(ui, 'shortlist')
228 commands.help_(ui, 'shortlist')
228 except error.OutOfBandError as inst:
229 except error.OutOfBandError as inst:
229 if inst.args:
230 if inst.args:
230 msg = _("abort: remote error:\n")
231 msg = _("abort: remote error:\n")
231 else:
232 else:
232 msg = _("abort: remote error\n")
233 msg = _("abort: remote error\n")
233 ui.warn(msg)
234 ui.warn(msg)
234 if inst.args:
235 if inst.args:
235 ui.warn(''.join(inst.args))
236 ui.warn(''.join(inst.args))
236 if inst.hint:
237 if inst.hint:
237 ui.warn('(%s)\n' % inst.hint)
238 ui.warn('(%s)\n' % inst.hint)
238 except error.RepoError as inst:
239 except error.RepoError as inst:
239 ui.warn(_("abort: %s!\n") % inst)
240 ui.warn(_("abort: %s!\n") % inst)
240 if inst.hint:
241 if inst.hint:
241 ui.warn(_("(%s)\n") % inst.hint)
242 ui.warn(_("(%s)\n") % inst.hint)
242 except error.ResponseError as inst:
243 except error.ResponseError as inst:
243 ui.warn(_("abort: %s") % inst.args[0])
244 ui.warn(_("abort: %s") % inst.args[0])
244 if not isinstance(inst.args[1], basestring):
245 if not isinstance(inst.args[1], basestring):
245 ui.warn(" %r\n" % (inst.args[1],))
246 ui.warn(" %r\n" % (inst.args[1],))
246 elif not inst.args[1]:
247 elif not inst.args[1]:
247 ui.warn(_(" empty string\n"))
248 ui.warn(_(" empty string\n"))
248 else:
249 else:
249 ui.warn("\n%r\n" % util.ellipsis(inst.args[1]))
250 ui.warn("\n%r\n" % util.ellipsis(inst.args[1]))
250 except error.CensoredNodeError as inst:
251 except error.CensoredNodeError as inst:
251 ui.warn(_("abort: file censored %s!\n") % inst)
252 ui.warn(_("abort: file censored %s!\n") % inst)
252 except error.RevlogError as inst:
253 except error.RevlogError as inst:
253 ui.warn(_("abort: %s!\n") % inst)
254 ui.warn(_("abort: %s!\n") % inst)
254 except error.SignalInterrupt:
255 except error.SignalInterrupt:
255 ui.warn(_("killed!\n"))
256 ui.warn(_("killed!\n"))
256 except error.UnknownCommand as inst:
257 except error.UnknownCommand as inst:
257 ui.warn(_("hg: unknown command '%s'\n") % inst.args[0])
258 ui.warn(_("hg: unknown command '%s'\n") % inst.args[0])
258 try:
259 try:
259 # check if the command is in a disabled extension
260 # check if the command is in a disabled extension
260 # (but don't check for extensions themselves)
261 # (but don't check for extensions themselves)
261 commands.help_(ui, inst.args[0], unknowncmd=True)
262 commands.help_(ui, inst.args[0], unknowncmd=True)
262 except (error.UnknownCommand, error.Abort):
263 except (error.UnknownCommand, error.Abort):
263 suggested = False
264 suggested = False
264 if len(inst.args) == 2:
265 if len(inst.args) == 2:
265 sim = _getsimilar(inst.args[1], inst.args[0])
266 sim = _getsimilar(inst.args[1], inst.args[0])
266 if sim:
267 if sim:
267 _reportsimilar(ui.warn, sim)
268 _reportsimilar(ui.warn, sim)
268 suggested = True
269 suggested = True
269 if not suggested:
270 if not suggested:
270 commands.help_(ui, 'shortlist')
271 commands.help_(ui, 'shortlist')
271 except error.InterventionRequired as inst:
272 except error.InterventionRequired as inst:
272 ui.warn("%s\n" % inst)
273 ui.warn("%s\n" % inst)
273 if inst.hint:
274 if inst.hint:
274 ui.warn(_("(%s)\n") % inst.hint)
275 ui.warn(_("(%s)\n") % inst.hint)
275 return 1
276 return 1
276 except error.Abort as inst:
277 except error.Abort as inst:
277 ui.warn(_("abort: %s\n") % inst)
278 ui.warn(_("abort: %s\n") % inst)
278 if inst.hint:
279 if inst.hint:
279 ui.warn(_("(%s)\n") % inst.hint)
280 ui.warn(_("(%s)\n") % inst.hint)
280 except ImportError as inst:
281 except ImportError as inst:
281 ui.warn(_("abort: %s!\n") % inst)
282 ui.warn(_("abort: %s!\n") % inst)
282 m = str(inst).split()[-1]
283 m = str(inst).split()[-1]
283 if m in "mpatch bdiff".split():
284 if m in "mpatch bdiff".split():
284 ui.warn(_("(did you forget to compile extensions?)\n"))
285 ui.warn(_("(did you forget to compile extensions?)\n"))
285 elif m in "zlib".split():
286 elif m in "zlib".split():
286 ui.warn(_("(is your Python install correct?)\n"))
287 ui.warn(_("(is your Python install correct?)\n"))
287 except IOError as inst:
288 except IOError as inst:
288 if util.safehasattr(inst, "code"):
289 if util.safehasattr(inst, "code"):
289 ui.warn(_("abort: %s\n") % inst)
290 ui.warn(_("abort: %s\n") % inst)
290 elif util.safehasattr(inst, "reason"):
291 elif util.safehasattr(inst, "reason"):
291 try: # usually it is in the form (errno, strerror)
292 try: # usually it is in the form (errno, strerror)
292 reason = inst.reason.args[1]
293 reason = inst.reason.args[1]
293 except (AttributeError, IndexError):
294 except (AttributeError, IndexError):
294 # it might be anything, for example a string
295 # it might be anything, for example a string
295 reason = inst.reason
296 reason = inst.reason
296 if isinstance(reason, unicode):
297 if isinstance(reason, unicode):
297 # SSLError of Python 2.7.9 contains a unicode
298 # SSLError of Python 2.7.9 contains a unicode
298 reason = reason.encode(encoding.encoding, 'replace')
299 reason = reason.encode(encoding.encoding, 'replace')
299 ui.warn(_("abort: error: %s\n") % reason)
300 ui.warn(_("abort: error: %s\n") % reason)
300 elif (util.safehasattr(inst, "args")
301 elif (util.safehasattr(inst, "args")
301 and inst.args and inst.args[0] == errno.EPIPE):
302 and inst.args and inst.args[0] == errno.EPIPE):
302 pass
303 pass
303 elif getattr(inst, "strerror", None):
304 elif getattr(inst, "strerror", None):
304 if getattr(inst, "filename", None):
305 if getattr(inst, "filename", None):
305 ui.warn(_("abort: %s: %s\n") % (inst.strerror, inst.filename))
306 ui.warn(_("abort: %s: %s\n") % (inst.strerror, inst.filename))
306 else:
307 else:
307 ui.warn(_("abort: %s\n") % inst.strerror)
308 ui.warn(_("abort: %s\n") % inst.strerror)
308 else:
309 else:
309 raise
310 raise
310 except OSError as inst:
311 except OSError as inst:
311 if getattr(inst, "filename", None) is not None:
312 if getattr(inst, "filename", None) is not None:
312 ui.warn(_("abort: %s: '%s'\n") % (inst.strerror, inst.filename))
313 ui.warn(_("abort: %s: '%s'\n") % (inst.strerror, inst.filename))
313 else:
314 else:
314 ui.warn(_("abort: %s\n") % inst.strerror)
315 ui.warn(_("abort: %s\n") % inst.strerror)
315 except KeyboardInterrupt:
316 except KeyboardInterrupt:
316 try:
317 try:
317 ui.warn(_("interrupted!\n"))
318 ui.warn(_("interrupted!\n"))
318 except IOError as inst:
319 except IOError as inst:
319 if inst.errno != errno.EPIPE:
320 if inst.errno != errno.EPIPE:
320 raise
321 raise
321 except MemoryError:
322 except MemoryError:
322 ui.warn(_("abort: out of memory\n"))
323 ui.warn(_("abort: out of memory\n"))
323 except SystemExit as inst:
324 except SystemExit as inst:
324 # Commands shouldn't sys.exit directly, but give a return code.
325 # Commands shouldn't sys.exit directly, but give a return code.
325 # Just in case catch this and pass exit code to caller.
326 # Just in case catch this and pass exit code to caller.
326 return inst.code
327 return inst.code
327 except socket.error as inst:
328 except socket.error as inst:
328 ui.warn(_("abort: %s\n") % inst.args[-1])
329 ui.warn(_("abort: %s\n") % inst.args[-1])
329 except: # re-raises
330 except: # re-raises
330 # For compatibility checking, we discard the portion of the hg
331 # For compatibility checking, we discard the portion of the hg
331 # version after the + on the assumption that if a "normal
332 # version after the + on the assumption that if a "normal
332 # user" is running a build with a + in it the packager
333 # user" is running a build with a + in it the packager
333 # probably built from fairly close to a tag and anyone with a
334 # probably built from fairly close to a tag and anyone with a
334 # 'make local' copy of hg (where the version number can be out
335 # 'make local' copy of hg (where the version number can be out
335 # of date) will be clueful enough to notice the implausible
336 # of date) will be clueful enough to notice the implausible
336 # version number and try updating.
337 # version number and try updating.
337 ct = util.versiontuple(n=2)
338 ct = util.versiontuple(n=2)
338 worst = None, ct, ''
339 worst = None, ct, ''
339 if ui.config('ui', 'supportcontact', None) is None:
340 if ui.config('ui', 'supportcontact', None) is None:
340 for name, mod in extensions.extensions():
341 for name, mod in extensions.extensions():
341 testedwith = getattr(mod, 'testedwith', '')
342 testedwith = getattr(mod, 'testedwith', '')
342 report = getattr(mod, 'buglink', _('the extension author.'))
343 report = getattr(mod, 'buglink', _('the extension author.'))
343 if not testedwith.strip():
344 if not testedwith.strip():
344 # We found an untested extension. It's likely the culprit.
345 # We found an untested extension. It's likely the culprit.
345 worst = name, 'unknown', report
346 worst = name, 'unknown', report
346 break
347 break
347
348
348 # Never blame on extensions bundled with Mercurial.
349 # Never blame on extensions bundled with Mercurial.
349 if testedwith == 'internal':
350 if testedwith == 'internal':
350 continue
351 continue
351
352
352 tested = [util.versiontuple(t, 2) for t in testedwith.split()]
353 tested = [util.versiontuple(t, 2) for t in testedwith.split()]
353 if ct in tested:
354 if ct in tested:
354 continue
355 continue
355
356
356 lower = [t for t in tested if t < ct]
357 lower = [t for t in tested if t < ct]
357 nearest = max(lower or tested)
358 nearest = max(lower or tested)
358 if worst[0] is None or nearest < worst[1]:
359 if worst[0] is None or nearest < worst[1]:
359 worst = name, nearest, report
360 worst = name, nearest, report
360 if worst[0] is not None:
361 if worst[0] is not None:
361 name, testedwith, report = worst
362 name, testedwith, report = worst
362 if not isinstance(testedwith, str):
363 if not isinstance(testedwith, str):
363 testedwith = '.'.join([str(c) for c in testedwith])
364 testedwith = '.'.join([str(c) for c in testedwith])
364 warning = (_('** Unknown exception encountered with '
365 warning = (_('** Unknown exception encountered with '
365 'possibly-broken third-party extension %s\n'
366 'possibly-broken third-party extension %s\n'
366 '** which supports versions %s of Mercurial.\n'
367 '** which supports versions %s of Mercurial.\n'
367 '** Please disable %s and try your action again.\n'
368 '** Please disable %s and try your action again.\n'
368 '** If that fixes the bug please report it to %s\n')
369 '** If that fixes the bug please report it to %s\n')
369 % (name, testedwith, name, report))
370 % (name, testedwith, name, report))
370 else:
371 else:
371 bugtracker = ui.config('ui', 'supportcontact', None)
372 bugtracker = ui.config('ui', 'supportcontact', None)
372 if bugtracker is None:
373 if bugtracker is None:
373 bugtracker = _("https://mercurial-scm.org/wiki/BugTracker")
374 bugtracker = _("https://mercurial-scm.org/wiki/BugTracker")
374 warning = (_("** unknown exception encountered, "
375 warning = (_("** unknown exception encountered, "
375 "please report by visiting\n** ") + bugtracker + '\n')
376 "please report by visiting\n** ") + bugtracker + '\n')
376 warning += ((_("** Python %s\n") % sys.version.replace('\n', '')) +
377 warning += ((_("** Python %s\n") % sys.version.replace('\n', '')) +
377 (_("** Mercurial Distributed SCM (version %s)\n") %
378 (_("** Mercurial Distributed SCM (version %s)\n") %
378 util.version()) +
379 util.version()) +
379 (_("** Extensions loaded: %s\n") %
380 (_("** Extensions loaded: %s\n") %
380 ", ".join([x[0] for x in extensions.extensions()])))
381 ", ".join([x[0] for x in extensions.extensions()])))
381 ui.log("commandexception", "%s\n%s\n", warning, traceback.format_exc())
382 ui.log("commandexception", "%s\n%s\n", warning, traceback.format_exc())
382 ui.warn(warning)
383 ui.warn(warning)
383 raise
384 raise
384
385
385 return -1
386 return -1
386
387
387 def aliasargs(fn, givenargs):
388 def aliasargs(fn, givenargs):
388 args = getattr(fn, 'args', [])
389 args = getattr(fn, 'args', [])
389 if args:
390 if args:
390 cmd = ' '.join(map(util.shellquote, args))
391 cmd = ' '.join(map(util.shellquote, args))
391
392
392 nums = []
393 nums = []
393 def replacer(m):
394 def replacer(m):
394 num = int(m.group(1)) - 1
395 num = int(m.group(1)) - 1
395 nums.append(num)
396 nums.append(num)
396 if num < len(givenargs):
397 if num < len(givenargs):
397 return givenargs[num]
398 return givenargs[num]
398 raise error.Abort(_('too few arguments for command alias'))
399 raise error.Abort(_('too few arguments for command alias'))
399 cmd = re.sub(r'\$(\d+|\$)', replacer, cmd)
400 cmd = re.sub(r'\$(\d+|\$)', replacer, cmd)
400 givenargs = [x for i, x in enumerate(givenargs)
401 givenargs = [x for i, x in enumerate(givenargs)
401 if i not in nums]
402 if i not in nums]
402 args = shlex.split(cmd)
403 args = shlex.split(cmd)
403 return args + givenargs
404 return args + givenargs
404
405
405 def aliasinterpolate(name, args, cmd):
406 def aliasinterpolate(name, args, cmd):
406 '''interpolate args into cmd for shell aliases
407 '''interpolate args into cmd for shell aliases
407
408
408 This also handles $0, $@ and "$@".
409 This also handles $0, $@ and "$@".
409 '''
410 '''
410 # util.interpolate can't deal with "$@" (with quotes) because it's only
411 # util.interpolate can't deal with "$@" (with quotes) because it's only
411 # built to match prefix + patterns.
412 # built to match prefix + patterns.
412 replacemap = dict(('$%d' % (i + 1), arg) for i, arg in enumerate(args))
413 replacemap = dict(('$%d' % (i + 1), arg) for i, arg in enumerate(args))
413 replacemap['$0'] = name
414 replacemap['$0'] = name
414 replacemap['$$'] = '$'
415 replacemap['$$'] = '$'
415 replacemap['$@'] = ' '.join(args)
416 replacemap['$@'] = ' '.join(args)
416 # Typical Unix shells interpolate "$@" (with quotes) as all the positional
417 # Typical Unix shells interpolate "$@" (with quotes) as all the positional
417 # parameters, separated out into words. Emulate the same behavior here by
418 # parameters, separated out into words. Emulate the same behavior here by
418 # quoting the arguments individually. POSIX shells will then typically
419 # quoting the arguments individually. POSIX shells will then typically
419 # tokenize each argument into exactly one word.
420 # tokenize each argument into exactly one word.
420 replacemap['"$@"'] = ' '.join(util.shellquote(arg) for arg in args)
421 replacemap['"$@"'] = ' '.join(util.shellquote(arg) for arg in args)
421 # escape '\$' for regex
422 # escape '\$' for regex
422 regex = '|'.join(replacemap.keys()).replace('$', r'\$')
423 regex = '|'.join(replacemap.keys()).replace('$', r'\$')
423 r = re.compile(regex)
424 r = re.compile(regex)
424 return r.sub(lambda x: replacemap[x.group()], cmd)
425 return r.sub(lambda x: replacemap[x.group()], cmd)
425
426
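The substitution table built above reduces the shell-alias interpolation rules to a single regex pass. A self-contained sketch of the same idea (illustrative only, not the real mercurial.dispatch.aliasinterpolate; the quoted "$@" case described in the comments is omitted for brevity):

    import re

    def interpolate(name, args, cmd):
        # positional markers first, then the special ones described above
        replacemap = {'$%d' % (i + 1): arg for i, arg in enumerate(args)}
        replacemap['$0'] = name
        replacemap['$$'] = '$'
        replacemap['$@'] = ' '.join(args)
        regex = re.compile('|'.join(re.escape(k) for k in replacemap))
        return regex.sub(lambda m: replacemap[m.group()], cmd)

    print(interpolate('greet', ['alice', 'bob'], 'echo $0: $1 and $@'))
    # -> echo greet: alice and alice bob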
426 class cmdalias(object):
427 class cmdalias(object):
427 def __init__(self, name, definition, cmdtable):
428 def __init__(self, name, definition, cmdtable):
428 self.name = self.cmd = name
429 self.name = self.cmd = name
429 self.cmdname = ''
430 self.cmdname = ''
430 self.definition = definition
431 self.definition = definition
431 self.fn = None
432 self.fn = None
432 self.args = []
433 self.args = []
433 self.opts = []
434 self.opts = []
434 self.help = ''
435 self.help = ''
435 self.norepo = True
436 self.norepo = True
436 self.optionalrepo = False
437 self.optionalrepo = False
437 self.inferrepo = False
438 self.inferrepo = False
438 self.badalias = None
439 self.badalias = None
439 self.unknowncmd = False
440 self.unknowncmd = False
440
441
441 try:
442 try:
442 aliases, entry = cmdutil.findcmd(self.name, cmdtable)
443 aliases, entry = cmdutil.findcmd(self.name, cmdtable)
443 for alias, e in cmdtable.iteritems():
444 for alias, e in cmdtable.iteritems():
444 if e is entry:
445 if e is entry:
445 self.cmd = alias
446 self.cmd = alias
446 break
447 break
447 self.shadows = True
448 self.shadows = True
448 except error.UnknownCommand:
449 except error.UnknownCommand:
449 self.shadows = False
450 self.shadows = False
450
451
451 if not self.definition:
452 if not self.definition:
452 self.badalias = _("no definition for alias '%s'") % self.name
453 self.badalias = _("no definition for alias '%s'") % self.name
453 return
454 return
454
455
455 if self.definition.startswith('!'):
456 if self.definition.startswith('!'):
456 self.shell = True
457 self.shell = True
457 def fn(ui, *args):
458 def fn(ui, *args):
458 env = {'HG_ARGS': ' '.join((self.name,) + args)}
459 env = {'HG_ARGS': ' '.join((self.name,) + args)}
459 def _checkvar(m):
460 def _checkvar(m):
460 if m.groups()[0] == '$':
461 if m.groups()[0] == '$':
461 return m.group()
462 return m.group()
462 elif int(m.groups()[0]) <= len(args):
463 elif int(m.groups()[0]) <= len(args):
463 return m.group()
464 return m.group()
464 else:
465 else:
465 ui.debug("No argument found for substitution "
466 ui.debug("No argument found for substitution "
466 "of %i variable in alias '%s' definition."
467 "of %i variable in alias '%s' definition."
467 % (int(m.groups()[0]), self.name))
468 % (int(m.groups()[0]), self.name))
468 return ''
469 return ''
469 cmd = re.sub(r'\$(\d+|\$)', _checkvar, self.definition[1:])
470 cmd = re.sub(r'\$(\d+|\$)', _checkvar, self.definition[1:])
470 cmd = aliasinterpolate(self.name, args, cmd)
471 cmd = aliasinterpolate(self.name, args, cmd)
471 return ui.system(cmd, environ=env)
472 return ui.system(cmd, environ=env)
472 self.fn = fn
473 self.fn = fn
473 return
474 return
474
475
475 try:
476 try:
476 args = shlex.split(self.definition)
477 args = shlex.split(self.definition)
477 except ValueError as inst:
478 except ValueError as inst:
478 self.badalias = (_("error in definition for alias '%s': %s")
479 self.badalias = (_("error in definition for alias '%s': %s")
479 % (self.name, inst))
480 % (self.name, inst))
480 return
481 return
481 self.cmdname = cmd = args.pop(0)
482 self.cmdname = cmd = args.pop(0)
482 args = map(util.expandpath, args)
483 args = map(util.expandpath, args)
483
484
484 for invalidarg in ("--cwd", "-R", "--repository", "--repo", "--config"):
485 for invalidarg in ("--cwd", "-R", "--repository", "--repo", "--config"):
485 if _earlygetopt([invalidarg], args):
486 if _earlygetopt([invalidarg], args):
486 self.badalias = (_("error in definition for alias '%s': %s may "
487 self.badalias = (_("error in definition for alias '%s': %s may "
487 "only be given on the command line")
488 "only be given on the command line")
488 % (self.name, invalidarg))
489 % (self.name, invalidarg))
489 return
490 return
490
491
491 try:
492 try:
492 tableentry = cmdutil.findcmd(cmd, cmdtable, False)[1]
493 tableentry = cmdutil.findcmd(cmd, cmdtable, False)[1]
493 if len(tableentry) > 2:
494 if len(tableentry) > 2:
494 self.fn, self.opts, self.help = tableentry
495 self.fn, self.opts, self.help = tableentry
495 else:
496 else:
496 self.fn, self.opts = tableentry
497 self.fn, self.opts = tableentry
497
498
498 self.args = aliasargs(self.fn, args)
499 self.args = aliasargs(self.fn, args)
499 if not self.fn.norepo:
500 if not self.fn.norepo:
500 self.norepo = False
501 self.norepo = False
501 if self.fn.optionalrepo:
502 if self.fn.optionalrepo:
502 self.optionalrepo = True
503 self.optionalrepo = True
503 if self.fn.inferrepo:
504 if self.fn.inferrepo:
504 self.inferrepo = True
505 self.inferrepo = True
505 if self.help.startswith("hg " + cmd):
506 if self.help.startswith("hg " + cmd):
506 # drop prefix in old-style help lines so hg shows the alias
507 # drop prefix in old-style help lines so hg shows the alias
507 self.help = self.help[4 + len(cmd):]
508 self.help = self.help[4 + len(cmd):]
508 self.__doc__ = self.fn.__doc__
509 self.__doc__ = self.fn.__doc__
509
510
510 except error.UnknownCommand:
511 except error.UnknownCommand:
511 self.badalias = (_("alias '%s' resolves to unknown command '%s'")
512 self.badalias = (_("alias '%s' resolves to unknown command '%s'")
512 % (self.name, cmd))
513 % (self.name, cmd))
513 self.unknowncmd = True
514 self.unknowncmd = True
514 except error.AmbiguousCommand:
515 except error.AmbiguousCommand:
515 self.badalias = (_("alias '%s' resolves to ambiguous command '%s'")
516 self.badalias = (_("alias '%s' resolves to ambiguous command '%s'")
516 % (self.name, cmd))
517 % (self.name, cmd))
517
518
518 def __call__(self, ui, *args, **opts):
519 def __call__(self, ui, *args, **opts):
519 if self.badalias:
520 if self.badalias:
520 hint = None
521 hint = None
521 if self.unknowncmd:
522 if self.unknowncmd:
522 try:
523 try:
523 # check if the command is in a disabled extension
524 # check if the command is in a disabled extension
524 cmd, ext = extensions.disabledcmd(ui, self.cmdname)[:2]
525 cmd, ext = extensions.disabledcmd(ui, self.cmdname)[:2]
525 hint = _("'%s' is provided by '%s' extension") % (cmd, ext)
526 hint = _("'%s' is provided by '%s' extension") % (cmd, ext)
526 except error.UnknownCommand:
527 except error.UnknownCommand:
527 pass
528 pass
528 raise error.Abort(self.badalias, hint=hint)
529 raise error.Abort(self.badalias, hint=hint)
529 if self.shadows:
530 if self.shadows:
530 ui.debug("alias '%s' shadows command '%s'\n" %
531 ui.debug("alias '%s' shadows command '%s'\n" %
531 (self.name, self.cmdname))
532 (self.name, self.cmdname))
532
533
533 if util.safehasattr(self, 'shell'):
534 if util.safehasattr(self, 'shell'):
534 return self.fn(ui, *args, **opts)
535 return self.fn(ui, *args, **opts)
535 else:
536 else:
536 try:
537 try:
537 return util.checksignature(self.fn)(ui, *args, **opts)
538 return util.checksignature(self.fn)(ui, *args, **opts)
538 except error.SignatureError:
539 except error.SignatureError:
539 args = ' '.join([self.cmdname] + self.args)
540 args = ' '.join([self.cmdname] + self.args)
540 ui.debug("alias '%s' expands to '%s'\n" % (self.name, args))
541 ui.debug("alias '%s' expands to '%s'\n" % (self.name, args))
541 raise
542 raise
542
543
543 def addaliases(ui, cmdtable):
544 def addaliases(ui, cmdtable):
544 # aliases are processed after extensions have been loaded, so they
545 # aliases are processed after extensions have been loaded, so they
545 # may use extension commands. Aliases can also use other alias definitions,
546 # may use extension commands. Aliases can also use other alias definitions,
546 # but only if they have been defined prior to the current definition.
547 # but only if they have been defined prior to the current definition.
547 for alias, definition in ui.configitems('alias'):
548 for alias, definition in ui.configitems('alias'):
548 aliasdef = cmdalias(alias, definition, cmdtable)
549 aliasdef = cmdalias(alias, definition, cmdtable)
549
550
550 try:
551 try:
551 olddef = cmdtable[aliasdef.cmd][0]
552 olddef = cmdtable[aliasdef.cmd][0]
552 if olddef.definition == aliasdef.definition:
553 if olddef.definition == aliasdef.definition:
553 continue
554 continue
554 except (KeyError, AttributeError):
555 except (KeyError, AttributeError):
555 # definition might not exist or it might not be a cmdalias
556 # definition might not exist or it might not be a cmdalias
556 pass
557 pass
557
558
558 cmdtable[aliasdef.name] = (aliasdef, aliasdef.opts, aliasdef.help)
559 cmdtable[aliasdef.name] = (aliasdef, aliasdef.opts, aliasdef.help)
559
560
560 def _parse(ui, args):
561 def _parse(ui, args):
561 options = {}
562 options = {}
562 cmdoptions = {}
563 cmdoptions = {}
563
564
564 try:
565 try:
565 args = fancyopts.fancyopts(args, commands.globalopts, options)
566 args = fancyopts.fancyopts(args, commands.globalopts, options)
566 except fancyopts.getopt.GetoptError as inst:
567 except fancyopts.getopt.GetoptError as inst:
567 raise error.CommandError(None, inst)
568 raise error.CommandError(None, inst)
568
569
569 if args:
570 if args:
570 cmd, args = args[0], args[1:]
571 cmd, args = args[0], args[1:]
571 aliases, entry = cmdutil.findcmd(cmd, commands.table,
572 aliases, entry = cmdutil.findcmd(cmd, commands.table,
572 ui.configbool("ui", "strict"))
573 ui.configbool("ui", "strict"))
573 cmd = aliases[0]
574 cmd = aliases[0]
574 args = aliasargs(entry[0], args)
575 args = aliasargs(entry[0], args)
575 defaults = ui.config("defaults", cmd)
576 defaults = ui.config("defaults", cmd)
576 if defaults:
577 if defaults:
577 args = map(util.expandpath, shlex.split(defaults)) + args
578 args = map(util.expandpath, shlex.split(defaults)) + args
578 c = list(entry[1])
579 c = list(entry[1])
579 else:
580 else:
580 cmd = None
581 cmd = None
581 c = []
582 c = []
582
583
583 # combine global options into local
584 # combine global options into local
584 for o in commands.globalopts:
585 for o in commands.globalopts:
585 c.append((o[0], o[1], options[o[1]], o[3]))
586 c.append((o[0], o[1], options[o[1]], o[3]))
586
587
587 try:
588 try:
588 args = fancyopts.fancyopts(args, c, cmdoptions, True)
589 args = fancyopts.fancyopts(args, c, cmdoptions, True)
589 except fancyopts.getopt.GetoptError as inst:
590 except fancyopts.getopt.GetoptError as inst:
590 raise error.CommandError(cmd, inst)
591 raise error.CommandError(cmd, inst)
591
592
592 # separate global options back out
593 # separate global options back out
593 for o in commands.globalopts:
594 for o in commands.globalopts:
594 n = o[1]
595 n = o[1]
595 options[n] = cmdoptions[n]
596 options[n] = cmdoptions[n]
596 del cmdoptions[n]
597 del cmdoptions[n]
597
598
598 return (cmd, cmd and entry[0] or None, args, options, cmdoptions)
599 return (cmd, cmd and entry[0] or None, args, options, cmdoptions)
599
600
600 def _parseconfig(ui, config):
601 def _parseconfig(ui, config):
601 """parse the --config options from the command line"""
602 """parse the --config options from the command line"""
602 configs = []
603 configs = []
603
604
604 for cfg in config:
605 for cfg in config:
605 try:
606 try:
606 name, value = [cfgelem.strip()
607 name, value = [cfgelem.strip()
607 for cfgelem in cfg.split('=', 1)]
608 for cfgelem in cfg.split('=', 1)]
608 section, name = name.split('.', 1)
609 section, name = name.split('.', 1)
609 if not section or not name:
610 if not section or not name:
610 raise IndexError
611 raise IndexError
611 ui.setconfig(section, name, value, '--config')
612 ui.setconfig(section, name, value, '--config')
612 configs.append((section, name, value))
613 configs.append((section, name, value))
613 except (IndexError, ValueError):
614 except (IndexError, ValueError):
614 raise error.Abort(_('malformed --config option: %r '
615 raise error.Abort(_('malformed --config option: %r '
615 '(use --config section.name=value)') % cfg)
616 '(use --config section.name=value)') % cfg)
616
617
617 return configs
618 return configs
618
619
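The abort message above documents the only accepted form, --config section.name=value. The same split logic as a standalone sketch (the function name is illustrative):

    def parse_config_item(cfg):
        # one split on '=' and one on '.', mirroring the checks above
        name, value = [part.strip() for part in cfg.split('=', 1)]
        section, name = name.split('.', 1)
        if not section or not name:
            raise ValueError('malformed --config option: %r' % cfg)
        return section, name, value

    print(parse_config_item('ui.username=Alice <alice@example.com>'))
    # -> ('ui', 'username', 'Alice <alice@example.com>')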
619 def _earlygetopt(aliases, args):
620 def _earlygetopt(aliases, args):
620 """Return list of values for an option (or aliases).
621 """Return list of values for an option (or aliases).
621
622
622 The values are listed in the order they appear in args.
623 The values are listed in the order they appear in args.
623 The options and values are removed from args.
624 The options and values are removed from args.
624
625
625 >>> args = ['x', '--cwd', 'foo', 'y']
626 >>> args = ['x', '--cwd', 'foo', 'y']
626 >>> _earlygetopt(['--cwd'], args), args
627 >>> _earlygetopt(['--cwd'], args), args
627 (['foo'], ['x', 'y'])
628 (['foo'], ['x', 'y'])
628
629
629 >>> args = ['x', '--cwd=bar', 'y']
630 >>> args = ['x', '--cwd=bar', 'y']
630 >>> _earlygetopt(['--cwd'], args), args
631 >>> _earlygetopt(['--cwd'], args), args
631 (['bar'], ['x', 'y'])
632 (['bar'], ['x', 'y'])
632
633
633 >>> args = ['x', '-R', 'foo', 'y']
634 >>> args = ['x', '-R', 'foo', 'y']
634 >>> _earlygetopt(['-R'], args), args
635 >>> _earlygetopt(['-R'], args), args
635 (['foo'], ['x', 'y'])
636 (['foo'], ['x', 'y'])
636
637
637 >>> args = ['x', '-Rbar', 'y']
638 >>> args = ['x', '-Rbar', 'y']
638 >>> _earlygetopt(['-R'], args), args
639 >>> _earlygetopt(['-R'], args), args
639 (['bar'], ['x', 'y'])
640 (['bar'], ['x', 'y'])
640 """
641 """
641 try:
642 try:
642 argcount = args.index("--")
643 argcount = args.index("--")
643 except ValueError:
644 except ValueError:
644 argcount = len(args)
645 argcount = len(args)
645 shortopts = [opt for opt in aliases if len(opt) == 2]
646 shortopts = [opt for opt in aliases if len(opt) == 2]
646 values = []
647 values = []
647 pos = 0
648 pos = 0
648 while pos < argcount:
649 while pos < argcount:
649 fullarg = arg = args[pos]
650 fullarg = arg = args[pos]
650 equals = arg.find('=')
651 equals = arg.find('=')
651 if equals > -1:
652 if equals > -1:
652 arg = arg[:equals]
653 arg = arg[:equals]
653 if arg in aliases:
654 if arg in aliases:
654 del args[pos]
655 del args[pos]
655 if equals > -1:
656 if equals > -1:
656 values.append(fullarg[equals + 1:])
657 values.append(fullarg[equals + 1:])
657 argcount -= 1
658 argcount -= 1
658 else:
659 else:
659 if pos + 1 >= argcount:
660 if pos + 1 >= argcount:
660 # ignore and let getopt report an error if there is no value
661 # ignore and let getopt report an error if there is no value
661 break
662 break
662 values.append(args.pop(pos))
663 values.append(args.pop(pos))
663 argcount -= 2
664 argcount -= 2
664 elif arg[:2] in shortopts:
665 elif arg[:2] in shortopts:
665 # short option can have no following space, e.g. hg log -Rfoo
666 # short option can have no following space, e.g. hg log -Rfoo
666 values.append(args.pop(pos)[2:])
667 values.append(args.pop(pos)[2:])
667 argcount -= 1
668 argcount -= 1
668 else:
669 else:
669 pos += 1
670 pos += 1
670 return values
671 return values
671
672
672 def runcommand(lui, repo, cmd, fullargs, ui, options, d, cmdpats, cmdoptions):
673 def runcommand(lui, repo, cmd, fullargs, ui, options, d, cmdpats, cmdoptions):
673 # run pre-hook, and abort if it fails
674 # run pre-hook, and abort if it fails
674 hook.hook(lui, repo, "pre-%s" % cmd, True, args=" ".join(fullargs),
675 hook.hook(lui, repo, "pre-%s" % cmd, True, args=" ".join(fullargs),
675 pats=cmdpats, opts=cmdoptions)
676 pats=cmdpats, opts=cmdoptions)
676 ret = _runcommand(ui, options, cmd, d)
677 ret = _runcommand(ui, options, cmd, d)
677 # run post-hook, passing command result
678 # run post-hook, passing command result
678 hook.hook(lui, repo, "post-%s" % cmd, False, args=" ".join(fullargs),
679 hook.hook(lui, repo, "post-%s" % cmd, False, args=" ".join(fullargs),
679 result=ret, pats=cmdpats, opts=cmdoptions)
680 result=ret, pats=cmdpats, opts=cmdoptions)
680 return ret
681 return ret
681
682
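Because runcommand brackets every command with a pre-<command> and post-<command> hook, a command run can be observed from configuration alone; for example (the hook bodies here are made up):

    [hooks]
    # runs before 'hg log'; a non-zero exit status aborts the command
    pre-log = echo "about to run log with: $HG_ARGS"
    # runs afterwards; the command's return value arrives as HG_RESULT
    post-log = echo "log finished with result $HG_RESULT"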
682 def _getlocal(ui, rpath, wd=None):
683 def _getlocal(ui, rpath, wd=None):
683 """Return (path, local ui object) for the given target path.
684 """Return (path, local ui object) for the given target path.
684
685
685 Takes paths in [cwd]/.hg/hgrc into account.
686 Takes paths in [cwd]/.hg/hgrc into account.
686 """
687 """
687 if wd is None:
688 if wd is None:
688 try:
689 try:
689 wd = os.getcwd()
690 wd = os.getcwd()
690 except OSError as e:
691 except OSError as e:
691 raise error.Abort(_("error getting current working directory: %s") %
692 raise error.Abort(_("error getting current working directory: %s") %
692 e.strerror)
693 e.strerror)
693 path = cmdutil.findrepo(wd) or ""
694 path = cmdutil.findrepo(wd) or ""
694 if not path:
695 if not path:
695 lui = ui
696 lui = ui
696 else:
697 else:
697 lui = ui.copy()
698 lui = ui.copy()
698 lui.readconfig(os.path.join(path, ".hg", "hgrc"), path)
699 lui.readconfig(os.path.join(path, ".hg", "hgrc"), path)
699
700
700 if rpath and rpath[-1]:
701 if rpath and rpath[-1]:
701 path = lui.expandpath(rpath[-1])
702 path = lui.expandpath(rpath[-1])
702 lui = ui.copy()
703 lui = ui.copy()
703 lui.readconfig(os.path.join(path, ".hg", "hgrc"), path)
704 lui.readconfig(os.path.join(path, ".hg", "hgrc"), path)
704
705
705 return path, lui
706 return path, lui
706
707
707 def _checkshellalias(lui, ui, args, precheck=True):
708 def _checkshellalias(lui, ui, args, precheck=True):
708 """Return the function to run the shell alias, if it is required
709 """Return the function to run the shell alias, if it is required
709
710
710 'precheck' is whether this function is invoked before adding
711 'precheck' is whether this function is invoked before adding
711 aliases or not.
712 aliases or not.
712 """
713 """
713 options = {}
714 options = {}
714
715
715 try:
716 try:
716 args = fancyopts.fancyopts(args, commands.globalopts, options)
717 args = fancyopts.fancyopts(args, commands.globalopts, options)
717 except fancyopts.getopt.GetoptError:
718 except fancyopts.getopt.GetoptError:
718 return
719 return
719
720
720 if not args:
721 if not args:
721 return
722 return
722
723
723 if precheck:
724 if precheck:
724 strict = True
725 strict = True
725 cmdtable = commands.table.copy()
726 cmdtable = commands.table.copy()
726 addaliases(lui, cmdtable)
727 addaliases(lui, cmdtable)
727 else:
728 else:
728 strict = False
729 strict = False
729 cmdtable = commands.table
730 cmdtable = commands.table
730
731
731 cmd = args[0]
732 cmd = args[0]
732 try:
733 try:
733 aliases, entry = cmdutil.findcmd(cmd, cmdtable, strict)
734 aliases, entry = cmdutil.findcmd(cmd, cmdtable, strict)
734 except (error.AmbiguousCommand, error.UnknownCommand):
735 except (error.AmbiguousCommand, error.UnknownCommand):
735 return
736 return
736
737
737 cmd = aliases[0]
738 cmd = aliases[0]
738 fn = entry[0]
739 fn = entry[0]
739
740
740 if cmd and util.safehasattr(fn, 'shell'):
741 if cmd and util.safehasattr(fn, 'shell'):
741 d = lambda: fn(ui, *args[1:])
742 d = lambda: fn(ui, *args[1:])
742 return lambda: runcommand(lui, None, cmd, args[:1], ui, options, d,
743 return lambda: runcommand(lui, None, cmd, args[:1], ui, options, d,
743 [], {})
744 [], {})
744
745
745 _loaded = set()
746 _loaded = set()
746
747
747 # list of (objname, loadermod, loadername) tuple:
748 # list of (objname, loadermod, loadername) tuple:
748 # - objname is the name of an object in extension module, from which
749 # - objname is the name of an object in extension module, from which
749 # extra information is loaded
750 # extra information is loaded
750 # - loadermod is the module where loader is placed
751 # - loadermod is the module where loader is placed
751 # - loadername is the name of the function, which takes (ui, extensionname,
752 # - loadername is the name of the function, which takes (ui, extensionname,
752 # extraobj) arguments
753 # extraobj) arguments
753 extraloaders = [
754 extraloaders = [
754 ('cmdtable', commands, 'loadcmdtable'),
755 ('cmdtable', commands, 'loadcmdtable'),
756 ('revsetpredicate', revset, 'loadpredicate'),
755 ]
757 ]
756
758
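The ('revsetpredicate', revset, 'loadpredicate') entry added here is what lets an extension ship revset predicates declaratively: the loop in _dispatch below picks up a module-level revsetpredicate object and hands it to revset.loadpredicate. A sketch of the extension side, assuming the registrar API this changeset switches to (the predicate itself is hypothetical):

    from mercurial import registrar

    revsetpredicate = registrar.revsetpredicate()

    @revsetpredicate('fortytwo()')
    def fortytwo(repo, subset, x):
        """changesets whose local revision number is 42 (example only)"""
        return subset.filter(lambda r: r == 42)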
757 def _dispatch(req):
759 def _dispatch(req):
758 args = req.args
760 args = req.args
759 ui = req.ui
761 ui = req.ui
760
762
761 # check for cwd
763 # check for cwd
762 cwd = _earlygetopt(['--cwd'], args)
764 cwd = _earlygetopt(['--cwd'], args)
763 if cwd:
765 if cwd:
764 os.chdir(cwd[-1])
766 os.chdir(cwd[-1])
765
767
766 rpath = _earlygetopt(["-R", "--repository", "--repo"], args)
768 rpath = _earlygetopt(["-R", "--repository", "--repo"], args)
767 path, lui = _getlocal(ui, rpath)
769 path, lui = _getlocal(ui, rpath)
768
770
769 # Now that we're operating in the right directory/repository with
771 # Now that we're operating in the right directory/repository with
770 # the right config settings, check for shell aliases
772 # the right config settings, check for shell aliases
771 shellaliasfn = _checkshellalias(lui, ui, args)
773 shellaliasfn = _checkshellalias(lui, ui, args)
772 if shellaliasfn:
774 if shellaliasfn:
773 return shellaliasfn()
775 return shellaliasfn()
774
776
775 # Configure extensions in phases: uisetup, extsetup, cmdtable, and
777 # Configure extensions in phases: uisetup, extsetup, cmdtable, and
776 # reposetup. Programs like TortoiseHg will call _dispatch several
778 # reposetup. Programs like TortoiseHg will call _dispatch several
777 # times so we keep track of configured extensions in _loaded.
779 # times so we keep track of configured extensions in _loaded.
778 extensions.loadall(lui)
780 extensions.loadall(lui)
779 exts = [ext for ext in extensions.extensions() if ext[0] not in _loaded]
781 exts = [ext for ext in extensions.extensions() if ext[0] not in _loaded]
780 # Propagate any changes to lui.__class__ by extensions
782 # Propagate any changes to lui.__class__ by extensions
781 ui.__class__ = lui.__class__
783 ui.__class__ = lui.__class__
782
784
783 # (uisetup and extsetup are handled in extensions.loadall)
785 # (uisetup and extsetup are handled in extensions.loadall)
784
786
785 for name, module in exts:
787 for name, module in exts:
786 for objname, loadermod, loadername in extraloaders:
788 for objname, loadermod, loadername in extraloaders:
787 extraobj = getattr(module, objname, None)
789 extraobj = getattr(module, objname, None)
788 if extraobj is not None:
790 if extraobj is not None:
789 getattr(loadermod, loadername)(ui, name, extraobj)
791 getattr(loadermod, loadername)(ui, name, extraobj)
790 _loaded.add(name)
792 _loaded.add(name)
791
793
792 # (reposetup is handled in hg.repository)
794 # (reposetup is handled in hg.repository)
793
795
794 addaliases(lui, commands.table)
796 addaliases(lui, commands.table)
795
797
796 if not lui.configbool("ui", "strict"):
798 if not lui.configbool("ui", "strict"):
797 # All aliases and commands are completely defined, now.
799 # All aliases and commands are completely defined, now.
798 # Check abbreviation/ambiguity of shell alias again, because shell
800 # Check abbreviation/ambiguity of shell alias again, because shell
799 # alias may cause failure of "_parse" (see issue4355)
801 # alias may cause failure of "_parse" (see issue4355)
800 shellaliasfn = _checkshellalias(lui, ui, args, precheck=False)
802 shellaliasfn = _checkshellalias(lui, ui, args, precheck=False)
801 if shellaliasfn:
803 if shellaliasfn:
802 return shellaliasfn()
804 return shellaliasfn()
803
805
804 # check for fallback encoding
806 # check for fallback encoding
805 fallback = lui.config('ui', 'fallbackencoding')
807 fallback = lui.config('ui', 'fallbackencoding')
806 if fallback:
808 if fallback:
807 encoding.fallbackencoding = fallback
809 encoding.fallbackencoding = fallback
808
810
809 fullargs = args
811 fullargs = args
810 cmd, func, args, options, cmdoptions = _parse(lui, args)
812 cmd, func, args, options, cmdoptions = _parse(lui, args)
811
813
812 if options["config"]:
814 if options["config"]:
813 raise error.Abort(_("option --config may not be abbreviated!"))
815 raise error.Abort(_("option --config may not be abbreviated!"))
814 if options["cwd"]:
816 if options["cwd"]:
815 raise error.Abort(_("option --cwd may not be abbreviated!"))
817 raise error.Abort(_("option --cwd may not be abbreviated!"))
816 if options["repository"]:
818 if options["repository"]:
817 raise error.Abort(_(
819 raise error.Abort(_(
818 "option -R has to be separated from other options (e.g. not -qR) "
820 "option -R has to be separated from other options (e.g. not -qR) "
819 "and --repository may only be abbreviated as --repo!"))
821 "and --repository may only be abbreviated as --repo!"))
820
822
821 if options["encoding"]:
823 if options["encoding"]:
822 encoding.encoding = options["encoding"]
824 encoding.encoding = options["encoding"]
823 if options["encodingmode"]:
825 if options["encodingmode"]:
824 encoding.encodingmode = options["encodingmode"]
826 encoding.encodingmode = options["encodingmode"]
825 if options["time"]:
827 if options["time"]:
826 def get_times():
828 def get_times():
827 t = os.times()
829 t = os.times()
828 if t[4] == 0.0: # Windows leaves this as zero, so use time.clock()
830 if t[4] == 0.0: # Windows leaves this as zero, so use time.clock()
829 t = (t[0], t[1], t[2], t[3], time.clock())
831 t = (t[0], t[1], t[2], t[3], time.clock())
830 return t
832 return t
831 s = get_times()
833 s = get_times()
832 def print_time():
834 def print_time():
833 t = get_times()
835 t = get_times()
834 ui.warn(_("time: real %.3f secs (user %.3f+%.3f sys %.3f+%.3f)\n") %
836 ui.warn(_("time: real %.3f secs (user %.3f+%.3f sys %.3f+%.3f)\n") %
835 (t[4]-s[4], t[0]-s[0], t[2]-s[2], t[1]-s[1], t[3]-s[3]))
837 (t[4]-s[4], t[0]-s[0], t[2]-s[2], t[1]-s[1], t[3]-s[3]))
836 atexit.register(print_time)
838 atexit.register(print_time)
837
839
838 uis = set([ui, lui])
840 uis = set([ui, lui])
839
841
840 if req.repo:
842 if req.repo:
841 uis.add(req.repo.ui)
843 uis.add(req.repo.ui)
842
844
843 if options['verbose'] or options['debug'] or options['quiet']:
845 if options['verbose'] or options['debug'] or options['quiet']:
844 for opt in ('verbose', 'debug', 'quiet'):
846 for opt in ('verbose', 'debug', 'quiet'):
845 val = str(bool(options[opt]))
847 val = str(bool(options[opt]))
846 for ui_ in uis:
848 for ui_ in uis:
847 ui_.setconfig('ui', opt, val, '--' + opt)
849 ui_.setconfig('ui', opt, val, '--' + opt)
848
850
849 if options['traceback']:
851 if options['traceback']:
850 for ui_ in uis:
852 for ui_ in uis:
851 ui_.setconfig('ui', 'traceback', 'on', '--traceback')
853 ui_.setconfig('ui', 'traceback', 'on', '--traceback')
852
854
853 if options['noninteractive']:
855 if options['noninteractive']:
854 for ui_ in uis:
856 for ui_ in uis:
855 ui_.setconfig('ui', 'interactive', 'off', '-y')
857 ui_.setconfig('ui', 'interactive', 'off', '-y')
856
858
857 if cmdoptions.get('insecure', False):
859 if cmdoptions.get('insecure', False):
858 for ui_ in uis:
860 for ui_ in uis:
859 ui_.setconfig('web', 'cacerts', '!', '--insecure')
861 ui_.setconfig('web', 'cacerts', '!', '--insecure')
860
862
861 if options['version']:
863 if options['version']:
862 return commands.version_(ui)
864 return commands.version_(ui)
863 if options['help']:
865 if options['help']:
864 return commands.help_(ui, cmd, command=cmd is not None)
866 return commands.help_(ui, cmd, command=cmd is not None)
865 elif not cmd:
867 elif not cmd:
866 return commands.help_(ui, 'shortlist')
868 return commands.help_(ui, 'shortlist')
867
869
868 repo = None
870 repo = None
869 cmdpats = args[:]
871 cmdpats = args[:]
870 if not func.norepo:
872 if not func.norepo:
871 # use the repo from the request only if we don't have -R
873 # use the repo from the request only if we don't have -R
872 if not rpath and not cwd:
874 if not rpath and not cwd:
873 repo = req.repo
875 repo = req.repo
874
876
875 if repo:
877 if repo:
876 # set the descriptors of the repo ui to those of ui
878 # set the descriptors of the repo ui to those of ui
877 repo.ui.fin = ui.fin
879 repo.ui.fin = ui.fin
878 repo.ui.fout = ui.fout
880 repo.ui.fout = ui.fout
879 repo.ui.ferr = ui.ferr
881 repo.ui.ferr = ui.ferr
880 else:
882 else:
881 try:
883 try:
882 repo = hg.repository(ui, path=path)
884 repo = hg.repository(ui, path=path)
883 if not repo.local():
885 if not repo.local():
884 raise error.Abort(_("repository '%s' is not local") % path)
886 raise error.Abort(_("repository '%s' is not local") % path)
885 repo.ui.setconfig("bundle", "mainreporoot", repo.root, 'repo')
887 repo.ui.setconfig("bundle", "mainreporoot", repo.root, 'repo')
886 except error.RequirementError:
888 except error.RequirementError:
887 raise
889 raise
888 except error.RepoError:
890 except error.RepoError:
889 if rpath and rpath[-1]: # invalid -R path
891 if rpath and rpath[-1]: # invalid -R path
890 raise
892 raise
891 if not func.optionalrepo:
893 if not func.optionalrepo:
892 if func.inferrepo and args and not path:
894 if func.inferrepo and args and not path:
893 # try to infer -R from command args
895 # try to infer -R from command args
894 repos = map(cmdutil.findrepo, args)
896 repos = map(cmdutil.findrepo, args)
895 guess = repos[0]
897 guess = repos[0]
896 if guess and repos.count(guess) == len(repos):
898 if guess and repos.count(guess) == len(repos):
897 req.args = ['--repository', guess] + fullargs
899 req.args = ['--repository', guess] + fullargs
898 return _dispatch(req)
900 return _dispatch(req)
899 if not path:
901 if not path:
900 raise error.RepoError(_("no repository found in '%s'"
902 raise error.RepoError(_("no repository found in '%s'"
901 " (.hg not found)")
903 " (.hg not found)")
902 % os.getcwd())
904 % os.getcwd())
903 raise
905 raise
904 if repo:
906 if repo:
905 ui = repo.ui
907 ui = repo.ui
906 if options['hidden']:
908 if options['hidden']:
907 repo = repo.unfiltered()
909 repo = repo.unfiltered()
908 args.insert(0, repo)
910 args.insert(0, repo)
909 elif rpath:
911 elif rpath:
910 ui.warn(_("warning: --repository ignored\n"))
912 ui.warn(_("warning: --repository ignored\n"))
911
913
912 msg = ' '.join(' ' in a and repr(a) or a for a in fullargs)
914 msg = ' '.join(' ' in a and repr(a) or a for a in fullargs)
913 ui.log("command", '%s\n', msg)
915 ui.log("command", '%s\n', msg)
914 d = lambda: util.checksignature(func)(ui, *args, **cmdoptions)
916 d = lambda: util.checksignature(func)(ui, *args, **cmdoptions)
915 try:
917 try:
916 return runcommand(lui, repo, cmd, fullargs, ui, options, d,
918 return runcommand(lui, repo, cmd, fullargs, ui, options, d,
917 cmdpats, cmdoptions)
919 cmdpats, cmdoptions)
918 finally:
920 finally:
919 if repo and repo != req.repo:
921 if repo and repo != req.repo:
920 repo.close()
922 repo.close()
921
923
922 def lsprofile(ui, func, fp):
924 def lsprofile(ui, func, fp):
923 format = ui.config('profiling', 'format', default='text')
925 format = ui.config('profiling', 'format', default='text')
924 field = ui.config('profiling', 'sort', default='inlinetime')
926 field = ui.config('profiling', 'sort', default='inlinetime')
925 limit = ui.configint('profiling', 'limit', default=30)
927 limit = ui.configint('profiling', 'limit', default=30)
926 climit = ui.configint('profiling', 'nested', default=0)
928 climit = ui.configint('profiling', 'nested', default=0)
927
929
928 if format not in ['text', 'kcachegrind']:
930 if format not in ['text', 'kcachegrind']:
929 ui.warn(_("unrecognized profiling format '%s'"
931 ui.warn(_("unrecognized profiling format '%s'"
930 " - Ignored\n") % format)
932 " - Ignored\n") % format)
931 format = 'text'
933 format = 'text'
932
934
933 try:
935 try:
934 from . import lsprof
936 from . import lsprof
935 except ImportError:
937 except ImportError:
936 raise error.Abort(_(
938 raise error.Abort(_(
937 'lsprof not available - install from '
939 'lsprof not available - install from '
938 'http://codespeak.net/svn/user/arigo/hack/misc/lsprof/'))
940 'http://codespeak.net/svn/user/arigo/hack/misc/lsprof/'))
939 p = lsprof.Profiler()
941 p = lsprof.Profiler()
940 p.enable(subcalls=True)
942 p.enable(subcalls=True)
941 try:
943 try:
942 return func()
944 return func()
943 finally:
945 finally:
944 p.disable()
946 p.disable()
945
947
946 if format == 'kcachegrind':
948 if format == 'kcachegrind':
947 from . import lsprofcalltree
949 from . import lsprofcalltree
948 calltree = lsprofcalltree.KCacheGrind(p)
950 calltree = lsprofcalltree.KCacheGrind(p)
949 calltree.output(fp)
951 calltree.output(fp)
950 else:
952 else:
951 # format == 'text'
953 # format == 'text'
952 stats = lsprof.Stats(p.getstats())
954 stats = lsprof.Stats(p.getstats())
953 stats.sort(field)
955 stats.sort(field)
954 stats.pprint(limit=limit, file=fp, climit=climit)
956 stats.pprint(limit=limit, file=fp, climit=climit)
955
957
956 def flameprofile(ui, func, fp):
958 def flameprofile(ui, func, fp):
957 try:
959 try:
958 from flamegraph import flamegraph
960 from flamegraph import flamegraph
959 except ImportError:
961 except ImportError:
960 raise error.Abort(_(
962 raise error.Abort(_(
961 'flamegraph not available - install from '
963 'flamegraph not available - install from '
962 'https://github.com/evanhempel/python-flamegraph'))
964 'https://github.com/evanhempel/python-flamegraph'))
963 # developer config: profiling.freq
965 # developer config: profiling.freq
964 freq = ui.configint('profiling', 'freq', default=1000)
966 freq = ui.configint('profiling', 'freq', default=1000)
965 filter_ = None
967 filter_ = None
966 collapse_recursion = True
968 collapse_recursion = True
967 thread = flamegraph.ProfileThread(fp, 1.0 / freq,
969 thread = flamegraph.ProfileThread(fp, 1.0 / freq,
968 filter_, collapse_recursion)
970 filter_, collapse_recursion)
969 start_time = time.clock()
971 start_time = time.clock()
970 try:
972 try:
971 thread.start()
973 thread.start()
972 func()
974 func()
973 finally:
975 finally:
974 thread.stop()
976 thread.stop()
975 thread.join()
977 thread.join()
976 print('Collected %d stack frames (%d unique) in %2.2f seconds.' % (
978 print('Collected %d stack frames (%d unique) in %2.2f seconds.' % (
977 time.clock() - start_time, thread.num_frames(),
979 time.clock() - start_time, thread.num_frames(),
978 thread.num_frames(unique=True)))
980 thread.num_frames(unique=True)))
979
981
980
982
981 def statprofile(ui, func, fp):
983 def statprofile(ui, func, fp):
982 try:
984 try:
983 import statprof
985 import statprof
984 except ImportError:
986 except ImportError:
985 raise error.Abort(_(
987 raise error.Abort(_(
986 'statprof not available - install using "easy_install statprof"'))
988 'statprof not available - install using "easy_install statprof"'))
987
989
988 freq = ui.configint('profiling', 'freq', default=1000)
990 freq = ui.configint('profiling', 'freq', default=1000)
989 if freq > 0:
991 if freq > 0:
990 statprof.reset(freq)
992 statprof.reset(freq)
991 else:
993 else:
992 ui.warn(_("invalid sampling frequency '%s' - ignoring\n") % freq)
994 ui.warn(_("invalid sampling frequency '%s' - ignoring\n") % freq)
993
995
994 statprof.start()
996 statprof.start()
995 try:
997 try:
996 return func()
998 return func()
997 finally:
999 finally:
998 statprof.stop()
1000 statprof.stop()
999 statprof.display(fp)
1001 statprof.display(fp)
1000
1002
1001 def _runcommand(ui, options, cmd, cmdfunc):
1003 def _runcommand(ui, options, cmd, cmdfunc):
1002 """Enables the profiler if applicable.
1004 """Enables the profiler if applicable.
1003
1005
1004 ``profiling.enabled`` - boolean config that enables or disables profiling
1006 ``profiling.enabled`` - boolean config that enables or disables profiling
1005 """
1007 """
1006 def checkargs():
1008 def checkargs():
1007 try:
1009 try:
1008 return cmdfunc()
1010 return cmdfunc()
1009 except error.SignatureError:
1011 except error.SignatureError:
1010 raise error.CommandError(cmd, _("invalid arguments"))
1012 raise error.CommandError(cmd, _("invalid arguments"))
1011
1013
1012 if options['profile'] or ui.configbool('profiling', 'enabled'):
1014 if options['profile'] or ui.configbool('profiling', 'enabled'):
1013 profiler = os.getenv('HGPROF')
1015 profiler = os.getenv('HGPROF')
1014 if profiler is None:
1016 if profiler is None:
1015 profiler = ui.config('profiling', 'type', default='ls')
1017 profiler = ui.config('profiling', 'type', default='ls')
1016 if profiler not in ('ls', 'stat', 'flame'):
1018 if profiler not in ('ls', 'stat', 'flame'):
1017 ui.warn(_("unrecognized profiler '%s' - ignored\n") % profiler)
1019 ui.warn(_("unrecognized profiler '%s' - ignored\n") % profiler)
1018 profiler = 'ls'
1020 profiler = 'ls'
1019
1021
1020 output = ui.config('profiling', 'output')
1022 output = ui.config('profiling', 'output')
1021
1023
1022 if output == 'blackbox':
1024 if output == 'blackbox':
1023 import StringIO
1025 import StringIO
1024 fp = StringIO.StringIO()
1026 fp = StringIO.StringIO()
1025 elif output:
1027 elif output:
1026 path = ui.expandpath(output)
1028 path = ui.expandpath(output)
1027 fp = open(path, 'wb')
1029 fp = open(path, 'wb')
1028 else:
1030 else:
1029 fp = sys.stderr
1031 fp = sys.stderr
1030
1032
1031 try:
1033 try:
1032 if profiler == 'ls':
1034 if profiler == 'ls':
1033 return lsprofile(ui, checkargs, fp)
1035 return lsprofile(ui, checkargs, fp)
1034 elif profiler == 'flame':
1036 elif profiler == 'flame':
1035 return flameprofile(ui, checkargs, fp)
1037 return flameprofile(ui, checkargs, fp)
1036 else:
1038 else:
1037 return statprofile(ui, checkargs, fp)
1039 return statprofile(ui, checkargs, fp)
1038 finally:
1040 finally:
1039 if output:
1041 if output:
1040 if output == 'blackbox':
1042 if output == 'blackbox':
1041 val = "Profile:\n%s" % fp.getvalue()
1043 val = "Profile:\n%s" % fp.getvalue()
1042 # ui.log treats the input as a format string,
1044 # ui.log treats the input as a format string,
1043 # so we need to escape any % signs.
1045 # so we need to escape any % signs.
1044 val = val.replace('%', '%%')
1046 val = val.replace('%', '%%')
1045 ui.log('profile', val)
1047 ui.log('profile', val)
1046 fp.close()
1048 fp.close()
1047 else:
1049 else:
1048 return checkargs()
1050 return checkargs()
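All of the knobs read by the profiling helpers above live in the [profiling] section of hgrc; an illustrative configuration (values are examples, not defaults):

    [profiling]
    enabled = true
    # one of: ls (default), stat, flame; the HGPROF environment variable overrides it
    type = stat
    # sampling frequency used by the stat and flame profilers
    freq = 2000
    # empty means stderr; the special value 'blackbox' routes the report to ui.log
    output = ~/hg-profile.txt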
@@ -1,2232 +1,2220 b''
1 $ HGENCODING=utf-8
1 $ HGENCODING=utf-8
2 $ export HGENCODING
2 $ export HGENCODING
3 $ cat > testrevset.py << EOF
3 $ cat > testrevset.py << EOF
4 > import mercurial.revset
4 > import mercurial.revset
5 >
5 >
6 > baseset = mercurial.revset.baseset
6 > baseset = mercurial.revset.baseset
7 >
7 >
8 > def r3232(repo, subset, x):
8 > def r3232(repo, subset, x):
9 > """"simple revset that return [3,2,3,2]
9 > """"simple revset that return [3,2,3,2]
10 >
10 >
11 > revisions duplicated on purpose.
11 > revisions duplicated on purpose.
12 > """
12 > """
13 > if 3 not in subset:
13 > if 3 not in subset:
14 > if 2 in subset:
14 > if 2 in subset:
15 > return baseset([2,2])
15 > return baseset([2,2])
16 > return baseset()
16 > return baseset()
17 > return baseset([3,3,2,2])
17 > return baseset([3,3,2,2])
18 >
18 >
19 > mercurial.revset.symbols['r3232'] = r3232
19 > mercurial.revset.symbols['r3232'] = r3232
20 > EOF
20 > EOF
21 $ cat >> $HGRCPATH << EOF
21 $ cat >> $HGRCPATH << EOF
22 > [extensions]
22 > [extensions]
23 > testrevset=$TESTTMP/testrevset.py
23 > testrevset=$TESTTMP/testrevset.py
24 > EOF
24 > EOF
25
25
26 $ try() {
26 $ try() {
27 > hg debugrevspec --debug "$@"
27 > hg debugrevspec --debug "$@"
28 > }
28 > }
29
29
30 $ log() {
30 $ log() {
31 > hg log --template '{rev}\n' -r "$1"
31 > hg log --template '{rev}\n' -r "$1"
32 > }
32 > }
33
33
34 $ hg init repo
34 $ hg init repo
35 $ cd repo
35 $ cd repo
36
36
37 $ echo a > a
37 $ echo a > a
38 $ hg branch a
38 $ hg branch a
39 marked working directory as branch a
39 marked working directory as branch a
40 (branches are permanent and global, did you want a bookmark?)
40 (branches are permanent and global, did you want a bookmark?)
41 $ hg ci -Aqm0
41 $ hg ci -Aqm0
42
42
43 $ echo b > b
43 $ echo b > b
44 $ hg branch b
44 $ hg branch b
45 marked working directory as branch b
45 marked working directory as branch b
46 $ hg ci -Aqm1
46 $ hg ci -Aqm1
47
47
48 $ rm a
48 $ rm a
49 $ hg branch a-b-c-
49 $ hg branch a-b-c-
50 marked working directory as branch a-b-c-
50 marked working directory as branch a-b-c-
51 $ hg ci -Aqm2 -u Bob
51 $ hg ci -Aqm2 -u Bob
52
52
53 $ hg log -r "extra('branch', 'a-b-c-')" --template '{rev}\n'
53 $ hg log -r "extra('branch', 'a-b-c-')" --template '{rev}\n'
54 2
54 2
55 $ hg log -r "extra('branch')" --template '{rev}\n'
55 $ hg log -r "extra('branch')" --template '{rev}\n'
56 0
56 0
57 1
57 1
58 2
58 2
59 $ hg log -r "extra('branch', 're:a')" --template '{rev} {branch}\n'
59 $ hg log -r "extra('branch', 're:a')" --template '{rev} {branch}\n'
60 0 a
60 0 a
61 2 a-b-c-
61 2 a-b-c-
62
62
63 $ hg co 1
63 $ hg co 1
64 1 files updated, 0 files merged, 0 files removed, 0 files unresolved
64 1 files updated, 0 files merged, 0 files removed, 0 files unresolved
65 $ hg branch +a+b+c+
65 $ hg branch +a+b+c+
66 marked working directory as branch +a+b+c+
66 marked working directory as branch +a+b+c+
67 $ hg ci -Aqm3
67 $ hg ci -Aqm3
68
68
69 $ hg co 2 # interleave
69 $ hg co 2 # interleave
70 0 files updated, 0 files merged, 1 files removed, 0 files unresolved
70 0 files updated, 0 files merged, 1 files removed, 0 files unresolved
71 $ echo bb > b
71 $ echo bb > b
72 $ hg branch -- -a-b-c-
72 $ hg branch -- -a-b-c-
73 marked working directory as branch -a-b-c-
73 marked working directory as branch -a-b-c-
74 $ hg ci -Aqm4 -d "May 12 2005"
74 $ hg ci -Aqm4 -d "May 12 2005"
75
75
76 $ hg co 3
76 $ hg co 3
77 2 files updated, 0 files merged, 0 files removed, 0 files unresolved
77 2 files updated, 0 files merged, 0 files removed, 0 files unresolved
78 $ hg branch !a/b/c/
78 $ hg branch !a/b/c/
79 marked working directory as branch !a/b/c/
79 marked working directory as branch !a/b/c/
80 $ hg ci -Aqm"5 bug"
80 $ hg ci -Aqm"5 bug"
81
81
82 $ hg merge 4
82 $ hg merge 4
83 1 files updated, 0 files merged, 1 files removed, 0 files unresolved
83 1 files updated, 0 files merged, 1 files removed, 0 files unresolved
84 (branch merge, don't forget to commit)
84 (branch merge, don't forget to commit)
85 $ hg branch _a_b_c_
85 $ hg branch _a_b_c_
86 marked working directory as branch _a_b_c_
86 marked working directory as branch _a_b_c_
87 $ hg ci -Aqm"6 issue619"
87 $ hg ci -Aqm"6 issue619"
88
88
89 $ hg branch .a.b.c.
89 $ hg branch .a.b.c.
90 marked working directory as branch .a.b.c.
90 marked working directory as branch .a.b.c.
91 $ hg ci -Aqm7
91 $ hg ci -Aqm7
92
92
93 $ hg branch all
93 $ hg branch all
94 marked working directory as branch all
94 marked working directory as branch all
95
95
96 $ hg co 4
96 $ hg co 4
97 0 files updated, 0 files merged, 0 files removed, 0 files unresolved
97 0 files updated, 0 files merged, 0 files removed, 0 files unresolved
98 $ hg branch é
98 $ hg branch é
99 marked working directory as branch \xc3\xa9 (esc)
99 marked working directory as branch \xc3\xa9 (esc)
100 $ hg ci -Aqm9
100 $ hg ci -Aqm9
101
101
102 $ hg tag -r6 1.0
102 $ hg tag -r6 1.0
103 $ hg bookmark -r6 xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx
103 $ hg bookmark -r6 xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx
104
104
105 $ hg clone --quiet -U -r 7 . ../remote1
105 $ hg clone --quiet -U -r 7 . ../remote1
106 $ hg clone --quiet -U -r 8 . ../remote2
106 $ hg clone --quiet -U -r 8 . ../remote2
107 $ echo "[paths]" >> .hg/hgrc
107 $ echo "[paths]" >> .hg/hgrc
108 $ echo "default = ../remote1" >> .hg/hgrc
108 $ echo "default = ../remote1" >> .hg/hgrc
109
109
110 trivial
110 trivial
111
111
112 $ try 0:1
112 $ try 0:1
113 (range
113 (range
114 ('symbol', '0')
114 ('symbol', '0')
115 ('symbol', '1'))
115 ('symbol', '1'))
116 * set:
116 * set:
117 <spanset+ 0:1>
117 <spanset+ 0:1>
118 0
118 0
119 1
119 1
120 $ try --optimize :
120 $ try --optimize :
121 (rangeall
121 (rangeall
122 None)
122 None)
123 * optimized:
123 * optimized:
124 (range
124 (range
125 ('string', '0')
125 ('string', '0')
126 ('string', 'tip'))
126 ('string', 'tip'))
127 * set:
127 * set:
128 <spanset+ 0:9>
128 <spanset+ 0:9>
129 0
129 0
130 1
130 1
131 2
131 2
132 3
132 3
133 4
133 4
134 5
134 5
135 6
135 6
136 7
136 7
137 8
137 8
138 9
138 9
139 $ try 3::6
139 $ try 3::6
140 (dagrange
140 (dagrange
141 ('symbol', '3')
141 ('symbol', '3')
142 ('symbol', '6'))
142 ('symbol', '6'))
143 * set:
143 * set:
144 <baseset+ [3, 5, 6]>
144 <baseset+ [3, 5, 6]>
145 3
145 3
146 5
146 5
147 6
147 6
148 $ try '0|1|2'
148 $ try '0|1|2'
149 (or
149 (or
150 ('symbol', '0')
150 ('symbol', '0')
151 ('symbol', '1')
151 ('symbol', '1')
152 ('symbol', '2'))
152 ('symbol', '2'))
153 * set:
153 * set:
154 <baseset [0, 1, 2]>
154 <baseset [0, 1, 2]>
155 0
155 0
156 1
156 1
157 2
157 2
158
158
159 names that should work without quoting
159 names that should work without quoting
160
160
161 $ try a
161 $ try a
162 ('symbol', 'a')
162 ('symbol', 'a')
163 * set:
163 * set:
164 <baseset [0]>
164 <baseset [0]>
165 0
165 0
166 $ try b-a
166 $ try b-a
167 (minus
167 (minus
168 ('symbol', 'b')
168 ('symbol', 'b')
169 ('symbol', 'a'))
169 ('symbol', 'a'))
170 * set:
170 * set:
171 <filteredset
171 <filteredset
172 <baseset [1]>>
172 <baseset [1]>>
173 1
173 1
174 $ try _a_b_c_
174 $ try _a_b_c_
175 ('symbol', '_a_b_c_')
175 ('symbol', '_a_b_c_')
176 * set:
176 * set:
177 <baseset [6]>
177 <baseset [6]>
178 6
178 6
179 $ try _a_b_c_-a
179 $ try _a_b_c_-a
180 (minus
180 (minus
181 ('symbol', '_a_b_c_')
181 ('symbol', '_a_b_c_')
182 ('symbol', 'a'))
182 ('symbol', 'a'))
183 * set:
183 * set:
184 <filteredset
184 <filteredset
185 <baseset [6]>>
185 <baseset [6]>>
186 6
186 6
187 $ try .a.b.c.
187 $ try .a.b.c.
188 ('symbol', '.a.b.c.')
188 ('symbol', '.a.b.c.')
189 * set:
189 * set:
190 <baseset [7]>
190 <baseset [7]>
191 7
191 7
192 $ try .a.b.c.-a
192 $ try .a.b.c.-a
193 (minus
193 (minus
194 ('symbol', '.a.b.c.')
194 ('symbol', '.a.b.c.')
195 ('symbol', 'a'))
195 ('symbol', 'a'))
196 * set:
196 * set:
197 <filteredset
197 <filteredset
198 <baseset [7]>>
198 <baseset [7]>>
199 7
199 7
200
200
201 names that should be caught by fallback mechanism
201 names that should be caught by fallback mechanism
202
202
203 $ try -- '-a-b-c-'
203 $ try -- '-a-b-c-'
204 ('symbol', '-a-b-c-')
204 ('symbol', '-a-b-c-')
205 * set:
205 * set:
206 <baseset [4]>
206 <baseset [4]>
207 4
207 4
208 $ log -a-b-c-
208 $ log -a-b-c-
209 4
209 4
210 $ try '+a+b+c+'
210 $ try '+a+b+c+'
211 ('symbol', '+a+b+c+')
211 ('symbol', '+a+b+c+')
212 * set:
212 * set:
213 <baseset [3]>
213 <baseset [3]>
214 3
214 3
215 $ try '+a+b+c+:'
215 $ try '+a+b+c+:'
216 (rangepost
216 (rangepost
217 ('symbol', '+a+b+c+'))
217 ('symbol', '+a+b+c+'))
218 * set:
218 * set:
219 <spanset+ 3:9>
219 <spanset+ 3:9>
220 3
220 3
221 4
221 4
222 5
222 5
223 6
223 6
224 7
224 7
225 8
225 8
226 9
226 9
227 $ try ':+a+b+c+'
227 $ try ':+a+b+c+'
228 (rangepre
228 (rangepre
229 ('symbol', '+a+b+c+'))
229 ('symbol', '+a+b+c+'))
230 * set:
230 * set:
231 <spanset+ 0:3>
231 <spanset+ 0:3>
232 0
232 0
233 1
233 1
234 2
234 2
235 3
235 3
236 $ try -- '-a-b-c-:+a+b+c+'
236 $ try -- '-a-b-c-:+a+b+c+'
237 (range
237 (range
238 ('symbol', '-a-b-c-')
238 ('symbol', '-a-b-c-')
239 ('symbol', '+a+b+c+'))
239 ('symbol', '+a+b+c+'))
240 * set:
240 * set:
241 <spanset- 3:4>
241 <spanset- 3:4>
242 4
242 4
243 3
243 3
244 $ log '-a-b-c-:+a+b+c+'
244 $ log '-a-b-c-:+a+b+c+'
245 4
245 4
246 3
246 3
247
247
248 $ try -- -a-b-c--a # complains
248 $ try -- -a-b-c--a # complains
249 (minus
249 (minus
250 (minus
250 (minus
251 (minus
251 (minus
252 (negate
252 (negate
253 ('symbol', 'a'))
253 ('symbol', 'a'))
254 ('symbol', 'b'))
254 ('symbol', 'b'))
255 ('symbol', 'c'))
255 ('symbol', 'c'))
256 (negate
256 (negate
257 ('symbol', 'a')))
257 ('symbol', 'a')))
258 abort: unknown revision '-a'!
258 abort: unknown revision '-a'!
259 [255]
259 [255]
$ try é
261 ('symbol', '\xc3\xa9')
261 ('symbol', '\xc3\xa9')
262 * set:
262 * set:
263 <baseset [9]>
263 <baseset [9]>
264 9
264 9
265
265
266 no quoting needed
266 no quoting needed
267
267
268 $ log ::a-b-c-
268 $ log ::a-b-c-
269 0
269 0
270 1
270 1
271 2
271 2
272
272
273 quoting needed
273 quoting needed
274
274
275 $ try '"-a-b-c-"-a'
275 $ try '"-a-b-c-"-a'
276 (minus
276 (minus
277 ('string', '-a-b-c-')
277 ('string', '-a-b-c-')
278 ('symbol', 'a'))
278 ('symbol', 'a'))
279 * set:
279 * set:
280 <filteredset
280 <filteredset
281 <baseset [4]>>
281 <baseset [4]>>
282 4
282 4
283
283
284 $ log '1 or 2'
284 $ log '1 or 2'
285 1
285 1
286 2
286 2
287 $ log '1|2'
287 $ log '1|2'
288 1
288 1
289 2
289 2
290 $ log '1 and 2'
290 $ log '1 and 2'
291 $ log '1&2'
291 $ log '1&2'
292 $ try '1&2|3' # precedence - and is higher
292 $ try '1&2|3' # precedence - and is higher
293 (or
293 (or
294 (and
294 (and
295 ('symbol', '1')
295 ('symbol', '1')
296 ('symbol', '2'))
296 ('symbol', '2'))
297 ('symbol', '3'))
297 ('symbol', '3'))
298 * set:
298 * set:
299 <addset
299 <addset
300 <baseset []>,
300 <baseset []>,
301 <baseset [3]>>
301 <baseset [3]>>
302 3
302 3
303 $ try '1|2&3'
303 $ try '1|2&3'
304 (or
304 (or
305 ('symbol', '1')
305 ('symbol', '1')
306 (and
306 (and
307 ('symbol', '2')
307 ('symbol', '2')
308 ('symbol', '3')))
308 ('symbol', '3')))
309 * set:
309 * set:
310 <addset
310 <addset
311 <baseset [1]>,
311 <baseset [1]>,
312 <baseset []>>
312 <baseset []>>
313 1
313 1
314 $ try '1&2&3' # associativity
314 $ try '1&2&3' # associativity
315 (and
315 (and
316 (and
316 (and
317 ('symbol', '1')
317 ('symbol', '1')
318 ('symbol', '2'))
318 ('symbol', '2'))
319 ('symbol', '3'))
319 ('symbol', '3'))
320 * set:
320 * set:
321 <baseset []>
321 <baseset []>
322 $ try '1|(2|3)'
322 $ try '1|(2|3)'
323 (or
323 (or
324 ('symbol', '1')
324 ('symbol', '1')
325 (group
325 (group
326 (or
326 (or
327 ('symbol', '2')
327 ('symbol', '2')
328 ('symbol', '3'))))
328 ('symbol', '3'))))
329 * set:
329 * set:
330 <addset
330 <addset
331 <baseset [1]>,
331 <baseset [1]>,
332 <baseset [2, 3]>>
332 <baseset [2, 3]>>
333 1
333 1
334 2
334 2
335 3
335 3
336 $ log '1.0' # tag
336 $ log '1.0' # tag
337 6
337 6
338 $ log 'a' # branch
338 $ log 'a' # branch
339 0
339 0
340 $ log '2785f51ee'
340 $ log '2785f51ee'
341 0
341 0
342 $ log 'date(2005)'
342 $ log 'date(2005)'
343 4
343 4
344 $ log 'date(this is a test)'
344 $ log 'date(this is a test)'
345 hg: parse error at 10: unexpected token: symbol
345 hg: parse error at 10: unexpected token: symbol
346 [255]
346 [255]
347 $ log 'date()'
347 $ log 'date()'
348 hg: parse error: date requires a string
348 hg: parse error: date requires a string
349 [255]
349 [255]
350 $ log 'date'
350 $ log 'date'
351 abort: unknown revision 'date'!
351 abort: unknown revision 'date'!
352 [255]
352 [255]
353 $ log 'date('
353 $ log 'date('
354 hg: parse error at 5: not a prefix: end
354 hg: parse error at 5: not a prefix: end
355 [255]
355 [255]
356 $ log 'date("\xy")'
356 $ log 'date("\xy")'
357 hg: parse error: invalid \x escape
357 hg: parse error: invalid \x escape
358 [255]
358 [255]
359 $ log 'date(tip)'
359 $ log 'date(tip)'
360 abort: invalid date: 'tip'
360 abort: invalid date: 'tip'
361 [255]
361 [255]
362 $ log '0:date'
362 $ log '0:date'
363 abort: unknown revision 'date'!
363 abort: unknown revision 'date'!
364 [255]
364 [255]
365 $ log '::"date"'
365 $ log '::"date"'
366 abort: unknown revision 'date'!
366 abort: unknown revision 'date'!
367 [255]
367 [255]
368 $ hg book date -r 4
368 $ hg book date -r 4
369 $ log '0:date'
369 $ log '0:date'
370 0
370 0
371 1
371 1
372 2
372 2
373 3
373 3
374 4
374 4
375 $ log '::date'
375 $ log '::date'
376 0
376 0
377 1
377 1
378 2
378 2
379 4
379 4
380 $ log '::"date"'
380 $ log '::"date"'
381 0
381 0
382 1
382 1
383 2
383 2
384 4
384 4
385 $ log 'date(2005) and 1::'
385 $ log 'date(2005) and 1::'
386 4
386 4
387 $ hg book -d date
387 $ hg book -d date
388
388
389 keyword arguments
389 keyword arguments
390
390
391 $ log 'extra(branch, value=a)'
391 $ log 'extra(branch, value=a)'
392 0
392 0
393
393
394 $ log 'extra(branch, a, b)'
394 $ log 'extra(branch, a, b)'
395 hg: parse error: extra takes at most 2 arguments
395 hg: parse error: extra takes at most 2 arguments
396 [255]
396 [255]
397 $ log 'extra(a, label=b)'
397 $ log 'extra(a, label=b)'
398 hg: parse error: extra got multiple values for keyword argument 'label'
398 hg: parse error: extra got multiple values for keyword argument 'label'
399 [255]
399 [255]
400 $ log 'extra(label=branch, default)'
400 $ log 'extra(label=branch, default)'
401 hg: parse error: extra got an invalid argument
401 hg: parse error: extra got an invalid argument
402 [255]
402 [255]
403 $ log 'extra(branch, foo+bar=baz)'
403 $ log 'extra(branch, foo+bar=baz)'
404 hg: parse error: extra got an invalid argument
404 hg: parse error: extra got an invalid argument
405 [255]
405 [255]
406 $ log 'extra(unknown=branch)'
406 $ log 'extra(unknown=branch)'
407 hg: parse error: extra got an unexpected keyword argument 'unknown'
407 hg: parse error: extra got an unexpected keyword argument 'unknown'
408 [255]
408 [255]
409
409
410 $ try 'foo=bar|baz'
410 $ try 'foo=bar|baz'
411 (keyvalue
411 (keyvalue
412 ('symbol', 'foo')
412 ('symbol', 'foo')
413 (or
413 (or
414 ('symbol', 'bar')
414 ('symbol', 'bar')
415 ('symbol', 'baz')))
415 ('symbol', 'baz')))
416 hg: parse error: can't use a key-value pair in this context
416 hg: parse error: can't use a key-value pair in this context
417 [255]
417 [255]
418
418
419 Test that symbols only get parsed as functions if there's an opening
419 Test that symbols only get parsed as functions if there's an opening
420 parenthesis.
420 parenthesis.
421
421
422 $ hg book only -r 9
422 $ hg book only -r 9
423 $ log 'only(only)' # Outer "only" is a function, inner "only" is the bookmark
423 $ log 'only(only)' # Outer "only" is a function, inner "only" is the bookmark
424 8
424 8
425 9
425 9
426
426
427 ancestor can accept 0 or more arguments
427 ancestor can accept 0 or more arguments
428
428
429 $ log 'ancestor()'
429 $ log 'ancestor()'
430 $ log 'ancestor(1)'
430 $ log 'ancestor(1)'
431 1
431 1
432 $ log 'ancestor(4,5)'
432 $ log 'ancestor(4,5)'
433 1
433 1
434 $ log 'ancestor(4,5) and 4'
434 $ log 'ancestor(4,5) and 4'
435 $ log 'ancestor(0,0,1,3)'
435 $ log 'ancestor(0,0,1,3)'
436 0
436 0
437 $ log 'ancestor(3,1,5,3,5,1)'
437 $ log 'ancestor(3,1,5,3,5,1)'
438 1
438 1
439 $ log 'ancestor(0,1,3,5)'
439 $ log 'ancestor(0,1,3,5)'
440 0
440 0
441 $ log 'ancestor(1,2,3,4,5)'
441 $ log 'ancestor(1,2,3,4,5)'
442 1
442 1
443
443
444 test ancestors
444 test ancestors
445
445
446 $ log 'ancestors(5)'
446 $ log 'ancestors(5)'
447 0
447 0
448 1
448 1
449 3
449 3
450 5
450 5
451 $ log 'ancestor(ancestors(5))'
451 $ log 'ancestor(ancestors(5))'
452 0
452 0
453 $ log '::r3232()'
453 $ log '::r3232()'
454 0
454 0
455 1
455 1
456 2
456 2
457 3
457 3
458
458
459 $ log 'author(bob)'
459 $ log 'author(bob)'
460 2
460 2
461 $ log 'author("re:bob|test")'
461 $ log 'author("re:bob|test")'
462 0
462 0
463 1
463 1
464 2
464 2
465 3
465 3
466 4
466 4
467 5
467 5
468 6
468 6
469 7
469 7
470 8
470 8
471 9
471 9
$ log 'branch(é)'
473 8
473 8
474 9
474 9
475 $ log 'branch(a)'
475 $ log 'branch(a)'
476 0
476 0
477 $ hg log -r 'branch("re:a")' --template '{rev} {branch}\n'
477 $ hg log -r 'branch("re:a")' --template '{rev} {branch}\n'
478 0 a
478 0 a
479 2 a-b-c-
479 2 a-b-c-
480 3 +a+b+c+
480 3 +a+b+c+
481 4 -a-b-c-
481 4 -a-b-c-
482 5 !a/b/c/
482 5 !a/b/c/
483 6 _a_b_c_
483 6 _a_b_c_
484 7 .a.b.c.
484 7 .a.b.c.
485 $ log 'children(ancestor(4,5))'
485 $ log 'children(ancestor(4,5))'
486 2
486 2
487 3
487 3
488 $ log 'closed()'
488 $ log 'closed()'
489 $ log 'contains(a)'
489 $ log 'contains(a)'
490 0
490 0
491 1
491 1
492 3
492 3
493 5
493 5
494 $ log 'contains("../repo/a")'
494 $ log 'contains("../repo/a")'
495 0
495 0
496 1
496 1
497 3
497 3
498 5
498 5
499 $ log 'desc(B)'
499 $ log 'desc(B)'
500 5
500 5
501 $ log 'descendants(2 or 3)'
501 $ log 'descendants(2 or 3)'
502 2
502 2
503 3
503 3
504 4
504 4
505 5
505 5
506 6
506 6
507 7
507 7
508 8
508 8
509 9
509 9
510 $ log 'file("b*")'
510 $ log 'file("b*")'
511 1
511 1
512 4
512 4
513 $ log 'filelog("b")'
513 $ log 'filelog("b")'
514 1
514 1
515 4
515 4
516 $ log 'filelog("../repo/b")'
516 $ log 'filelog("../repo/b")'
517 1
517 1
518 4
518 4
519 $ log 'follow()'
519 $ log 'follow()'
520 0
520 0
521 1
521 1
522 2
522 2
523 4
523 4
524 8
524 8
525 9
525 9
526 $ log 'grep("issue\d+")'
526 $ log 'grep("issue\d+")'
527 6
527 6
528 $ try 'grep("(")' # invalid regular expression
528 $ try 'grep("(")' # invalid regular expression
529 (func
529 (func
530 ('symbol', 'grep')
530 ('symbol', 'grep')
531 ('string', '('))
531 ('string', '('))
532 hg: parse error: invalid match pattern: unbalanced parenthesis
532 hg: parse error: invalid match pattern: unbalanced parenthesis
533 [255]
533 [255]
534 $ try 'grep("\bissue\d+")'
534 $ try 'grep("\bissue\d+")'
535 (func
535 (func
536 ('symbol', 'grep')
536 ('symbol', 'grep')
537 ('string', '\x08issue\\d+'))
537 ('string', '\x08issue\\d+'))
538 * set:
538 * set:
539 <filteredset
539 <filteredset
540 <fullreposet+ 0:9>>
540 <fullreposet+ 0:9>>
541 $ try 'grep(r"\bissue\d+")'
541 $ try 'grep(r"\bissue\d+")'
542 (func
542 (func
543 ('symbol', 'grep')
543 ('symbol', 'grep')
544 ('string', '\\bissue\\d+'))
544 ('string', '\\bissue\\d+'))
545 * set:
545 * set:
546 <filteredset
546 <filteredset
547 <fullreposet+ 0:9>>
547 <fullreposet+ 0:9>>
548 6
548 6
549 $ try 'grep(r"\")'
549 $ try 'grep(r"\")'
550 hg: parse error at 7: unterminated string
550 hg: parse error at 7: unterminated string
551 [255]
551 [255]
552 $ log 'head()'
552 $ log 'head()'
553 0
553 0
554 1
554 1
555 2
555 2
556 3
556 3
557 4
557 4
558 5
558 5
559 6
559 6
560 7
560 7
561 9
561 9
562 $ log 'heads(6::)'
562 $ log 'heads(6::)'
563 7
563 7
564 $ log 'keyword(issue)'
564 $ log 'keyword(issue)'
565 6
565 6
566 $ log 'keyword("test a")'
566 $ log 'keyword("test a")'
567 $ log 'limit(head(), 1)'
567 $ log 'limit(head(), 1)'
568 0
568 0
569 $ log 'limit(author("re:bob|test"), 3, 5)'
569 $ log 'limit(author("re:bob|test"), 3, 5)'
570 5
570 5
571 6
571 6
572 7
572 7
573 $ log 'limit(author("re:bob|test"), offset=6)'
573 $ log 'limit(author("re:bob|test"), offset=6)'
574 6
574 6
575 $ log 'limit(author("re:bob|test"), offset=10)'
575 $ log 'limit(author("re:bob|test"), offset=10)'
576 $ log 'limit(all(), 1, -1)'
576 $ log 'limit(all(), 1, -1)'
577 hg: parse error: negative offset
577 hg: parse error: negative offset
578 [255]
578 [255]
579 $ log 'matching(6)'
579 $ log 'matching(6)'
580 6
580 6
581 $ log 'matching(6:7, "phase parents user date branch summary files description substate")'
581 $ log 'matching(6:7, "phase parents user date branch summary files description substate")'
582 6
582 6
583 7
583 7
584
584
585 Testing min and max
585 Testing min and max
586
586
587 max: simple
587 max: simple
588
588
589 $ log 'max(contains(a))'
589 $ log 'max(contains(a))'
590 5
590 5
591
591
max: simple on unordered set
593
593
594 $ log 'max((4+0+2+5+7) and contains(a))'
594 $ log 'max((4+0+2+5+7) and contains(a))'
595 5
595 5
596
596
597 max: no result
597 max: no result
598
598
599 $ log 'max(contains(stringthatdoesnotappearanywhere))'
599 $ log 'max(contains(stringthatdoesnotappearanywhere))'
600
600
601 max: no result on unordered set
601 max: no result on unordered set
602
602
603 $ log 'max((4+0+2+5+7) and contains(stringthatdoesnotappearanywhere))'
603 $ log 'max((4+0+2+5+7) and contains(stringthatdoesnotappearanywhere))'
604
604
605 min: simple
605 min: simple
606
606
607 $ log 'min(contains(a))'
607 $ log 'min(contains(a))'
608 0
608 0
609
609
610 min: simple on unordered set
610 min: simple on unordered set
611
611
612 $ log 'min((4+0+2+5+7) and contains(a))'
612 $ log 'min((4+0+2+5+7) and contains(a))'
613 0
613 0
614
614
615 min: empty
615 min: empty
616
616
617 $ log 'min(contains(stringthatdoesnotappearanywhere))'
617 $ log 'min(contains(stringthatdoesnotappearanywhere))'
618
618
619 min: empty on unordered set
619 min: empty on unordered set
620
620
621 $ log 'min((4+0+2+5+7) and contains(stringthatdoesnotappearanywhere))'
621 $ log 'min((4+0+2+5+7) and contains(stringthatdoesnotappearanywhere))'
622
622
623
623
624 $ log 'merge()'
624 $ log 'merge()'
625 6
625 6
626 $ log 'branchpoint()'
626 $ log 'branchpoint()'
627 1
627 1
628 4
628 4
629 $ log 'modifies(b)'
629 $ log 'modifies(b)'
630 4
630 4
631 $ log 'modifies("path:b")'
631 $ log 'modifies("path:b")'
632 4
632 4
633 $ log 'modifies("*")'
633 $ log 'modifies("*")'
634 4
634 4
635 6
635 6
636 $ log 'modifies("set:modified()")'
636 $ log 'modifies("set:modified()")'
637 4
637 4
638 $ log 'id(5)'
638 $ log 'id(5)'
639 2
639 2
640 $ log 'only(9)'
640 $ log 'only(9)'
641 8
641 8
642 9
642 9
643 $ log 'only(8)'
643 $ log 'only(8)'
644 8
644 8
645 $ log 'only(9, 5)'
645 $ log 'only(9, 5)'
646 2
646 2
647 4
647 4
648 8
648 8
649 9
649 9
650 $ log 'only(7 + 9, 5 + 2)'
650 $ log 'only(7 + 9, 5 + 2)'
651 4
651 4
652 6
652 6
653 7
653 7
654 8
654 8
655 9
655 9
656
656
657 Test empty set input
657 Test empty set input
658 $ log 'only(p2())'
658 $ log 'only(p2())'
659 $ log 'only(p1(), p2())'
659 $ log 'only(p1(), p2())'
660 0
660 0
661 1
661 1
662 2
662 2
663 4
663 4
664 8
664 8
665 9
665 9
666
666
667 Test '%' operator
667 Test '%' operator
668
668
669 $ log '9%'
669 $ log '9%'
670 8
670 8
671 9
671 9
672 $ log '9%5'
672 $ log '9%5'
673 2
673 2
674 4
674 4
675 8
675 8
676 9
676 9
677 $ log '(7 + 9)%(5 + 2)'
677 $ log '(7 + 9)%(5 + 2)'
678 4
678 4
679 6
679 6
680 7
680 7
681 8
681 8
682 9
682 9
683
683
Test operand of '%' is optimized recursively (issue4670)
685
685
686 $ try --optimize '8:9-8%'
686 $ try --optimize '8:9-8%'
687 (onlypost
687 (onlypost
688 (minus
688 (minus
689 (range
689 (range
690 ('symbol', '8')
690 ('symbol', '8')
691 ('symbol', '9'))
691 ('symbol', '9'))
692 ('symbol', '8')))
692 ('symbol', '8')))
693 * optimized:
693 * optimized:
694 (func
694 (func
695 ('symbol', 'only')
695 ('symbol', 'only')
696 (difference
696 (difference
697 (range
697 (range
698 ('symbol', '8')
698 ('symbol', '8')
699 ('symbol', '9'))
699 ('symbol', '9'))
700 ('symbol', '8')))
700 ('symbol', '8')))
701 * set:
701 * set:
702 <baseset+ [8, 9]>
702 <baseset+ [8, 9]>
703 8
703 8
704 9
704 9
705 $ try --optimize '(9)%(5)'
705 $ try --optimize '(9)%(5)'
706 (only
706 (only
707 (group
707 (group
708 ('symbol', '9'))
708 ('symbol', '9'))
709 (group
709 (group
710 ('symbol', '5')))
710 ('symbol', '5')))
711 * optimized:
711 * optimized:
712 (func
712 (func
713 ('symbol', 'only')
713 ('symbol', 'only')
714 (list
714 (list
715 ('symbol', '9')
715 ('symbol', '9')
716 ('symbol', '5')))
716 ('symbol', '5')))
717 * set:
717 * set:
718 <baseset+ [8, 9, 2, 4]>
718 <baseset+ [8, 9, 2, 4]>
719 2
719 2
720 4
720 4
721 8
721 8
722 9
722 9
723
723
724 Test the order of operations
724 Test the order of operations
725
725
726 $ log '7 + 9%5 + 2'
726 $ log '7 + 9%5 + 2'
727 7
727 7
728 2
728 2
729 4
729 4
730 8
730 8
731 9
731 9
732
732
733 Test explicit numeric revision
733 Test explicit numeric revision
734 $ log 'rev(-2)'
734 $ log 'rev(-2)'
735 $ log 'rev(-1)'
735 $ log 'rev(-1)'
736 -1
736 -1
737 $ log 'rev(0)'
737 $ log 'rev(0)'
738 0
738 0
739 $ log 'rev(9)'
739 $ log 'rev(9)'
740 9
740 9
741 $ log 'rev(10)'
741 $ log 'rev(10)'
742 $ log 'rev(tip)'
742 $ log 'rev(tip)'
743 hg: parse error: rev expects a number
743 hg: parse error: rev expects a number
744 [255]
744 [255]
745
745
746 Test hexadecimal revision
746 Test hexadecimal revision
747 $ log 'id(2)'
747 $ log 'id(2)'
748 abort: 00changelog.i@2: ambiguous identifier!
748 abort: 00changelog.i@2: ambiguous identifier!
749 [255]
749 [255]
750 $ log 'id(23268)'
750 $ log 'id(23268)'
751 4
751 4
752 $ log 'id(2785f51eece)'
752 $ log 'id(2785f51eece)'
753 0
753 0
754 $ log 'id(d5d0dcbdc4d9ff5dbb2d336f32f0bb561c1a532c)'
754 $ log 'id(d5d0dcbdc4d9ff5dbb2d336f32f0bb561c1a532c)'
755 8
755 8
756 $ log 'id(d5d0dcbdc4a)'
756 $ log 'id(d5d0dcbdc4a)'
757 $ log 'id(d5d0dcbdc4w)'
757 $ log 'id(d5d0dcbdc4w)'
758 $ log 'id(d5d0dcbdc4d9ff5dbb2d336f32f0bb561c1a532d)'
758 $ log 'id(d5d0dcbdc4d9ff5dbb2d336f32f0bb561c1a532d)'
759 $ log 'id(d5d0dcbdc4d9ff5dbb2d336f32f0bb561c1a532q)'
759 $ log 'id(d5d0dcbdc4d9ff5dbb2d336f32f0bb561c1a532q)'
760 $ log 'id(1.0)'
760 $ log 'id(1.0)'
761 $ log 'id(xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx)'
761 $ log 'id(xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx)'
762
762
763 Test null revision
763 Test null revision
764 $ log '(null)'
764 $ log '(null)'
765 -1
765 -1
766 $ log '(null:0)'
766 $ log '(null:0)'
767 -1
767 -1
768 0
768 0
769 $ log '(0:null)'
769 $ log '(0:null)'
770 0
770 0
771 -1
771 -1
772 $ log 'null::0'
772 $ log 'null::0'
773 -1
773 -1
774 0
774 0
775 $ log 'null:tip - 0:'
775 $ log 'null:tip - 0:'
776 -1
776 -1
777 $ log 'null: and null::' | head -1
777 $ log 'null: and null::' | head -1
778 -1
778 -1
779 $ log 'null: or 0:' | head -2
779 $ log 'null: or 0:' | head -2
780 -1
780 -1
781 0
781 0
782 $ log 'ancestors(null)'
782 $ log 'ancestors(null)'
783 -1
783 -1
784 $ log 'reverse(null:)' | tail -2
784 $ log 'reverse(null:)' | tail -2
785 0
785 0
786 -1
786 -1
787 BROKEN: should be '-1'
787 BROKEN: should be '-1'
788 $ log 'first(null:)'
788 $ log 'first(null:)'
789 BROKEN: should be '-1'
789 BROKEN: should be '-1'
790 $ log 'min(null:)'
790 $ log 'min(null:)'
791 $ log 'tip:null and all()' | tail -2
791 $ log 'tip:null and all()' | tail -2
792 1
792 1
793 0
793 0
794
794
795 Test working-directory revision
795 Test working-directory revision
796 $ hg debugrevspec 'wdir()'
796 $ hg debugrevspec 'wdir()'
797 2147483647
797 2147483647
798 $ hg debugrevspec 'tip or wdir()'
798 $ hg debugrevspec 'tip or wdir()'
799 9
799 9
800 2147483647
800 2147483647
801 $ hg debugrevspec '0:tip and wdir()'
801 $ hg debugrevspec '0:tip and wdir()'
802 $ log '0:wdir()' | tail -3
802 $ log '0:wdir()' | tail -3
803 8
803 8
804 9
804 9
805 2147483647
805 2147483647
806 $ log 'wdir():0' | head -3
806 $ log 'wdir():0' | head -3
807 2147483647
807 2147483647
808 9
808 9
809 8
809 8
810 $ log 'wdir():wdir()'
810 $ log 'wdir():wdir()'
811 2147483647
811 2147483647
812 $ log '(all() + wdir()) & min(. + wdir())'
812 $ log '(all() + wdir()) & min(. + wdir())'
813 9
813 9
814 $ log '(all() + wdir()) & max(. + wdir())'
814 $ log '(all() + wdir()) & max(. + wdir())'
815 2147483647
815 2147483647
816 $ log '(all() + wdir()) & first(wdir() + .)'
816 $ log '(all() + wdir()) & first(wdir() + .)'
817 2147483647
817 2147483647
818 $ log '(all() + wdir()) & last(. + wdir())'
818 $ log '(all() + wdir()) & last(. + wdir())'
819 2147483647
819 2147483647
820
820
821 $ log 'outgoing()'
821 $ log 'outgoing()'
822 8
822 8
823 9
823 9
824 $ log 'outgoing("../remote1")'
824 $ log 'outgoing("../remote1")'
825 8
825 8
826 9
826 9
827 $ log 'outgoing("../remote2")'
827 $ log 'outgoing("../remote2")'
828 3
828 3
829 5
829 5
830 6
830 6
831 7
831 7
832 9
832 9
833 $ log 'p1(merge())'
833 $ log 'p1(merge())'
834 5
834 5
835 $ log 'p2(merge())'
835 $ log 'p2(merge())'
836 4
836 4
837 $ log 'parents(merge())'
837 $ log 'parents(merge())'
838 4
838 4
839 5
839 5
840 $ log 'p1(branchpoint())'
840 $ log 'p1(branchpoint())'
841 0
841 0
842 2
842 2
843 $ log 'p2(branchpoint())'
843 $ log 'p2(branchpoint())'
844 $ log 'parents(branchpoint())'
844 $ log 'parents(branchpoint())'
845 0
845 0
846 2
846 2
847 $ log 'removes(a)'
847 $ log 'removes(a)'
848 2
848 2
849 6
849 6
850 $ log 'roots(all())'
850 $ log 'roots(all())'
851 0
851 0
852 $ log 'reverse(2 or 3 or 4 or 5)'
852 $ log 'reverse(2 or 3 or 4 or 5)'
853 5
853 5
854 4
854 4
855 3
855 3
856 2
856 2
857 $ log 'reverse(all())'
857 $ log 'reverse(all())'
858 9
858 9
859 8
859 8
860 7
860 7
861 6
861 6
862 5
862 5
863 4
863 4
864 3
864 3
865 2
865 2
866 1
866 1
867 0
867 0
868 $ log 'reverse(all()) & filelog(b)'
868 $ log 'reverse(all()) & filelog(b)'
869 4
869 4
870 1
870 1
871 $ log 'rev(5)'
871 $ log 'rev(5)'
872 5
872 5
873 $ log 'sort(limit(reverse(all()), 3))'
873 $ log 'sort(limit(reverse(all()), 3))'
874 7
874 7
875 8
875 8
876 9
876 9
877 $ log 'sort(2 or 3 or 4 or 5, date)'
877 $ log 'sort(2 or 3 or 4 or 5, date)'
878 2
878 2
879 3
879 3
880 5
880 5
881 4
881 4
882 $ log 'tagged()'
882 $ log 'tagged()'
883 6
883 6
884 $ log 'tag()'
884 $ log 'tag()'
885 6
885 6
886 $ log 'tag(1.0)'
886 $ log 'tag(1.0)'
887 6
887 6
888 $ log 'tag(tip)'
888 $ log 'tag(tip)'
889 9
889 9
890
890
891 test sort revset
891 test sort revset
892 --------------------------------------------
892 --------------------------------------------
893
893
894 test when adding two unordered revsets
894 test when adding two unordered revsets
895
895
896 $ log 'sort(keyword(issue) or modifies(b))'
896 $ log 'sort(keyword(issue) or modifies(b))'
897 4
897 4
898 6
898 6
899
899
test sorting a reversed collection in the same (descending) order it already has
901
901
902 $ log 'sort(reverse(all()), -rev)'
902 $ log 'sort(reverse(all()), -rev)'
903 9
903 9
904 8
904 8
905 7
905 7
906 6
906 6
907 5
907 5
908 4
908 4
909 3
909 3
910 2
910 2
911 1
911 1
912 0
912 0
913
913
914 test when sorting a reversed collection
914 test when sorting a reversed collection
915
915
916 $ log 'sort(reverse(all()), rev)'
916 $ log 'sort(reverse(all()), rev)'
917 0
917 0
918 1
918 1
919 2
919 2
920 3
920 3
921 4
921 4
922 5
922 5
923 6
923 6
924 7
924 7
925 8
925 8
926 9
926 9
927
927
928
928
929 test sorting two sorted collections in different orders
929 test sorting two sorted collections in different orders
930
930
931 $ log 'sort(outgoing() or reverse(removes(a)), rev)'
931 $ log 'sort(outgoing() or reverse(removes(a)), rev)'
932 2
932 2
933 6
933 6
934 8
934 8
935 9
935 9
936
936
937 test sorting two sorted collections in different orders backwards
937 test sorting two sorted collections in different orders backwards
938
938
939 $ log 'sort(outgoing() or reverse(removes(a)), -rev)'
939 $ log 'sort(outgoing() or reverse(removes(a)), -rev)'
940 9
940 9
941 8
941 8
942 6
942 6
943 2
943 2
944
944
945 test subtracting something from an addset
945 test subtracting something from an addset
946
946
947 $ log '(outgoing() or removes(a)) - removes(a)'
947 $ log '(outgoing() or removes(a)) - removes(a)'
948 8
948 8
949 9
949 9
950
950
951 test intersecting something with an addset
951 test intersecting something with an addset
952
952
953 $ log 'parents(outgoing() or removes(a))'
953 $ log 'parents(outgoing() or removes(a))'
954 1
954 1
955 4
955 4
956 5
956 5
957 8
957 8
958
958
959 test that `or` operation combines elements in the right order:
959 test that `or` operation combines elements in the right order:
960
960
961 $ log '3:4 or 2:5'
961 $ log '3:4 or 2:5'
962 3
962 3
963 4
963 4
964 2
964 2
965 5
965 5
966 $ log '3:4 or 5:2'
966 $ log '3:4 or 5:2'
967 3
967 3
968 4
968 4
969 5
969 5
970 2
970 2
971 $ log 'sort(3:4 or 2:5)'
971 $ log 'sort(3:4 or 2:5)'
972 2
972 2
973 3
973 3
974 4
974 4
975 5
975 5
976 $ log 'sort(3:4 or 5:2)'
976 $ log 'sort(3:4 or 5:2)'
977 2
977 2
978 3
978 3
979 4
979 4
980 5
980 5
981
981
test that multiple `-r` options are combined in the right order and deduplicated:
983
983
984 $ hg log -T '{rev}\n' -r 3 -r 3 -r 4 -r 5:2 -r 'ancestors(4)'
984 $ hg log -T '{rev}\n' -r 3 -r 3 -r 4 -r 5:2 -r 'ancestors(4)'
985 3
985 3
986 4
986 4
987 5
987 5
988 2
988 2
989 0
989 0
990 1
990 1
991
991
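(Aside, not part of the test script: a minimal Python sketch of the combination rule exercised just above. The helper name ordered_union is invented for illustration and is not a Mercurial API; it keeps only the first occurrence of each revision while preserving the order in which the -r operands were given.)

def ordered_union(*revlists):
    # keep the first occurrence of each revision, in operand order
    seen, out = set(), []
    for revs in revlists:
        for r in revs:
            if r not in seen:
                seen.add(r)
                out.append(r)
    return out

# -r 3 -r 3 -r 4 -r 5:2 -r 'ancestors(4)' from the command above
print(ordered_union([3], [3], [4], [5, 4, 3, 2], [0, 1, 2, 3, 4]))
# prints [3, 4, 5, 2, 0, 1], matching the hg log output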
992 test that `or` operation skips duplicated revisions from right-hand side
992 test that `or` operation skips duplicated revisions from right-hand side
993
993
994 $ try 'reverse(1::5) or ancestors(4)'
994 $ try 'reverse(1::5) or ancestors(4)'
995 (or
995 (or
996 (func
996 (func
997 ('symbol', 'reverse')
997 ('symbol', 'reverse')
998 (dagrange
998 (dagrange
999 ('symbol', '1')
999 ('symbol', '1')
1000 ('symbol', '5')))
1000 ('symbol', '5')))
1001 (func
1001 (func
1002 ('symbol', 'ancestors')
1002 ('symbol', 'ancestors')
1003 ('symbol', '4')))
1003 ('symbol', '4')))
1004 * set:
1004 * set:
1005 <addset
1005 <addset
1006 <baseset- [1, 3, 5]>,
1006 <baseset- [1, 3, 5]>,
1007 <generatorset+>>
1007 <generatorset+>>
1008 5
1008 5
1009 3
1009 3
1010 1
1010 1
1011 0
1011 0
1012 2
1012 2
1013 4
1013 4
1014 $ try 'sort(ancestors(4) or reverse(1::5))'
1014 $ try 'sort(ancestors(4) or reverse(1::5))'
1015 (func
1015 (func
1016 ('symbol', 'sort')
1016 ('symbol', 'sort')
1017 (or
1017 (or
1018 (func
1018 (func
1019 ('symbol', 'ancestors')
1019 ('symbol', 'ancestors')
1020 ('symbol', '4'))
1020 ('symbol', '4'))
1021 (func
1021 (func
1022 ('symbol', 'reverse')
1022 ('symbol', 'reverse')
1023 (dagrange
1023 (dagrange
1024 ('symbol', '1')
1024 ('symbol', '1')
1025 ('symbol', '5')))))
1025 ('symbol', '5')))))
1026 * set:
1026 * set:
1027 <addset+
1027 <addset+
1028 <generatorset+>,
1028 <generatorset+>,
1029 <baseset- [1, 3, 5]>>
1029 <baseset- [1, 3, 5]>>
1030 0
1030 0
1031 1
1031 1
1032 2
1032 2
1033 3
1033 3
1034 4
1034 4
1035 5
1035 5
1036
1036
1037 test optimization of trivial `or` operation
1037 test optimization of trivial `or` operation
1038
1038
1039 $ try --optimize '0|(1)|"2"|-2|tip|null'
1039 $ try --optimize '0|(1)|"2"|-2|tip|null'
1040 (or
1040 (or
1041 ('symbol', '0')
1041 ('symbol', '0')
1042 (group
1042 (group
1043 ('symbol', '1'))
1043 ('symbol', '1'))
1044 ('string', '2')
1044 ('string', '2')
1045 (negate
1045 (negate
1046 ('symbol', '2'))
1046 ('symbol', '2'))
1047 ('symbol', 'tip')
1047 ('symbol', 'tip')
1048 ('symbol', 'null'))
1048 ('symbol', 'null'))
1049 * optimized:
1049 * optimized:
1050 (func
1050 (func
1051 ('symbol', '_list')
1051 ('symbol', '_list')
1052 ('string', '0\x001\x002\x00-2\x00tip\x00null'))
1052 ('string', '0\x001\x002\x00-2\x00tip\x00null'))
1053 * set:
1053 * set:
1054 <baseset [0, 1, 2, 8, 9, -1]>
1054 <baseset [0, 1, 2, 8, 9, -1]>
1055 0
1055 0
1056 1
1056 1
1057 2
1057 2
1058 8
1058 8
1059 9
1059 9
1060 -1
1060 -1
1061
1061
1062 $ try --optimize '0|1|2:3'
1062 $ try --optimize '0|1|2:3'
1063 (or
1063 (or
1064 ('symbol', '0')
1064 ('symbol', '0')
1065 ('symbol', '1')
1065 ('symbol', '1')
1066 (range
1066 (range
1067 ('symbol', '2')
1067 ('symbol', '2')
1068 ('symbol', '3')))
1068 ('symbol', '3')))
1069 * optimized:
1069 * optimized:
1070 (or
1070 (or
1071 (func
1071 (func
1072 ('symbol', '_list')
1072 ('symbol', '_list')
1073 ('string', '0\x001'))
1073 ('string', '0\x001'))
1074 (range
1074 (range
1075 ('symbol', '2')
1075 ('symbol', '2')
1076 ('symbol', '3')))
1076 ('symbol', '3')))
1077 * set:
1077 * set:
1078 <addset
1078 <addset
1079 <baseset [0, 1]>,
1079 <baseset [0, 1]>,
1080 <spanset+ 2:3>>
1080 <spanset+ 2:3>>
1081 0
1081 0
1082 1
1082 1
1083 2
1083 2
1084 3
1084 3
1085
1085
1086 $ try --optimize '0:1|2|3:4|5|6'
1086 $ try --optimize '0:1|2|3:4|5|6'
1087 (or
1087 (or
1088 (range
1088 (range
1089 ('symbol', '0')
1089 ('symbol', '0')
1090 ('symbol', '1'))
1090 ('symbol', '1'))
1091 ('symbol', '2')
1091 ('symbol', '2')
1092 (range
1092 (range
1093 ('symbol', '3')
1093 ('symbol', '3')
1094 ('symbol', '4'))
1094 ('symbol', '4'))
1095 ('symbol', '5')
1095 ('symbol', '5')
1096 ('symbol', '6'))
1096 ('symbol', '6'))
1097 * optimized:
1097 * optimized:
1098 (or
1098 (or
1099 (range
1099 (range
1100 ('symbol', '0')
1100 ('symbol', '0')
1101 ('symbol', '1'))
1101 ('symbol', '1'))
1102 ('symbol', '2')
1102 ('symbol', '2')
1103 (range
1103 (range
1104 ('symbol', '3')
1104 ('symbol', '3')
1105 ('symbol', '4'))
1105 ('symbol', '4'))
1106 (func
1106 (func
1107 ('symbol', '_list')
1107 ('symbol', '_list')
1108 ('string', '5\x006')))
1108 ('string', '5\x006')))
1109 * set:
1109 * set:
1110 <addset
1110 <addset
1111 <addset
1111 <addset
1112 <spanset+ 0:1>,
1112 <spanset+ 0:1>,
1113 <baseset [2]>>,
1113 <baseset [2]>>,
1114 <addset
1114 <addset
1115 <spanset+ 3:4>,
1115 <spanset+ 3:4>,
1116 <baseset [5, 6]>>>
1116 <baseset [5, 6]>>>
1117 0
1117 0
1118 1
1118 1
1119 2
1119 2
1120 3
1120 3
1121 4
1121 4
1122 5
1122 5
1123 6
1123 6
1124
1124
test that `_list` should be narrowed by the provided `subset`
1126
1126
1127 $ log '0:2 and (null|1|2|3)'
1127 $ log '0:2 and (null|1|2|3)'
1128 1
1128 1
1129 2
1129 2
1130
1130
1131 test that `_list` should remove duplicates
1131 test that `_list` should remove duplicates
1132
1132
1133 $ log '0|1|2|1|2|-1|tip'
1133 $ log '0|1|2|1|2|-1|tip'
1134 0
1134 0
1135 1
1135 1
1136 2
1136 2
1137 9
1137 9
1138
1138
1139 test unknown revision in `_list`
1139 test unknown revision in `_list`
1140
1140
1141 $ log '0|unknown'
1141 $ log '0|unknown'
1142 abort: unknown revision 'unknown'!
1142 abort: unknown revision 'unknown'!
1143 [255]
1143 [255]
1144
1144
1145 test integer range in `_list`
1145 test integer range in `_list`
1146
1146
1147 $ log '-1|-10'
1147 $ log '-1|-10'
1148 9
1148 9
1149 0
1149 0
1150
1150
1151 $ log '-10|-11'
1151 $ log '-10|-11'
1152 abort: unknown revision '-11'!
1152 abort: unknown revision '-11'!
1153 [255]
1153 [255]
1154
1154
1155 $ log '9|10'
1155 $ log '9|10'
1156 abort: unknown revision '10'!
1156 abort: unknown revision '10'!
1157 [255]
1157 [255]
1158
1158
1159 test '0000' != '0' in `_list`
1159 test '0000' != '0' in `_list`
1160
1160
1161 $ log '0|0000'
1161 $ log '0|0000'
1162 0
1162 0
1163 -1
1163 -1
1164
1164
1165 test ',' in `_list`
1165 test ',' in `_list`
1166 $ log '0,1'
1166 $ log '0,1'
1167 hg: parse error: can't use a list in this context
1167 hg: parse error: can't use a list in this context
1168 (see hg help "revsets.x or y")
1168 (see hg help "revsets.x or y")
1169 [255]
1169 [255]
1170 $ try '0,1,2'
1170 $ try '0,1,2'
1171 (list
1171 (list
1172 ('symbol', '0')
1172 ('symbol', '0')
1173 ('symbol', '1')
1173 ('symbol', '1')
1174 ('symbol', '2'))
1174 ('symbol', '2'))
1175 hg: parse error: can't use a list in this context
1175 hg: parse error: can't use a list in this context
1176 (see hg help "revsets.x or y")
1176 (see hg help "revsets.x or y")
1177 [255]
1177 [255]
1178
1178
1179 test that chained `or` operations make balanced addsets
1179 test that chained `or` operations make balanced addsets
1180
1180
1181 $ try '0:1|1:2|2:3|3:4|4:5'
1181 $ try '0:1|1:2|2:3|3:4|4:5'
1182 (or
1182 (or
1183 (range
1183 (range
1184 ('symbol', '0')
1184 ('symbol', '0')
1185 ('symbol', '1'))
1185 ('symbol', '1'))
1186 (range
1186 (range
1187 ('symbol', '1')
1187 ('symbol', '1')
1188 ('symbol', '2'))
1188 ('symbol', '2'))
1189 (range
1189 (range
1190 ('symbol', '2')
1190 ('symbol', '2')
1191 ('symbol', '3'))
1191 ('symbol', '3'))
1192 (range
1192 (range
1193 ('symbol', '3')
1193 ('symbol', '3')
1194 ('symbol', '4'))
1194 ('symbol', '4'))
1195 (range
1195 (range
1196 ('symbol', '4')
1196 ('symbol', '4')
1197 ('symbol', '5')))
1197 ('symbol', '5')))
1198 * set:
1198 * set:
1199 <addset
1199 <addset
1200 <addset
1200 <addset
1201 <spanset+ 0:1>,
1201 <spanset+ 0:1>,
1202 <spanset+ 1:2>>,
1202 <spanset+ 1:2>>,
1203 <addset
1203 <addset
1204 <spanset+ 2:3>,
1204 <spanset+ 2:3>,
1205 <addset
1205 <addset
1206 <spanset+ 3:4>,
1206 <spanset+ 3:4>,
1207 <spanset+ 4:5>>>>
1207 <spanset+ 4:5>>>>
1208 0
1208 0
1209 1
1209 1
1210 2
1210 2
1211 3
1211 3
1212 4
1212 4
1213 5
1213 5
1214
1214
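(Aside, not part of the test script: the nested addsets printed above are what keep long chains cheap. A rough Python sketch of the idea follows; the helper balanced_union is invented for illustration, and real smartsets are lazy rather than plain sets. Merging operands pairwise yields a tree of depth O(log N) instead of a left-leaning chain of depth O(N), which is also why the stack-exhaustion tests for issue4624 and issue4565 further down succeed.)

def balanced_union(sets):
    # merge neighbours pairwise; every pass halves the number of nodes,
    # so the implied tree is O(log N) deep rather than O(N)
    if not sets:
        return set()
    while len(sets) > 1:
        sets = [sets[i] | sets[i + 1] if i + 1 < len(sets) else sets[i]
                for i in range(0, len(sets), 2)]
    return sets[0]

print(sorted(balanced_union([{0, 1}, {1, 2}, {2, 3}, {3, 4}, {4, 5}])))
# prints [0, 1, 2, 3, 4, 5], the same members as the addset tree above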
no crash caused by an empty group "()" while optimizing `or` operations
1216
1216
1217 $ try --optimize '0|()'
1217 $ try --optimize '0|()'
1218 (or
1218 (or
1219 ('symbol', '0')
1219 ('symbol', '0')
1220 (group
1220 (group
1221 None))
1221 None))
1222 * optimized:
1222 * optimized:
1223 (or
1223 (or
1224 ('symbol', '0')
1224 ('symbol', '0')
1225 None)
1225 None)
1226 hg: parse error: missing argument
1226 hg: parse error: missing argument
1227 [255]
1227 [255]
1228
1228
1229 test that chained `or` operations never eat up stack (issue4624)
1229 test that chained `or` operations never eat up stack (issue4624)
1230 (uses `0:1` instead of `0` to avoid future optimization of trivial revisions)
1230 (uses `0:1` instead of `0` to avoid future optimization of trivial revisions)
1231
1231
1232 $ hg log -T '{rev}\n' -r `python -c "print '+'.join(['0:1'] * 500)"`
1232 $ hg log -T '{rev}\n' -r `python -c "print '+'.join(['0:1'] * 500)"`
1233 0
1233 0
1234 1
1234 1
1235
1235
1236 test that repeated `-r` options never eat up stack (issue4565)
1236 test that repeated `-r` options never eat up stack (issue4565)
1237 (uses `-r 0::1` to avoid possible optimization at old-style parser)
1237 (uses `-r 0::1` to avoid possible optimization at old-style parser)
1238
1238
1239 $ hg log -T '{rev}\n' `python -c "for i in xrange(500): print '-r 0::1 ',"`
1239 $ hg log -T '{rev}\n' `python -c "for i in xrange(500): print '-r 0::1 ',"`
1240 0
1240 0
1241 1
1241 1
1242
1242
check that conversion to only() works
1244 $ try --optimize '::3 - ::1'
1244 $ try --optimize '::3 - ::1'
1245 (minus
1245 (minus
1246 (dagrangepre
1246 (dagrangepre
1247 ('symbol', '3'))
1247 ('symbol', '3'))
1248 (dagrangepre
1248 (dagrangepre
1249 ('symbol', '1')))
1249 ('symbol', '1')))
1250 * optimized:
1250 * optimized:
1251 (func
1251 (func
1252 ('symbol', 'only')
1252 ('symbol', 'only')
1253 (list
1253 (list
1254 ('symbol', '3')
1254 ('symbol', '3')
1255 ('symbol', '1')))
1255 ('symbol', '1')))
1256 * set:
1256 * set:
1257 <baseset+ [3]>
1257 <baseset+ [3]>
1258 3
1258 3
1259 $ try --optimize 'ancestors(1) - ancestors(3)'
1259 $ try --optimize 'ancestors(1) - ancestors(3)'
1260 (minus
1260 (minus
1261 (func
1261 (func
1262 ('symbol', 'ancestors')
1262 ('symbol', 'ancestors')
1263 ('symbol', '1'))
1263 ('symbol', '1'))
1264 (func
1264 (func
1265 ('symbol', 'ancestors')
1265 ('symbol', 'ancestors')
1266 ('symbol', '3')))
1266 ('symbol', '3')))
1267 * optimized:
1267 * optimized:
1268 (func
1268 (func
1269 ('symbol', 'only')
1269 ('symbol', 'only')
1270 (list
1270 (list
1271 ('symbol', '1')
1271 ('symbol', '1')
1272 ('symbol', '3')))
1272 ('symbol', '3')))
1273 * set:
1273 * set:
1274 <baseset+ []>
1274 <baseset+ []>
1275 $ try --optimize 'not ::2 and ::6'
1275 $ try --optimize 'not ::2 and ::6'
1276 (and
1276 (and
1277 (not
1277 (not
1278 (dagrangepre
1278 (dagrangepre
1279 ('symbol', '2')))
1279 ('symbol', '2')))
1280 (dagrangepre
1280 (dagrangepre
1281 ('symbol', '6')))
1281 ('symbol', '6')))
1282 * optimized:
1282 * optimized:
1283 (func
1283 (func
1284 ('symbol', 'only')
1284 ('symbol', 'only')
1285 (list
1285 (list
1286 ('symbol', '6')
1286 ('symbol', '6')
1287 ('symbol', '2')))
1287 ('symbol', '2')))
1288 * set:
1288 * set:
1289 <baseset+ [3, 4, 5, 6]>
1289 <baseset+ [3, 4, 5, 6]>
1290 3
1290 3
1291 4
1291 4
1292 5
1292 5
1293 6
1293 6
1294 $ try --optimize 'ancestors(6) and not ancestors(4)'
1294 $ try --optimize 'ancestors(6) and not ancestors(4)'
1295 (and
1295 (and
1296 (func
1296 (func
1297 ('symbol', 'ancestors')
1297 ('symbol', 'ancestors')
1298 ('symbol', '6'))
1298 ('symbol', '6'))
1299 (not
1299 (not
1300 (func
1300 (func
1301 ('symbol', 'ancestors')
1301 ('symbol', 'ancestors')
1302 ('symbol', '4'))))
1302 ('symbol', '4'))))
1303 * optimized:
1303 * optimized:
1304 (func
1304 (func
1305 ('symbol', 'only')
1305 ('symbol', 'only')
1306 (list
1306 (list
1307 ('symbol', '6')
1307 ('symbol', '6')
1308 ('symbol', '4')))
1308 ('symbol', '4')))
1309 * set:
1309 * set:
1310 <baseset+ [3, 5, 6]>
1310 <baseset+ [3, 5, 6]>
1311 3
1311 3
1312 5
1312 5
1313 6
1313 6
1314
1314
no crash caused by an empty group "()" while optimizing to "only()"
1316
1316
1317 $ try --optimize '::1 and ()'
1317 $ try --optimize '::1 and ()'
1318 (and
1318 (and
1319 (dagrangepre
1319 (dagrangepre
1320 ('symbol', '1'))
1320 ('symbol', '1'))
1321 (group
1321 (group
1322 None))
1322 None))
1323 * optimized:
1323 * optimized:
1324 (and
1324 (and
1325 None
1325 None
1326 (func
1326 (func
1327 ('symbol', 'ancestors')
1327 ('symbol', 'ancestors')
1328 ('symbol', '1')))
1328 ('symbol', '1')))
1329 hg: parse error: missing argument
1329 hg: parse error: missing argument
1330 [255]
1330 [255]
1331
1331
1332 we can use patterns when searching for tags
1332 we can use patterns when searching for tags
1333
1333
1334 $ log 'tag("1..*")'
1334 $ log 'tag("1..*")'
1335 abort: tag '1..*' does not exist!
1335 abort: tag '1..*' does not exist!
1336 [255]
1336 [255]
1337 $ log 'tag("re:1..*")'
1337 $ log 'tag("re:1..*")'
1338 6
1338 6
1339 $ log 'tag("re:[0-9].[0-9]")'
1339 $ log 'tag("re:[0-9].[0-9]")'
1340 6
1340 6
1341 $ log 'tag("literal:1.0")'
1341 $ log 'tag("literal:1.0")'
1342 6
1342 6
1343 $ log 'tag("re:0..*")'
1343 $ log 'tag("re:0..*")'
1344
1344
1345 $ log 'tag(unknown)'
1345 $ log 'tag(unknown)'
1346 abort: tag 'unknown' does not exist!
1346 abort: tag 'unknown' does not exist!
1347 [255]
1347 [255]
1348 $ log 'tag("re:unknown")'
1348 $ log 'tag("re:unknown")'
1349 $ log 'present(tag("unknown"))'
1349 $ log 'present(tag("unknown"))'
1350 $ log 'present(tag("re:unknown"))'
1350 $ log 'present(tag("re:unknown"))'
1351 $ log 'branch(unknown)'
1351 $ log 'branch(unknown)'
1352 abort: unknown revision 'unknown'!
1352 abort: unknown revision 'unknown'!
1353 [255]
1353 [255]
1354 $ log 'branch("literal:unknown")'
1354 $ log 'branch("literal:unknown")'
1355 abort: branch 'unknown' does not exist!
1355 abort: branch 'unknown' does not exist!
1356 [255]
1356 [255]
1357 $ log 'branch("re:unknown")'
1357 $ log 'branch("re:unknown")'
1358 $ log 'present(branch("unknown"))'
1358 $ log 'present(branch("unknown"))'
1359 $ log 'present(branch("re:unknown"))'
1359 $ log 'present(branch("re:unknown"))'
1360 $ log 'user(bob)'
1360 $ log 'user(bob)'
1361 2
1361 2
1362
1362
1363 $ log '4::8'
1363 $ log '4::8'
1364 4
1364 4
1365 8
1365 8
1366 $ log '4:8'
1366 $ log '4:8'
1367 4
1367 4
1368 5
1368 5
1369 6
1369 6
1370 7
1370 7
1371 8
1371 8
1372
1372
1373 $ log 'sort(!merge() & (modifies(b) | user(bob) | keyword(bug) | keyword(issue) & 1::9), "-date")'
1373 $ log 'sort(!merge() & (modifies(b) | user(bob) | keyword(bug) | keyword(issue) & 1::9), "-date")'
1374 4
1374 4
1375 2
1375 2
1376 5
1376 5
1377
1377
1378 $ log 'not 0 and 0:2'
1378 $ log 'not 0 and 0:2'
1379 1
1379 1
1380 2
1380 2
1381 $ log 'not 1 and 0:2'
1381 $ log 'not 1 and 0:2'
1382 0
1382 0
1383 2
1383 2
1384 $ log 'not 2 and 0:2'
1384 $ log 'not 2 and 0:2'
1385 0
1385 0
1386 1
1386 1
1387 $ log '(1 and 2)::'
1387 $ log '(1 and 2)::'
1388 $ log '(1 and 2):'
1388 $ log '(1 and 2):'
1389 $ log '(1 and 2):3'
1389 $ log '(1 and 2):3'
1390 $ log 'sort(head(), -rev)'
1390 $ log 'sort(head(), -rev)'
1391 9
1391 9
1392 7
1392 7
1393 6
1393 6
1394 5
1394 5
1395 4
1395 4
1396 3
1396 3
1397 2
1397 2
1398 1
1398 1
1399 0
1399 0
1400 $ log '4::8 - 8'
1400 $ log '4::8 - 8'
1401 4
1401 4
1402 $ log 'matching(1 or 2 or 3) and (2 or 3 or 1)'
1402 $ log 'matching(1 or 2 or 3) and (2 or 3 or 1)'
1403 2
1403 2
1404 3
1404 3
1405 1
1405 1
1406
1406
1407 $ log 'named("unknown")'
1407 $ log 'named("unknown")'
1408 abort: namespace 'unknown' does not exist!
1408 abort: namespace 'unknown' does not exist!
1409 [255]
1409 [255]
1410 $ log 'named("re:unknown")'
1410 $ log 'named("re:unknown")'
1411 abort: no namespace exists that match 'unknown'!
1411 abort: no namespace exists that match 'unknown'!
1412 [255]
1412 [255]
1413 $ log 'present(named("unknown"))'
1413 $ log 'present(named("unknown"))'
1414 $ log 'present(named("re:unknown"))'
1414 $ log 'present(named("re:unknown"))'
1415
1415
1416 $ log 'tag()'
1416 $ log 'tag()'
1417 6
1417 6
1418 $ log 'named("tags")'
1418 $ log 'named("tags")'
1419 6
1419 6
1420
1420
1421 issue2437
1421 issue2437
1422
1422
1423 $ log '3 and p1(5)'
1423 $ log '3 and p1(5)'
1424 3
1424 3
1425 $ log '4 and p2(6)'
1425 $ log '4 and p2(6)'
1426 4
1426 4
1427 $ log '1 and parents(:2)'
1427 $ log '1 and parents(:2)'
1428 1
1428 1
1429 $ log '2 and children(1:)'
1429 $ log '2 and children(1:)'
1430 2
1430 2
1431 $ log 'roots(all()) or roots(all())'
1431 $ log 'roots(all()) or roots(all())'
1432 0
1432 0
1433 $ hg debugrevspec 'roots(all()) or roots(all())'
1433 $ hg debugrevspec 'roots(all()) or roots(all())'
1434 0
1434 0
$ log 'heads(branch(é)) or heads(branch(é))'
1436 9
1436 9
$ log 'ancestors(8) and (heads(branch("-a-b-c-")) or heads(branch(é)))'
1438 4
1438 4
1439
1439
1440 issue2654: report a parse error if the revset was not completely parsed
1440 issue2654: report a parse error if the revset was not completely parsed
1441
1441
1442 $ log '1 OR 2'
1442 $ log '1 OR 2'
1443 hg: parse error at 2: invalid token
1443 hg: parse error at 2: invalid token
1444 [255]
1444 [255]
1445
1445
the `or` operator should preserve ordering:
1447 $ log 'reverse(2::4) or tip'
1447 $ log 'reverse(2::4) or tip'
1448 4
1448 4
1449 2
1449 2
1450 9
1450 9
1451
1451
1452 parentrevspec
1452 parentrevspec
1453
1453
1454 $ log 'merge()^0'
1454 $ log 'merge()^0'
1455 6
1455 6
1456 $ log 'merge()^'
1456 $ log 'merge()^'
1457 5
1457 5
1458 $ log 'merge()^1'
1458 $ log 'merge()^1'
1459 5
1459 5
1460 $ log 'merge()^2'
1460 $ log 'merge()^2'
1461 4
1461 4
1462 $ log 'merge()^^'
1462 $ log 'merge()^^'
1463 3
1463 3
1464 $ log 'merge()^1^'
1464 $ log 'merge()^1^'
1465 3
1465 3
1466 $ log 'merge()^^^'
1466 $ log 'merge()^^^'
1467 1
1467 1
1468
1468
1469 $ log 'merge()~0'
1469 $ log 'merge()~0'
1470 6
1470 6
1471 $ log 'merge()~1'
1471 $ log 'merge()~1'
1472 5
1472 5
1473 $ log 'merge()~2'
1473 $ log 'merge()~2'
1474 3
1474 3
1475 $ log 'merge()~2^1'
1475 $ log 'merge()~2^1'
1476 1
1476 1
1477 $ log 'merge()~3'
1477 $ log 'merge()~3'
1478 1
1478 1
1479
1479
1480 $ log '(-3:tip)^'
1480 $ log '(-3:tip)^'
1481 4
1481 4
1482 6
1482 6
1483 8
1483 8
1484
1484
1485 $ log 'tip^foo'
1485 $ log 'tip^foo'
1486 hg: parse error: ^ expects a number 0, 1, or 2
1486 hg: parse error: ^ expects a number 0, 1, or 2
1487 [255]
1487 [255]
1488
1488
1489 Bogus function gets suggestions
1489 Bogus function gets suggestions
1490 $ log 'add()'
1490 $ log 'add()'
1491 hg: parse error: unknown identifier: add
1491 hg: parse error: unknown identifier: add
1492 (did you mean adds?)
1492 (did you mean adds?)
1493 [255]
1493 [255]
1494 $ log 'added()'
1494 $ log 'added()'
1495 hg: parse error: unknown identifier: added
1495 hg: parse error: unknown identifier: added
1496 (did you mean adds?)
1496 (did you mean adds?)
1497 [255]
1497 [255]
1498 $ log 'remo()'
1498 $ log 'remo()'
1499 hg: parse error: unknown identifier: remo
1499 hg: parse error: unknown identifier: remo
1500 (did you mean one of remote, removes?)
1500 (did you mean one of remote, removes?)
1501 [255]
1501 [255]
1502 $ log 'babar()'
1502 $ log 'babar()'
1503 hg: parse error: unknown identifier: babar
1503 hg: parse error: unknown identifier: babar
1504 [255]
1504 [255]
1505
1505
1506 Bogus function with a similar internal name doesn't suggest the internal name
1506 Bogus function with a similar internal name doesn't suggest the internal name
1507 $ log 'matches()'
1507 $ log 'matches()'
1508 hg: parse error: unknown identifier: matches
1508 hg: parse error: unknown identifier: matches
1509 (did you mean matching?)
1509 (did you mean matching?)
1510 [255]
1510 [255]
1511
1511
1512 Undocumented functions aren't suggested as similar either
1512 Undocumented functions aren't suggested as similar either
1513 $ log 'wdir2()'
1513 $ log 'wdir2()'
1514 hg: parse error: unknown identifier: wdir2
1514 hg: parse error: unknown identifier: wdir2
1515 [255]
1515 [255]
1516
1516
1517 multiple revspecs
1517 multiple revspecs
1518
1518
1519 $ hg log -r 'tip~1:tip' -r 'tip~2:tip~1' --template '{rev}\n'
1519 $ hg log -r 'tip~1:tip' -r 'tip~2:tip~1' --template '{rev}\n'
1520 8
1520 8
1521 9
1521 9
1522 4
1522 4
1523 5
1523 5
1524 6
1524 6
1525 7
1525 7
1526
1526
1527 test usage in revpair (with "+")
1527 test usage in revpair (with "+")
1528
1528
1529 (real pair)
1529 (real pair)
1530
1530
1531 $ hg diff -r 'tip^^' -r 'tip'
1531 $ hg diff -r 'tip^^' -r 'tip'
1532 diff -r 2326846efdab -r 24286f4ae135 .hgtags
1532 diff -r 2326846efdab -r 24286f4ae135 .hgtags
1533 --- /dev/null Thu Jan 01 00:00:00 1970 +0000
1533 --- /dev/null Thu Jan 01 00:00:00 1970 +0000
1534 +++ b/.hgtags Thu Jan 01 00:00:00 1970 +0000
1534 +++ b/.hgtags Thu Jan 01 00:00:00 1970 +0000
1535 @@ -0,0 +1,1 @@
1535 @@ -0,0 +1,1 @@
1536 +e0cc66ef77e8b6f711815af4e001a6594fde3ba5 1.0
1536 +e0cc66ef77e8b6f711815af4e001a6594fde3ba5 1.0
1537 $ hg diff -r 'tip^^::tip'
1537 $ hg diff -r 'tip^^::tip'
1538 diff -r 2326846efdab -r 24286f4ae135 .hgtags
1538 diff -r 2326846efdab -r 24286f4ae135 .hgtags
1539 --- /dev/null Thu Jan 01 00:00:00 1970 +0000
1539 --- /dev/null Thu Jan 01 00:00:00 1970 +0000
1540 +++ b/.hgtags Thu Jan 01 00:00:00 1970 +0000
1540 +++ b/.hgtags Thu Jan 01 00:00:00 1970 +0000
1541 @@ -0,0 +1,1 @@
1541 @@ -0,0 +1,1 @@
1542 +e0cc66ef77e8b6f711815af4e001a6594fde3ba5 1.0
1542 +e0cc66ef77e8b6f711815af4e001a6594fde3ba5 1.0
1543
1543
1544 (single rev)
1544 (single rev)
1545
1545
1546 $ hg diff -r 'tip^' -r 'tip^'
1546 $ hg diff -r 'tip^' -r 'tip^'
1547 $ hg diff -r 'tip^:tip^'
1547 $ hg diff -r 'tip^:tip^'
1548
1548
(single rev that does not look like a range)
1550
1550
1551 $ hg diff -r 'tip^::tip^ or tip^'
1551 $ hg diff -r 'tip^::tip^ or tip^'
1552 diff -r d5d0dcbdc4d9 .hgtags
1552 diff -r d5d0dcbdc4d9 .hgtags
1553 --- /dev/null Thu Jan 01 00:00:00 1970 +0000
1553 --- /dev/null Thu Jan 01 00:00:00 1970 +0000
1554 +++ b/.hgtags * (glob)
1554 +++ b/.hgtags * (glob)
1555 @@ -0,0 +1,1 @@
1555 @@ -0,0 +1,1 @@
1556 +e0cc66ef77e8b6f711815af4e001a6594fde3ba5 1.0
1556 +e0cc66ef77e8b6f711815af4e001a6594fde3ba5 1.0
1557 $ hg diff -r 'tip^ or tip^'
1557 $ hg diff -r 'tip^ or tip^'
1558 diff -r d5d0dcbdc4d9 .hgtags
1558 diff -r d5d0dcbdc4d9 .hgtags
1559 --- /dev/null Thu Jan 01 00:00:00 1970 +0000
1559 --- /dev/null Thu Jan 01 00:00:00 1970 +0000
1560 +++ b/.hgtags * (glob)
1560 +++ b/.hgtags * (glob)
1561 @@ -0,0 +1,1 @@
1561 @@ -0,0 +1,1 @@
1562 +e0cc66ef77e8b6f711815af4e001a6594fde3ba5 1.0
1562 +e0cc66ef77e8b6f711815af4e001a6594fde3ba5 1.0
1563
1563
1564 (no rev)
1564 (no rev)
1565
1565
1566 $ hg diff -r 'author("babar") or author("celeste")'
1566 $ hg diff -r 'author("babar") or author("celeste")'
1567 abort: empty revision range
1567 abort: empty revision range
1568 [255]
1568 [255]
1569
1569
1570 aliases:
1570 aliases:
1571
1571
1572 $ echo '[revsetalias]' >> .hg/hgrc
1572 $ echo '[revsetalias]' >> .hg/hgrc
1573 $ echo 'm = merge()' >> .hg/hgrc
1573 $ echo 'm = merge()' >> .hg/hgrc
1574 (revset aliases can override builtin revsets)
1574 (revset aliases can override builtin revsets)
1575 $ echo 'p2($1) = p1($1)' >> .hg/hgrc
1575 $ echo 'p2($1) = p1($1)' >> .hg/hgrc
1576 $ echo 'sincem = descendants(m)' >> .hg/hgrc
1576 $ echo 'sincem = descendants(m)' >> .hg/hgrc
1577 $ echo 'd($1) = reverse(sort($1, date))' >> .hg/hgrc
1577 $ echo 'd($1) = reverse(sort($1, date))' >> .hg/hgrc
1578 $ echo 'rs(ARG1, ARG2) = reverse(sort(ARG1, ARG2))' >> .hg/hgrc
1578 $ echo 'rs(ARG1, ARG2) = reverse(sort(ARG1, ARG2))' >> .hg/hgrc
1579 $ echo 'rs4(ARG1, ARGA, ARGB, ARG2) = reverse(sort(ARG1, ARG2))' >> .hg/hgrc
1579 $ echo 'rs4(ARG1, ARGA, ARGB, ARG2) = reverse(sort(ARG1, ARG2))' >> .hg/hgrc
1580
1580
1581 $ try m
1581 $ try m
1582 ('symbol', 'm')
1582 ('symbol', 'm')
1583 (func
1583 (func
1584 ('symbol', 'merge')
1584 ('symbol', 'merge')
1585 None)
1585 None)
1586 * set:
1586 * set:
1587 <filteredset
1587 <filteredset
1588 <fullreposet+ 0:9>>
1588 <fullreposet+ 0:9>>
1589 6
1589 6
1590
1590
1591 $ HGPLAIN=1
1591 $ HGPLAIN=1
1592 $ export HGPLAIN
1592 $ export HGPLAIN
1593 $ try m
1593 $ try m
1594 ('symbol', 'm')
1594 ('symbol', 'm')
1595 abort: unknown revision 'm'!
1595 abort: unknown revision 'm'!
1596 [255]
1596 [255]
1597
1597
1598 $ HGPLAINEXCEPT=revsetalias
1598 $ HGPLAINEXCEPT=revsetalias
1599 $ export HGPLAINEXCEPT
1599 $ export HGPLAINEXCEPT
1600 $ try m
1600 $ try m
1601 ('symbol', 'm')
1601 ('symbol', 'm')
1602 (func
1602 (func
1603 ('symbol', 'merge')
1603 ('symbol', 'merge')
1604 None)
1604 None)
1605 * set:
1605 * set:
1606 <filteredset
1606 <filteredset
1607 <fullreposet+ 0:9>>
1607 <fullreposet+ 0:9>>
1608 6
1608 6
1609
1609
1610 $ unset HGPLAIN
1610 $ unset HGPLAIN
1611 $ unset HGPLAINEXCEPT
1611 $ unset HGPLAINEXCEPT
1612
1612
1613 $ try 'p2(.)'
1613 $ try 'p2(.)'
1614 (func
1614 (func
1615 ('symbol', 'p2')
1615 ('symbol', 'p2')
1616 ('symbol', '.'))
1616 ('symbol', '.'))
1617 (func
1617 (func
1618 ('symbol', 'p1')
1618 ('symbol', 'p1')
1619 ('symbol', '.'))
1619 ('symbol', '.'))
1620 * set:
1620 * set:
1621 <baseset+ [8]>
1621 <baseset+ [8]>
1622 8
1622 8
1623
1623
1624 $ HGPLAIN=1
1624 $ HGPLAIN=1
1625 $ export HGPLAIN
1625 $ export HGPLAIN
1626 $ try 'p2(.)'
1626 $ try 'p2(.)'
1627 (func
1627 (func
1628 ('symbol', 'p2')
1628 ('symbol', 'p2')
1629 ('symbol', '.'))
1629 ('symbol', '.'))
1630 * set:
1630 * set:
1631 <baseset+ []>
1631 <baseset+ []>
1632
1632
1633 $ HGPLAINEXCEPT=revsetalias
1633 $ HGPLAINEXCEPT=revsetalias
1634 $ export HGPLAINEXCEPT
1634 $ export HGPLAINEXCEPT
1635 $ try 'p2(.)'
1635 $ try 'p2(.)'
1636 (func
1636 (func
1637 ('symbol', 'p2')
1637 ('symbol', 'p2')
1638 ('symbol', '.'))
1638 ('symbol', '.'))
1639 (func
1639 (func
1640 ('symbol', 'p1')
1640 ('symbol', 'p1')
1641 ('symbol', '.'))
1641 ('symbol', '.'))
1642 * set:
1642 * set:
1643 <baseset+ [8]>
1643 <baseset+ [8]>
1644 8
1644 8
1645
1645
1646 $ unset HGPLAIN
1646 $ unset HGPLAIN
1647 $ unset HGPLAINEXCEPT
1647 $ unset HGPLAINEXCEPT
1648
1648
1649 test alias recursion
1649 test alias recursion
1650
1650
1651 $ try sincem
1651 $ try sincem
1652 ('symbol', 'sincem')
1652 ('symbol', 'sincem')
1653 (func
1653 (func
1654 ('symbol', 'descendants')
1654 ('symbol', 'descendants')
1655 (func
1655 (func
1656 ('symbol', 'merge')
1656 ('symbol', 'merge')
1657 None))
1657 None))
1658 * set:
1658 * set:
1659 <addset+
1659 <addset+
1660 <filteredset
1660 <filteredset
1661 <fullreposet+ 0:9>>,
1661 <fullreposet+ 0:9>>,
1662 <generatorset+>>
1662 <generatorset+>>
1663 6
1663 6
1664 7
1664 7
1665
1665
1666 test infinite recursion
1666 test infinite recursion
1667
1667
1668 $ echo 'recurse1 = recurse2' >> .hg/hgrc
1668 $ echo 'recurse1 = recurse2' >> .hg/hgrc
1669 $ echo 'recurse2 = recurse1' >> .hg/hgrc
1669 $ echo 'recurse2 = recurse1' >> .hg/hgrc
1670 $ try recurse1
1670 $ try recurse1
1671 ('symbol', 'recurse1')
1671 ('symbol', 'recurse1')
1672 hg: parse error: infinite expansion of revset alias "recurse1" detected
1672 hg: parse error: infinite expansion of revset alias "recurse1" detected
1673 [255]
1673 [255]
1674
1674
1675 $ echo 'level1($1, $2) = $1 or $2' >> .hg/hgrc
1675 $ echo 'level1($1, $2) = $1 or $2' >> .hg/hgrc
1676 $ echo 'level2($1, $2) = level1($2, $1)' >> .hg/hgrc
1676 $ echo 'level2($1, $2) = level1($2, $1)' >> .hg/hgrc
1677 $ try "level2(level1(1, 2), 3)"
1677 $ try "level2(level1(1, 2), 3)"
1678 (func
1678 (func
1679 ('symbol', 'level2')
1679 ('symbol', 'level2')
1680 (list
1680 (list
1681 (func
1681 (func
1682 ('symbol', 'level1')
1682 ('symbol', 'level1')
1683 (list
1683 (list
1684 ('symbol', '1')
1684 ('symbol', '1')
1685 ('symbol', '2')))
1685 ('symbol', '2')))
1686 ('symbol', '3')))
1686 ('symbol', '3')))
1687 (or
1687 (or
1688 ('symbol', '3')
1688 ('symbol', '3')
1689 (or
1689 (or
1690 ('symbol', '1')
1690 ('symbol', '1')
1691 ('symbol', '2')))
1691 ('symbol', '2')))
1692 * set:
1692 * set:
1693 <addset
1693 <addset
1694 <baseset [3]>,
1694 <baseset [3]>,
1695 <baseset [1, 2]>>
1695 <baseset [1, 2]>>
1696 3
1696 3
1697 1
1697 1
1698 2
1698 2
1699
1699
1700 test nesting and variable passing
1700 test nesting and variable passing
1701
1701
1702 $ echo 'nested($1) = nested2($1)' >> .hg/hgrc
1702 $ echo 'nested($1) = nested2($1)' >> .hg/hgrc
1703 $ echo 'nested2($1) = nested3($1)' >> .hg/hgrc
1703 $ echo 'nested2($1) = nested3($1)' >> .hg/hgrc
1704 $ echo 'nested3($1) = max($1)' >> .hg/hgrc
1704 $ echo 'nested3($1) = max($1)' >> .hg/hgrc
1705 $ try 'nested(2:5)'
1705 $ try 'nested(2:5)'
1706 (func
1706 (func
1707 ('symbol', 'nested')
1707 ('symbol', 'nested')
1708 (range
1708 (range
1709 ('symbol', '2')
1709 ('symbol', '2')
1710 ('symbol', '5')))
1710 ('symbol', '5')))
1711 (func
1711 (func
1712 ('symbol', 'max')
1712 ('symbol', 'max')
1713 (range
1713 (range
1714 ('symbol', '2')
1714 ('symbol', '2')
1715 ('symbol', '5')))
1715 ('symbol', '5')))
1716 * set:
1716 * set:
1717 <baseset [5]>
1717 <baseset [5]>
1718 5
1718 5
1719
1719
test that chained `or` operations are flattened at the parsing phase
1721
1721
1722 $ echo 'chainedorops($1, $2, $3) = $1|$2|$3' >> .hg/hgrc
1722 $ echo 'chainedorops($1, $2, $3) = $1|$2|$3' >> .hg/hgrc
1723 $ try 'chainedorops(0:1, 1:2, 2:3)'
1723 $ try 'chainedorops(0:1, 1:2, 2:3)'
1724 (func
1724 (func
1725 ('symbol', 'chainedorops')
1725 ('symbol', 'chainedorops')
1726 (list
1726 (list
1727 (range
1727 (range
1728 ('symbol', '0')
1728 ('symbol', '0')
1729 ('symbol', '1'))
1729 ('symbol', '1'))
1730 (range
1730 (range
1731 ('symbol', '1')
1731 ('symbol', '1')
1732 ('symbol', '2'))
1732 ('symbol', '2'))
1733 (range
1733 (range
1734 ('symbol', '2')
1734 ('symbol', '2')
1735 ('symbol', '3'))))
1735 ('symbol', '3'))))
1736 (or
1736 (or
1737 (range
1737 (range
1738 ('symbol', '0')
1738 ('symbol', '0')
1739 ('symbol', '1'))
1739 ('symbol', '1'))
1740 (range
1740 (range
1741 ('symbol', '1')
1741 ('symbol', '1')
1742 ('symbol', '2'))
1742 ('symbol', '2'))
1743 (range
1743 (range
1744 ('symbol', '2')
1744 ('symbol', '2')
1745 ('symbol', '3')))
1745 ('symbol', '3')))
1746 * set:
1746 * set:
1747 <addset
1747 <addset
1748 <spanset+ 0:1>,
1748 <spanset+ 0:1>,
1749 <addset
1749 <addset
1750 <spanset+ 1:2>,
1750 <spanset+ 1:2>,
1751 <spanset+ 2:3>>>
1751 <spanset+ 2:3>>>
1752 0
1752 0
1753 1
1753 1
1754 2
1754 2
1755 3
1755 3
1756
1756
test variable isolation: variable placeholders are rewritten as strings,
then parsed and matched again as strings. Check that they do not leak too
far away.
1760
1760
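(Aside, not part of the test script: a small Python sketch of why the rewriting has to respect string boundaries. Both helpers, naive_expand and token_aware_expand, are invented for illustration and the real alias expansion is more involved; the point is only that a plain textual replace would leak the caller's argument into quoted strings, which is the leak the commands below check against, hence the expected abort on the literal '$1'.)

import re

def naive_expand(definition, arg):
    # wrong: rewrites $1 even inside string literals
    return definition.replace('$1', arg)

def token_aware_expand(definition, arg):
    # split out double-quoted strings and substitute only outside of them
    parts = re.split(r'("(?:[^"\\]|\\.)*")', definition)
    return ''.join(p if p.startswith('"') else p.replace('$1', arg)
                   for p in parts)

alias = 'descendants(max("$1")) or $1'
print(naive_expand(alias, '2:5'))        # descendants(max("2:5")) or 2:5  (leaked)
print(token_aware_expand(alias, '2:5'))  # descendants(max("$1")) or 2:5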
1761 $ echo 'injectparamasstring = max("$1")' >> .hg/hgrc
1761 $ echo 'injectparamasstring = max("$1")' >> .hg/hgrc
1762 $ echo 'callinjection($1) = descendants(injectparamasstring)' >> .hg/hgrc
1762 $ echo 'callinjection($1) = descendants(injectparamasstring)' >> .hg/hgrc
1763 $ try 'callinjection(2:5)'
1763 $ try 'callinjection(2:5)'
1764 (func
1764 (func
1765 ('symbol', 'callinjection')
1765 ('symbol', 'callinjection')
1766 (range
1766 (range
1767 ('symbol', '2')
1767 ('symbol', '2')
1768 ('symbol', '5')))
1768 ('symbol', '5')))
1769 (func
1769 (func
1770 ('symbol', 'descendants')
1770 ('symbol', 'descendants')
1771 (func
1771 (func
1772 ('symbol', 'max')
1772 ('symbol', 'max')
1773 ('string', '$1')))
1773 ('string', '$1')))
1774 abort: unknown revision '$1'!
1774 abort: unknown revision '$1'!
1775 [255]
1775 [255]
1776
1776
1777 $ echo 'injectparamasstring2 = max(_aliasarg("$1"))' >> .hg/hgrc
1777 $ echo 'injectparamasstring2 = max(_aliasarg("$1"))' >> .hg/hgrc
1778 $ echo 'callinjection2($1) = descendants(injectparamasstring2)' >> .hg/hgrc
1778 $ echo 'callinjection2($1) = descendants(injectparamasstring2)' >> .hg/hgrc
1779 $ try 'callinjection2(2:5)'
1779 $ try 'callinjection2(2:5)'
1780 (func
1780 (func
1781 ('symbol', 'callinjection2')
1781 ('symbol', 'callinjection2')
1782 (range
1782 (range
1783 ('symbol', '2')
1783 ('symbol', '2')
1784 ('symbol', '5')))
1784 ('symbol', '5')))
1785 abort: failed to parse the definition of revset alias "injectparamasstring2": unknown identifier: _aliasarg
1785 abort: failed to parse the definition of revset alias "injectparamasstring2": unknown identifier: _aliasarg
1786 [255]
1786 [255]
1787 $ hg debugrevspec --debug --config revsetalias.anotherbadone='branch(' "tip"
1787 $ hg debugrevspec --debug --config revsetalias.anotherbadone='branch(' "tip"
1788 ('symbol', 'tip')
1788 ('symbol', 'tip')
1789 warning: failed to parse the definition of revset alias "anotherbadone": at 7: not a prefix: end
1789 warning: failed to parse the definition of revset alias "anotherbadone": at 7: not a prefix: end
1790 warning: failed to parse the definition of revset alias "injectparamasstring2": unknown identifier: _aliasarg
1790 warning: failed to parse the definition of revset alias "injectparamasstring2": unknown identifier: _aliasarg
1791 * set:
1791 * set:
1792 <baseset [9]>
1792 <baseset [9]>
1793 9
1793 9
1794 >>> data = file('.hg/hgrc', 'rb').read()
1794 >>> data = file('.hg/hgrc', 'rb').read()
1795 >>> file('.hg/hgrc', 'wb').write(data.replace('_aliasarg', ''))
1795 >>> file('.hg/hgrc', 'wb').write(data.replace('_aliasarg', ''))
1796
1796
1797 $ try 'tip'
1797 $ try 'tip'
1798 ('symbol', 'tip')
1798 ('symbol', 'tip')
1799 * set:
1799 * set:
1800 <baseset [9]>
1800 <baseset [9]>
1801 9
1801 9
1802
1802
1803 $ hg debugrevspec --debug --config revsetalias.'bad name'='tip' "tip"
1803 $ hg debugrevspec --debug --config revsetalias.'bad name'='tip' "tip"
1804 ('symbol', 'tip')
1804 ('symbol', 'tip')
1805 warning: failed to parse the declaration of revset alias "bad name": at 4: invalid token
1805 warning: failed to parse the declaration of revset alias "bad name": at 4: invalid token
1806 * set:
1806 * set:
1807 <baseset [9]>
1807 <baseset [9]>
1808 9
1808 9
1809 $ echo 'strictreplacing($1, $10) = $10 or desc("$1")' >> .hg/hgrc
1809 $ echo 'strictreplacing($1, $10) = $10 or desc("$1")' >> .hg/hgrc
1810 $ try 'strictreplacing("foo", tip)'
1810 $ try 'strictreplacing("foo", tip)'
1811 (func
1811 (func
1812 ('symbol', 'strictreplacing')
1812 ('symbol', 'strictreplacing')
1813 (list
1813 (list
1814 ('string', 'foo')
1814 ('string', 'foo')
1815 ('symbol', 'tip')))
1815 ('symbol', 'tip')))
1816 (or
1816 (or
1817 ('symbol', 'tip')
1817 ('symbol', 'tip')
1818 (func
1818 (func
1819 ('symbol', 'desc')
1819 ('symbol', 'desc')
1820 ('string', '$1')))
1820 ('string', '$1')))
1821 * set:
1821 * set:
1822 <addset
1822 <addset
1823 <baseset [9]>,
1823 <baseset [9]>,
1824 <filteredset
1824 <filteredset
1825 <fullreposet+ 0:9>>>
1825 <fullreposet+ 0:9>>>
1826 9
1826 9
1827
1827
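A sketch of the substitution the isolation and strict-replacement tests above exercise: alias parameters are matched only as whole ('symbol', '$n') tokens in the alias's parse tree, so a "$1" spelled inside a quoted string stays a literal ('string', '$1') node and never receives a caller argument, and '$10' is its own parameter rather than '$1' followed by '0'. Names here are illustrative, not Mercurial's internals.

    def expandargs(tree, args):
        # args maps whole placeholder names ('$1', '$10', ...) to caller parse trees
        if not isinstance(tree, tuple):
            return tree
        if tree[0] == 'symbol' and tree[1] in args:
            return args[tree[1]]
        return (tree[0],) + tuple(expandargs(c, args) for c in tree[1:])

    # strictreplacing("foo", tip): args = {'$1': ('string', 'foo'), '$10': ('symbol', 'tip')}
    # only the bare $10 symbol is replaced; the quoted "$1" inside desc("$1") stays a
    # ('string', '$1') node, which is why callinjection(2:5) above aborts with
    # "unknown revision '$1'!".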
1828 $ try 'd(2:5)'
1828 $ try 'd(2:5)'
1829 (func
1829 (func
1830 ('symbol', 'd')
1830 ('symbol', 'd')
1831 (range
1831 (range
1832 ('symbol', '2')
1832 ('symbol', '2')
1833 ('symbol', '5')))
1833 ('symbol', '5')))
1834 (func
1834 (func
1835 ('symbol', 'reverse')
1835 ('symbol', 'reverse')
1836 (func
1836 (func
1837 ('symbol', 'sort')
1837 ('symbol', 'sort')
1838 (list
1838 (list
1839 (range
1839 (range
1840 ('symbol', '2')
1840 ('symbol', '2')
1841 ('symbol', '5'))
1841 ('symbol', '5'))
1842 ('symbol', 'date'))))
1842 ('symbol', 'date'))))
1843 * set:
1843 * set:
1844 <baseset [4, 5, 3, 2]>
1844 <baseset [4, 5, 3, 2]>
1845 4
1845 4
1846 5
1846 5
1847 3
1847 3
1848 2
1848 2
1849 $ try 'rs(2 or 3, date)'
1849 $ try 'rs(2 or 3, date)'
1850 (func
1850 (func
1851 ('symbol', 'rs')
1851 ('symbol', 'rs')
1852 (list
1852 (list
1853 (or
1853 (or
1854 ('symbol', '2')
1854 ('symbol', '2')
1855 ('symbol', '3'))
1855 ('symbol', '3'))
1856 ('symbol', 'date')))
1856 ('symbol', 'date')))
1857 (func
1857 (func
1858 ('symbol', 'reverse')
1858 ('symbol', 'reverse')
1859 (func
1859 (func
1860 ('symbol', 'sort')
1860 ('symbol', 'sort')
1861 (list
1861 (list
1862 (or
1862 (or
1863 ('symbol', '2')
1863 ('symbol', '2')
1864 ('symbol', '3'))
1864 ('symbol', '3'))
1865 ('symbol', 'date'))))
1865 ('symbol', 'date'))))
1866 * set:
1866 * set:
1867 <baseset [3, 2]>
1867 <baseset [3, 2]>
1868 3
1868 3
1869 2
1869 2
1870 $ try 'rs()'
1870 $ try 'rs()'
1871 (func
1871 (func
1872 ('symbol', 'rs')
1872 ('symbol', 'rs')
1873 None)
1873 None)
1874 hg: parse error: invalid number of arguments: 0
1874 hg: parse error: invalid number of arguments: 0
1875 [255]
1875 [255]
1876 $ try 'rs(2)'
1876 $ try 'rs(2)'
1877 (func
1877 (func
1878 ('symbol', 'rs')
1878 ('symbol', 'rs')
1879 ('symbol', '2'))
1879 ('symbol', '2'))
1880 hg: parse error: invalid number of arguments: 1
1880 hg: parse error: invalid number of arguments: 1
1881 [255]
1881 [255]
1882 $ try 'rs(2, data, 7)'
1882 $ try 'rs(2, data, 7)'
1883 (func
1883 (func
1884 ('symbol', 'rs')
1884 ('symbol', 'rs')
1885 (list
1885 (list
1886 ('symbol', '2')
1886 ('symbol', '2')
1887 ('symbol', 'data')
1887 ('symbol', 'data')
1888 ('symbol', '7')))
1888 ('symbol', '7')))
1889 hg: parse error: invalid number of arguments: 3
1889 hg: parse error: invalid number of arguments: 3
1890 [255]
1890 [255]
1891 $ try 'rs4(2 or 3, x, x, date)'
1891 $ try 'rs4(2 or 3, x, x, date)'
1892 (func
1892 (func
1893 ('symbol', 'rs4')
1893 ('symbol', 'rs4')
1894 (list
1894 (list
1895 (or
1895 (or
1896 ('symbol', '2')
1896 ('symbol', '2')
1897 ('symbol', '3'))
1897 ('symbol', '3'))
1898 ('symbol', 'x')
1898 ('symbol', 'x')
1899 ('symbol', 'x')
1899 ('symbol', 'x')
1900 ('symbol', 'date')))
1900 ('symbol', 'date')))
1901 (func
1901 (func
1902 ('symbol', 'reverse')
1902 ('symbol', 'reverse')
1903 (func
1903 (func
1904 ('symbol', 'sort')
1904 ('symbol', 'sort')
1905 (list
1905 (list
1906 (or
1906 (or
1907 ('symbol', '2')
1907 ('symbol', '2')
1908 ('symbol', '3'))
1908 ('symbol', '3'))
1909 ('symbol', 'date'))))
1909 ('symbol', 'date'))))
1910 * set:
1910 * set:
1911 <baseset [3, 2]>
1911 <baseset [3, 2]>
1912 3
1912 3
1913 2
1913 2
1914
1914
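The "invalid number of arguments" errors above come from a simple arity check during alias expansion: the call site must supply exactly as many arguments as the declaration has parameters, so the two-parameter rs alias rejects zero, one, or three arguments. A toy version of that check (names and exception type are illustrative):

    def checkarity(declared, passed):
        # declared: parameter names from the alias declaration, e.g. ['$1', '$2']
        # passed:   parse trees supplied at the call site
        if len(passed) != len(declared):
            raise ValueError('invalid number of arguments: %d' % len(passed))

    # checkarity(['$1', '$2'], [])          -> invalid number of arguments: 0
    # checkarity(['$1', '$2'], [x])         -> invalid number of arguments: 1
    # checkarity(['$1', '$2'], [x, y, z])   -> invalid number of arguments: 3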
1915 issue4553: check that revset aliases override existing hash prefix
1915 issue4553: check that revset aliases override existing hash prefix
1916
1916
1917 $ hg log -qr e
1917 $ hg log -qr e
1918 6:e0cc66ef77e8
1918 6:e0cc66ef77e8
1919
1919
1920 $ hg log -qr e --config revsetalias.e="all()"
1920 $ hg log -qr e --config revsetalias.e="all()"
1921 0:2785f51eece5
1921 0:2785f51eece5
1922 1:d75937da8da0
1922 1:d75937da8da0
1923 2:5ed5505e9f1c
1923 2:5ed5505e9f1c
1924 3:8528aa5637f2
1924 3:8528aa5637f2
1925 4:2326846efdab
1925 4:2326846efdab
1926 5:904fa392b941
1926 5:904fa392b941
1927 6:e0cc66ef77e8
1927 6:e0cc66ef77e8
1928 7:013af1973af4
1928 7:013af1973af4
1929 8:d5d0dcbdc4d9
1929 8:d5d0dcbdc4d9
1930 9:24286f4ae135
1930 9:24286f4ae135
1931
1931
1932 $ hg log -qr e: --config revsetalias.e="0"
1932 $ hg log -qr e: --config revsetalias.e="0"
1933 0:2785f51eece5
1933 0:2785f51eece5
1934 1:d75937da8da0
1934 1:d75937da8da0
1935 2:5ed5505e9f1c
1935 2:5ed5505e9f1c
1936 3:8528aa5637f2
1936 3:8528aa5637f2
1937 4:2326846efdab
1937 4:2326846efdab
1938 5:904fa392b941
1938 5:904fa392b941
1939 6:e0cc66ef77e8
1939 6:e0cc66ef77e8
1940 7:013af1973af4
1940 7:013af1973af4
1941 8:d5d0dcbdc4d9
1941 8:d5d0dcbdc4d9
1942 9:24286f4ae135
1942 9:24286f4ae135
1943
1943
1944 $ hg log -qr :e --config revsetalias.e="9"
1944 $ hg log -qr :e --config revsetalias.e="9"
1945 0:2785f51eece5
1945 0:2785f51eece5
1946 1:d75937da8da0
1946 1:d75937da8da0
1947 2:5ed5505e9f1c
1947 2:5ed5505e9f1c
1948 3:8528aa5637f2
1948 3:8528aa5637f2
1949 4:2326846efdab
1949 4:2326846efdab
1950 5:904fa392b941
1950 5:904fa392b941
1951 6:e0cc66ef77e8
1951 6:e0cc66ef77e8
1952 7:013af1973af4
1952 7:013af1973af4
1953 8:d5d0dcbdc4d9
1953 8:d5d0dcbdc4d9
1954 9:24286f4ae135
1954 9:24286f4ae135
1955
1955
1956 $ hg log -qr e:
1956 $ hg log -qr e:
1957 6:e0cc66ef77e8
1957 6:e0cc66ef77e8
1958 7:013af1973af4
1958 7:013af1973af4
1959 8:d5d0dcbdc4d9
1959 8:d5d0dcbdc4d9
1960 9:24286f4ae135
1960 9:24286f4ae135
1961
1961
1962 $ hg log -qr :e
1962 $ hg log -qr :e
1963 0:2785f51eece5
1963 0:2785f51eece5
1964 1:d75937da8da0
1964 1:d75937da8da0
1965 2:5ed5505e9f1c
1965 2:5ed5505e9f1c
1966 3:8528aa5637f2
1966 3:8528aa5637f2
1967 4:2326846efdab
1967 4:2326846efdab
1968 5:904fa392b941
1968 5:904fa392b941
1969 6:e0cc66ef77e8
1969 6:e0cc66ef77e8
1970
1970
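issue4553 above pins down a precedence rule: when a name like "e" is both a configured revset alias and a valid hash prefix, the alias wins. A dictionary-based sketch of that lookup order, purely illustrative and not how revset.py is actually structured:

    def resolve(name, aliases, hashprefixes):
        # aliases:      {'e': 'all()'} from [revsetalias] config
        # hashprefixes: mapping of unambiguous hash prefixes to revisions
        if name in aliases:
            return ('alias', aliases[name])      # expand the alias definition
        if name in hashprefixes:
            return ('rev', hashprefixes[name])   # fall back to the hash prefix
        raise LookupError("unknown revision '%s'!" % name)

    # with no alias configured, 'e' resolves to e0cc66ef77e8 (rev 6);
    # with revsetalias.e=all(), the same 'e' expands to all() instead.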
1971 issue2549 - correct optimizations
1971 issue2549 - correct optimizations
1972
1972
1973 $ log 'limit(1 or 2 or 3, 2) and not 2'
1973 $ log 'limit(1 or 2 or 3, 2) and not 2'
1974 1
1974 1
1975 $ log 'max(1 or 2) and not 2'
1975 $ log 'max(1 or 2) and not 2'
1976 $ log 'min(1 or 2) and not 1'
1976 $ log 'min(1 or 2) and not 1'
1977 $ log 'last(1 or 2, 1) and not 2'
1977 $ log 'last(1 or 2, 1) and not 2'
1978
1978
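issue2549 is about evaluation order during optimization: functional predicates such as limit(), max(), min() and last() pick revisions from their full inner set first, and only then does the outer "and not ..." filter apply; pushing the filter inside would change the answer. A small check of the first case above, with plain Python lists standing in for revsets:

    def limit(revs, n):
        return revs[:n]

    inner = limit([1, 2, 3], 2)                 # -> [1, 2]
    correct = [r for r in inner if r != 2]      # -> [1], matching the test output

    # the buggy optimization would filter first, changing the result:
    wrong = limit([r for r in [1, 2, 3] if r != 2], 2)   # -> [1, 3]

    # the same ordering explains the empty results above: max(1 or 2) is 2,
    # which the outer "and not 2" then removes entirely.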
1979 issue4289 - ordering of built-ins
1979 issue4289 - ordering of built-ins
1980 $ hg log -M -q -r 3:2
1980 $ hg log -M -q -r 3:2
1981 3:8528aa5637f2
1981 3:8528aa5637f2
1982 2:5ed5505e9f1c
1982 2:5ed5505e9f1c
1983
1983
1984 test revsets starting with a 40-char hash (issue3669)
1984 test revsets starting with a 40-char hash (issue3669)
1985
1985
1986 $ ISSUE3669_TIP=`hg tip --template '{node}'`
1986 $ ISSUE3669_TIP=`hg tip --template '{node}'`
1987 $ hg log -r "${ISSUE3669_TIP}" --template '{rev}\n'
1987 $ hg log -r "${ISSUE3669_TIP}" --template '{rev}\n'
1988 9
1988 9
1989 $ hg log -r "${ISSUE3669_TIP}^" --template '{rev}\n'
1989 $ hg log -r "${ISSUE3669_TIP}^" --template '{rev}\n'
1990 8
1990 8
1991
1991
1992 test or-ed indirect predicates (issue3775)
1992 test or-ed indirect predicates (issue3775)
1993
1993
1994 $ log '6 or 6^1' | sort
1994 $ log '6 or 6^1' | sort
1995 5
1995 5
1996 6
1996 6
1997 $ log '6^1 or 6' | sort
1997 $ log '6^1 or 6' | sort
1998 5
1998 5
1999 6
1999 6
2000 $ log '4 or 4~1' | sort
2000 $ log '4 or 4~1' | sort
2001 2
2001 2
2002 4
2002 4
2003 $ log '4~1 or 4' | sort
2003 $ log '4~1 or 4' | sort
2004 2
2004 2
2005 4
2005 4
2006 $ log '(0 or 2):(4 or 6) or 0 or 6' | sort
2006 $ log '(0 or 2):(4 or 6) or 0 or 6' | sort
2007 0
2007 0
2008 1
2008 1
2009 2
2009 2
2010 3
2010 3
2011 4
2011 4
2012 5
2012 5
2013 6
2013 6
2014 $ log '0 or 6 or (0 or 2):(4 or 6)' | sort
2014 $ log '0 or 6 or (0 or 2):(4 or 6)' | sort
2015 0
2015 0
2016 1
2016 1
2017 2
2017 2
2018 3
2018 3
2019 4
2019 4
2020 5
2020 5
2021 6
2021 6
2022
2022
2023 tests for 'remote()' predicate:
2023 tests for 'remote()' predicate:
2024 #. (csets in remote) (id) (remote)
2024 #. (csets in remote) (id) (remote)
2025 1. less than local current branch "default"
2025 1. less than local current branch "default"
2026 2. same with local specified "default"
2026 2. same with local specified "default"
2027 3. more than local specified specified
2027 3. more than local specified specified
2028
2028
2029 $ hg clone --quiet -U . ../remote3
2029 $ hg clone --quiet -U . ../remote3
2030 $ cd ../remote3
2030 $ cd ../remote3
2031 $ hg update -q 7
2031 $ hg update -q 7
2032 $ echo r > r
2032 $ echo r > r
2033 $ hg ci -Aqm 10
2033 $ hg ci -Aqm 10
2034 $ log 'remote()'
2034 $ log 'remote()'
2035 7
2035 7
2036 $ log 'remote("a-b-c-")'
2036 $ log 'remote("a-b-c-")'
2037 2
2037 2
2038 $ cd ../repo
2038 $ cd ../repo
2039 $ log 'remote(".a.b.c.", "../remote3")'
2039 $ log 'remote(".a.b.c.", "../remote3")'
2040
2040
2041 tests for concatenation of strings/symbols by "##"
2041 tests for concatenation of strings/symbols by "##"
2042
2042
2043 $ try "278 ## '5f5' ## 1ee ## 'ce5'"
2043 $ try "278 ## '5f5' ## 1ee ## 'ce5'"
2044 (_concat
2044 (_concat
2045 (_concat
2045 (_concat
2046 (_concat
2046 (_concat
2047 ('symbol', '278')
2047 ('symbol', '278')
2048 ('string', '5f5'))
2048 ('string', '5f5'))
2049 ('symbol', '1ee'))
2049 ('symbol', '1ee'))
2050 ('string', 'ce5'))
2050 ('string', 'ce5'))
2051 ('string', '2785f51eece5')
2051 ('string', '2785f51eece5')
2052 * set:
2052 * set:
2053 <baseset [0]>
2053 <baseset [0]>
2054 0
2054 0
2055
2055
2056 $ echo 'cat4($1, $2, $3, $4) = $1 ## $2 ## $3 ## $4' >> .hg/hgrc
2056 $ echo 'cat4($1, $2, $3, $4) = $1 ## $2 ## $3 ## $4' >> .hg/hgrc
2057 $ try "cat4(278, '5f5', 1ee, 'ce5')"
2057 $ try "cat4(278, '5f5', 1ee, 'ce5')"
2058 (func
2058 (func
2059 ('symbol', 'cat4')
2059 ('symbol', 'cat4')
2060 (list
2060 (list
2061 ('symbol', '278')
2061 ('symbol', '278')
2062 ('string', '5f5')
2062 ('string', '5f5')
2063 ('symbol', '1ee')
2063 ('symbol', '1ee')
2064 ('string', 'ce5')))
2064 ('string', 'ce5')))
2065 (_concat
2065 (_concat
2066 (_concat
2066 (_concat
2067 (_concat
2067 (_concat
2068 ('symbol', '278')
2068 ('symbol', '278')
2069 ('string', '5f5'))
2069 ('string', '5f5'))
2070 ('symbol', '1ee'))
2070 ('symbol', '1ee'))
2071 ('string', 'ce5'))
2071 ('string', 'ce5'))
2072 ('string', '2785f51eece5')
2072 ('string', '2785f51eece5')
2073 * set:
2073 * set:
2074 <baseset [0]>
2074 <baseset [0]>
2075 0
2075 0
2076
2076
2077 (check concatenation in alias nesting)
2077 (check concatenation in alias nesting)
2078
2078
2079 $ echo 'cat2($1, $2) = $1 ## $2' >> .hg/hgrc
2079 $ echo 'cat2($1, $2) = $1 ## $2' >> .hg/hgrc
2080 $ echo 'cat2x2($1, $2, $3, $4) = cat2($1 ## $2, $3 ## $4)' >> .hg/hgrc
2080 $ echo 'cat2x2($1, $2, $3, $4) = cat2($1 ## $2, $3 ## $4)' >> .hg/hgrc
2081 $ log "cat2x2(278, '5f5', 1ee, 'ce5')"
2081 $ log "cat2x2(278, '5f5', 1ee, 'ce5')"
2082 0
2082 0
2083
2083
2084 (check operator priority)
2084 (check operator priority)
2085
2085
2086 $ echo 'cat2n2($1, $2, $3, $4) = $1 ## $2 or $3 ## $4~2' >> .hg/hgrc
2086 $ echo 'cat2n2($1, $2, $3, $4) = $1 ## $2 or $3 ## $4~2' >> .hg/hgrc
2087 $ log "cat2n2(2785f5, 1eece5, 24286f, 4ae135)"
2087 $ log "cat2n2(2785f5, 1eece5, 24286f, 4ae135)"
2088 0
2088 0
2089 4
2089 4
2090
2090
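The parse trees above show "##" producing nested (_concat ...) nodes whose operands may be quoted strings or bare symbols; at evaluation the operand texts are simply joined into one string, here reassembling the hash prefix 2785f51eece5. A minimal sketch of that fold (helper name is illustrative):

    def evalconcat(tree):
        # fold a (_concat, a, b) chain into a single ('string', ...) node
        if tree[0] == '_concat':
            return ('string', ''.join(evalconcat(c)[1] for c in tree[1:]))
        # both ('string', x) and ('symbol', x) contribute their raw text
        return ('string', tree[1])

    # 278 ## '5f5' ## 1ee ## 'ce5'  ->  ('string', '2785f51eece5')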
2091 $ cd ..
2091 $ cd ..
2092
2092
2093 prepare a repository that has a "default" branch with multiple roots
2093 prepare a repository that has a "default" branch with multiple roots
2094
2094
2095 $ hg init namedbranch
2095 $ hg init namedbranch
2096 $ cd namedbranch
2096 $ cd namedbranch
2097
2097
2098 $ echo default0 >> a
2098 $ echo default0 >> a
2099 $ hg ci -Aqm0
2099 $ hg ci -Aqm0
2100 $ echo default1 >> a
2100 $ echo default1 >> a
2101 $ hg ci -m1
2101 $ hg ci -m1
2102
2102
2103 $ hg branch -q stable
2103 $ hg branch -q stable
2104 $ echo stable2 >> a
2104 $ echo stable2 >> a
2105 $ hg ci -m2
2105 $ hg ci -m2
2106 $ echo stable3 >> a
2106 $ echo stable3 >> a
2107 $ hg ci -m3
2107 $ hg ci -m3
2108
2108
2109 $ hg update -q null
2109 $ hg update -q null
2110 $ echo default4 >> a
2110 $ echo default4 >> a
2111 $ hg ci -Aqm4
2111 $ hg ci -Aqm4
2112 $ echo default5 >> a
2112 $ echo default5 >> a
2113 $ hg ci -m5
2113 $ hg ci -m5
2114
2114
2115 "null" revision belongs to "default" branch (issue4683)
2115 "null" revision belongs to "default" branch (issue4683)
2116
2116
2117 $ log 'branch(null)'
2117 $ log 'branch(null)'
2118 0
2118 0
2119 1
2119 1
2120 4
2120 4
2121 5
2121 5
2122
2122
2123 "null" revision belongs to "default" branch, but it shouldn't appear in set
2123 "null" revision belongs to "default" branch, but it shouldn't appear in set
2124 unless explicitly specified (issue4682)
2124 unless explicitly specified (issue4682)
2125
2125
2126 $ log 'children(branch(default))'
2126 $ log 'children(branch(default))'
2127 1
2127 1
2128 2
2128 2
2129 5
2129 5
2130
2130
2131 $ cd ..
2131 $ cd ..
2132
2132
2133 test author/desc/keyword in problematic encoding
2133 test author/desc/keyword in problematic encoding
2134 # unicode: cp932:
2134 # unicode: cp932:
2135 # u30A2 0x83 0x41(= 'A')
2135 # u30A2 0x83 0x41(= 'A')
2136 # u30C2 0x83 0x61(= 'a')
2136 # u30C2 0x83 0x61(= 'a')
2137
2137
2138 $ hg init problematicencoding
2138 $ hg init problematicencoding
2139 $ cd problematicencoding
2139 $ cd problematicencoding
2140
2140
2141 $ python > setup.sh <<EOF
2141 $ python > setup.sh <<EOF
2142 > print u'''
2142 > print u'''
2143 > echo a > text
2143 > echo a > text
2144 > hg add text
2144 > hg add text
2145 > hg --encoding utf-8 commit -u '\u30A2' -m none
2145 > hg --encoding utf-8 commit -u '\u30A2' -m none
2146 > echo b > text
2146 > echo b > text
2147 > hg --encoding utf-8 commit -u '\u30C2' -m none
2147 > hg --encoding utf-8 commit -u '\u30C2' -m none
2148 > echo c > text
2148 > echo c > text
2149 > hg --encoding utf-8 commit -u none -m '\u30A2'
2149 > hg --encoding utf-8 commit -u none -m '\u30A2'
2150 > echo d > text
2150 > echo d > text
2151 > hg --encoding utf-8 commit -u none -m '\u30C2'
2151 > hg --encoding utf-8 commit -u none -m '\u30C2'
2152 > '''.encode('utf-8')
2152 > '''.encode('utf-8')
2153 > EOF
2153 > EOF
2154 $ sh < setup.sh
2154 $ sh < setup.sh
2155
2155
2156 test in problematic encoding
2156 test in problematic encoding
2157 $ python > test.sh <<EOF
2157 $ python > test.sh <<EOF
2158 > print u'''
2158 > print u'''
2159 > hg --encoding cp932 log --template '{rev}\\n' -r 'author(\u30A2)'
2159 > hg --encoding cp932 log --template '{rev}\\n' -r 'author(\u30A2)'
2160 > echo ====
2160 > echo ====
2161 > hg --encoding cp932 log --template '{rev}\\n' -r 'author(\u30C2)'
2161 > hg --encoding cp932 log --template '{rev}\\n' -r 'author(\u30C2)'
2162 > echo ====
2162 > echo ====
2163 > hg --encoding cp932 log --template '{rev}\\n' -r 'desc(\u30A2)'
2163 > hg --encoding cp932 log --template '{rev}\\n' -r 'desc(\u30A2)'
2164 > echo ====
2164 > echo ====
2165 > hg --encoding cp932 log --template '{rev}\\n' -r 'desc(\u30C2)'
2165 > hg --encoding cp932 log --template '{rev}\\n' -r 'desc(\u30C2)'
2166 > echo ====
2166 > echo ====
2167 > hg --encoding cp932 log --template '{rev}\\n' -r 'keyword(\u30A2)'
2167 > hg --encoding cp932 log --template '{rev}\\n' -r 'keyword(\u30A2)'
2168 > echo ====
2168 > echo ====
2169 > hg --encoding cp932 log --template '{rev}\\n' -r 'keyword(\u30C2)'
2169 > hg --encoding cp932 log --template '{rev}\\n' -r 'keyword(\u30C2)'
2170 > '''.encode('cp932')
2170 > '''.encode('cp932')
2171 > EOF
2171 > EOF
2172 $ sh < test.sh
2172 $ sh < test.sh
2173 0
2173 0
2174 ====
2174 ====
2175 1
2175 1
2176 ====
2176 ====
2177 2
2177 2
2178 ====
2178 ====
2179 3
2179 3
2180 ====
2180 ====
2181 0
2181 0
2182 2
2182 2
2183 ====
2183 ====
2184 1
2184 1
2185 3
2185 3
2186
2186
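What makes cp932 "problematic" here is spelled out in the comment above: the second byte of each of these two-byte characters is itself a printable ASCII letter, so byte-oriented matching that is not encoding-aware could confuse them with a plain 'A' or 'a'. A quick check of those byte values (Python 2, as used by this test):

    assert u'\u30a2'.encode('cp932') == '\x83\x41'   # second byte is ASCII 'A'
    assert u'\u30c2'.encode('cp932') == '\x83\x61'   # second byte is ASCII 'a'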
2187 test error message of bad revset
2187 test error message of bad revset
2188 $ hg log -r 'foo\\'
2188 $ hg log -r 'foo\\'
2189 hg: parse error at 3: syntax error in revset 'foo\\'
2189 hg: parse error at 3: syntax error in revset 'foo\\'
2190 [255]
2190 [255]
2191
2191
2192 $ cd ..
2192 $ cd ..
2193
2193
2194 Test registrar.delayregistrar via revset.extpredicate
2194 Test that revset predicate of extension isn't loaded at failure of
2195
2195 loading it
2196 'extpredicate' decorator shouldn't register any functions until
2197 'setup()' on it.
2198
2196
2199 $ cd repo
2197 $ cd repo
2200
2198
2201 $ cat <<EOF > $TESTTMP/custompredicate.py
2199 $ cat <<EOF > $TESTTMP/custompredicate.py
2202 > from mercurial import revset
2200 > from mercurial import error, registrar, revset
2203 >
2201 >
2204 > revsetpredicate = revset.extpredicate()
2202 > revsetpredicate = registrar.revsetpredicate()
2205 >
2203 >
2206 > @revsetpredicate('custom1()')
2204 > @revsetpredicate('custom1()')
2207 > def custom1(repo, subset, x):
2205 > def custom1(repo, subset, x):
2208 > return revset.baseset([1])
2206 > return revset.baseset([1])
2209 > @revsetpredicate('custom2()')
2210 > def custom2(repo, subset, x):
2211 > return revset.baseset([2])
2212 >
2207 >
2213 > def uisetup(ui):
2208 > raise error.Abort('intentional failure of loading extension')
2214 > if ui.configbool('custompredicate', 'enabled'):
2215 > revsetpredicate.setup()
2216 > EOF
2209 > EOF
2217 $ cat <<EOF > .hg/hgrc
2210 $ cat <<EOF > .hg/hgrc
2218 > [extensions]
2211 > [extensions]
2219 > custompredicate = $TESTTMP/custompredicate.py
2212 > custompredicate = $TESTTMP/custompredicate.py
2220 > EOF
2213 > EOF
2221
2214
2222 $ hg debugrevspec "custom1()"
2215 $ hg debugrevspec "custom1()"
2216 *** failed to import extension custompredicate from $TESTTMP/custompredicate.py: intentional failure of loading extension
2223 hg: parse error: unknown identifier: custom1
2217 hg: parse error: unknown identifier: custom1
2224 [255]
2218 [255]
2225 $ hg debugrevspec "custom2()"
2226 hg: parse error: unknown identifier: custom2
2227 [255]
2228 $ hg debugrevspec "custom1() or custom2()" --config custompredicate.enabled=true
2229 1
2230 2
2231
2219
2232 $ cd ..
2220 $ cd ..
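For contrast with the failing extension above, whose uisetup aborts on purpose so its predicate never becomes available, a minimal working variant under registrar.revsetpredicate would look like the sketch below; once such an extension loads cleanly, the decorated function can be used in revsets, e.g. hg log -r "custom1()". This mirrors the code already shown in the test rather than adding anything new.

    from mercurial import registrar, revset

    revsetpredicate = registrar.revsetpredicate()

    @revsetpredicate('custom1()')
    def custom1(repo, subset, x):
        # illustrative predicate: always yields revision 1
        return revset.baseset([1])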