config: set a 'source' in most cases where config don't come from file but code...
Mads Kiilerich
r20790:49f2d564 default

The requested changes are too big and content was truncated.

@@ -1,350 +1,350 b''
1 1 """automatically manage newlines in repository files
2 2
3 3 This extension allows you to manage the type of line endings (CRLF or
4 4 LF) that are used in the repository and in the local working
5 5 directory. That way you can get CRLF line endings on Windows and LF on
6 6 Unix/Mac, thereby letting everybody use their OS native line endings.
7 7
8 8 The extension reads its configuration from a versioned ``.hgeol``
9 9 configuration file found in the root of the working copy. The
10 10 ``.hgeol`` file uses the same syntax as all other Mercurial
11 11 configuration files. It uses two sections, ``[patterns]`` and
12 12 ``[repository]``.
13 13
14 14 The ``[patterns]`` section specifies how line endings should be
15 15 converted between the working copy and the repository. The format is
16 16 specified by a file pattern. The first match is used, so put more
17 17 specific patterns first. The available line endings are ``LF``,
18 18 ``CRLF``, and ``BIN``.
19 19
20 20 Files with the declared format of ``CRLF`` or ``LF`` are always
21 21 checked out and stored in the repository in that format and files
22 22 declared to be binary (``BIN``) are left unchanged. Additionally,
23 23 ``native`` is an alias for checking out in the platform's default line
24 24 ending: ``LF`` on Unix (including Mac OS X) and ``CRLF`` on
25 25 Windows. Note that ``BIN`` (do nothing to line endings) is Mercurial's
26 26 default behaviour; it is only required if you need to override a later,
27 27 more general pattern.
28 28
29 29 The optional ``[repository]`` section specifies the line endings to
30 30 use for files stored in the repository. It has a single setting,
31 31 ``native``, which determines the storage line endings for files
32 32 declared as ``native`` in the ``[patterns]`` section. It can be set to
33 33 ``LF`` or ``CRLF``. The default is ``LF``. For example, this means
34 34 that on Windows, files configured as ``native`` (``CRLF`` by default)
35 35 will be converted to ``LF`` when stored in the repository. Files
36 36 declared as ``LF``, ``CRLF``, or ``BIN`` in the ``[patterns]`` section
37 37 are always stored as-is in the repository.
38 38
39 39 Example versioned ``.hgeol`` file::
40 40
41 41 [patterns]
42 42 **.py = native
43 43 **.vcproj = CRLF
44 44 **.txt = native
45 45 Makefile = LF
46 46 **.jpg = BIN
47 47
48 48 [repository]
49 49 native = LF
50 50
51 51 .. note::
52 52
53 53 The rules will first apply when files are touched in the working
54 54 copy, e.g. by updating to null and back to tip to touch all files.
55 55
56 56 The extension uses an optional ``[eol]`` section read from both the
57 57 normal Mercurial configuration files and the ``.hgeol`` file, with the
58 58 latter overriding the former. You can use that section to control the
59 59 overall behavior. There are three settings:
60 60
61 61 - ``eol.native`` (default ``os.linesep``) can be set to ``LF`` or
62 62 ``CRLF`` to override the default interpretation of ``native`` for
63 63 checkout. This can be used with :hg:`archive` on Unix, say, to
64 64 generate an archive where files have line endings for Windows.
65 65
66 66 - ``eol.only-consistent`` (default True) can be set to False to make
67 67 the extension convert files with inconsistent EOLs. Inconsistent
68 68 means that there is both ``CRLF`` and ``LF`` present in the file.
69 69 Such files are normally not touched under the assumption that they
70 70 have mixed EOLs on purpose.
71 71
72 72 - ``eol.fix-trailing-newline`` (default False) can be set to True to
73 73 ensure that converted files end with an EOL character (either ``\\n``
74 74 or ``\\r\\n`` as per the configured patterns).
75 75
76 76 The extension provides ``cleverencode:`` and ``cleverdecode:`` filters
77 77 like the deprecated win32text extension does. This means that you can
78 78 disable win32text and enable eol and your filters will still work. You
79 79 only need these filters until you have prepared a ``.hgeol`` file.
80 80
81 81 The ``win32text.forbid*`` hooks provided by the win32text extension
82 82 have been unified into a single hook named ``eol.checkheadshook``. The
83 83 hook will look up the expected line endings from the ``.hgeol`` file,
84 84 which means you must migrate to a ``.hgeol`` file first before using
85 85 the hook. ``eol.checkheadshook`` only checks heads; intermediate
86 86 invalid revisions will be pushed. To forbid them completely, use the
87 87 ``eol.checkallhook`` hook. These hooks are best used as
88 88 ``pretxnchangegroup`` hooks.
89 89
90 90 See :hg:`help patterns` for more information about the glob patterns
91 91 used.
92 92 """
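As an aside for readers following the diff: the LF/CRLF conversion the docstring describes boils down to a single regex substitution. A minimal standalone Python 3 sketch (illustrative only, not the extension's actual API; the helper names `to_lf`/`to_crlf` are hypothetical):

```python
import re

# One EOL is a run of CRs followed by an LF. The extension treats a stray
# lone CR as an error; this sketch simply normalizes CR*LF runs.
eolre = re.compile(r'\r*\n')

def to_lf(text):
    """Normalize every EOL to a lone LF (hypothetical helper)."""
    return eolre.sub('\n', text)

def to_crlf(text):
    """Normalize every EOL to CRLF (hypothetical helper)."""
    return eolre.sub('\r\n', text)

assert to_lf('a\r\nb\n') == 'a\nb\n'
assert to_crlf('a\nb\r\n') == 'a\r\nb\r\n'
assert to_lf('a\r\r\n') == 'a\n'  # repeated CRs before LF collapse to one EOL
```

The real filters additionally skip binary files and, by default, files whose EOLs are already inconsistent.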
93 93
94 94 from mercurial.i18n import _
95 95 from mercurial import util, config, extensions, match, error
96 96 import re, os
97 97
98 98 testedwith = 'internal'
99 99
100 100 # Matches a lone LF, i.e., one that is not part of CRLF.
101 101 singlelf = re.compile('(^|[^\r])\n')
102 102 # Matches a single EOL which can either be a CRLF where repeated CR
103 103 # are removed or a LF. We do not care about old Macintosh files, so a
104 104 # stray CR is an error.
105 105 eolre = re.compile('\r*\n')
106 106
107 107
108 108 def inconsistenteol(data):
109 109 return '\r\n' in data and singlelf.search(data)
110 110
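The inconsistency check above can be exercised on its own. A hedged Python 3 sketch mirroring `singlelf` and `inconsistenteol` (names follow the module, but this is a simplification, not the extension itself):

```python
import re

# An LF that is not part of a CRLF pair: either at the start of the data
# or preceded by a non-CR character.
singlelf = re.compile(r'(^|[^\r])\n')

def inconsistent_eol(data):
    # Inconsistent means both CRLF and a lone LF appear in the same file.
    return '\r\n' in data and bool(singlelf.search(data))

assert inconsistent_eol('a\r\nb\n')        # mixed: CRLF then lone LF
assert not inconsistent_eol('a\r\nb\r\n')  # pure CRLF
assert not inconsistent_eol('a\nb\n')      # pure LF (no CRLF present)
```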
111 111 def tolf(s, params, ui, **kwargs):
112 112 """Filter to convert to LF EOLs."""
113 113 if util.binary(s):
114 114 return s
115 115 if ui.configbool('eol', 'only-consistent', True) and inconsistenteol(s):
116 116 return s
117 117 if (ui.configbool('eol', 'fix-trailing-newline', False)
118 118 and s and s[-1] != '\n'):
119 119 s = s + '\n'
120 120 return eolre.sub('\n', s)
121 121
122 122 def tocrlf(s, params, ui, **kwargs):
123 123 """Filter to convert to CRLF EOLs."""
124 124 if util.binary(s):
125 125 return s
126 126 if ui.configbool('eol', 'only-consistent', True) and inconsistenteol(s):
127 127 return s
128 128 if (ui.configbool('eol', 'fix-trailing-newline', False)
129 129 and s and s[-1] != '\n'):
130 130 s = s + '\n'
131 131 return eolre.sub('\r\n', s)
132 132
133 133 def isbinary(s, params):
134 134 """Filter to do nothing with the file."""
135 135 return s
136 136
137 137 filters = {
138 138 'to-lf': tolf,
139 139 'to-crlf': tocrlf,
140 140 'is-binary': isbinary,
141 141 # The following provide backwards compatibility with win32text
142 142 'cleverencode:': tolf,
143 143 'cleverdecode:': tocrlf
144 144 }
145 145
146 146 class eolfile(object):
147 147 def __init__(self, ui, root, data):
148 148 self._decode = {'LF': 'to-lf', 'CRLF': 'to-crlf', 'BIN': 'is-binary'}
149 149 self._encode = {'LF': 'to-lf', 'CRLF': 'to-crlf', 'BIN': 'is-binary'}
150 150
151 151 self.cfg = config.config()
152 152 # Our files should not be touched. The pattern must be
153 153 # inserted first to override a '** = native' pattern.
154 self.cfg.set('patterns', '.hg*', 'BIN')
154 self.cfg.set('patterns', '.hg*', 'BIN', 'eol')
155 155 # We can then parse the user's patterns.
156 156 self.cfg.parse('.hgeol', data)
157 157
158 158 isrepolf = self.cfg.get('repository', 'native') != 'CRLF'
159 159 self._encode['NATIVE'] = isrepolf and 'to-lf' or 'to-crlf'
160 160 iswdlf = ui.config('eol', 'native', os.linesep) in ('LF', '\n')
161 161 self._decode['NATIVE'] = iswdlf and 'to-lf' or 'to-crlf'
162 162
163 163 include = []
164 164 exclude = []
165 165 for pattern, style in self.cfg.items('patterns'):
166 166 key = style.upper()
167 167 if key == 'BIN':
168 168 exclude.append(pattern)
169 169 else:
170 170 include.append(pattern)
171 171 # This will match the files for which we need to care
172 172 # about inconsistent newlines.
173 173 self.match = match.match(root, '', [], include, exclude)
174 174
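Why must the `.hg*` rule be inserted before the user's patterns? Because the first matching pattern wins. An illustrative first-match lookup using Python's `fnmatch` (a simplification: Mercurial's glob syntax is not identical to `fnmatch`, and the pattern list here is hypothetical):

```python
from fnmatch import fnmatch

# Ordered pattern list: the built-in '.hg*' = BIN rule comes first so a
# catch-all user rule like '** = native' cannot shadow it.
patterns = [('.hg*', 'BIN'), ('**', 'native')]

def style_for(path):
    for pattern, style in patterns:
        if fnmatch(path, pattern):
            return style  # first match wins
    return None

assert style_for('.hgtags') == 'BIN'        # never falls through to 'native'
assert style_for('README.txt') == 'native'
```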
175 175 def copytoui(self, ui):
176 176 for pattern, style in self.cfg.items('patterns'):
177 177 key = style.upper()
178 178 try:
179 ui.setconfig('decode', pattern, self._decode[key])
180 ui.setconfig('encode', pattern, self._encode[key])
179 ui.setconfig('decode', pattern, self._decode[key], 'eol')
180 ui.setconfig('encode', pattern, self._encode[key], 'eol')
181 181 except KeyError:
182 182 ui.warn(_("ignoring unknown EOL style '%s' from %s\n")
183 183 % (style, self.cfg.source('patterns', pattern)))
184 184 # eol.only-consistent can be specified in ~/.hgrc or .hgeol
185 185 for k, v in self.cfg.items('eol'):
186 ui.setconfig('eol', k, v)
186 ui.setconfig('eol', k, v, 'eol')
187 187
188 188 def checkrev(self, repo, ctx, files):
189 189 failed = []
190 190 for f in (files or ctx.files()):
191 191 if f not in ctx:
192 192 continue
193 193 for pattern, style in self.cfg.items('patterns'):
194 194 if not match.match(repo.root, '', [pattern])(f):
195 195 continue
196 196 target = self._encode[style.upper()]
197 197 data = ctx[f].data()
198 198 if (target == "to-lf" and "\r\n" in data
199 199 or target == "to-crlf" and singlelf.search(data)):
200 200 failed.append((str(ctx), target, f))
201 201 break
202 202 return failed
203 203
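The comparison at the heart of `checkrev` reduces to two containment tests: data destined for LF storage must not contain CRLF, and data destined for CRLF storage must not contain a lone LF. A standalone sketch with illustrative names:

```python
import re

singlelf = re.compile(r'(^|[^\r])\n')

def violates(target, data):
    """Return True when data's EOLs contradict its declared target filter."""
    if target == 'to-lf':
        return '\r\n' in data               # repo wants LF, found CRLF
    if target == 'to-crlf':
        return bool(singlelf.search(data))  # repo wants CRLF, found lone LF
    return False                            # e.g. 'is-binary': never flagged

assert violates('to-lf', 'x\r\n')
assert not violates('to-lf', 'x\n')
assert violates('to-crlf', 'x\n')
assert not violates('to-crlf', 'x\r\n')
```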
204 204 def parseeol(ui, repo, nodes):
205 205 try:
206 206 for node in nodes:
207 207 try:
208 208 if node is None:
209 209 # Cannot use workingctx.data() since it would load
210 210 # and cache the filters before we configure them.
211 211 data = repo.wfile('.hgeol').read()
212 212 else:
213 213 data = repo[node]['.hgeol'].data()
214 214 return eolfile(ui, repo.root, data)
215 215 except (IOError, LookupError):
216 216 pass
217 217 except error.ParseError, inst:
218 218 ui.warn(_("warning: ignoring .hgeol file due to parse error "
219 219 "at %s: %s\n") % (inst.args[1], inst.args[0]))
220 220 return None
221 221
222 222 def _checkhook(ui, repo, node, headsonly):
223 223 # Get revisions to check and touched files at the same time
224 224 files = set()
225 225 revs = set()
226 226 for rev in xrange(repo[node].rev(), len(repo)):
227 227 revs.add(rev)
228 228 if headsonly:
229 229 ctx = repo[rev]
230 230 files.update(ctx.files())
231 231 for pctx in ctx.parents():
232 232 revs.discard(pctx.rev())
233 233 failed = []
234 234 for rev in revs:
235 235 ctx = repo[rev]
236 236 eol = parseeol(ui, repo, [ctx.node()])
237 237 if eol:
238 238 failed.extend(eol.checkrev(repo, ctx, files))
239 239
240 240 if failed:
241 241 eols = {'to-lf': 'CRLF', 'to-crlf': 'LF'}
242 242 msgs = []
243 243 for node, target, f in failed:
244 244 msgs.append(_(" %s in %s should not have %s line endings") %
245 245 (f, node, eols[target]))
246 246 raise util.Abort(_("end-of-line check failed:\n") + "\n".join(msgs))
247 247
248 248 def checkallhook(ui, repo, node, hooktype, **kwargs):
249 249 """verify that files have expected EOLs"""
250 250 _checkhook(ui, repo, node, False)
251 251
252 252 def checkheadshook(ui, repo, node, hooktype, **kwargs):
253 253 """verify that files have expected EOLs"""
254 254 _checkhook(ui, repo, node, True)
255 255
256 256 # "checkheadshook" used to be called "hook"
257 257 hook = checkheadshook
258 258
259 259 def preupdate(ui, repo, hooktype, parent1, parent2):
260 260 repo.loadeol([parent1])
261 261 return False
262 262
263 263 def uisetup(ui):
264 ui.setconfig('hooks', 'preupdate.eol', preupdate)
264 ui.setconfig('hooks', 'preupdate.eol', preupdate, 'eol')
265 265
266 266 def extsetup(ui):
267 267 try:
268 268 extensions.find('win32text')
269 269 ui.warn(_("the eol extension is incompatible with the "
270 270 "win32text extension\n"))
271 271 except KeyError:
272 272 pass
273 273
274 274
275 275 def reposetup(ui, repo):
276 276 uisetup(repo.ui)
277 277
278 278 if not repo.local():
279 279 return
280 280 for name, fn in filters.iteritems():
281 281 repo.adddatafilter(name, fn)
282 282
283 ui.setconfig('patch', 'eol', 'auto')
283 ui.setconfig('patch', 'eol', 'auto', 'eol')
284 284
285 285 class eolrepo(repo.__class__):
286 286
287 287 def loadeol(self, nodes):
288 288 eol = parseeol(self.ui, self, nodes)
289 289 if eol is None:
290 290 return None
291 291 eol.copytoui(self.ui)
292 292 return eol.match
293 293
294 294 def _hgcleardirstate(self):
295 295 self._eolfile = self.loadeol([None, 'tip'])
296 296 if not self._eolfile:
297 297 self._eolfile = util.never
298 298 return
299 299
300 300 try:
301 301 cachemtime = os.path.getmtime(self.join("eol.cache"))
302 302 except OSError:
303 303 cachemtime = 0
304 304
305 305 try:
306 306 eolmtime = os.path.getmtime(self.wjoin(".hgeol"))
307 307 except OSError:
308 308 eolmtime = 0
309 309
310 310 if eolmtime > cachemtime:
311 311 self.ui.debug("eol: detected change in .hgeol\n")
312 312 wlock = None
313 313 try:
314 314 wlock = self.wlock()
315 315 for f in self.dirstate:
316 316 if self.dirstate[f] == 'n':
317 317 # all normal files need to be looked at
318 318 # again since the new .hgeol file might no
319 319 # longer match a file it matched before
320 320 self.dirstate.normallookup(f)
321 321 # Create or touch the cache to update mtime
322 322 self.opener("eol.cache", "w").close()
323 323 wlock.release()
324 324 except error.LockUnavailable:
325 325 # If we cannot lock the repository and clear the
326 326 # dirstate, then a commit might not see all files
327 327 # as modified. But if we cannot lock the
328 328 # repository, then we can also not make a commit,
329 329 # so ignore the error.
330 330 pass
331 331
332 332 def commitctx(self, ctx, error=False):
333 333 for f in sorted(ctx.added() + ctx.modified()):
334 334 if not self._eolfile(f):
335 335 continue
336 336 try:
337 337 data = ctx[f].data()
338 338 except IOError:
339 339 continue
340 340 if util.binary(data):
341 341 # We should not abort here, since the user should
342 342 # be able to say "** = native" to automatically
343 343 # have all non-binary files taken care of.
344 344 continue
345 345 if inconsistenteol(data):
346 346 raise util.Abort(_("inconsistent newline style in %s")
347 347 % f)
348 348 return super(eolrepo, self).commitctx(ctx, error)
349 349 repo.__class__ = eolrepo
350 350 repo._hgcleardirstate()
@@ -1,925 +1,927 b''
1 1 # histedit.py - interactive history editing for mercurial
2 2 #
3 3 # Copyright 2009 Augie Fackler <raf@durin42.com>
4 4 #
5 5 # This software may be used and distributed according to the terms of the
6 6 # GNU General Public License version 2 or any later version.
7 7 """interactive history editing
8 8
9 9 With this extension installed, Mercurial gains one new command: histedit. Usage
10 10 is as follows, assuming the following history::
11 11
12 12 @ 3[tip] 7c2fd3b9020c 2009-04-27 18:04 -0500 durin42
13 13 | Add delta
14 14 |
15 15 o 2 030b686bedc4 2009-04-27 18:04 -0500 durin42
16 16 | Add gamma
17 17 |
18 18 o 1 c561b4e977df 2009-04-27 18:04 -0500 durin42
19 19 | Add beta
20 20 |
21 21 o 0 d8d2fcd0e319 2009-04-27 18:04 -0500 durin42
22 22 Add alpha
23 23
24 24 If you were to run ``hg histedit c561b4e977df``, you would see the following
25 25 file open in your editor::
26 26
27 27 pick c561b4e977df Add beta
28 28 pick 030b686bedc4 Add gamma
29 29 pick 7c2fd3b9020c Add delta
30 30
31 31 # Edit history between c561b4e977df and 7c2fd3b9020c
32 32 #
33 33 # Commits are listed from least to most recent
34 34 #
35 35 # Commands:
36 36 # p, pick = use commit
37 37 # e, edit = use commit, but stop for amending
38 38 # f, fold = use commit, but combine it with the one above
39 39 # d, drop = remove commit from history
40 40 # m, mess = edit message without changing commit content
41 41 #
42 42
43 43 In this file, lines beginning with ``#`` are ignored. You must specify a rule
44 44 for each revision in your history. For example, if you had meant to add gamma
45 45 before beta, and then wanted to add delta in the same revision as beta, you
46 46 would reorganize the file to look like this::
47 47
48 48 pick 030b686bedc4 Add gamma
49 49 pick c561b4e977df Add beta
50 50 fold 7c2fd3b9020c Add delta
51 51
52 52 # Edit history between c561b4e977df and 7c2fd3b9020c
53 53 #
54 54 # Commits are listed from least to most recent
55 55 #
56 56 # Commands:
57 57 # p, pick = use commit
58 58 # e, edit = use commit, but stop for amending
59 59 # f, fold = use commit, but combine it with the one above
60 60 # d, drop = remove commit from history
61 61 # m, mess = edit message without changing commit content
62 62 #
63 63
64 64 At which point you close the editor and ``histedit`` starts working. When you
65 65 specify a ``fold`` operation, ``histedit`` will open an editor when it folds
66 66 those revisions together, offering you a chance to clean up the commit message::
67 67
68 68 Add beta
69 69 ***
70 70 Add delta
71 71
72 72 Edit the commit message to your liking, then close the editor. For
73 73 this example, let's assume that the commit message was changed to
74 74 ``Add beta and delta.`` After histedit has run and had a chance to
75 75 remove any old or temporary revisions it needed, the history looks
76 76 like this::
77 77
78 78 @ 2[tip] 989b4d060121 2009-04-27 18:04 -0500 durin42
79 79 | Add beta and delta.
80 80 |
81 81 o 1 081603921c3f 2009-04-27 18:04 -0500 durin42
82 82 | Add gamma
83 83 |
84 84 o 0 d8d2fcd0e319 2009-04-27 18:04 -0500 durin42
85 85 Add alpha
86 86
87 87 Note that ``histedit`` does *not* remove any revisions (even its own temporary
88 88 ones) until after it has completed all the editing operations, so it will
89 89 probably perform several strip operations when it's done. For the above example,
90 90 it had to run strip twice. Strip can be slow depending on a variety of factors,
91 91 so you might need to be a little patient. You can choose to keep the original
92 92 revisions by passing the ``--keep`` flag.
93 93
94 94 The ``edit`` operation will drop you back to a command prompt,
95 95 allowing you to edit files freely, or even use ``hg record`` to commit
96 96 some changes as a separate commit. When you're done, any remaining
97 97 uncommitted changes will be committed as well. When done, run ``hg
98 98 histedit --continue`` to finish this step. You'll be prompted for a
99 99 new commit message, but the default commit message will be the
100 100 original message for the ``edit`` ed revision.
101 101
102 102 The ``message`` operation will give you a chance to revise a commit
103 103 message without changing the contents. It's a shortcut for doing
104 104 ``edit`` immediately followed by `hg histedit --continue``.
105 105
106 106 If ``histedit`` encounters a conflict when moving a revision (while
107 107 handling ``pick`` or ``fold``), it'll stop in a similar manner to
108 108 ``edit`` with the difference that it won't prompt you for a commit
109 109 message when done. If you decide at this point that you don't like how
110 110 much work it will be to rearrange history, or that you made a mistake,
111 111 you can use ``hg histedit --abort`` to abandon the new changes you
112 112 have made and return to the state before you attempted to edit your
113 113 history.
114 114
115 115 If we clone the histedit-ed example repository above and add four more
116 116 changes, such that we have the following history::
117 117
118 118 @ 6[tip] 038383181893 2009-04-27 18:04 -0500 stefan
119 119 | Add theta
120 120 |
121 121 o 5 140988835471 2009-04-27 18:04 -0500 stefan
122 122 | Add eta
123 123 |
124 124 o 4 122930637314 2009-04-27 18:04 -0500 stefan
125 125 | Add zeta
126 126 |
127 127 o 3 836302820282 2009-04-27 18:04 -0500 stefan
128 128 | Add epsilon
129 129 |
130 130 o 2 989b4d060121 2009-04-27 18:04 -0500 durin42
131 131 | Add beta and delta.
132 132 |
133 133 o 1 081603921c3f 2009-04-27 18:04 -0500 durin42
134 134 | Add gamma
135 135 |
136 136 o 0 d8d2fcd0e319 2009-04-27 18:04 -0500 durin42
137 137 Add alpha
138 138
139 139 If you run ``hg histedit --outgoing`` on the clone then it is the same
140 140 as running ``hg histedit 836302820282``. If you plan to push to a
141 141 repository that Mercurial does not detect as related to the source
142 142 repo, you can add a ``--force`` option.
143 143 """
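The rules-file format described in the docstring (lines starting with `#` ignored, each remaining line an `action hash message` triple) can be parsed in a few lines. A simplified sketch, not histedit's actual parser, which additionally validates each rule against the revisions being edited:

```python
def parse_rules(text):
    """Parse histedit-style rules: skip comments/blanks, split the rest."""
    rules = []
    for line in text.splitlines():
        line = line.strip()
        if not line or line.startswith('#'):
            continue
        action, rest = line.split(' ', 1)
        ha, _, message = rest.partition(' ')
        rules.append((action, ha, message))
    return rules

sample = """\
pick 030b686bedc4 Add gamma
pick c561b4e977df Add beta
fold 7c2fd3b9020c Add delta

# Commands:
# p, pick = use commit
"""
assert parse_rules(sample) == [
    ('pick', '030b686bedc4', 'Add gamma'),
    ('pick', 'c561b4e977df', 'Add beta'),
    ('fold', '7c2fd3b9020c', 'Add delta'),
]
```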
144 144
145 145 try:
146 146 import cPickle as pickle
147 147 pickle.dump # import now
148 148 except ImportError:
149 149 import pickle
150 150 import os
151 151 import sys
152 152
153 153 from mercurial import cmdutil
154 154 from mercurial import discovery
155 155 from mercurial import error
156 156 from mercurial import copies
157 157 from mercurial import context
158 158 from mercurial import hg
159 159 from mercurial import node
160 160 from mercurial import repair
161 161 from mercurial import scmutil
162 162 from mercurial import util
163 163 from mercurial import obsolete
164 164 from mercurial import merge as mergemod
165 165 from mercurial.lock import release
166 166 from mercurial.i18n import _
167 167
168 168 cmdtable = {}
169 169 command = cmdutil.command(cmdtable)
170 170
171 171 testedwith = 'internal'
172 172
173 173 # i18n: command names and abbreviations must remain untranslated
174 174 editcomment = _("""# Edit history between %s and %s
175 175 #
176 176 # Commits are listed from least to most recent
177 177 #
178 178 # Commands:
179 179 # p, pick = use commit
180 180 # e, edit = use commit, but stop for amending
181 181 # f, fold = use commit, but combine it with the one above
182 182 # d, drop = remove commit from history
183 183 # m, mess = edit message without changing commit content
184 184 #
185 185 """)
186 186
187 187 def commitfuncfor(repo, src):
188 188 """Build a commit function for the replacement of <src>
189 189
190 190 This function ensures we apply the same treatment to all changesets.
191 191
192 192 - Add a 'histedit_source' entry in extra.
193 193
194 194 Note that fold has its own separate logic because its handling is a bit
195 195 different and not easily factored out of the fold method.
196 196 """
197 197 phasemin = src.phase()
198 198 def commitfunc(**kwargs):
199 199 phasebackup = repo.ui.backupconfig('phases', 'new-commit')
200 200 try:
201 repo.ui.setconfig('phases', 'new-commit', phasemin)
201 repo.ui.setconfig('phases', 'new-commit', phasemin,
202 'histedit')
202 203 extra = kwargs.get('extra', {}).copy()
203 204 extra['histedit_source'] = src.hex()
204 205 kwargs['extra'] = extra
205 206 return repo.commit(**kwargs)
206 207 finally:
207 208 repo.ui.restoreconfig(phasebackup)
208 209 return commitfunc
209 210
210 211
211 212
212 213 def applychanges(ui, repo, ctx, opts):
213 214 """Merge changeset from ctx (only) in the current working directory"""
214 215 wcpar = repo.dirstate.parents()[0]
215 216 if ctx.p1().node() == wcpar:
216 217 # the edit is "in place": no merge is needed, just
217 218 # apply the changes from ctx on top of its parent
218 219 cmdutil.revert(ui, repo, ctx, (wcpar, node.nullid), all=True)
219 220 stats = None
220 221 else:
221 222 try:
222 223 # ui.forcemerge is an internal variable, do not document
223 repo.ui.setconfig('ui', 'forcemerge', opts.get('tool', ''))
224 repo.ui.setconfig('ui', 'forcemerge', opts.get('tool', ''),
225 'histedit')
224 226 stats = mergemod.update(repo, ctx.node(), True, True, False,
225 227 ctx.p1().node())
226 228 finally:
227 repo.ui.setconfig('ui', 'forcemerge', '')
229 repo.ui.setconfig('ui', 'forcemerge', '', 'histedit')
228 230 repo.setparents(wcpar, node.nullid)
229 231 repo.dirstate.write()
230 232 # fix up dirstate for copies and renames
231 233 cmdutil.duplicatecopies(repo, ctx.rev(), ctx.p1().rev())
232 234 return stats
233 235
234 236 def collapse(repo, first, last, commitopts):
235 237 """collapse the set of revisions from first to last as new one.
236 238
237 239 Expected commit options are:
238 240 - message
239 241 - date
240 242 - username
241 243 Commit message is edited in all cases.
242 244
243 245 This function works in memory."""
244 246 ctxs = list(repo.set('%d::%d', first, last))
245 247 if not ctxs:
246 248 return None
247 249 base = first.parents()[0]
248 250
249 251 # commit a new version of the old changeset, including the update
250 252 # collect all files which might be affected
251 253 files = set()
252 254 for ctx in ctxs:
253 255 files.update(ctx.files())
254 256
255 257 # Recompute copies (avoid recording a -> b -> a)
256 258 copied = copies.pathcopies(base, last)
257 259
258 260 # prune files which were reverted by the updates
259 261 def samefile(f):
260 262 if f in last.manifest():
261 263 a = last.filectx(f)
262 264 if f in base.manifest():
263 265 b = base.filectx(f)
264 266 return (a.data() == b.data()
265 267 and a.flags() == b.flags())
266 268 else:
267 269 return False
268 270 else:
269 271 return f not in base.manifest()
270 272 files = [f for f in files if not samefile(f)]
271 273 # commit version of these files as defined by head
272 274 headmf = last.manifest()
273 275 def filectxfn(repo, ctx, path):
274 276 if path in headmf:
275 277 fctx = last[path]
276 278 flags = fctx.flags()
277 279 mctx = context.memfilectx(fctx.path(), fctx.data(),
278 280 islink='l' in flags,
279 281 isexec='x' in flags,
280 282 copied=copied.get(path))
281 283 return mctx
282 284 raise IOError()
283 285
284 286 if commitopts.get('message'):
285 287 message = commitopts['message']
286 288 else:
287 289 message = first.description()
288 290 user = commitopts.get('user')
289 291 date = commitopts.get('date')
290 292 extra = commitopts.get('extra')
291 293
292 294 parents = (first.p1().node(), first.p2().node())
293 295 new = context.memctx(repo,
294 296 parents=parents,
295 297 text=message,
296 298 files=files,
297 299 filectxfn=filectxfn,
298 300 user=user,
299 301 date=date,
300 302 extra=extra)
301 303 new._text = cmdutil.commitforceeditor(repo, new, [])
302 304 repo.savecommitmessage(new.description())
303 305 return repo.commitctx(new)
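The `samefile` pruning step inside `collapse` above (drop files whose net content is unchanged between `base` and `last`) can be illustrated with plain dicts standing in for manifests; the data is hypothetical and the real code also compares file flags:

```python
def same_file(f, base, last):
    """A file is 'the same' when its final content matches the base,
    or when it is absent from both snapshots."""
    if f in last:
        return f in base and last[f] == base[f]
    return f not in base

base = {'a.txt': '1', 'b.txt': '2'}
last = {'a.txt': '1', 'b.txt': '3', 'c.txt': '4'}
touched = {'a.txt', 'b.txt', 'c.txt'}

# a.txt ends up identical to base (reverted by an intermediate edit),
# so it is pruned from the set of files to commit.
changed = [f for f in sorted(touched) if not same_file(f, base, last)]
assert changed == ['b.txt', 'c.txt']
```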
304 306
305 307 def pick(ui, repo, ctx, ha, opts):
306 308 oldctx = repo[ha]
307 309 if oldctx.parents()[0] == ctx:
308 310 ui.debug('node %s unchanged\n' % ha)
309 311 return oldctx, []
310 312 hg.update(repo, ctx.node())
311 313 stats = applychanges(ui, repo, oldctx, opts)
312 314 if stats and stats[3] > 0:
313 315 raise error.InterventionRequired(_('Fix up the change and run '
314 316 'hg histedit --continue'))
315 317 # drop the second merge parent
316 318 commit = commitfuncfor(repo, oldctx)
317 319 n = commit(text=oldctx.description(), user=oldctx.user(),
318 320 date=oldctx.date(), extra=oldctx.extra())
319 321 if n is None:
320 322 ui.warn(_('%s: empty changeset\n')
321 323 % node.hex(ha))
322 324 return ctx, []
323 325 new = repo[n]
324 326 return new, [(oldctx.node(), (n,))]
325 327
326 328
327 329 def edit(ui, repo, ctx, ha, opts):
328 330 oldctx = repo[ha]
329 331 hg.update(repo, ctx.node())
330 332 applychanges(ui, repo, oldctx, opts)
331 333 raise error.InterventionRequired(
332 334 _('Make changes as needed, you may commit or record as needed now.\n'
333 335 'When you are finished, run hg histedit --continue to resume.'))
334 336
335 337 def fold(ui, repo, ctx, ha, opts):
336 338 oldctx = repo[ha]
337 339 hg.update(repo, ctx.node())
338 340 stats = applychanges(ui, repo, oldctx, opts)
339 341 if stats and stats[3] > 0:
340 342 raise error.InterventionRequired(
341 343 _('Fix up the change and run hg histedit --continue'))
342 344 n = repo.commit(text='fold-temp-revision %s' % ha, user=oldctx.user(),
343 345 date=oldctx.date(), extra=oldctx.extra())
344 346 if n is None:
345 347 ui.warn(_('%s: empty changeset')
346 348 % node.hex(ha))
347 349 return ctx, []
348 350 return finishfold(ui, repo, ctx, oldctx, n, opts, [])
349 351
350 352 def finishfold(ui, repo, ctx, oldctx, newnode, opts, internalchanges):
351 353 parent = ctx.parents()[0].node()
352 354 hg.update(repo, parent)
353 355 ### prepare new commit data
354 356 commitopts = opts.copy()
355 357 # username
356 358 if ctx.user() == oldctx.user():
357 359 username = ctx.user()
358 360 else:
359 361 username = ui.username()
360 362 commitopts['user'] = username
361 363 # commit message
362 364 newmessage = '\n***\n'.join(
363 365 [ctx.description()] +
364 366 [repo[r].description() for r in internalchanges] +
365 367 [oldctx.description()]) + '\n'
366 368 commitopts['message'] = newmessage
367 369 # date
368 370 commitopts['date'] = max(ctx.date(), oldctx.date())
369 371 extra = ctx.extra().copy()
370 372 # histedit_source
371 373 # note: ctx is likely a temporary commit, but that's the best we can do here
372 374 # This is sufficient to solve issue3681 anyway
373 375 extra['histedit_source'] = '%s,%s' % (ctx.hex(), oldctx.hex())
374 376 commitopts['extra'] = extra
375 377 phasebackup = repo.ui.backupconfig('phases', 'new-commit')
376 378 try:
377 379 phasemin = max(ctx.phase(), oldctx.phase())
378 repo.ui.setconfig('phases', 'new-commit', phasemin)
380 repo.ui.setconfig('phases', 'new-commit', phasemin, 'histedit')
379 381 n = collapse(repo, ctx, repo[newnode], commitopts)
380 382 finally:
381 383 repo.ui.restoreconfig(phasebackup)
382 384 if n is None:
383 385 return ctx, []
384 386 hg.update(repo, n)
385 387 replacements = [(oldctx.node(), (newnode,)),
386 388 (ctx.node(), (n,)),
387 389 (newnode, (n,)),
388 390 ]
389 391 for ich in internalchanges:
390 392 replacements.append((ich, (n,)))
391 393 return repo[n], replacements
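The folded commit message assembled in `finishfold` is just the individual descriptions joined by a `***` separator line, matching the editor text shown in the module docstring. A tiny sketch (helper name hypothetical):

```python
def fold_message(descriptions):
    """Join commit descriptions with the '***' separator used by fold."""
    return '\n***\n'.join(descriptions) + '\n'

msg = fold_message(['Add beta', 'Add delta'])
assert msg == 'Add beta\n***\nAdd delta\n'
```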
392 394
393 395 def drop(ui, repo, ctx, ha, opts):
394 396 return ctx, [(repo[ha].node(), ())]
395 397
396 398
397 399 def message(ui, repo, ctx, ha, opts):
398 400 oldctx = repo[ha]
399 401 hg.update(repo, ctx.node())
400 402 stats = applychanges(ui, repo, oldctx, opts)
401 403 if stats and stats[3] > 0:
402 404 raise error.InterventionRequired(
403 405 _('Fix up the change and run hg histedit --continue'))
404 406 message = oldctx.description() + '\n'
405 407 message = ui.edit(message, ui.username())
406 408 commit = commitfuncfor(repo, oldctx)
407 409 new = commit(text=message, user=oldctx.user(), date=oldctx.date(),
408 410 extra=oldctx.extra())
409 411 newctx = repo[new]
410 412 if oldctx.node() != newctx.node():
411 413 return newctx, [(oldctx.node(), (new,))]
412 414 # We didn't make an edit, so just indicate no replaced nodes
413 415 return newctx, []
414 416
415 417 def findoutgoing(ui, repo, remote=None, force=False, opts={}):
416 418 """utility function to find the first outgoing changeset
417 419
418 420 Used by initialisation code"""
419 421 dest = ui.expandpath(remote or 'default-push', remote or 'default')
420 422 dest, revs = hg.parseurl(dest, None)[:2]
421 423 ui.status(_('comparing with %s\n') % util.hidepassword(dest))
422 424
423 425 revs, checkout = hg.addbranchrevs(repo, repo, revs, None)
424 426 other = hg.peer(repo, opts, dest)
425 427
426 428 if revs:
427 429 revs = [repo.lookup(rev) for rev in revs]
428 430
429 431 outgoing = discovery.findcommonoutgoing(repo, other, revs, force=force)
430 432 if not outgoing.missing:
431 433 raise util.Abort(_('no outgoing ancestors'))
432 434 roots = list(repo.revs("roots(%ln)", outgoing.missing))
433 435 if 1 < len(roots):
434 436 msg = _('there are ambiguous outgoing revisions')
435 437 hint = _('see "hg help histedit" for more detail')
436 438 raise util.Abort(msg, hint=hint)
437 439 return repo.lookup(roots[0])
438 440
439 441 actiontable = {'p': pick,
440 442 'pick': pick,
441 443 'e': edit,
442 444 'edit': edit,
443 445 'f': fold,
444 446 'fold': fold,
445 447 'd': drop,
446 448 'drop': drop,
447 449 'm': message,
448 450 'mess': message,
449 451 }
450 452
451 453 @command('histedit',
452 454 [('', 'commands', '',
453 455 _('Read history edits from the specified file.')),
454 456 ('c', 'continue', False, _('continue an edit already in progress')),
455 457 ('k', 'keep', False,
456 458 _("don't strip old nodes after edit is complete")),
457 459 ('', 'abort', False, _('abort an edit in progress')),
458 460 ('o', 'outgoing', False, _('changesets not found in destination')),
459 461 ('f', 'force', False,
460 462 _('force outgoing even for unrelated repositories')),
461 463 ('r', 'rev', [], _('first revision to be edited'))],
462 464 _("ANCESTOR | --outgoing [URL]"))
463 465 def histedit(ui, repo, *freeargs, **opts):
464 466 """interactively edit changeset history
465 467
466 468 This command edits changesets between ANCESTOR and the parent of
467 469 the working directory.
468 470
469 471 With --outgoing, this edits changesets not found in the
470 472 destination repository. If URL of the destination is omitted, the
471 473 'default-push' (or 'default') path will be used.
472 474
473 475 For safety, this command aborts if there are ambiguous outgoing
474 476 revisions which may confuse users: for example, when there are
475 477 multiple branches containing outgoing revisions.
476 478
477 479 Use "min(outgoing() and ::.)" or a similar revset specification
478 480 instead of --outgoing to specify the edit target revision exactly
479 481 in such an ambiguous situation. See :hg:`help revsets` for details
480 482 about selecting revisions.
481 483
482 484 Returns 0 on success, 1 if user intervention is required (not only
483 485 for intentional "edit" command, but also for resolving unexpected
484 486 conflicts).
485 487 """
486 488 lock = wlock = None
487 489 try:
488 490 wlock = repo.wlock()
489 491 lock = repo.lock()
490 492 _histedit(ui, repo, *freeargs, **opts)
491 493 finally:
492 494 release(lock, wlock)
493 495
494 496 def _histedit(ui, repo, *freeargs, **opts):
494 496 # TODO only abort if we try to histedit mq patches, not just
496 498 # blanket if mq patches are applied somewhere
497 499 mq = getattr(repo, 'mq', None)
498 500 if mq and mq.applied:
499 501 raise util.Abort(_('source has mq patches applied'))
500 502
501 503 # basic argument incompatibility processing
502 504 outg = opts.get('outgoing')
503 505 cont = opts.get('continue')
504 506 abort = opts.get('abort')
505 507 force = opts.get('force')
506 508 rules = opts.get('commands', '')
507 509 revs = opts.get('rev', [])
508 510 goal = 'new' # this invocation's goal: one of new, continue, abort
509 511 if force and not outg:
510 512 raise util.Abort(_('--force only allowed with --outgoing'))
511 513 if cont:
512 514 if util.any((outg, abort, revs, freeargs, rules)):
513 515 raise util.Abort(_('no arguments allowed with --continue'))
514 516 goal = 'continue'
515 517 elif abort:
516 518 if util.any((outg, revs, freeargs, rules)):
517 519 raise util.Abort(_('no arguments allowed with --abort'))
518 520 goal = 'abort'
519 521 else:
520 522 if os.path.exists(os.path.join(repo.path, 'histedit-state')):
521 523 raise util.Abort(_('history edit already in progress, try '
522 524 '--continue or --abort'))
523 525 if outg:
524 526 if revs:
525 527 raise util.Abort(_('no revisions allowed with --outgoing'))
526 528 if len(freeargs) > 1:
527 529 raise util.Abort(
528 530 _('only one repo argument allowed with --outgoing'))
529 531 else:
530 532 revs.extend(freeargs)
531 533 if len(revs) != 1:
532 534 raise util.Abort(
533 535 _('histedit requires exactly one ancestor revision'))
534 536
535 537
536 538 if goal == 'continue':
537 539 (parentctxnode, rules, keep, topmost, replacements) = readstate(repo)
538 540 parentctx = repo[parentctxnode]
539 541 parentctx, repl = bootstrapcontinue(ui, repo, parentctx, rules, opts)
540 542 replacements.extend(repl)
541 543 elif goal == 'abort':
542 544 (parentctxnode, rules, keep, topmost, replacements) = readstate(repo)
543 545 mapping, tmpnodes, leafs, _ntm = processreplacement(repo, replacements)
544 546 ui.debug('restore wc to old parent %s\n' % node.short(topmost))
545 547 # check whether we should update away
546 548 parentnodes = [c.node() for c in repo[None].parents()]
547 549 for n in leafs | set([parentctxnode]):
548 550 if n in parentnodes:
549 551 hg.clean(repo, topmost)
550 552 break
551 553 else:
552 554 pass
553 555 cleanupnode(ui, repo, 'created', tmpnodes)
554 556 cleanupnode(ui, repo, 'temp', leafs)
555 557 os.unlink(os.path.join(repo.path, 'histedit-state'))
556 558 return
557 559 else:
558 560 cmdutil.checkunfinished(repo)
559 561 cmdutil.bailifchanged(repo)
560 562
561 563 topmost, empty = repo.dirstate.parents()
562 564 if outg:
563 565 if freeargs:
564 566 remote = freeargs[0]
565 567 else:
566 568 remote = None
567 569 root = findoutgoing(ui, repo, remote, force, opts)
568 570 else:
569 571 root = revs[0]
570 572 root = scmutil.revsingle(repo, root).node()
571 573
572 574 keep = opts.get('keep', False)
573 575 revs = between(repo, root, topmost, keep)
574 576 if not revs:
575 577 raise util.Abort(_('%s is not an ancestor of working directory') %
576 578 node.short(root))
577 579
578 580 ctxs = [repo[r] for r in revs]
579 581 if not rules:
580 582 rules = '\n'.join([makedesc(c) for c in ctxs])
581 583 rules += '\n\n'
582 584 rules += editcomment % (node.short(root), node.short(topmost))
583 585 rules = ui.edit(rules, ui.username())
584 586 # Save edit rules in .hg/histedit-last-edit.txt in case
585 587 # the user needs to ask for help after something
586 588 # surprising happens.
587 589 f = open(repo.join('histedit-last-edit.txt'), 'w')
588 590 f.write(rules)
589 591 f.close()
590 592 else:
591 593 if rules == '-':
592 594 f = sys.stdin
593 595 else:
594 596 f = open(rules)
595 597 rules = f.read()
596 598 f.close()
597 599 rules = [l for l in (r.strip() for r in rules.splitlines())
598 600 if l and not l[0] == '#']
599 601 rules = verifyrules(rules, repo, ctxs)
600 602
601 603 parentctx = repo[root].parents()[0]
602 604 keep = opts.get('keep', False)
603 605 replacements = []
604 606
605 607
606 608 while rules:
607 609 writestate(repo, parentctx.node(), rules, keep, topmost, replacements)
608 610 action, ha = rules.pop(0)
609 611 ui.debug('histedit: processing %s %s\n' % (action, ha))
610 612 actfunc = actiontable[action]
611 613 parentctx, replacement_ = actfunc(ui, repo, parentctx, ha, opts)
612 614 replacements.extend(replacement_)
613 615
614 616 hg.update(repo, parentctx.node())
615 617
616 618 mapping, tmpnodes, created, ntm = processreplacement(repo, replacements)
617 619 if mapping:
618 620 for prec, succs in mapping.iteritems():
619 621 if not succs:
620 622 ui.debug('histedit: %s is dropped\n' % node.short(prec))
621 623 else:
622 624 ui.debug('histedit: %s is replaced by %s\n' % (
623 625 node.short(prec), node.short(succs[0])))
624 626 if len(succs) > 1:
625 627 m = 'histedit: %s'
626 628 for n in succs[1:]:
627 629 ui.debug(m % node.short(n))
628 630
629 631 if not keep:
630 632 if mapping:
631 633 movebookmarks(ui, repo, mapping, topmost, ntm)
632 634 # TODO update mq state
633 635 if obsolete._enabled:
634 636 markers = []
635 637 # sort by revision number because it sounds "right"
636 638 for prec in sorted(mapping, key=repo.changelog.rev):
637 639 succs = mapping[prec]
638 640 markers.append((repo[prec],
639 641 tuple(repo[s] for s in succs)))
640 642 if markers:
641 643 obsolete.createmarkers(repo, markers)
642 644 else:
643 645 cleanupnode(ui, repo, 'replaced', mapping)
644 646
645 647 cleanupnode(ui, repo, 'temp', tmpnodes)
646 648 os.unlink(os.path.join(repo.path, 'histedit-state'))
647 649 if os.path.exists(repo.sjoin('undo')):
648 650 os.unlink(repo.sjoin('undo'))
649 651
650 652 def gatherchildren(repo, ctx):
651 653 # is there any new commit between the expected parent and "."
652 654 #
653 655 # note: does not take non-linear new changes into account (but the
654 656 # previous implementation didn't use them anyway (issue3655))
655 657 newchildren = [c.node() for c in repo.set('(%d::.)', ctx)]
656 658 if ctx.node() != node.nullid:
657 659 if not newchildren:
658 660 # `ctx` should match but no result. This means that
659 661 # currentnode is not a descendant from ctx.
660 662 msg = _('%s is not an ancestor of working directory')
661 663 hint = _('use "histedit --abort" to clear broken state')
662 664 raise util.Abort(msg % ctx, hint=hint)
663 665 newchildren.pop(0) # remove ctx
664 666 return newchildren
665 667
666 668 def bootstrapcontinue(ui, repo, parentctx, rules, opts):
667 669 action, currentnode = rules.pop(0)
668 670 ctx = repo[currentnode]
669 671
670 672 newchildren = gatherchildren(repo, parentctx)
671 673
672 674 # Commit dirty working directory if necessary
673 675 new = None
674 676 m, a, r, d = repo.status()[:4]
675 677 if m or a or r or d:
676 678 # prepare the message for the commit to come
677 679 if action in ('f', 'fold'):
678 680 message = 'fold-temp-revision %s' % currentnode
679 681 else:
680 682 message = ctx.description() + '\n'
681 683 if action in ('e', 'edit', 'm', 'mess'):
682 684 editor = cmdutil.commitforceeditor
683 685 else:
684 686 editor = False
685 687 commit = commitfuncfor(repo, ctx)
686 688 new = commit(text=message, user=ctx.user(),
687 689 date=ctx.date(), extra=ctx.extra(),
688 690 editor=editor)
689 691 if new is not None:
690 692 newchildren.append(new)
691 693
692 694 replacements = []
693 695 # track replacements
694 696 if ctx.node() not in newchildren:
695 697 # note: new children may be empty when the changeset is dropped.
696 698 # this happens e.g. during a conflicting pick where we revert
697 699 # content to the parent.
698 700 replacements.append((ctx.node(), tuple(newchildren)))
699 701
700 702 if action in ('f', 'fold'):
701 703 if newchildren:
702 704 # finalize fold operation if applicable
703 705 if new is None:
704 706 new = newchildren[-1]
705 707 else:
706 708 newchildren.pop() # remove new from internal changes
707 709 parentctx, repl = finishfold(ui, repo, parentctx, ctx, new, opts,
708 710 newchildren)
709 711 replacements.extend(repl)
710 712 else:
711 713 # newchildren is empty if the fold did not result in any commit
712 714 # this happens when all folded changes are discarded during the
713 715 # merge.
714 716 replacements.append((ctx.node(), (parentctx.node(),)))
715 717 elif newchildren:
716 718 # otherwise update "parentctx" before proceeding to further operations
717 719 parentctx = repo[newchildren[-1]]
718 720 return parentctx, replacements
719 721
720 722
721 723 def between(repo, old, new, keep):
722 724 """select and validate the set of revisions to edit
723 725
724 726 When keep is false, the specified set can't have children."""
725 727 ctxs = list(repo.set('%n::%n', old, new))
726 728 if ctxs and not keep:
727 729 if (not obsolete._enabled and
728 730 repo.revs('(%ld::) - (%ld)', ctxs, ctxs)):
729 731 raise util.Abort(_('cannot edit history that would orphan nodes'))
730 732 if repo.revs('(%ld) and merge()', ctxs):
731 733 raise util.Abort(_('cannot edit history that contains merges'))
732 734 root = ctxs[0] # list is already sorted by repo.set
733 735 if not root.phase():
734 736 raise util.Abort(_('cannot edit immutable changeset: %s') % root)
735 737 return [c.node() for c in ctxs]
736 738
737 739
738 740 def writestate(repo, parentnode, rules, keep, topmost, replacements):
739 741 fp = open(os.path.join(repo.path, 'histedit-state'), 'w')
740 742 pickle.dump((parentnode, rules, keep, topmost, replacements), fp)
741 743 fp.close()
742 744
743 745 def readstate(repo):
744 746 """Returns a tuple of (parentnode, rules, keep, topmost, replacements).
745 747 """
746 748 fp = open(os.path.join(repo.path, 'histedit-state'))
747 749 return pickle.load(fp)
748 750
749 751
750 752 def makedesc(c):
751 753 """build an initial action line for a ctx `c`
752 754
753 755 lines are of the form:
754 756
755 757 pick <hash> <rev> <summary>
756 758 """
757 759 summary = ''
758 760 if c.description():
759 761 summary = c.description().splitlines()[0]
760 762 line = 'pick %s %d %s' % (c, c.rev(), summary)
761 763 return line[:80] # trim to 80 chars so it's not stupidly wide in my editor
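The line built by makedesc can be sketched standalone (hypothetical helper name and plain values; the real function takes a changectx):

```python
# Hypothetical sketch of the rule line produced by makedesc above:
# "pick <shorthash> <rev> <summary>", trimmed to 80 characters.
def makedescline(shorthash, rev, summary):
    line = 'pick %s %d %s' % (shorthash, rev, summary)
    return line[:80]  # keep the line narrow in the user's editor
```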
762 764
763 765 def verifyrules(rules, repo, ctxs):
764 766 """Verify that there exists exactly one edit rule per given changeset.
765 767
766 768 Will abort if there are too many or too few rules, a malformed rule,
767 769 or a rule on a changeset outside of the user-given range.
768 770 """
769 771 parsed = []
770 772 expected = set(str(c) for c in ctxs)
771 773 seen = set()
772 774 for r in rules:
773 775 if ' ' not in r:
774 776 raise util.Abort(_('malformed line "%s"') % r)
775 777 action, rest = r.split(' ', 1)
776 778 ha = rest.strip().split(' ', 1)[0]
777 779 try:
778 780 ha = str(repo[ha]) # ensure its a short hash
779 781 except error.RepoError:
780 782 raise util.Abort(_('unknown changeset %s listed') % ha)
781 783 if ha not in expected:
782 784 raise util.Abort(
783 785 _('may not use changesets other than the ones listed'))
784 786 if ha in seen:
785 787 raise util.Abort(_('duplicated command for changeset %s') % ha)
786 788 seen.add(ha)
787 789 if action not in actiontable:
788 790 raise util.Abort(_('unknown action "%s"') % action)
789 791 parsed.append([action, ha])
790 792 missing = sorted(expected - seen) # sort to stabilize output
791 793 if missing:
792 794 raise util.Abort(_('missing rules for changeset %s') % missing[0],
793 795 hint=_('do you want to use the drop action?'))
794 796 return parsed
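The per-line parsing done in verifyrules can be sketched on its own (hypothetical function name; the repo hash lookup and duplicate/unknown-action checks are omitted):

```python
def parserule(line):
    # a rule line looks like: "pick 199f6bf2d57a 3 summary text"
    if ' ' not in line:
        raise ValueError('malformed line "%s"' % line)
    action, rest = line.split(' ', 1)
    # only the first token after the action is the hash; the rest is summary
    ha = rest.strip().split(' ', 1)[0]
    return action, ha
```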
795 797
796 798 def processreplacement(repo, replacements):
797 799 """process the list of replacements to return
798 800
799 801 1) the final mapping between original and created nodes
800 802 2) the list of temporary nodes created by histedit
801 803 3) the list of new commits created by histedit"""
802 804 allsuccs = set()
803 805 replaced = set()
804 806 fullmapping = {}
805 807 # initialise basic sets
806 808 # fullmapping records all operations recorded in the replacements
807 809 for rep in replacements:
808 810 allsuccs.update(rep[1])
809 811 replaced.add(rep[0])
810 812 fullmapping.setdefault(rep[0], set()).update(rep[1])
811 813 new = allsuccs - replaced
812 814 tmpnodes = allsuccs & replaced
813 815 # Reduce fullmapping into a direct relation between original nodes
814 816 # and the final nodes created during the history edit.
815 817 # Dropped changesets are replaced by an empty list.
816 818 toproceed = set(fullmapping)
817 819 final = {}
818 820 while toproceed:
819 821 for x in list(toproceed):
820 822 succs = fullmapping[x]
821 823 for s in list(succs):
822 824 if s in toproceed:
823 825 # non-final node with unknown closure
824 826 # We can't process this now
825 827 break
826 828 elif s in final:
827 829 # non-final node, replace with its closure
828 830 succs.remove(s)
829 831 succs.update(final[s])
830 832 else:
831 833 final[x] = succs
832 834 toproceed.remove(x)
833 835 # remove tmpnodes from final mapping
834 836 for n in tmpnodes:
835 837 del final[n]
836 838 # we expect all changes involved in final to exist in the repo
837 839 # turn `final` into a list (topologically sorted)
838 840 nm = repo.changelog.nodemap
839 841 for prec, succs in final.items():
840 842 final[prec] = sorted(succs, key=nm.get)
841 843
842 844 # compute the topmost element (necessary for bookmarks)
843 845 if new:
844 846 newtopmost = sorted(new, key=repo.changelog.rev)[-1]
845 847 elif not final:
846 848 # Nothing was rewritten at all; we won't need `newtopmost`:
847 849 # it is the same as `oldtopmost`, and `processreplacement` knows it
848 850 newtopmost = None
849 851 else:
850 852 # everybody died. The new topmost is the parent of the root.
851 853 newtopmost = repo[sorted(final, key=repo.changelog.rev)[0]].p1().node()
852 854
853 855 return final, tmpnodes, new, newtopmost
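The closure reduction performed by the while loop above can be illustrated on a toy mapping (hypothetical node names; obsolescence markers and the later tmpnode removal are omitted):

```python
# Sketch of processreplacement's reduction step: successors that were
# themselves replaced are expanded until only final nodes remain.
def reduce_mapping(fullmapping):
    toproceed = set(fullmapping)
    final = {}
    while toproceed:
        for x in list(toproceed):
            succs = set(fullmapping[x])
            for s in list(succs):
                if s in toproceed:
                    # non-final successor with unknown closure; retry later
                    break
                elif s in final:
                    # non-final successor, replace with its closure
                    succs.remove(s)
                    succs.update(final[s])
            else:
                final[x] = succs
                toproceed.remove(x)
    return final
```

For example, if 'a' was rewritten to a temporary node 'tmp' that was in turn rewritten to 'b', both map to the final node 'b'.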
854 856
855 857 def movebookmarks(ui, repo, mapping, oldtopmost, newtopmost):
856 858 """Move bookmarks from old nodes to the newly created nodes"""
857 859 if not mapping:
858 860 # if nothing got rewritten there is no purpose for this function
859 861 return
860 862 moves = []
861 863 for bk, old in sorted(repo._bookmarks.iteritems()):
862 864 if old == oldtopmost:
863 865 # special case: ensure the bookmark stays on tip.
864 866 #
865 867 # This is arguably a feature and we may only want that for the
866 868 # active bookmark. But the behavior is kept compatible with the old
867 869 # version for now.
868 870 moves.append((bk, newtopmost))
869 871 continue
870 872 base = old
871 873 new = mapping.get(base, None)
872 874 if new is None:
873 875 continue
874 876 while not new:
875 877 # base is killed, trying with parent
876 878 base = repo[base].p1().node()
877 879 new = mapping.get(base, (base,))
878 880 # nothing to move
879 881 moves.append((bk, new[-1]))
880 882 if moves:
881 883 marks = repo._bookmarks
882 884 for mark, new in moves:
883 885 old = marks[mark]
884 886 ui.note(_('histedit: moving bookmarks %s from %s to %s\n')
885 887 % (mark, node.short(old), node.short(new)))
886 888 marks[mark] = new
887 889 marks.write()
888 890
889 891 def cleanupnode(ui, repo, name, nodes):
890 892 """strip a group of nodes from the repository
891 893
892 894 The set of nodes to strip may contain unknown nodes."""
893 895 ui.debug('should strip %s nodes %s\n' %
894 896 (name, ', '.join([node.short(n) for n in nodes])))
895 897 lock = None
896 898 try:
897 899 lock = repo.lock()
898 900 # Find all nodes that need to be stripped
899 901 # (we use %lr instead of %ln to silently ignore unknown items)
900 902 nm = repo.changelog.nodemap
901 903 nodes = [n for n in nodes if n in nm]
902 904 roots = [c.node() for c in repo.set("roots(%ln)", nodes)]
903 905 for c in roots:
904 906 # We should process nodes in reverse order to strip the tipmost
905 907 # first, which would reduce bundle overhead, but doing so triggers
906 908 # a bug in the changegroup hook.
907 909 repair.strip(ui, repo, c)
908 910 finally:
909 911 release(lock)
910 912
911 913 def summaryhook(ui, repo):
912 914 if not os.path.exists(repo.join('histedit-state')):
913 915 return
914 916 (parentctxnode, rules, keep, topmost, replacements) = readstate(repo)
915 917 if rules:
916 918 # i18n: column positioning for "hg summary"
917 919 ui.write(_('hist: %s (histedit --continue)\n') %
918 920 (ui.label(_('%d remaining'), 'histedit.remaining') %
919 921 len(rules)))
920 922
921 923 def extsetup(ui):
922 924 cmdutil.summaryhooks.add('histedit', summaryhook)
923 925 cmdutil.unfinishedstates.append(
924 926 ['histedit-state', False, True, _('histedit in progress'),
925 927 _("use 'hg histedit --continue' or 'hg histedit --abort'")])
@@ -1,735 +1,735 b''
1 1 # keyword.py - $Keyword$ expansion for Mercurial
2 2 #
3 3 # Copyright 2007-2012 Christian Ebert <blacktrash@gmx.net>
4 4 #
5 5 # This software may be used and distributed according to the terms of the
6 6 # GNU General Public License version 2 or any later version.
7 7 #
8 8 # $Id$
9 9 #
10 10 # Keyword expansion hack against the grain of a Distributed SCM
11 11 #
12 12 # There are many good reasons why this is not needed in a distributed
13 13 # SCM, still it may be useful in very small projects based on single
14 14 # files (like LaTeX packages), that are mostly addressed to an
15 15 # audience not running a version control system.
16 16 #
17 17 # For in-depth discussion refer to
18 18 # <http://mercurial.selenic.com/wiki/KeywordPlan>.
19 19 #
20 20 # Keyword expansion is based on Mercurial's changeset template mappings.
21 21 #
22 22 # Binary files are not touched.
23 23 #
24 24 # Files to act upon/ignore are specified in the [keyword] section.
25 25 # Customized keyword template mappings in the [keywordmaps] section.
26 26 #
27 27 # Run "hg help keyword" and "hg kwdemo" to get info on configuration.
28 28
29 29 '''expand keywords in tracked files
30 30
31 31 This extension expands RCS/CVS-like or self-customized $Keywords$ in
32 32 tracked text files selected by your configuration.
33 33
34 34 Keywords are only expanded in local repositories and not stored in the
35 35 change history. The mechanism can be regarded as a convenience for the
36 36 current user or for archive distribution.
37 37
38 38 Keywords expand to the changeset data pertaining to the latest change
39 39 relative to the working directory parent of each file.
40 40
41 41 Configuration is done in the [keyword], [keywordset] and [keywordmaps]
42 42 sections of hgrc files.
43 43
44 44 Example::
45 45
46 46 [keyword]
47 47 # expand keywords in every python file except those matching "x*"
48 48 **.py =
49 49 x* = ignore
50 50
51 51 [keywordset]
52 52 # prefer svn- over cvs-like default keywordmaps
53 53 svn = True
54 54
55 55 .. note::
56 56
57 57 The more specific you are in your filename patterns, the less
58 58 speed you lose in huge repositories.
59 59
60 60 For [keywordmaps] template mapping and expansion demonstration and
61 61 control run :hg:`kwdemo`. See :hg:`help templates` for a list of
62 62 available templates and filters.
63 63
64 64 Three additional date template filters are provided:
65 65
66 66 :``utcdate``: "2006/09/18 15:13:13"
67 67 :``svnutcdate``: "2006-09-18 15:13:13Z"
68 68 :``svnisodate``: "2006-09-18 08:13:13 -0700 (Mon, 18 Sep 2006)"
69 69
70 70 The default template mappings (view with :hg:`kwdemo -d`) can be
71 71 replaced with customized keywords and templates. Again, run
72 72 :hg:`kwdemo` to control the results of your configuration changes.
73 73
74 74 Before changing/disabling active keywords, you must run :hg:`kwshrink`
75 75 to avoid storing expanded keywords in the change history.
76 76
77 77 To force expansion after enabling it, or a configuration change, run
78 78 :hg:`kwexpand`.
79 79
80 80 Expansions spanning more than one line and incremental expansions,
81 81 like CVS' $Log$, are not supported. A keyword template map "Log =
82 82 {desc}" expands to the first line of the changeset description.
83 83 '''
84 84
85 85 from mercurial import commands, context, cmdutil, dispatch, filelog, extensions
86 86 from mercurial import localrepo, match, patch, templatefilters, templater, util
87 87 from mercurial import scmutil, pathutil
88 88 from mercurial.hgweb import webcommands
89 89 from mercurial.i18n import _
90 90 import os, re, shutil, tempfile
91 91
92 92 commands.optionalrepo += ' kwdemo'
93 93 commands.inferrepo += ' kwexpand kwfiles kwshrink'
94 94
95 95 cmdtable = {}
96 96 command = cmdutil.command(cmdtable)
97 97 testedwith = 'internal'
98 98
99 99 # hg commands that do not act on keywords
100 100 nokwcommands = ('add addremove annotate bundle export grep incoming init log'
101 101 ' outgoing push tip verify convert email glog')
102 102
103 103 # hg commands that trigger expansion only when writing to working dir,
104 104 # not when reading filelog, and unexpand when reading from working dir
105 105 restricted = 'merge kwexpand kwshrink record qrecord resolve transplant'
106 106
107 107 # names of extensions using dorecord
108 108 recordextensions = 'record'
109 109
110 110 colortable = {
111 111 'kwfiles.enabled': 'green bold',
112 112 'kwfiles.deleted': 'cyan bold underline',
113 113 'kwfiles.enabledunknown': 'green',
114 114 'kwfiles.ignored': 'bold',
115 115 'kwfiles.ignoredunknown': 'none'
116 116 }
117 117
118 118 # date like in cvs' $Date
119 119 def utcdate(text):
120 120 ''':utcdate: Date. Returns a UTC-date in this format: "2009/08/18 11:00:13".
121 121 '''
122 122 return util.datestr((util.parsedate(text)[0], 0), '%Y/%m/%d %H:%M:%S')
123 123 # date like in svn's $Date
124 124 def svnisodate(text):
125 125 ''':svnisodate: Date. Returns a date in this format: "2009-08-18 13:00:13
126 126 +0200 (Tue, 18 Aug 2009)".
127 127 '''
128 128 return util.datestr(text, '%Y-%m-%d %H:%M:%S %1%2 (%a, %d %b %Y)')
129 129 # date like in svn's $Id
130 130 def svnutcdate(text):
131 131 ''':svnutcdate: Date. Returns a UTC-date in this format: "2009-08-18
132 132 11:00:13Z".
133 133 '''
134 134 return util.datestr((util.parsedate(text)[0], 0), '%Y-%m-%d %H:%M:%SZ')
135 135
136 136 templatefilters.filters.update({'utcdate': utcdate,
137 137 'svnisodate': svnisodate,
138 138 'svnutcdate': svnutcdate})
139 139
140 140 # make keyword tools accessible
141 141 kwtools = {'templater': None, 'hgcmd': ''}
142 142
143 143 def _defaultkwmaps(ui):
144 144 '''Returns default keywordmaps according to keywordset configuration.'''
145 145 templates = {
146 146 'Revision': '{node|short}',
147 147 'Author': '{author|user}',
148 148 }
149 149 kwsets = ({
150 150 'Date': '{date|utcdate}',
151 151 'RCSfile': '{file|basename},v',
152 152 'RCSFile': '{file|basename},v', # kept for backwards compatibility
153 153 # with hg-keyword
154 154 'Source': '{root}/{file},v',
155 155 'Id': '{file|basename},v {node|short} {date|utcdate} {author|user}',
156 156 'Header': '{root}/{file},v {node|short} {date|utcdate} {author|user}',
157 157 }, {
158 158 'Date': '{date|svnisodate}',
159 159 'Id': '{file|basename},v {node|short} {date|svnutcdate} {author|user}',
160 160 'LastChangedRevision': '{node|short}',
161 161 'LastChangedBy': '{author|user}',
162 162 'LastChangedDate': '{date|svnisodate}',
163 163 })
164 164 templates.update(kwsets[ui.configbool('keywordset', 'svn')])
165 165 return templates
166 166
167 167 def _shrinktext(text, subfunc):
168 168 '''Helper for keyword expansion removal in text.
169 169 Depending on subfunc, also returns the number of substitutions.'''
170 170 return subfunc(r'$\1$', text)
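As a rough illustration of this substitution style (assumed template names; the real patterns are built lazily in kwtemplater below), an expanded keyword such as `$Id: file,v ... $` is shrunk back to `$Id$` with a regex like rekwexp:

```python
import re

# Hypothetical template names; kwtemplater derives these from config.
escaped = '|'.join(map(re.escape, ['Id', 'Revision']))
rekwexp = re.compile(r'\$(%s): [^$\n\r]*? \$' % escaped)

text = 'header $Id: demo.txt,v deadbeef $ trailer'
shrunk = rekwexp.sub(r'$\1$', text)  # -> 'header $Id$ trailer'
```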
171 171
172 172 def _preselect(wstatus, changed):
173 173 '''Retrieves modified and added files from a working directory state
174 174 and returns the subset of each contained in given changed files
175 175 retrieved from a change context.'''
176 176 modified, added = wstatus[:2]
177 177 modified = [f for f in modified if f in changed]
178 178 added = [f for f in added if f in changed]
179 179 return modified, added
180 180
181 181
182 182 class kwtemplater(object):
183 183 '''
184 184 Sets up keyword templates, corresponding keyword regex, and
185 185 provides keyword substitution functions.
186 186 '''
187 187
188 188 def __init__(self, ui, repo, inc, exc):
189 189 self.ui = ui
190 190 self.repo = repo
191 191 self.match = match.match(repo.root, '', [], inc, exc)
192 192 self.restrict = kwtools['hgcmd'] in restricted.split()
193 193 self.postcommit = False
194 194
195 195 kwmaps = self.ui.configitems('keywordmaps')
196 196 if kwmaps: # override default templates
197 197 self.templates = dict((k, templater.parsestring(v, False))
198 198 for k, v in kwmaps)
199 199 else:
200 200 self.templates = _defaultkwmaps(self.ui)
201 201
202 202 @util.propertycache
203 203 def escape(self):
204 204 '''Returns bar-separated and escaped keywords.'''
205 205 return '|'.join(map(re.escape, self.templates.keys()))
206 206
207 207 @util.propertycache
208 208 def rekw(self):
209 209 '''Returns regex for unexpanded keywords.'''
210 210 return re.compile(r'\$(%s)\$' % self.escape)
211 211
212 212 @util.propertycache
213 213 def rekwexp(self):
214 214 '''Returns regex for expanded keywords.'''
215 215 return re.compile(r'\$(%s): [^$\n\r]*? \$' % self.escape)
216 216
217 217 def substitute(self, data, path, ctx, subfunc):
218 218 '''Replaces keywords in data with expanded template.'''
219 219 def kwsub(mobj):
220 220 kw = mobj.group(1)
221 221 ct = cmdutil.changeset_templater(self.ui, self.repo, False, None,
222 222 self.templates[kw], '', False)
223 223 self.ui.pushbuffer()
224 224 ct.show(ctx, root=self.repo.root, file=path)
225 225 ekw = templatefilters.firstline(self.ui.popbuffer())
226 226 return '$%s: %s $' % (kw, ekw)
227 227 return subfunc(kwsub, data)
228 228
229 229 def linkctx(self, path, fileid):
230 230 '''Similar to filelog.linkrev, but returns a changectx.'''
231 231 return self.repo.filectx(path, fileid=fileid).changectx()
232 232
233 233 def expand(self, path, node, data):
234 234 '''Returns data with keywords expanded.'''
235 235 if not self.restrict and self.match(path) and not util.binary(data):
236 236 ctx = self.linkctx(path, node)
237 237 return self.substitute(data, path, ctx, self.rekw.sub)
238 238 return data
239 239
240 240 def iskwfile(self, cand, ctx):
241 241 '''Returns subset of candidates which are configured for keyword
242 242 expansion but are not symbolic links.'''
243 243 return [f for f in cand if self.match(f) and 'l' not in ctx.flags(f)]
244 244
245 245 def overwrite(self, ctx, candidates, lookup, expand, rekw=False):
246 246 '''Overwrites selected files expanding/shrinking keywords.'''
247 247 if self.restrict or lookup or self.postcommit: # exclude kw_copy
248 248 candidates = self.iskwfile(candidates, ctx)
249 249 if not candidates:
250 250 return
251 251 kwcmd = self.restrict and lookup # kwexpand/kwshrink
252 252 if self.restrict or expand and lookup:
253 253 mf = ctx.manifest()
254 254 if self.restrict or rekw:
255 255 re_kw = self.rekw
256 256 else:
257 257 re_kw = self.rekwexp
258 258 if expand:
259 259 msg = _('overwriting %s expanding keywords\n')
260 260 else:
261 261 msg = _('overwriting %s shrinking keywords\n')
262 262 for f in candidates:
263 263 if self.restrict:
264 264 data = self.repo.file(f).read(mf[f])
265 265 else:
266 266 data = self.repo.wread(f)
267 267 if util.binary(data):
268 268 continue
269 269 if expand:
270 270 if lookup:
271 271 ctx = self.linkctx(f, mf[f])
272 272 data, found = self.substitute(data, f, ctx, re_kw.subn)
273 273 elif self.restrict:
274 274 found = re_kw.search(data)
275 275 else:
276 276 data, found = _shrinktext(data, re_kw.subn)
277 277 if found:
278 278 self.ui.note(msg % f)
279 279 fp = self.repo.wopener(f, "wb", atomictemp=True)
280 280 fp.write(data)
281 281 fp.close()
282 282 if kwcmd:
283 283 self.repo.dirstate.normal(f)
284 284 elif self.postcommit:
285 285 self.repo.dirstate.normallookup(f)
286 286
287 287 def shrink(self, fname, text):
288 288 '''Returns text with all keyword substitutions removed.'''
289 289 if self.match(fname) and not util.binary(text):
290 290 return _shrinktext(text, self.rekwexp.sub)
291 291 return text
292 292
293 293 def shrinklines(self, fname, lines):
294 294 '''Returns lines with keyword substitutions removed.'''
295 295 if self.match(fname):
296 296 text = ''.join(lines)
297 297 if not util.binary(text):
298 298 return _shrinktext(text, self.rekwexp.sub).splitlines(True)
299 299 return lines
300 300
301 301 def wread(self, fname, data):
302 302 '''If in restricted mode returns data read from wdir with
303 303 keyword substitutions removed.'''
304 304 if self.restrict:
305 305 return self.shrink(fname, data)
306 306 return data
307 307
308 308 class kwfilelog(filelog.filelog):
309 309 '''
310 310 Subclass of filelog to hook into its read, add, cmp methods.
311 311 Keywords are "stored" unexpanded, and processed on reading.
312 312 '''
313 313 def __init__(self, opener, kwt, path):
314 314 super(kwfilelog, self).__init__(opener, path)
315 315 self.kwt = kwt
316 316 self.path = path
317 317
318 318 def read(self, node):
319 319 '''Expands keywords when reading filelog.'''
320 320 data = super(kwfilelog, self).read(node)
321 321 if self.renamed(node):
322 322 return data
323 323 return self.kwt.expand(self.path, node, data)
324 324
325 325 def add(self, text, meta, tr, link, p1=None, p2=None):
326 326 '''Removes keyword substitutions when adding to filelog.'''
327 327 text = self.kwt.shrink(self.path, text)
328 328 return super(kwfilelog, self).add(text, meta, tr, link, p1, p2)
329 329
330 330 def cmp(self, node, text):
331 331 '''Removes keyword substitutions for comparison.'''
332 332 text = self.kwt.shrink(self.path, text)
333 333 return super(kwfilelog, self).cmp(node, text)
334 334
335 335 def _status(ui, repo, wctx, kwt, *pats, **opts):
336 336 '''Bails out if [keyword] configuration is not active.
337 337 Returns status of working directory.'''
338 338 if kwt:
339 339 return repo.status(match=scmutil.match(wctx, pats, opts), clean=True,
340 340 unknown=opts.get('unknown') or opts.get('all'))
341 341 if ui.configitems('keyword'):
342 342 raise util.Abort(_('[keyword] patterns cannot match'))
343 343 raise util.Abort(_('no [keyword] patterns configured'))
344 344
345 345 def _kwfwrite(ui, repo, expand, *pats, **opts):
346 346 '''Selects files and passes them to kwtemplater.overwrite.'''
347 347 wctx = repo[None]
348 348 if len(wctx.parents()) > 1:
349 349 raise util.Abort(_('outstanding uncommitted merge'))
350 350 kwt = kwtools['templater']
351 351 wlock = repo.wlock()
352 352 try:
353 353 status = _status(ui, repo, wctx, kwt, *pats, **opts)
354 354 modified, added, removed, deleted, unknown, ignored, clean = status
355 355 if modified or added or removed or deleted:
356 356 raise util.Abort(_('outstanding uncommitted changes'))
357 357 kwt.overwrite(wctx, clean, True, expand)
358 358 finally:
359 359 wlock.release()
360 360
361 361 @command('kwdemo',
362 362 [('d', 'default', None, _('show default keyword template maps')),
363 363 ('f', 'rcfile', '',
364 364 _('read maps from rcfile'), _('FILE'))],
365 365 _('hg kwdemo [-d] [-f RCFILE] [TEMPLATEMAP]...'))
366 366 def demo(ui, repo, *args, **opts):
367 367 '''print [keywordmaps] configuration and an expansion example
368 368
369 369 Show current, custom, or default keyword template maps and their
370 370 expansions.
371 371
372 372 Extend the current configuration by specifying maps as arguments
373 373 and using -f/--rcfile to source an external hgrc file.
374 374
375 375 Use -d/--default to disable current configuration.
376 376
377 377 See :hg:`help templates` for information on templates and filters.
378 378 '''
379 379 def demoitems(section, items):
380 380 ui.write('[%s]\n' % section)
381 381 for k, v in sorted(items):
382 382 ui.write('%s = %s\n' % (k, v))
383 383
384 384 fn = 'demo.txt'
385 385 tmpdir = tempfile.mkdtemp('', 'kwdemo.')
386 386 ui.note(_('creating temporary repository at %s\n') % tmpdir)
387 387 repo = localrepo.localrepository(repo.baseui, tmpdir, True)
388 ui.setconfig('keyword', fn, '')
388 ui.setconfig('keyword', fn, '', 'keyword')
389 389 svn = ui.configbool('keywordset', 'svn')
390 390 # explicitly set keywordset for demo output
391 ui.setconfig('keywordset', 'svn', svn)
391 ui.setconfig('keywordset', 'svn', svn, 'keyword')
392 392
393 393 uikwmaps = ui.configitems('keywordmaps')
394 394 if args or opts.get('rcfile'):
395 395 ui.status(_('\n\tconfiguration using custom keyword template maps\n'))
396 396 if uikwmaps:
397 397 ui.status(_('\textending current template maps\n'))
398 398 if opts.get('default') or not uikwmaps:
399 399 if svn:
400 400 ui.status(_('\toverriding default svn keywordset\n'))
401 401 else:
402 402 ui.status(_('\toverriding default cvs keywordset\n'))
403 403 if opts.get('rcfile'):
404 404 ui.readconfig(opts.get('rcfile'))
405 405 if args:
406 406 # simulate hgrc parsing
407 407 rcmaps = ['[keywordmaps]\n'] + [a + '\n' for a in args]
408 408 fp = repo.opener('hgrc', 'w')
409 409 fp.writelines(rcmaps)
410 410 fp.close()
411 411 ui.readconfig(repo.join('hgrc'))
412 412 kwmaps = dict(ui.configitems('keywordmaps'))
413 413 elif opts.get('default'):
414 414 if svn:
415 415 ui.status(_('\n\tconfiguration using default svn keywordset\n'))
416 416 else:
417 417 ui.status(_('\n\tconfiguration using default cvs keywordset\n'))
418 418 kwmaps = _defaultkwmaps(ui)
419 419 if uikwmaps:
420 420 ui.status(_('\tdisabling current template maps\n'))
421 421 for k, v in kwmaps.iteritems():
422 ui.setconfig('keywordmaps', k, v)
422 ui.setconfig('keywordmaps', k, v, 'keyword')
423 423 else:
424 424 ui.status(_('\n\tconfiguration using current keyword template maps\n'))
425 425 if uikwmaps:
426 426 kwmaps = dict(uikwmaps)
427 427 else:
428 428 kwmaps = _defaultkwmaps(ui)
429 429
430 430 uisetup(ui)
431 431 reposetup(ui, repo)
432 432 ui.write('[extensions]\nkeyword =\n')
433 433 demoitems('keyword', ui.configitems('keyword'))
434 434 demoitems('keywordset', ui.configitems('keywordset'))
435 435 demoitems('keywordmaps', kwmaps.iteritems())
436 436 keywords = '$' + '$\n$'.join(sorted(kwmaps.keys())) + '$\n'
437 437 repo.wopener.write(fn, keywords)
438 438 repo[None].add([fn])
439 439 ui.note(_('\nkeywords written to %s:\n') % fn)
440 440 ui.note(keywords)
441 441 wlock = repo.wlock()
442 442 try:
443 443 repo.dirstate.setbranch('demobranch')
444 444 finally:
445 445 wlock.release()
446 446 for name, cmd in ui.configitems('hooks'):
447 447 if name.split('.', 1)[0].find('commit') > -1:
448 repo.ui.setconfig('hooks', name, '')
448 repo.ui.setconfig('hooks', name, '', 'keyword')
449 449 msg = _('hg keyword configuration and expansion example')
450 450 ui.note(("hg ci -m '%s'\n" % msg))
451 451 repo.commit(text=msg)
452 452 ui.status(_('\n\tkeywords expanded\n'))
453 453 ui.write(repo.wread(fn))
454 454 shutil.rmtree(tmpdir, ignore_errors=True)
455 455
456 456 @command('kwexpand', commands.walkopts, _('hg kwexpand [OPTION]... [FILE]...'))
457 457 def expand(ui, repo, *pats, **opts):
458 458 '''expand keywords in the working directory
459 459
460 460 Run after (re)enabling keyword expansion.
461 461
462 462 kwexpand refuses to run if given files contain local changes.
463 463 '''
464 464 # 3rd argument sets expansion to True
465 465 _kwfwrite(ui, repo, True, *pats, **opts)
466 466
467 467 @command('kwfiles',
468 468 [('A', 'all', None, _('show keyword status flags of all files')),
469 469 ('i', 'ignore', None, _('show files excluded from expansion')),
470 470 ('u', 'unknown', None, _('only show unknown (not tracked) files')),
471 471 ] + commands.walkopts,
472 472 _('hg kwfiles [OPTION]... [FILE]...'))
473 473 def files(ui, repo, *pats, **opts):
474 474 '''show files configured for keyword expansion
475 475
476 476 List which files in the working directory are matched by the
477 477 [keyword] configuration patterns.
478 478
479 479 Useful to prevent inadvertent keyword expansion and to speed up
480 480 execution by including only files that are actual candidates for
481 481 expansion.
482 482
483 483 See :hg:`help keyword` on how to construct patterns both for
484 484 inclusion and exclusion of files.
485 485
486 486 With -A/--all and -v/--verbose the codes used to show the status
487 487 of files are::
488 488
489 489 K = keyword expansion candidate
490 490 k = keyword expansion candidate (not tracked)
491 491 I = ignored
492 492 i = ignored (not tracked)
493 493 '''
494 494 kwt = kwtools['templater']
495 495 wctx = repo[None]
496 496 status = _status(ui, repo, wctx, kwt, *pats, **opts)
497 497 cwd = pats and repo.getcwd() or ''
498 498 modified, added, removed, deleted, unknown, ignored, clean = status
499 499 files = []
500 500 if not opts.get('unknown') or opts.get('all'):
501 501 files = sorted(modified + added + clean)
502 502 kwfiles = kwt.iskwfile(files, wctx)
503 503 kwdeleted = kwt.iskwfile(deleted, wctx)
504 504 kwunknown = kwt.iskwfile(unknown, wctx)
505 505 if not opts.get('ignore') or opts.get('all'):
506 506 showfiles = kwfiles, kwdeleted, kwunknown
507 507 else:
508 508 showfiles = [], [], []
509 509 if opts.get('all') or opts.get('ignore'):
510 510 showfiles += ([f for f in files if f not in kwfiles],
511 511 [f for f in unknown if f not in kwunknown])
512 512 kwlabels = 'enabled deleted enabledunknown ignored ignoredunknown'.split()
513 513 kwstates = zip(kwlabels, 'K!kIi', showfiles)
514 514 fm = ui.formatter('kwfiles', opts)
515 515 fmt = '%.0s%s\n'
516 516 if opts.get('all') or ui.verbose:
517 517 fmt = '%s %s\n'
518 518 for kwstate, char, filenames in kwstates:
519 519 label = 'kwfiles.' + kwstate
520 520 for f in filenames:
521 521 fm.startitem()
522 522 fm.write('kwstatus path', fmt, char,
523 523 repo.pathto(f, cwd), label=label)
524 524 fm.end()
525 525
526 526 @command('kwshrink', commands.walkopts, _('hg kwshrink [OPTION]... [FILE]...'))
527 527 def shrink(ui, repo, *pats, **opts):
528 528 '''revert expanded keywords in the working directory
529 529
530 530 Must be run before changing/disabling active keywords.
531 531
532 532 kwshrink refuses to run if given files contain local changes.
533 533 '''
534 534 # 3rd argument sets expansion to False
535 535 _kwfwrite(ui, repo, False, *pats, **opts)
536 536
537 537
538 538 def uisetup(ui):
539 539 ''' Monkeypatches dispatch._parse to retrieve user command.'''
540 540
541 541 def kwdispatch_parse(orig, ui, args):
542 542 '''Monkeypatch dispatch._parse to obtain running hg command.'''
543 543 cmd, func, args, options, cmdoptions = orig(ui, args)
544 544 kwtools['hgcmd'] = cmd
545 545 return cmd, func, args, options, cmdoptions
546 546
547 547 extensions.wrapfunction(dispatch, '_parse', kwdispatch_parse)
548 548
549 549 def reposetup(ui, repo):
550 550 '''Sets up repo as kwrepo for keyword substitution.
551 551 Overrides file method to return kwfilelog instead of filelog
552 552 if file matches user configuration.
553 553 Wraps commit to overwrite configured files with updated
554 554 keyword substitutions.
555 555 Monkeypatches patch and webcommands.'''
556 556
557 557 try:
558 558 if (not repo.local() or kwtools['hgcmd'] in nokwcommands.split()
559 559 or '.hg' in util.splitpath(repo.root)
560 560 or repo._url.startswith('bundle:')):
561 561 return
562 562 except AttributeError:
563 563 pass
564 564
565 565 inc, exc = [], ['.hg*']
566 566 for pat, opt in ui.configitems('keyword'):
567 567 if opt != 'ignore':
568 568 inc.append(pat)
569 569 else:
570 570 exc.append(pat)
571 571 if not inc:
572 572 return
573 573
574 574 kwtools['templater'] = kwt = kwtemplater(ui, repo, inc, exc)
575 575
576 576 class kwrepo(repo.__class__):
577 577 def file(self, f):
578 578 if f[0] == '/':
579 579 f = f[1:]
580 580 return kwfilelog(self.sopener, kwt, f)
581 581
582 582 def wread(self, filename):
583 583 data = super(kwrepo, self).wread(filename)
584 584 return kwt.wread(filename, data)
585 585
586 586 def commit(self, *args, **opts):
587 587 # use custom commitctx for user commands
588 588 # other extensions can still wrap repo.commitctx directly
589 589 self.commitctx = self.kwcommitctx
590 590 try:
591 591 return super(kwrepo, self).commit(*args, **opts)
592 592 finally:
593 593 del self.commitctx
594 594
595 595 def kwcommitctx(self, ctx, error=False):
596 596 n = super(kwrepo, self).commitctx(ctx, error)
597 597 # no lock needed, only called from repo.commit() which already locks
598 598 if not kwt.postcommit:
599 599 restrict = kwt.restrict
600 600 kwt.restrict = True
601 601 kwt.overwrite(self[n], sorted(ctx.added() + ctx.modified()),
602 602 False, True)
603 603 kwt.restrict = restrict
604 604 return n
605 605
606 606 def rollback(self, dryrun=False, force=False):
607 607 wlock = self.wlock()
608 608 try:
609 609 if not dryrun:
610 610 changed = self['.'].files()
611 611 ret = super(kwrepo, self).rollback(dryrun, force)
612 612 if not dryrun:
613 613 ctx = self['.']
614 614 modified, added = _preselect(self[None].status(), changed)
615 615 kwt.overwrite(ctx, modified, True, True)
616 616 kwt.overwrite(ctx, added, True, False)
617 617 return ret
618 618 finally:
619 619 wlock.release()
620 620
621 621 # monkeypatches
622 622 def kwpatchfile_init(orig, self, ui, gp, backend, store, eolmode=None):
623 623 '''Monkeypatch/wrap patch.patchfile.__init__ to avoid
624 624 rejects or conflicts due to expanded keywords in working dir.'''
625 625 orig(self, ui, gp, backend, store, eolmode)
626 626 # shrink keywords read from working dir
627 627 self.lines = kwt.shrinklines(self.fname, self.lines)
628 628
629 629 def kw_diff(orig, repo, node1=None, node2=None, match=None, changes=None,
630 630 opts=None, prefix=''):
631 631 '''Monkeypatch patch.diff to avoid expansion.'''
632 632 kwt.restrict = True
633 633 return orig(repo, node1, node2, match, changes, opts, prefix)
634 634
635 635 def kwweb_skip(orig, web, req, tmpl):
636 636 '''Wraps webcommands.x turning off keyword expansion.'''
637 637 kwt.match = util.never
638 638 return orig(web, req, tmpl)
639 639
640 640 def kw_amend(orig, ui, repo, commitfunc, old, extra, pats, opts):
641 641 '''Wraps cmdutil.amend expanding keywords after amend.'''
642 642 wlock = repo.wlock()
643 643 try:
644 644 kwt.postcommit = True
645 645 newid = orig(ui, repo, commitfunc, old, extra, pats, opts)
646 646 if newid != old.node():
647 647 ctx = repo[newid]
648 648 kwt.restrict = True
649 649 kwt.overwrite(ctx, ctx.files(), False, True)
650 650 kwt.restrict = False
651 651 return newid
652 652 finally:
653 653 wlock.release()
654 654
655 655 def kw_copy(orig, ui, repo, pats, opts, rename=False):
656 656 '''Wraps cmdutil.copy so that copy/rename destinations do not
657 657 contain expanded keywords.
658 658 Note that the source of a regular file destination may also be a
659 659 symlink:
660 660 hg cp sym x -> x is symlink
661 661 cp sym x; hg cp -A sym x -> x is file (maybe expanded keywords)
662 662 For the latter we have to follow the symlink to find out whether its
663 663 target is configured for expansion and we therefore must unexpand the
664 664 keywords in the destination.'''
665 665 wlock = repo.wlock()
666 666 try:
667 667 orig(ui, repo, pats, opts, rename)
668 668 if opts.get('dry_run'):
669 669 return
670 670 wctx = repo[None]
671 671 cwd = repo.getcwd()
672 672
673 673 def haskwsource(dest):
674 674 '''Returns true if dest is a regular file and configured for
675 675 expansion or a symlink which points to a file configured for
676 676 expansion. '''
677 677 source = repo.dirstate.copied(dest)
678 678 if 'l' in wctx.flags(source):
679 679 source = pathutil.canonpath(repo.root, cwd,
680 680 os.path.realpath(source))
681 681 return kwt.match(source)
682 682
683 683 candidates = [f for f in repo.dirstate.copies() if
684 684 'l' not in wctx.flags(f) and haskwsource(f)]
685 685 kwt.overwrite(wctx, candidates, False, False)
686 686 finally:
687 687 wlock.release()
688 688
689 689 def kw_dorecord(orig, ui, repo, commitfunc, *pats, **opts):
690 690 '''Wraps record.dorecord expanding keywords after recording.'''
691 691 wlock = repo.wlock()
692 692 try:
693 693 # record returns 0 even when nothing has changed
694 694 # therefore compare nodes before and after
695 695 kwt.postcommit = True
696 696 ctx = repo['.']
697 697 wstatus = repo[None].status()
698 698 ret = orig(ui, repo, commitfunc, *pats, **opts)
699 699 recctx = repo['.']
700 700 if ctx != recctx:
701 701 modified, added = _preselect(wstatus, recctx.files())
702 702 kwt.restrict = False
703 703 kwt.overwrite(recctx, modified, False, True)
704 704 kwt.overwrite(recctx, added, False, True, True)
705 705 kwt.restrict = True
706 706 return ret
707 707 finally:
708 708 wlock.release()
709 709
710 710 def kwfilectx_cmp(orig, self, fctx):
711 711 # keyword affects data size, comparing wdir and filelog size does
712 712 # not make sense
713 713 if (fctx._filerev is None and
714 714 (self._repo._encodefilterpats or
715 715 kwt.match(fctx.path()) and 'l' not in fctx.flags() or
716 716 self.size() - 4 == fctx.size()) or
717 717 self.size() == fctx.size()):
718 718 return self._filelog.cmp(self._filenode, fctx.data())
719 719 return True
720 720
721 721 extensions.wrapfunction(context.filectx, 'cmp', kwfilectx_cmp)
722 722 extensions.wrapfunction(patch.patchfile, '__init__', kwpatchfile_init)
723 723 extensions.wrapfunction(patch, 'diff', kw_diff)
724 724 extensions.wrapfunction(cmdutil, 'amend', kw_amend)
725 725 extensions.wrapfunction(cmdutil, 'copy', kw_copy)
726 726 for c in 'annotate changeset rev filediff diff'.split():
727 727 extensions.wrapfunction(webcommands, c, kwweb_skip)
728 728 for name in recordextensions.split():
729 729 try:
730 730 record = extensions.find(name)
731 731 extensions.wrapfunction(record, 'dorecord', kw_dorecord)
732 732 except KeyError:
733 733 pass
734 734
735 735 repo.__class__ = kwrepo
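The edits in this file all follow one pattern: every programmatic `ui.setconfig(...)` call gains a trailing source argument (here the extension name, `'keyword'`) so that config introspection can report where a value came from instead of showing it as unsourced. The following is a minimal standalone sketch of that idea, not Mercurial's actual `ui` class; the names `MiniUI` and `configsource` are illustrative assumptions.

```python
# Hypothetical sketch of setconfig-with-source (not Mercurial's real API).
class MiniUI:
    def __init__(self):
        # (section, name) -> (value, source)
        self._config = {}

    def setconfig(self, section, name, value, source=''):
        # Values read from a file would carry the file path as source;
        # code that sets config programmatically passes an explicit label.
        self._config[(section, name)] = (value, source)

    def config(self, section, name, default=None):
        return self._config.get((section, name), (default, None))[0]

    def configsource(self, section, name):
        # Lets debugging output answer "where did this setting come from?"
        return self._config.get((section, name), (None, ''))[1]

ui = MiniUI()
ui.setconfig('keywordset', 'svn', False, 'keyword')
print(ui.config('keywordset', 'svn'), ui.configsource('keywordset', 'svn'))
```

With a source label attached, a `showconfig --debug`-style listing can distinguish values injected by an extension from values the user wrote in a config file.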
@@ -1,514 +1,515 b''
1 1 # Copyright 2009-2010 Gregory P. Ward
2 2 # Copyright 2009-2010 Intelerad Medical Systems Incorporated
3 3 # Copyright 2010-2011 Fog Creek Software
4 4 # Copyright 2010-2011 Unity Technologies
5 5 #
6 6 # This software may be used and distributed according to the terms of the
7 7 # GNU General Public License version 2 or any later version.
8 8
9 9 '''setup for largefiles repositories: reposetup'''
10 10 import copy
11 11 import os
12 12
13 13 from mercurial import error, manifest, match as match_, util, discovery
14 14 from mercurial import node as node_
15 15 from mercurial.i18n import _
16 16 from mercurial import localrepo
17 17
18 18 import lfcommands
19 19 import proto
20 20 import lfutil
21 21
22 22 def reposetup(ui, repo):
23 23 # wire repositories should be given new wireproto functions but not the
24 24 # other largefiles modifications
25 25 if not repo.local():
26 26 return proto.wirereposetup(ui, repo)
27 27
28 28 class lfilesrepo(repo.__class__):
29 29 lfstatus = False
30 30 def status_nolfiles(self, *args, **kwargs):
31 31 return super(lfilesrepo, self).status(*args, **kwargs)
32 32
33 33 # When lfstatus is set, return a context that gives the names
34 34 # of largefiles instead of their corresponding standins and
35 35 # identifies the largefiles as always binary, regardless of
36 36 # their actual contents.
37 37 def __getitem__(self, changeid):
38 38 ctx = super(lfilesrepo, self).__getitem__(changeid)
39 39 if self.lfstatus:
40 40 class lfilesmanifestdict(manifest.manifestdict):
41 41 def __contains__(self, filename):
42 42 if super(lfilesmanifestdict,
43 43 self).__contains__(filename):
44 44 return True
45 45 return super(lfilesmanifestdict,
46 46 self).__contains__(lfutil.standin(filename))
47 47 class lfilesctx(ctx.__class__):
48 48 def files(self):
49 49 filenames = super(lfilesctx, self).files()
50 50 return [lfutil.splitstandin(f) or f for f in filenames]
51 51 def manifest(self):
52 52 man1 = super(lfilesctx, self).manifest()
53 53 man1.__class__ = lfilesmanifestdict
54 54 return man1
55 55 def filectx(self, path, fileid=None, filelog=None):
56 56 try:
57 57 if filelog is not None:
58 58 result = super(lfilesctx, self).filectx(
59 59 path, fileid, filelog)
60 60 else:
61 61 result = super(lfilesctx, self).filectx(
62 62 path, fileid)
63 63 except error.LookupError:
64 64 # Adding a null character will cause Mercurial to
65 65 # identify this as a binary file.
66 66 if filelog is not None:
67 67 result = super(lfilesctx, self).filectx(
68 68 lfutil.standin(path), fileid, filelog)
69 69 else:
70 70 result = super(lfilesctx, self).filectx(
71 71 lfutil.standin(path), fileid)
72 72 olddata = result.data
73 73 result.data = lambda: olddata() + '\0'
74 74 return result
75 75 ctx.__class__ = lfilesctx
76 76 return ctx
77 77
78 78 # Figure out the status of big files and insert them into the
79 79 # appropriate list in the result. Also removes standin files
80 80 # from the listing. Revert to the original status if
81 81 # self.lfstatus is False.
82 82 # XXX large file status is buggy when used on repo proxy.
83 83 # XXX this needs to be investigated.
84 84 @localrepo.unfilteredmethod
85 85 def status(self, node1='.', node2=None, match=None, ignored=False,
86 86 clean=False, unknown=False, listsubrepos=False):
87 87 listignored, listclean, listunknown = ignored, clean, unknown
88 88 if not self.lfstatus:
89 89 return super(lfilesrepo, self).status(node1, node2, match,
90 90 listignored, listclean, listunknown, listsubrepos)
91 91 else:
92 92 # some calls in this function rely on the old version of status
93 93 self.lfstatus = False
94 94 ctx1 = self[node1]
95 95 ctx2 = self[node2]
96 96 working = ctx2.rev() is None
97 97 parentworking = working and ctx1 == self['.']
98 98
99 99 def inctx(file, ctx):
100 100 try:
101 101 if ctx.rev() is None:
102 102 return file in ctx.manifest()
103 103 ctx[file]
104 104 return True
105 105 except KeyError:
106 106 return False
107 107
108 108 if match is None:
109 109 match = match_.always(self.root, self.getcwd())
110 110
111 111 wlock = None
112 112 try:
113 113 try:
114 114 # updating the dirstate is optional
115 115 # so we don't wait on the lock
116 116 wlock = self.wlock(False)
117 117 except error.LockError:
118 118 pass
119 119
120 120 # First check if there were files specified on the
121 121 # command line. If there were, and none of them were
122 122 # largefiles, we should just bail here and let super
123 123 # handle it -- thus gaining a big performance boost.
124 124 lfdirstate = lfutil.openlfdirstate(ui, self)
125 125 if match.files() and not match.anypats():
126 126 for f in lfdirstate:
127 127 if match(f):
128 128 break
129 129 else:
130 130 return super(lfilesrepo, self).status(node1, node2,
131 131 match, listignored, listclean,
132 132 listunknown, listsubrepos)
133 133
134 134 # Create a copy of match that matches standins instead
135 135 # of largefiles.
136 136 def tostandins(files):
137 137 if not working:
138 138 return files
139 139 newfiles = []
140 140 dirstate = self.dirstate
141 141 for f in files:
142 142 sf = lfutil.standin(f)
143 143 if sf in dirstate:
144 144 newfiles.append(sf)
145 145 elif sf in dirstate.dirs():
146 146 # Directory entries could be regular or
147 147 # standin, check both
148 148 newfiles.extend((f, sf))
149 149 else:
150 150 newfiles.append(f)
151 151 return newfiles
152 152
153 153 m = copy.copy(match)
154 154 m._files = tostandins(m._files)
155 155
156 156 result = super(lfilesrepo, self).status(node1, node2, m,
157 157 ignored, clean, unknown, listsubrepos)
158 158 if working:
159 159
160 160 def sfindirstate(f):
161 161 sf = lfutil.standin(f)
162 162 dirstate = self.dirstate
163 163 return sf in dirstate or sf in dirstate.dirs()
164 164
165 165 match._files = [f for f in match._files
166 166 if sfindirstate(f)]
167 167 # Don't waste time getting the ignored and unknown
168 168 # files from lfdirstate
169 169 s = lfdirstate.status(match, [], False,
170 170 listclean, False)
171 171 (unsure, modified, added, removed, missing, _unknown,
172 172 _ignored, clean) = s
173 173 if parentworking:
174 174 for lfile in unsure:
175 175 standin = lfutil.standin(lfile)
176 176 if standin not in ctx1:
177 177 # from second parent
178 178 modified.append(lfile)
179 179 elif ctx1[standin].data().strip() \
180 180 != lfutil.hashfile(self.wjoin(lfile)):
181 181 modified.append(lfile)
182 182 else:
183 183 clean.append(lfile)
184 184 lfdirstate.normal(lfile)
185 185 else:
186 186 tocheck = unsure + modified + added + clean
187 187 modified, added, clean = [], [], []
188 188
189 189 for lfile in tocheck:
190 190 standin = lfutil.standin(lfile)
191 191 if inctx(standin, ctx1):
192 192 if ctx1[standin].data().strip() != \
193 193 lfutil.hashfile(self.wjoin(lfile)):
194 194 modified.append(lfile)
195 195 else:
196 196 clean.append(lfile)
197 197 else:
198 198 added.append(lfile)
199 199
 200 200 # Standins no longer found in lfdirstate have been
 201 201 # removed
202 202 for standin in ctx1.manifest():
203 203 if not lfutil.isstandin(standin):
204 204 continue
205 205 lfile = lfutil.splitstandin(standin)
206 206 if not match(lfile):
207 207 continue
208 208 if lfile not in lfdirstate:
209 209 removed.append(lfile)
210 210
211 211 # Filter result lists
212 212 result = list(result)
213 213
214 214 # Largefiles are not really removed when they're
215 215 # still in the normal dirstate. Likewise, normal
216 216 # files are not really removed if they are still in
217 217 # lfdirstate. This happens in merges where files
218 218 # change type.
219 219 removed = [f for f in removed
220 220 if f not in self.dirstate]
221 221 result[2] = [f for f in result[2]
222 222 if f not in lfdirstate]
223 223
224 224 lfiles = set(lfdirstate._map)
225 225 # Unknown files
226 226 result[4] = set(result[4]).difference(lfiles)
227 227 # Ignored files
228 228 result[5] = set(result[5]).difference(lfiles)
229 229 # combine normal files and largefiles
230 230 normals = [[fn for fn in filelist
231 231 if not lfutil.isstandin(fn)]
232 232 for filelist in result]
233 233 lfiles = (modified, added, removed, missing, [], [],
234 234 clean)
235 235 result = [sorted(list1 + list2)
236 236 for (list1, list2) in zip(normals, lfiles)]
237 237 else:
238 238 def toname(f):
239 239 if lfutil.isstandin(f):
240 240 return lfutil.splitstandin(f)
241 241 return f
242 242 result = [[toname(f) for f in items]
243 243 for items in result]
244 244
245 245 if wlock:
246 246 lfdirstate.write()
247 247
248 248 finally:
249 249 if wlock:
250 250 wlock.release()
251 251
252 252 if not listunknown:
253 253 result[4] = []
254 254 if not listignored:
255 255 result[5] = []
256 256 if not listclean:
257 257 result[6] = []
258 258 self.lfstatus = True
259 259 return result
260 260
261 261 # As part of committing, copy all of the largefiles into the
262 262 # cache.
263 263 def commitctx(self, *args, **kwargs):
264 264 node = super(lfilesrepo, self).commitctx(*args, **kwargs)
265 265 lfutil.copyalltostore(self, node)
266 266 return node
267 267
268 268 # Before commit, largefile standins have not had their
269 269 # contents updated to reflect the hash of their largefile.
270 270 # Do that here.
271 271 def commit(self, text="", user=None, date=None, match=None,
272 272 force=False, editor=False, extra={}):
273 273 orig = super(lfilesrepo, self).commit
274 274
275 275 wlock = self.wlock()
276 276 try:
277 277 # Case 0: Rebase or Transplant
278 278 # We have to take the time to pull down the new largefiles now.
279 279 # Otherwise, any largefiles that were modified in the
280 280 # destination changesets get overwritten, either by the rebase
281 281 # or in the first commit after the rebase or transplant.
282 282 # updatelfiles will update the dirstate to mark any pulled
283 283 # largefiles as modified
284 284 if getattr(self, "_isrebasing", False) or \
285 285 getattr(self, "_istransplanting", False):
286 286 lfcommands.updatelfiles(self.ui, self, filelist=None,
287 287 printmessage=False)
288 288 result = orig(text=text, user=user, date=date, match=match,
289 289 force=force, editor=editor, extra=extra)
290 290 return result
291 291 # Case 1: user calls commit with no specific files or
292 292 # include/exclude patterns: refresh and commit all files that
293 293 # are "dirty".
294 294 if ((match is None) or
295 295 (not match.anypats() and not match.files())):
296 296 # Spend a bit of time here to get a list of files we know
297 297 # are modified so we can compare only against those.
298 298 # It can cost a lot of time (several seconds)
299 299 # otherwise to update all standins if the largefiles are
300 300 # large.
301 301 lfdirstate = lfutil.openlfdirstate(ui, self)
302 302 dirtymatch = match_.always(self.root, self.getcwd())
303 303 s = lfdirstate.status(dirtymatch, [], False, False, False)
304 304 (unsure, modified, added, removed, _missing, _unknown,
305 305 _ignored, _clean) = s
306 306 modifiedfiles = unsure + modified + added + removed
307 307 lfiles = lfutil.listlfiles(self)
308 308 # this only loops through largefiles that exist (not
309 309 # removed/renamed)
310 310 for lfile in lfiles:
311 311 if lfile in modifiedfiles:
312 312 if os.path.exists(
313 313 self.wjoin(lfutil.standin(lfile))):
314 314 # this handles the case where a rebase is being
315 315 # performed and the working copy is not updated
316 316 # yet.
317 317 if os.path.exists(self.wjoin(lfile)):
318 318 lfutil.updatestandin(self,
319 319 lfutil.standin(lfile))
320 320 lfdirstate.normal(lfile)
321 321
322 322 result = orig(text=text, user=user, date=date, match=match,
323 323 force=force, editor=editor, extra=extra)
324 324
325 325 if result is not None:
326 326 for lfile in lfdirstate:
327 327 if lfile in modifiedfiles:
328 328 if (not os.path.exists(self.wjoin(
329 329 lfutil.standin(lfile)))) or \
330 330 (not os.path.exists(self.wjoin(lfile))):
331 331 lfdirstate.drop(lfile)
332 332
333 333 # This needs to be after commit; otherwise precommit hooks
334 334 # get the wrong status
335 335 lfdirstate.write()
336 336 return result
337 337
338 338 lfiles = lfutil.listlfiles(self)
339 339 match._files = self._subdirlfs(match.files(), lfiles)
340 340
341 341 # Case 2: user calls commit with specified patterns: refresh
342 342 # any matching big files.
343 343 smatcher = lfutil.composestandinmatcher(self, match)
344 344 standins = self.dirstate.walk(smatcher, [], False, False)
345 345
346 346 # No matching big files: get out of the way and pass control to
347 347 # the usual commit() method.
348 348 if not standins:
349 349 return orig(text=text, user=user, date=date, match=match,
350 350 force=force, editor=editor, extra=extra)
351 351
352 352 # Refresh all matching big files. It's possible that the
353 353 # commit will end up failing, in which case the big files will
354 354 # stay refreshed. No harm done: the user modified them and
355 355 # asked to commit them, so sooner or later we're going to
356 356 # refresh the standins. Might as well leave them refreshed.
357 357 lfdirstate = lfutil.openlfdirstate(ui, self)
358 358 for standin in standins:
359 359 lfile = lfutil.splitstandin(standin)
360 360 if lfdirstate[lfile] != 'r':
361 361 lfutil.updatestandin(self, standin)
362 362 lfdirstate.normal(lfile)
363 363 else:
364 364 lfdirstate.drop(lfile)
365 365
366 366 # Cook up a new matcher that only matches regular files or
367 367 # standins corresponding to the big files requested by the
368 368 # user. Have to modify _files to prevent commit() from
369 369 # complaining "not tracked" for big files.
370 370 match = copy.copy(match)
371 371 origmatchfn = match.matchfn
372 372
373 373 # Check both the list of largefiles and the list of
374 374 # standins because if a largefile was removed, it
375 375 # won't be in the list of largefiles at this point
376 376 match._files += sorted(standins)
377 377
378 378 actualfiles = []
379 379 for f in match._files:
380 380 fstandin = lfutil.standin(f)
381 381
382 382 # ignore known largefiles and standins
383 383 if f in lfiles or fstandin in standins:
384 384 continue
385 385
386 386 # append directory separator to avoid collisions
387 387 if not fstandin.endswith(os.sep):
388 388 fstandin += os.sep
389 389
390 390 actualfiles.append(f)
391 391 match._files = actualfiles
392 392
393 393 def matchfn(f):
394 394 if origmatchfn(f):
395 395 return f not in lfiles
396 396 else:
397 397 return f in standins
398 398
399 399 match.matchfn = matchfn
400 400 result = orig(text=text, user=user, date=date, match=match,
401 401 force=force, editor=editor, extra=extra)
402 402 # This needs to be after commit; otherwise precommit hooks
403 403 # get the wrong status
404 404 lfdirstate.write()
405 405 return result
406 406 finally:
407 407 wlock.release()
408 408
409 409 def push(self, remote, force=False, revs=None, newbranch=False):
410 410 if remote.local():
411 411 missing = set(self.requirements) - remote.local().supported
412 412 if missing:
413 413 msg = _("required features are not"
414 414 " supported in the destination:"
415 415 " %s") % (', '.join(sorted(missing)))
416 416 raise util.Abort(msg)
417 417
418 418 outgoing = discovery.findcommonoutgoing(repo, remote.peer(),
419 419 force=force)
420 420 if outgoing.missing:
421 421 toupload = set()
422 422 o = self.changelog.nodesbetween(outgoing.missing, revs)[0]
423 423 for n in o:
424 424 parents = [p for p in self.changelog.parents(n)
425 425 if p != node_.nullid]
426 426 ctx = self[n]
427 427 files = set(ctx.files())
428 428 if len(parents) == 2:
429 429 mc = ctx.manifest()
430 430 mp1 = ctx.parents()[0].manifest()
431 431 mp2 = ctx.parents()[1].manifest()
432 432 for f in mp1:
433 433 if f not in mc:
434 434 files.add(f)
435 435 for f in mp2:
436 436 if f not in mc:
437 437 files.add(f)
438 438 for f in mc:
439 439 if mc[f] != mp1.get(f, None) or mc[f] != mp2.get(f,
440 440 None):
441 441 files.add(f)
442 442
443 443 toupload = toupload.union(
444 444 set([ctx[f].data().strip()
445 445 for f in files
446 446 if lfutil.isstandin(f) and f in ctx]))
447 447 lfcommands.uploadlfiles(ui, self, remote, toupload)
448 448 return super(lfilesrepo, self).push(remote, force=force, revs=revs,
449 449 newbranch=newbranch)
450 450
451 451 def _subdirlfs(self, files, lfiles):
452 452 '''
453 453 Adjust matched file list
 454 454 If we pass a directory to commit whose only committable files
455 455 are largefiles, the core commit code aborts before finding
456 456 the largefiles.
457 457 So we do the following:
458 458 For directories that only have largefiles as matches,
459 459 we explicitly add the largefiles to the match list and remove
460 460 the directory.
461 461 In other cases, we leave the match list unmodified.
462 462 '''
463 463 actualfiles = []
464 464 dirs = []
465 465 regulars = []
466 466
467 467 for f in files:
468 468 if lfutil.isstandin(f + '/'):
469 469 raise util.Abort(
470 470 _('file "%s" is a largefile standin') % f,
471 471 hint=('commit the largefile itself instead'))
472 472 # Scan directories
473 473 if os.path.isdir(self.wjoin(f)):
474 474 dirs.append(f)
475 475 else:
476 476 regulars.append(f)
477 477
478 478 for f in dirs:
479 479 matcheddir = False
480 480 d = self.dirstate.normalize(f) + '/'
481 481 # Check for matched normal files
482 482 for mf in regulars:
483 483 if self.dirstate.normalize(mf).startswith(d):
484 484 actualfiles.append(f)
485 485 matcheddir = True
486 486 break
487 487 if not matcheddir:
488 488 # If no normal match, manually append
489 489 # any matching largefiles
490 490 for lf in lfiles:
491 491 if self.dirstate.normalize(lf).startswith(d):
492 492 actualfiles.append(lf)
493 493 if not matcheddir:
494 494 actualfiles.append(lfutil.standin(f))
495 495 matcheddir = True
496 496 # Nothing in dir, so readd it
497 497 # and let commit reject it
498 498 if not matcheddir:
499 499 actualfiles.append(f)
500 500
501 501 # Always add normal files
502 502 actualfiles += regulars
503 503 return actualfiles
504 504
505 505 repo.__class__ = lfilesrepo
506 506
507 507 def checkrequireslfiles(ui, repo, **kwargs):
508 508 if 'largefiles' not in repo.requirements and util.any(
509 509 lfutil.shortname+'/' in f[0] for f in repo.store.datafiles()):
510 510 repo.requirements.add('largefiles')
511 511 repo._writerequirements()
512 512
513 ui.setconfig('hooks', 'changegroup.lfiles', checkrequireslfiles)
514 ui.setconfig('hooks', 'commit.lfiles', checkrequireslfiles)
513 ui.setconfig('hooks', 'changegroup.lfiles', checkrequireslfiles,
514 'largefiles')
515 ui.setconfig('hooks', 'commit.lfiles', checkrequireslfiles, 'largefiles')
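The changed lines above add an explicit source label (`'largefiles'`, and `'mq'` further down) as an extra argument to `ui.setconfig`, per the changeset's stated goal: config values set from extension code should report where they came from instead of appearing unattributed. A minimal sketch of the idea as a toy config store (hypothetical class, not Mercurial's actual `ui` object):

```python
class ConfigStore:
    """Toy config store that records where each value came from."""

    def __init__(self):
        self._data = {}  # (section, name) -> (value, source)

    def setconfig(self, section, name, value, source=''):
        # store the value together with its origin label
        self._data[(section, name)] = (value, source)

    def config(self, section, name, default=None):
        return self._data.get((section, name), (default, None))[0]

    def configsource(self, section, name):
        # report the origin, e.g. for debugging output
        return self._data.get((section, name), (None, ''))[1]


store = ConfigStore()
# mirrors the hunk above: a hook set from extension code, tagged 'largefiles'
store.setconfig('hooks', 'commit.lfiles', 'checkrequireslfiles', 'largefiles')
print(store.configsource('hooks', 'commit.lfiles'))  # -> largefiles
```

With such tagging, a debug listing of the configuration can distinguish values loaded from an hgrc file (tagged with file name and line) from values injected by code.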
@@ -1,3458 +1,3458 b''
1 1 # mq.py - patch queues for mercurial
2 2 #
3 3 # Copyright 2005, 2006 Chris Mason <mason@suse.com>
4 4 #
5 5 # This software may be used and distributed according to the terms of the
6 6 # GNU General Public License version 2 or any later version.
7 7
8 8 '''manage a stack of patches
9 9
10 10 This extension lets you work with a stack of patches in a Mercurial
11 11 repository. It manages two stacks of patches - all known patches, and
12 12 applied patches (subset of known patches).
13 13
14 14 Known patches are represented as patch files in the .hg/patches
15 15 directory. Applied patches are both patch files and changesets.
16 16
17 17 Common tasks (use :hg:`help command` for more details)::
18 18
19 19 create new patch qnew
20 20 import existing patch qimport
21 21
22 22 print patch series qseries
23 23 print applied patches qapplied
24 24
25 25 add known patch to applied stack qpush
26 26 remove patch from applied stack qpop
27 27 refresh contents of top applied patch qrefresh
28 28
29 29 By default, mq will automatically use git patches when required to
30 30 avoid losing file mode changes, copy records, binary files, or empty
31 31 file creations or deletions. This behaviour can be configured with::
32 32
33 33 [mq]
34 34 git = auto/keep/yes/no
35 35
36 36 If set to 'keep', mq will obey the [diff] section configuration while
37 37 preserving existing git patches upon qrefresh. If set to 'yes' or
38 38 'no', mq will override the [diff] section and always generate git or
39 39 regular patches, possibly losing data in the second case.
40 40
41 41 It may be desirable for mq changesets to be kept in the secret phase (see
42 42 :hg:`help phases`), which can be enabled with the following setting::
43 43
44 44 [mq]
45 45 secret = True
46 46
47 47 You will by default be managing a patch queue named "patches". You can
48 48 create other, independent patch queues with the :hg:`qqueue` command.
49 49
50 50 If the working directory contains uncommitted files, qpush, qpop and
51 51 qgoto abort immediately. If -f/--force is used, the changes are
52 52 discarded. Setting::
53 53
54 54 [mq]
55 55 keepchanges = True
56 56
57 57 makes them behave as if --keep-changes were passed, and non-conflicting
58 58 local changes will be tolerated and preserved. If incompatible options
59 59 such as -f/--force or --exact are passed, this setting is ignored.
60 60
61 61 This extension used to provide a strip command. This command now lives
62 62 in the strip extension.
63 63 '''
64 64
65 65 from mercurial.i18n import _
66 66 from mercurial.node import bin, hex, short, nullid, nullrev
67 67 from mercurial.lock import release
68 68 from mercurial import commands, cmdutil, hg, scmutil, util, revset
69 69 from mercurial import extensions, error, phases
70 70 from mercurial import patch as patchmod
71 71 from mercurial import localrepo
72 72 from mercurial import subrepo
73 73 import os, re, errno, shutil
74 74
75 75 commands.norepo += " qclone"
76 76
77 77 seriesopts = [('s', 'summary', None, _('print first line of patch header'))]
78 78
79 79 cmdtable = {}
80 80 command = cmdutil.command(cmdtable)
81 81 testedwith = 'internal'
82 82
83 83 # force load strip extension formerly included in mq and import some utility
84 84 try:
85 85 stripext = extensions.find('strip')
86 86 except KeyError:
87 87 # note: load is lazy so we could avoid the try-except,
88 88 # but I (marmoute) prefer this explicit code.
89 89 class dummyui(object):
90 90 def debug(self, msg):
91 91 pass
92 92 stripext = extensions.load(dummyui(), 'strip', '')
93 93
94 94 strip = stripext.strip
95 95 checksubstate = stripext.checksubstate
96 96 checklocalchanges = stripext.checklocalchanges
97 97
98 98
99 99 # Patch names look like unix-file names.
100 100 # They must be joinable with queue directory and result in the patch path.
101 101 normname = util.normpath
102 102
103 103 class statusentry(object):
104 104 def __init__(self, node, name):
105 105 self.node, self.name = node, name
106 106 def __repr__(self):
107 107 return hex(self.node) + ':' + self.name
108 108
109 109 class patchheader(object):
110 110 def __init__(self, pf, plainmode=False):
111 111 def eatdiff(lines):
112 112 while lines:
113 113 l = lines[-1]
114 114 if (l.startswith("diff -") or
115 115 l.startswith("Index:") or
116 116 l.startswith("===========")):
117 117 del lines[-1]
118 118 else:
119 119 break
120 120 def eatempty(lines):
121 121 while lines:
122 122 if not lines[-1].strip():
123 123 del lines[-1]
124 124 else:
125 125 break
126 126
127 127 message = []
128 128 comments = []
129 129 user = None
130 130 date = None
131 131 parent = None
132 132 format = None
133 133 subject = None
134 134 branch = None
135 135 nodeid = None
136 136 diffstart = 0
137 137
138 138 for line in file(pf):
139 139 line = line.rstrip()
140 140 if (line.startswith('diff --git')
141 141 or (diffstart and line.startswith('+++ '))):
142 142 diffstart = 2
143 143 break
144 144 diffstart = 0 # reset
145 145 if line.startswith("--- "):
146 146 diffstart = 1
147 147 continue
148 148 elif format == "hgpatch":
149 149 # parse values when importing the result of an hg export
150 150 if line.startswith("# User "):
151 151 user = line[7:]
152 152 elif line.startswith("# Date "):
153 153 date = line[7:]
154 154 elif line.startswith("# Parent "):
155 155 parent = line[9:].lstrip()
156 156 elif line.startswith("# Branch "):
157 157 branch = line[9:]
158 158 elif line.startswith("# Node ID "):
159 159 nodeid = line[10:]
160 160 elif not line.startswith("# ") and line:
161 161 message.append(line)
162 162 format = None
163 163 elif line == '# HG changeset patch':
164 164 message = []
165 165 format = "hgpatch"
166 166 elif (format != "tagdone" and (line.startswith("Subject: ") or
167 167 line.startswith("subject: "))):
168 168 subject = line[9:]
169 169 format = "tag"
170 170 elif (format != "tagdone" and (line.startswith("From: ") or
171 171 line.startswith("from: "))):
172 172 user = line[6:]
173 173 format = "tag"
174 174 elif (format != "tagdone" and (line.startswith("Date: ") or
175 175 line.startswith("date: "))):
176 176 date = line[6:]
177 177 format = "tag"
178 178 elif format == "tag" and line == "":
179 179 # when looking for tags (subject:, from:, etc.) they
180 180 # end once you find a blank line in the source
181 181 format = "tagdone"
182 182 elif message or line:
183 183 message.append(line)
184 184 comments.append(line)
185 185
186 186 eatdiff(message)
187 187 eatdiff(comments)
188 188 # Remember the exact starting line of the patch diffs before consuming
189 189 # empty lines, for external use by TortoiseHg and others
190 190 self.diffstartline = len(comments)
191 191 eatempty(message)
192 192 eatempty(comments)
193 193
194 194 # make sure message isn't empty
195 195 if format and format.startswith("tag") and subject:
196 196 message.insert(0, "")
197 197 message.insert(0, subject)
198 198
199 199 self.message = message
200 200 self.comments = comments
201 201 self.user = user
202 202 self.date = date
203 203 self.parent = parent
204 204 # nodeid and branch are for external use by TortoiseHg and others
205 205 self.nodeid = nodeid
206 206 self.branch = branch
207 207 self.haspatch = diffstart > 1
208 208 self.plainmode = plainmode
209 209
210 210 def setuser(self, user):
211 211 if not self.updateheader(['From: ', '# User '], user):
212 212 try:
213 213 patchheaderat = self.comments.index('# HG changeset patch')
214 214 self.comments.insert(patchheaderat + 1, '# User ' + user)
215 215 except ValueError:
216 216 if self.plainmode or self._hasheader(['Date: ']):
217 217 self.comments = ['From: ' + user] + self.comments
218 218 else:
219 219 tmp = ['# HG changeset patch', '# User ' + user, '']
220 220 self.comments = tmp + self.comments
221 221 self.user = user
222 222
223 223 def setdate(self, date):
224 224 if not self.updateheader(['Date: ', '# Date '], date):
225 225 try:
226 226 patchheaderat = self.comments.index('# HG changeset patch')
227 227 self.comments.insert(patchheaderat + 1, '# Date ' + date)
228 228 except ValueError:
229 229 if self.plainmode or self._hasheader(['From: ']):
230 230 self.comments = ['Date: ' + date] + self.comments
231 231 else:
232 232 tmp = ['# HG changeset patch', '# Date ' + date, '']
233 233 self.comments = tmp + self.comments
234 234 self.date = date
235 235
236 236 def setparent(self, parent):
237 237 if not self.updateheader(['# Parent '], parent):
238 238 try:
239 239 patchheaderat = self.comments.index('# HG changeset patch')
240 240 self.comments.insert(patchheaderat + 1, '# Parent ' + parent)
241 241 except ValueError:
242 242 pass
243 243 self.parent = parent
244 244
245 245 def setmessage(self, message):
246 246 if self.comments:
247 247 self._delmsg()
248 248 self.message = [message]
249 249 self.comments += self.message
250 250
251 251 def updateheader(self, prefixes, new):
252 252 '''Update all references to a field in the patch header.
253 253 Return whether the field is present.'''
254 254 res = False
255 255 for prefix in prefixes:
256 256 for i in xrange(len(self.comments)):
257 257 if self.comments[i].startswith(prefix):
258 258 self.comments[i] = prefix + new
259 259 res = True
260 260 break
261 261 return res
262 262
263 263 def _hasheader(self, prefixes):
264 264 '''Check if a header starts with any of the given prefixes.'''
265 265 for prefix in prefixes:
266 266 for comment in self.comments:
267 267 if comment.startswith(prefix):
268 268 return True
269 269 return False
270 270
271 271 def __str__(self):
272 272 if not self.comments:
273 273 return ''
274 274 return '\n'.join(self.comments) + '\n\n'
275 275
276 276 def _delmsg(self):
277 277 '''Remove existing message, keeping the rest of the comment fields.
278 278 If comments contains 'subject: ', message will prepend
279 279 the field and a blank line.'''
280 280 if self.message:
281 281 subj = 'subject: ' + self.message[0].lower()
282 282 for i in xrange(len(self.comments)):
283 283 if subj == self.comments[i].lower():
284 284 del self.comments[i]
285 285 self.message = self.message[2:]
286 286 break
287 287 ci = 0
288 288 for mi in self.message:
289 289 while mi != self.comments[ci]:
290 290 ci += 1
291 291 del self.comments[ci]
292 292
293 293 def newcommit(repo, phase, *args, **kwargs):
294 294 """helper dedicated to ensuring a commit respects the mq.secret setting
295 295
296 296 It should be used instead of repo.commit inside the mq source for
297 297 operations creating new changesets.
298 298 """
299 299 repo = repo.unfiltered()
300 300 if phase is None:
301 301 if repo.ui.configbool('mq', 'secret', False):
302 302 phase = phases.secret
303 303 if phase is not None:
304 304 backup = repo.ui.backupconfig('phases', 'new-commit')
305 305 try:
306 306 if phase is not None:
307 repo.ui.setconfig('phases', 'new-commit', phase)
307 repo.ui.setconfig('phases', 'new-commit', phase, 'mq')
308 308 return repo.commit(*args, **kwargs)
309 309 finally:
310 310 if phase is not None:
311 311 repo.ui.restoreconfig(backup)
312 312
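`newcommit` above wraps its temporary `phases.new-commit` override (now tagged with source `'mq'`) in a backup/set/restore pattern so the override cannot leak past the commit even if it raises. A hedged sketch of the same pattern in plain Python, using made-up names rather than Mercurial's `backupconfig`/`restoreconfig` API:

```python
from contextlib import contextmanager


@contextmanager
def temporary(settings, key, value):
    """Set settings[key] for the duration of a block, then restore it."""
    missing = object()
    backup = settings.get(key, missing)  # remember the previous state
    settings[key] = value
    try:
        yield settings
    finally:
        if backup is missing:
            settings.pop(key, None)      # key did not exist before
        else:
            settings[key] = backup       # restore the old value


cfg = {'phases.new-commit': 'draft'}
with temporary(cfg, 'phases.new-commit', 'secret'):
    assert cfg['phases.new-commit'] == 'secret'  # override active here
assert cfg['phases.new-commit'] == 'draft'       # restored afterwards
```

The `try`/`finally` mirrors the one in `newcommit`: restoration happens whether or not the commit in the middle succeeds.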
313 313 class AbortNoCleanup(error.Abort):
314 314 pass
315 315
316 316 class queue(object):
317 317 def __init__(self, ui, baseui, path, patchdir=None):
318 318 self.basepath = path
319 319 try:
320 320 fh = open(os.path.join(path, 'patches.queue'))
321 321 cur = fh.read().rstrip()
322 322 fh.close()
323 323 if not cur:
324 324 curpath = os.path.join(path, 'patches')
325 325 else:
326 326 curpath = os.path.join(path, 'patches-' + cur)
327 327 except IOError:
328 328 curpath = os.path.join(path, 'patches')
329 329 self.path = patchdir or curpath
330 330 self.opener = scmutil.opener(self.path)
331 331 self.ui = ui
332 332 self.baseui = baseui
333 333 self.applieddirty = False
334 334 self.seriesdirty = False
335 335 self.added = []
336 336 self.seriespath = "series"
337 337 self.statuspath = "status"
338 338 self.guardspath = "guards"
339 339 self.activeguards = None
340 340 self.guardsdirty = False
341 341 # Handle mq.git as a bool with extended values
342 342 try:
343 343 gitmode = ui.configbool('mq', 'git', None)
344 344 if gitmode is None:
345 345 raise error.ConfigError
346 346 self.gitmode = gitmode and 'yes' or 'no'
347 347 except error.ConfigError:
348 348 self.gitmode = ui.config('mq', 'git', 'auto').lower()
349 349 self.plainmode = ui.configbool('mq', 'plain', False)
350 350 self.checkapplied = True
351 351
352 352 @util.propertycache
353 353 def applied(self):
354 354 def parselines(lines):
355 355 for l in lines:
356 356 entry = l.split(':', 1)
357 357 if len(entry) > 1:
358 358 n, name = entry
359 359 yield statusentry(bin(n), name)
360 360 elif l.strip():
361 361 self.ui.warn(_('malformed mq status line: %s\n') % entry)
362 362 # else we ignore empty lines
363 363 try:
364 364 lines = self.opener.read(self.statuspath).splitlines()
365 365 return list(parselines(lines))
366 366 except IOError, e:
367 367 if e.errno == errno.ENOENT:
368 368 return []
369 369 raise
370 370
371 371 @util.propertycache
372 372 def fullseries(self):
373 373 try:
374 374 return self.opener.read(self.seriespath).splitlines()
375 375 except IOError, e:
376 376 if e.errno == errno.ENOENT:
377 377 return []
378 378 raise
379 379
380 380 @util.propertycache
381 381 def series(self):
382 382 self.parseseries()
383 383 return self.series
384 384
385 385 @util.propertycache
386 386 def seriesguards(self):
387 387 self.parseseries()
388 388 return self.seriesguards
389 389
390 390 def invalidate(self):
391 391 for a in 'applied fullseries series seriesguards'.split():
392 392 if a in self.__dict__:
393 393 delattr(self, a)
394 394 self.applieddirty = False
395 395 self.seriesdirty = False
396 396 self.guardsdirty = False
397 397 self.activeguards = None
398 398
399 399 def diffopts(self, opts={}, patchfn=None):
400 400 diffopts = patchmod.diffopts(self.ui, opts)
401 401 if self.gitmode == 'auto':
402 402 diffopts.upgrade = True
403 403 elif self.gitmode == 'keep':
404 404 pass
405 405 elif self.gitmode in ('yes', 'no'):
406 406 diffopts.git = self.gitmode == 'yes'
407 407 else:
408 408 raise util.Abort(_('mq.git option can be auto/keep/yes/no,'
409 409 ' got %s') % self.gitmode)
410 410 if patchfn:
411 411 diffopts = self.patchopts(diffopts, patchfn)
412 412 return diffopts
413 413
414 414 def patchopts(self, diffopts, *patches):
415 415 """Return a copy of input diff options with git set to true if
416 416 referenced patch is a git patch and should be preserved as such.
417 417 """
418 418 diffopts = diffopts.copy()
419 419 if not diffopts.git and self.gitmode == 'keep':
420 420 for patchfn in patches:
421 421 patchf = self.opener(patchfn, 'r')
422 422 # if the patch was a git patch, refresh it as a git patch
423 423 for line in patchf:
424 424 if line.startswith('diff --git'):
425 425 diffopts.git = True
426 426 break
427 427 patchf.close()
428 428 return diffopts
429 429
430 430 def join(self, *p):
431 431 return os.path.join(self.path, *p)
432 432
433 433 def findseries(self, patch):
434 434 def matchpatch(l):
435 435 l = l.split('#', 1)[0]
436 436 return l.strip() == patch
437 437 for index, l in enumerate(self.fullseries):
438 438 if matchpatch(l):
439 439 return index
440 440 return None
441 441
442 442 guard_re = re.compile(r'\s?#([-+][^-+# \t\r\n\f][^# \t\r\n\f]*)')
443 443
444 444 def parseseries(self):
445 445 self.series = []
446 446 self.seriesguards = []
447 447 for l in self.fullseries:
448 448 h = l.find('#')
449 449 if h == -1:
450 450 patch = l
451 451 comment = ''
452 452 elif h == 0:
453 453 continue
454 454 else:
455 455 patch = l[:h]
456 456 comment = l[h:]
457 457 patch = patch.strip()
458 458 if patch:
459 459 if patch in self.series:
460 460 raise util.Abort(_('%s appears more than once in %s') %
461 461 (patch, self.join(self.seriespath)))
462 462 self.series.append(patch)
463 463 self.seriesguards.append(self.guard_re.findall(comment))
464 464
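`parseseries` above splits each series line at the first `#` and pulls guards out of the comment with `guard_re`. The regex can be exercised standalone; the patch and guard names below are made up for illustration:

```python
import re

# Same pattern as mq's guard_re: after an optional space and '#', a guard
# starts with '+' or '-', then a first character that is not '-', '+', '#',
# or whitespace, then any run of non-'#', non-whitespace characters.
guard_re = re.compile(r'\s?#([-+][^-+# \t\r\n\f][^# \t\r\n\f]*)')

line = 'fix-widget.patch #+stable #-experimental'
h = line.find('#')
patch, comment = line[:h].strip(), line[h:]
print(patch)                      # fix-widget.patch
print(guard_re.findall(comment))  # ['+stable', '-experimental']
```

This matches the split in `parseseries`: the patch name comes from everything before `#`, and each `#+guard`/`#-guard` token in the remainder becomes one entry in `seriesguards`.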
465 465 def checkguard(self, guard):
466 466 if not guard:
467 467 return _('guard cannot be an empty string')
468 468 bad_chars = '# \t\r\n\f'
469 469 first = guard[0]
470 470 if first in '-+':
471 471 return (_('guard %r starts with invalid character: %r') %
472 472 (guard, first))
473 473 for c in bad_chars:
474 474 if c in guard:
475 475 return _('invalid character in guard %r: %r') % (guard, c)
476 476
477 477 def setactive(self, guards):
478 478 for guard in guards:
479 479 bad = self.checkguard(guard)
480 480 if bad:
481 481 raise util.Abort(bad)
482 482 guards = sorted(set(guards))
483 483 self.ui.debug('active guards: %s\n' % ' '.join(guards))
484 484 self.activeguards = guards
485 485 self.guardsdirty = True
486 486
487 487 def active(self):
488 488 if self.activeguards is None:
489 489 self.activeguards = []
490 490 try:
491 491 guards = self.opener.read(self.guardspath).split()
492 492 except IOError, err:
493 493 if err.errno != errno.ENOENT:
494 494 raise
495 495 guards = []
496 496 for i, guard in enumerate(guards):
497 497 bad = self.checkguard(guard)
498 498 if bad:
499 499 self.ui.warn('%s:%d: %s\n' %
500 500 (self.join(self.guardspath), i + 1, bad))
501 501 else:
502 502 self.activeguards.append(guard)
503 503 return self.activeguards
504 504
505 505 def setguards(self, idx, guards):
506 506 for g in guards:
507 507 if len(g) < 2:
508 508 raise util.Abort(_('guard %r too short') % g)
509 509 if g[0] not in '-+':
510 510 raise util.Abort(_('guard %r starts with invalid char') % g)
511 511 bad = self.checkguard(g[1:])
512 512 if bad:
513 513 raise util.Abort(bad)
514 514 drop = self.guard_re.sub('', self.fullseries[idx])
515 515 self.fullseries[idx] = drop + ''.join([' #' + g for g in guards])
516 516 self.parseseries()
517 517 self.seriesdirty = True
518 518
519 519 def pushable(self, idx):
520 520 if isinstance(idx, str):
521 521 idx = self.series.index(idx)
522 522 patchguards = self.seriesguards[idx]
523 523 if not patchguards:
524 524 return True, None
525 525 guards = self.active()
526 526 exactneg = [g for g in patchguards if g[0] == '-' and g[1:] in guards]
527 527 if exactneg:
528 528 return False, repr(exactneg[0])
529 529 pos = [g for g in patchguards if g[0] == '+']
530 530 exactpos = [g for g in pos if g[1:] in guards]
531 531 if pos:
532 532 if exactpos:
533 533 return True, repr(exactpos[0])
534 534 return False, ' '.join(map(repr, pos))
535 535 return True, ''
536 536
537 537 def explainpushable(self, idx, all_patches=False):
538 538 write = all_patches and self.ui.write or self.ui.warn
539 539 if all_patches or self.ui.verbose:
540 540 if isinstance(idx, str):
541 541 idx = self.series.index(idx)
542 542 pushable, why = self.pushable(idx)
543 543 if all_patches and pushable:
544 544 if why is None:
545 545 write(_('allowing %s - no guards in effect\n') %
546 546 self.series[idx])
547 547 else:
548 548 if not why:
549 549 write(_('allowing %s - no matching negative guards\n') %
550 550 self.series[idx])
551 551 else:
552 552 write(_('allowing %s - guarded by %s\n') %
553 553 (self.series[idx], why))
554 554 if not pushable:
555 555 if why:
556 556 write(_('skipping %s - guarded by %s\n') %
557 557 (self.series[idx], why))
558 558 else:
559 559 write(_('skipping %s - no matching guards\n') %
560 560 self.series[idx])
561 561
562 562 def savedirty(self):
563 563 def writelist(items, path):
564 564 fp = self.opener(path, 'w')
565 565 for i in items:
566 566 fp.write("%s\n" % i)
567 567 fp.close()
568 568 if self.applieddirty:
569 569 writelist(map(str, self.applied), self.statuspath)
570 570 self.applieddirty = False
571 571 if self.seriesdirty:
572 572 writelist(self.fullseries, self.seriespath)
573 573 self.seriesdirty = False
574 574 if self.guardsdirty:
575 575 writelist(self.activeguards, self.guardspath)
576 576 self.guardsdirty = False
577 577 if self.added:
578 578 qrepo = self.qrepo()
579 579 if qrepo:
580 580 qrepo[None].add(f for f in self.added if f not in qrepo[None])
581 581 self.added = []
582 582
583 583 def removeundo(self, repo):
584 584 undo = repo.sjoin('undo')
585 585 if not os.path.exists(undo):
586 586 return
587 587 try:
588 588 os.unlink(undo)
589 589 except OSError, inst:
590 590 self.ui.warn(_('error removing undo: %s\n') % str(inst))
591 591
592 592 def backup(self, repo, files, copy=False):
593 593 # backup local changes in --force case
594 594 for f in sorted(files):
595 595 absf = repo.wjoin(f)
596 596 if os.path.lexists(absf):
597 597 self.ui.note(_('saving current version of %s as %s\n') %
598 598 (f, f + '.orig'))
599 599 if copy:
600 600 util.copyfile(absf, absf + '.orig')
601 601 else:
602 602 util.rename(absf, absf + '.orig')
603 603
604 604 def printdiff(self, repo, diffopts, node1, node2=None, files=None,
605 605 fp=None, changes=None, opts={}):
606 606 stat = opts.get('stat')
607 607 m = scmutil.match(repo[node1], files, opts)
608 608 cmdutil.diffordiffstat(self.ui, repo, diffopts, node1, node2, m,
609 609 changes, stat, fp)
610 610
611 611 def mergeone(self, repo, mergeq, head, patch, rev, diffopts):
612 612 # first try just applying the patch
613 613 (err, n) = self.apply(repo, [patch], update_status=False,
614 614 strict=True, merge=rev)
615 615
616 616 if err == 0:
617 617 return (err, n)
618 618
619 619 if n is None:
620 620 raise util.Abort(_("apply failed for patch %s") % patch)
621 621
622 622 self.ui.warn(_("patch didn't work out, merging %s\n") % patch)
623 623
624 624 # apply failed, strip away that rev and merge.
625 625 hg.clean(repo, head)
626 626 strip(self.ui, repo, [n], update=False, backup='strip')
627 627
628 628 ctx = repo[rev]
629 629 ret = hg.merge(repo, rev)
630 630 if ret:
631 631 raise util.Abort(_("update returned %d") % ret)
632 632 n = newcommit(repo, None, ctx.description(), ctx.user(), force=True)
633 633 if n is None:
634 634 raise util.Abort(_("repo commit failed"))
635 635 try:
636 636 ph = patchheader(mergeq.join(patch), self.plainmode)
637 637 except Exception:
638 638 raise util.Abort(_("unable to read %s") % patch)
639 639
640 640 diffopts = self.patchopts(diffopts, patch)
641 641 patchf = self.opener(patch, "w")
642 642 comments = str(ph)
643 643 if comments:
644 644 patchf.write(comments)
645 645 self.printdiff(repo, diffopts, head, n, fp=patchf)
646 646 patchf.close()
647 647 self.removeundo(repo)
648 648 return (0, n)
649 649
650 650 def qparents(self, repo, rev=None):
651 651 """return the mq handled parent or p1
652 652
653 653 In some cases where mq ends up being the parent of a merge, the
654 654 appropriate parent may be p2
655 655 (e.g. an in-progress merge started with mq disabled).
656 656
657 657 If no parents are managed by mq, p1 is returned.
658 658 """
659 659 if rev is None:
660 660 (p1, p2) = repo.dirstate.parents()
661 661 if p2 == nullid:
662 662 return p1
663 663 if not self.applied:
664 664 return None
665 665 return self.applied[-1].node
666 666 p1, p2 = repo.changelog.parents(rev)
667 667 if p2 != nullid and p2 in [x.node for x in self.applied]:
668 668 return p2
669 669 return p1
670 670
671 671 def mergepatch(self, repo, mergeq, series, diffopts):
672 672 if not self.applied:
673 673 # each of the patches merged in will have two parents. This
674 674 # can confuse the qrefresh, qdiff, and strip code because it
675 675 # needs to know which parent is actually in the patch queue.
676 676 # so, we insert a merge marker with only one parent. This way
677 677 # the first patch in the queue is never a merge patch
678 678 #
679 679 pname = ".hg.patches.merge.marker"
680 680 n = newcommit(repo, None, '[mq]: merge marker', force=True)
681 681 self.removeundo(repo)
682 682 self.applied.append(statusentry(n, pname))
683 683 self.applieddirty = True
684 684
685 685 head = self.qparents(repo)
686 686
687 687 for patch in series:
688 688 patch = mergeq.lookup(patch, strict=True)
689 689 if not patch:
690 690 self.ui.warn(_("patch %s does not exist\n") % patch)
691 691 return (1, None)
692 692 pushable, reason = self.pushable(patch)
693 693 if not pushable:
694 694 self.explainpushable(patch, all_patches=True)
695 695 continue
696 696 info = mergeq.isapplied(patch)
697 697 if not info:
698 698 self.ui.warn(_("patch %s is not applied\n") % patch)
699 699 return (1, None)
700 700 rev = info[1]
701 701 err, head = self.mergeone(repo, mergeq, head, patch, rev, diffopts)
702 702 if head:
703 703 self.applied.append(statusentry(head, patch))
704 704 self.applieddirty = True
705 705 if err:
706 706 return (err, head)
707 707 self.savedirty()
708 708 return (0, head)
709 709
710 710 def patch(self, repo, patchfile):
711 711 '''Apply patchfile to the working directory.
712 712 patchfile: name of patch file'''
713 713 files = set()
714 714 try:
715 715 fuzz = patchmod.patch(self.ui, repo, patchfile, strip=1,
716 716 files=files, eolmode=None)
717 717 return (True, list(files), fuzz)
718 718 except Exception, inst:
719 719 self.ui.note(str(inst) + '\n')
720 720 if not self.ui.verbose:
721 721 self.ui.warn(_("patch failed, unable to continue (try -v)\n"))
722 722 self.ui.traceback()
723 723 return (False, list(files), False)
724 724
725 725 def apply(self, repo, series, list=False, update_status=True,
726 726 strict=False, patchdir=None, merge=None, all_files=None,
727 727 tobackup=None, keepchanges=False):
728 728 wlock = lock = tr = None
729 729 try:
730 730 wlock = repo.wlock()
731 731 lock = repo.lock()
732 732 tr = repo.transaction("qpush")
733 733 try:
734 734 ret = self._apply(repo, series, list, update_status,
735 735 strict, patchdir, merge, all_files=all_files,
736 736 tobackup=tobackup, keepchanges=keepchanges)
737 737 tr.close()
738 738 self.savedirty()
739 739 return ret
740 740 except AbortNoCleanup:
741 741 tr.close()
742 742 self.savedirty()
743 743 return 2, repo.dirstate.p1()
744 744 except: # re-raises
745 745 try:
746 746 tr.abort()
747 747 finally:
748 748 repo.invalidate()
749 749 repo.dirstate.invalidate()
750 750 self.invalidate()
751 751 raise
752 752 finally:
753 753 release(tr, lock, wlock)
754 754 self.removeundo(repo)
755 755
756 756 def _apply(self, repo, series, list=False, update_status=True,
757 757 strict=False, patchdir=None, merge=None, all_files=None,
758 758 tobackup=None, keepchanges=False):
759 759 """returns (error, hash)
760 760
761 761 error = 1 for unable to read, 2 for patch failed, 3 for patch
762 762 fuzz. tobackup is None or a set of files to backup before they
763 763 are modified by a patch.
764 764 """
765 765 # TODO unify with commands.py
766 766 if not patchdir:
767 767 patchdir = self.path
768 768 err = 0
769 769 n = None
770 770 for patchname in series:
771 771 pushable, reason = self.pushable(patchname)
772 772 if not pushable:
773 773 self.explainpushable(patchname, all_patches=True)
774 774 continue
775 775 self.ui.status(_("applying %s\n") % patchname)
776 776 pf = os.path.join(patchdir, patchname)
777 777
778 778 try:
779 779 ph = patchheader(self.join(patchname), self.plainmode)
780 780 except IOError:
781 781 self.ui.warn(_("unable to read %s\n") % patchname)
782 782 err = 1
783 783 break
784 784
785 785 message = ph.message
786 786 if not message:
787 787 # The commit message should not be translated
788 788 message = "imported patch %s\n" % patchname
789 789 else:
790 790 if list:
791 791 # The commit message should not be translated
792 792 message.append("\nimported patch %s" % patchname)
793 793 message = '\n'.join(message)
794 794
795 795 if ph.haspatch:
796 796 if tobackup:
797 797 touched = patchmod.changedfiles(self.ui, repo, pf)
798 798 touched = set(touched) & tobackup
799 799 if touched and keepchanges:
800 800 raise AbortNoCleanup(
801 801 _("local changes found, refresh first"))
802 802 self.backup(repo, touched, copy=True)
803 803 tobackup = tobackup - touched
804 804 (patcherr, files, fuzz) = self.patch(repo, pf)
805 805 if all_files is not None:
806 806 all_files.update(files)
807 807 patcherr = not patcherr
808 808 else:
809 809 self.ui.warn(_("patch %s is empty\n") % patchname)
810 810 patcherr, files, fuzz = 0, [], 0
811 811
812 812 if merge and files:
813 813 # Mark as removed/merged and update dirstate parent info
814 814 removed = []
815 815 merged = []
816 816 for f in files:
817 817 if os.path.lexists(repo.wjoin(f)):
818 818 merged.append(f)
819 819 else:
820 820 removed.append(f)
821 821 for f in removed:
822 822 repo.dirstate.remove(f)
823 823 for f in merged:
824 824 repo.dirstate.merge(f)
825 825 p1, p2 = repo.dirstate.parents()
826 826 repo.setparents(p1, merge)
827 827
828 828 if all_files and '.hgsubstate' in all_files:
829 829 wctx = repo['.']
830 830 mctx = actx = repo[None]
831 831 overwrite = False
832 832 mergedsubstate = subrepo.submerge(repo, wctx, mctx, actx,
833 833 overwrite)
834 834 files += mergedsubstate.keys()
835 835
836 836 match = scmutil.matchfiles(repo, files or [])
837 837 oldtip = repo['tip']
838 838 n = newcommit(repo, None, message, ph.user, ph.date, match=match,
839 839 force=True)
840 840 if repo['tip'] == oldtip:
841 841 raise util.Abort(_("qpush exactly duplicates child changeset"))
842 842 if n is None:
843 843 raise util.Abort(_("repository commit failed"))
844 844
845 845 if update_status:
846 846 self.applied.append(statusentry(n, patchname))
847 847
848 848 if patcherr:
849 849 self.ui.warn(_("patch failed, rejects left in working dir\n"))
850 850 err = 2
851 851 break
852 852
853 853 if fuzz and strict:
854 854 self.ui.warn(_("fuzz found when applying patch, stopping\n"))
855 855 err = 3
856 856 break
857 857 return (err, n)
858 858
    def _cleanup(self, patches, numrevs, keep=False):
        if not keep:
            r = self.qrepo()
            if r:
                r[None].forget(patches)
            for p in patches:
                try:
                    os.unlink(self.join(p))
                except OSError, inst:
                    if inst.errno != errno.ENOENT:
                        raise

        qfinished = []
        if numrevs:
            qfinished = self.applied[:numrevs]
            del self.applied[:numrevs]
            self.applieddirty = True

        unknown = []

        for (i, p) in sorted([(self.findseries(p), p) for p in patches],
                             reverse=True):
            if i is not None:
                del self.fullseries[i]
            else:
                unknown.append(p)

        if unknown:
            if numrevs:
                rev = dict((entry.name, entry.node) for entry in qfinished)
                for p in unknown:
                    msg = _('revision %s refers to unknown patches: %s\n')
                    self.ui.warn(msg % (short(rev[p]), p))
            else:
                msg = _('unknown patches: %s\n')
                raise util.Abort(''.join(msg % p for p in unknown))

        self.parseseries()
        self.seriesdirty = True
        return [entry.node for entry in qfinished]

    def _revpatches(self, repo, revs):
        firstrev = repo[self.applied[0].node].rev()
        patches = []
        for i, rev in enumerate(revs):

            if rev < firstrev:
                raise util.Abort(_('revision %d is not managed') % rev)

            ctx = repo[rev]
            base = self.applied[i].node
            if ctx.node() != base:
                msg = _('cannot delete revision %d above applied patches')
                raise util.Abort(msg % rev)

            patch = self.applied[i].name
            for fmt in ('[mq]: %s', 'imported patch %s'):
                if ctx.description() == fmt % patch:
                    msg = _('patch %s finalized without changeset message\n')
                    repo.ui.status(msg % patch)
                    break

            patches.append(patch)
        return patches

    def finish(self, repo, revs):
        # Manually trigger phase computation to ensure phasedefaults is
        # executed before we remove the patches.
        repo._phasecache
        patches = self._revpatches(repo, sorted(revs))
        qfinished = self._cleanup(patches, len(patches))
        if qfinished and repo.ui.configbool('mq', 'secret', False):
            # only use this logic when the secret option is added
            oldqbase = repo[qfinished[0]]
            tphase = repo.ui.config('phases', 'new-commit', phases.draft)
            if oldqbase.phase() > tphase and oldqbase.p1().phase() <= tphase:
                phases.advanceboundary(repo, tphase, qfinished)

    def delete(self, repo, patches, opts):
        if not patches and not opts.get('rev'):
            raise util.Abort(_('qdelete requires at least one revision or '
                               'patch name'))

        realpatches = []
        for patch in patches:
            patch = self.lookup(patch, strict=True)
            info = self.isapplied(patch)
            if info:
                raise util.Abort(_("cannot delete applied patch %s") % patch)
            if patch not in self.series:
                raise util.Abort(_("patch %s not in series file") % patch)
            if patch not in realpatches:
                realpatches.append(patch)

        numrevs = 0
        if opts.get('rev'):
            if not self.applied:
                raise util.Abort(_('no patches applied'))
            revs = scmutil.revrange(repo, opts.get('rev'))
            if len(revs) > 1 and revs[0] > revs[1]:
                revs.reverse()
            revpatches = self._revpatches(repo, revs)
            realpatches += revpatches
            numrevs = len(revpatches)

        self._cleanup(realpatches, numrevs, opts.get('keep'))

    def checktoppatch(self, repo):
        '''check that working directory is at qtip'''
        if self.applied:
            top = self.applied[-1].node
            patch = self.applied[-1].name
            if repo.dirstate.p1() != top:
                raise util.Abort(_("working directory revision is not qtip"))
            return top, patch
        return None, None

    def putsubstate2changes(self, substatestate, changes):
        for files in changes[:3]:
            if '.hgsubstate' in files:
                return # already listed
        # not yet listed
        if substatestate in 'a?':
            changes[1].append('.hgsubstate')
        elif substatestate in 'r':
            changes[2].append('.hgsubstate')
        else: # modified
            changes[0].append('.hgsubstate')

    def checklocalchanges(self, repo, force=False, refresh=True):
        excsuffix = ''
        if refresh:
            excsuffix = ', refresh first'
            # plain versions for the i18n tool to detect them
            _("local changes found, refresh first")
            _("local changed subrepos found, refresh first")
        return checklocalchanges(repo, force, excsuffix)

    _reserved = ('series', 'status', 'guards', '.', '..')
    def checkreservedname(self, name):
        if name in self._reserved:
            raise util.Abort(_('"%s" cannot be used as the name of a patch')
                             % name)
        for prefix in ('.hg', '.mq'):
            if name.startswith(prefix):
                raise util.Abort(_('patch name cannot begin with "%s"')
                                 % prefix)
        for c in ('#', ':'):
            if c in name:
                raise util.Abort(_('"%s" cannot be used in the name of a patch')
                                 % c)

    def checkpatchname(self, name, force=False):
        self.checkreservedname(name)
        if not force and os.path.exists(self.join(name)):
            if os.path.isdir(self.join(name)):
                raise util.Abort(_('"%s" already exists as a directory')
                                 % name)
            else:
                raise util.Abort(_('patch "%s" already exists') % name)

    def checkkeepchanges(self, keepchanges, force):
        if force and keepchanges:
            raise util.Abort(_('cannot use both --force and --keep-changes'))

    def new(self, repo, patchfn, *pats, **opts):
        """options:
        msg: a string or a no-argument function returning a string
        """
        msg = opts.get('msg')
        user = opts.get('user')
        date = opts.get('date')
        if date:
            date = util.parsedate(date)
        diffopts = self.diffopts({'git': opts.get('git')})
        if opts.get('checkname', True):
            self.checkpatchname(patchfn)
        inclsubs = checksubstate(repo)
        if inclsubs:
            substatestate = repo.dirstate['.hgsubstate']
        if opts.get('include') or opts.get('exclude') or pats:
            match = scmutil.match(repo[None], pats, opts)
            # detect missing files in pats
            def badfn(f, msg):
                if f != '.hgsubstate': # .hgsubstate is auto-created
                    raise util.Abort('%s: %s' % (f, msg))
            match.bad = badfn
            changes = repo.status(match=match)
        else:
            changes = self.checklocalchanges(repo, force=True)
        commitfiles = list(inclsubs)
        for files in changes[:3]:
            commitfiles.extend([f for f in files if f != '.hgsubstate'])
        match = scmutil.matchfiles(repo, commitfiles)
        if len(repo[None].parents()) > 1:
            raise util.Abort(_('cannot manage merge changesets'))
        self.checktoppatch(repo)
        insert = self.fullseriesend()
        wlock = repo.wlock()
        try:
            try:
                # if patch file write fails, abort early
                p = self.opener(patchfn, "w")
            except IOError, e:
                raise util.Abort(_('cannot write patch "%s": %s')
                                 % (patchfn, e.strerror))
            try:
                if self.plainmode:
                    if user:
                        p.write("From: " + user + "\n")
                        if not date:
                            p.write("\n")
                    if date:
                        p.write("Date: %d %d\n\n" % date)
                else:
                    p.write("# HG changeset patch\n")
                    p.write("# Parent "
                            + hex(repo[None].p1().node()) + "\n")
                    if user:
                        p.write("# User " + user + "\n")
                    if date:
                        p.write("# Date %s %s\n\n" % date)
                if util.safehasattr(msg, '__call__'):
                    msg = msg()
                    repo.savecommitmessage(msg)
                commitmsg = msg and msg or ("[mq]: %s" % patchfn)
                n = newcommit(repo, None, commitmsg, user, date, match=match,
                              force=True)
                if n is None:
                    raise util.Abort(_("repo commit failed"))
                try:
                    self.fullseries[insert:insert] = [patchfn]
                    self.applied.append(statusentry(n, patchfn))
                    self.parseseries()
                    self.seriesdirty = True
                    self.applieddirty = True
                    if msg:
                        msg = msg + "\n\n"
                        p.write(msg)
                    if commitfiles:
                        parent = self.qparents(repo, n)
                        if inclsubs:
                            self.putsubstate2changes(substatestate, changes)
                        chunks = patchmod.diff(repo, node1=parent, node2=n,
                                               changes=changes, opts=diffopts)
                        for chunk in chunks:
                            p.write(chunk)
                    p.close()
                    r = self.qrepo()
                    if r:
                        r[None].add([patchfn])
                except: # re-raises
                    repo.rollback()
                    raise
            except Exception:
                patchpath = self.join(patchfn)
                try:
                    os.unlink(patchpath)
                except OSError:
                    self.ui.warn(_('error unlinking %s\n') % patchpath)
                raise
            self.removeundo(repo)
        finally:
            release(wlock)

    def isapplied(self, patch):
        """returns (index, rev, patch)"""
        for i, a in enumerate(self.applied):
            if a.name == patch:
                return (i, a.node, a.name)
        return None

    # if the exact patch name does not exist, we try a few
    # variations. If strict is passed, we try only #1
    #
    # 1) a number (as string) to indicate an offset in the series file
    # 2) a unique substring of the patch name was given
    # 3) patchname[-+]num to indicate an offset in the series file
    def lookup(self, patch, strict=False):
        def partialname(s):
            if s in self.series:
                return s
            matches = [x for x in self.series if s in x]
            if len(matches) > 1:
                self.ui.warn(_('patch name "%s" is ambiguous:\n') % s)
                for m in matches:
                    self.ui.warn('  %s\n' % m)
                return None
            if matches:
                return matches[0]
            if self.series and self.applied:
                if s == 'qtip':
                    return self.series[self.seriesend(True) - 1]
                if s == 'qbase':
                    return self.series[0]
            return None

        if patch in self.series:
            return patch

        if not os.path.isfile(self.join(patch)):
            try:
                sno = int(patch)
            except (ValueError, OverflowError):
                pass
            else:
                if -len(self.series) <= sno < len(self.series):
                    return self.series[sno]

            if not strict:
                res = partialname(patch)
                if res:
                    return res
                minus = patch.rfind('-')
                if minus >= 0:
                    res = partialname(patch[:minus])
                    if res:
                        i = self.series.index(res)
                        try:
                            off = int(patch[minus + 1:] or 1)
                        except (ValueError, OverflowError):
                            pass
                        else:
                            if i - off >= 0:
                                return self.series[i - off]
                plus = patch.rfind('+')
                if plus >= 0:
                    res = partialname(patch[:plus])
                    if res:
                        i = self.series.index(res)
                        try:
                            off = int(patch[plus + 1:] or 1)
                        except (ValueError, OverflowError):
                            pass
                        else:
                            if i + off < len(self.series):
                                return self.series[i + off]
        raise util.Abort(_("patch %s not in series") % patch)

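    # A sketch of the offset forms that lookup() resolves, assuming a
    # hypothetical queue whose series file lists: alpha beta gamma
    #   self.lookup('1')       -> 'beta'   (bare number indexes the series)
    #   self.lookup('gamma-2') -> 'alpha'  ('name-num' steps backwards)
    #   self.lookup('alpha+1') -> 'beta'   ('name+num' steps forwards)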
    def push(self, repo, patch=None, force=False, list=False, mergeq=None,
             all=False, move=False, exact=False, nobackup=False,
             keepchanges=False):
        self.checkkeepchanges(keepchanges, force)
        diffopts = self.diffopts()
        wlock = repo.wlock()
        try:
            heads = []
            for hs in repo.branchmap().itervalues():
                heads.extend(hs)
            if not heads:
                heads = [nullid]
            if repo.dirstate.p1() not in heads and not exact:
                self.ui.status(_("(working directory not at a head)\n"))

            if not self.series:
                self.ui.warn(_('no patches in series\n'))
                return 0

            # Suppose our series file is: A B C and the current 'top'
            # patch is B. qpush C should be performed (moving forward);
            # qpush B is a NOP (no change); qpush A is an error (can't
            # go backwards with qpush)
            if patch:
                patch = self.lookup(patch)
                info = self.isapplied(patch)
                if info and info[0] >= len(self.applied) - 1:
                    self.ui.warn(
                        _('qpush: %s is already at the top\n') % patch)
                    return 0

                pushable, reason = self.pushable(patch)
                if pushable:
                    if self.series.index(patch) < self.seriesend():
                        raise util.Abort(
                            _("cannot push to a previous patch: %s") % patch)
                else:
                    if reason:
                        reason = _('guarded by %s') % reason
                    else:
                        reason = _('no matching guards')
                    self.ui.warn(_("cannot push '%s' - %s\n") % (patch, reason))
                    return 1
            elif all:
                patch = self.series[-1]
                if self.isapplied(patch):
                    self.ui.warn(_('all patches are currently applied\n'))
                    return 0

            # Following the above example, starting at 'top' of B:
            # qpush should be performed (pushes C), but a subsequent
            # qpush without an argument is an error (nothing to
            # apply). This allows a loop of "...while hg qpush..." to
            # work as it detects an error when done
            start = self.seriesend()
            if start == len(self.series):
                self.ui.warn(_('patch series already fully applied\n'))
                return 1
            if not force and not keepchanges:
                self.checklocalchanges(repo, refresh=self.applied)

            if exact:
                if keepchanges:
                    raise util.Abort(
                        _("cannot use --exact and --keep-changes together"))
                if move:
                    raise util.Abort(_('cannot use --exact and --move '
                                       'together'))
                if self.applied:
                    raise util.Abort(_('cannot push --exact with applied '
                                       'patches'))
                root = self.series[start]
                target = patchheader(self.join(root), self.plainmode).parent
                if not target:
                    raise util.Abort(
                        _("%s does not have a parent recorded") % root)
                if not repo[target] == repo['.']:
                    hg.update(repo, target)

            if move:
                if not patch:
                    raise util.Abort(_("please specify the patch to move"))
                for fullstart, rpn in enumerate(self.fullseries):
                    # strip markers for patch guards
                    if self.guard_re.split(rpn, 1)[0] == self.series[start]:
                        break
                for i, rpn in enumerate(self.fullseries[fullstart:]):
                    # strip markers for patch guards
                    if self.guard_re.split(rpn, 1)[0] == patch:
                        break
                index = fullstart + i
                assert index < len(self.fullseries)
                fullpatch = self.fullseries[index]
                del self.fullseries[index]
                self.fullseries.insert(fullstart, fullpatch)
                self.parseseries()
                self.seriesdirty = True

            self.applieddirty = True
            if start > 0:
                self.checktoppatch(repo)
            if not patch:
                patch = self.series[start]
                end = start + 1
            else:
                end = self.series.index(patch, start) + 1

            tobackup = set()
            if (not nobackup and force) or keepchanges:
                m, a, r, d = self.checklocalchanges(repo, force=True)
                if keepchanges:
                    tobackup.update(m + a + r + d)
                else:
                    tobackup.update(m + a)

            s = self.series[start:end]
            all_files = set()
            try:
                if mergeq:
                    ret = self.mergepatch(repo, mergeq, s, diffopts)
                else:
                    ret = self.apply(repo, s, list, all_files=all_files,
                                     tobackup=tobackup, keepchanges=keepchanges)
            except: # re-raises
                self.ui.warn(_('cleaning up working directory...'))
                node = repo.dirstate.p1()
                hg.revert(repo, node, None)
                # only remove unknown files that we know we touched or
                # created while patching
                for f in all_files:
                    if f not in repo.dirstate:
                        util.unlinkpath(repo.wjoin(f), ignoremissing=True)
                self.ui.warn(_('done\n'))
                raise

            if not self.applied:
                return ret[0]
            top = self.applied[-1].name
            if ret[0] and ret[0] > 1:
                msg = _("errors during apply, please fix and refresh %s\n")
                self.ui.write(msg % top)
            else:
                self.ui.write(_("now at: %s\n") % top)
            return ret[0]

        finally:
            wlock.release()

    def pop(self, repo, patch=None, force=False, update=True, all=False,
            nobackup=False, keepchanges=False):
        self.checkkeepchanges(keepchanges, force)
        wlock = repo.wlock()
        try:
            if patch:
                # index, rev, patch
                info = self.isapplied(patch)
                if not info:
                    patch = self.lookup(patch)
                    info = self.isapplied(patch)
                if not info:
                    raise util.Abort(_("patch %s is not applied") % patch)

            if not self.applied:
                # Allow qpop -a to work repeatedly,
                # but not qpop without an argument
                self.ui.warn(_("no patches applied\n"))
                return not all

            if all:
                start = 0
            elif patch:
                start = info[0] + 1
            else:
                start = len(self.applied) - 1

            if start >= len(self.applied):
                self.ui.warn(_("qpop: %s is already at the top\n") % patch)
                return

            if not update:
                parents = repo.dirstate.parents()
                rr = [x.node for x in self.applied]
                for p in parents:
                    if p in rr:
                        self.ui.warn(_("qpop: forcing dirstate update\n"))
                        update = True
            else:
                parents = [p.node() for p in repo[None].parents()]
                needupdate = False
                for entry in self.applied[start:]:
                    if entry.node in parents:
                        needupdate = True
                        break
                update = needupdate

            tobackup = set()
            if update:
                m, a, r, d = self.checklocalchanges(
                    repo, force=force or keepchanges)
                if force:
                    if not nobackup:
                        tobackup.update(m + a)
                elif keepchanges:
                    tobackup.update(m + a + r + d)

            self.applieddirty = True
            end = len(self.applied)
            rev = self.applied[start].node

            try:
                heads = repo.changelog.heads(rev)
            except error.LookupError:
                node = short(rev)
                raise util.Abort(_('trying to pop unknown node %s') % node)

            if heads != [self.applied[-1].node]:
                raise util.Abort(_("popping would remove a revision not "
                                   "managed by this patch queue"))
            if not repo[self.applied[-1].node].mutable():
                raise util.Abort(
                    _("popping would remove an immutable revision"),
                    hint=_('see "hg help phases" for details'))

            # we know there are no local changes, so we can make a simplified
            # form of hg.update.
            if update:
                qp = self.qparents(repo, rev)
                ctx = repo[qp]
                m, a, r, d = repo.status(qp, '.')[:4]
                if d:
                    raise util.Abort(_("deletions found between repo revs"))

                tobackup = set(a + m + r) & tobackup
                if keepchanges and tobackup:
                    raise util.Abort(_("local changes found, refresh first"))
                self.backup(repo, tobackup)

                for f in a:
                    util.unlinkpath(repo.wjoin(f), ignoremissing=True)
                    repo.dirstate.drop(f)
                for f in m + r:
                    fctx = ctx[f]
                    repo.wwrite(f, fctx.data(), fctx.flags())
                    repo.dirstate.normal(f)
                repo.setparents(qp, nullid)
            for patch in reversed(self.applied[start:end]):
                self.ui.status(_("popping %s\n") % patch.name)
            del self.applied[start:end]
            strip(self.ui, repo, [rev], update=False, backup='strip')
            for s, state in repo['.'].substate.items():
                repo['.'].sub(s).get(state)
            if self.applied:
                self.ui.write(_("now at: %s\n") % self.applied[-1].name)
            else:
                self.ui.write(_("patch queue now empty\n"))
        finally:
            wlock.release()

    def diff(self, repo, pats, opts):
        top, patch = self.checktoppatch(repo)
        if not top:
            self.ui.write(_("no patches applied\n"))
            return
        qp = self.qparents(repo, top)
        if opts.get('reverse'):
            node1, node2 = None, qp
        else:
            node1, node2 = qp, None
        diffopts = self.diffopts(opts, patch)
        self.printdiff(repo, diffopts, node1, node2, files=pats, opts=opts)

    def refresh(self, repo, pats=None, **opts):
        if not self.applied:
            self.ui.write(_("no patches applied\n"))
            return 1
        msg = opts.get('msg', '').rstrip()
        newuser = opts.get('user')
        newdate = opts.get('date')
        if newdate:
            newdate = '%d %d' % util.parsedate(newdate)
        wlock = repo.wlock()

        try:
            self.checktoppatch(repo)
            (top, patchfn) = (self.applied[-1].node, self.applied[-1].name)
            if repo.changelog.heads(top) != [top]:
                raise util.Abort(_("cannot refresh a revision with children"))
            if not repo[top].mutable():
                raise util.Abort(_("cannot refresh immutable revision"),
                                 hint=_('see "hg help phases" for details'))

            cparents = repo.changelog.parents(top)
            patchparent = self.qparents(repo, top)

            inclsubs = checksubstate(repo, hex(patchparent))
            if inclsubs:
                substatestate = repo.dirstate['.hgsubstate']

            ph = patchheader(self.join(patchfn), self.plainmode)
            diffopts = self.diffopts({'git': opts.get('git')}, patchfn)
            if msg:
                ph.setmessage(msg)
            if newuser:
                ph.setuser(newuser)
            if newdate:
                ph.setdate(newdate)
            ph.setparent(hex(patchparent))

            # only commit new patch when write is complete
            patchf = self.opener(patchfn, 'w', atomictemp=True)

            comments = str(ph)
            if comments:
                patchf.write(comments)

            # update the dirstate in place, strip off the qtip commit
            # and then commit.
            #
            # this should really read:
            #   mm, dd, aa = repo.status(top, patchparent)[:3]
            # but we do it backwards to take advantage of manifest/changelog
            # caching against the next repo.status call
            mm, aa, dd = repo.status(patchparent, top)[:3]
            changes = repo.changelog.read(top)
            man = repo.manifest.read(changes[0])
            aaa = aa[:]
            matchfn = scmutil.match(repo[None], pats, opts)
            # in short mode, we only diff the files included in the
            # patch already plus specified files
            if opts.get('short'):
                # if amending a patch, we start with existing
                # files plus specified files - unfiltered
                match = scmutil.matchfiles(repo, mm + aa + dd + matchfn.files())
                # filter with include/exclude options
                matchfn = scmutil.match(repo[None], opts=opts)
            else:
                match = scmutil.matchall(repo)
            m, a, r, d = repo.status(match=match)[:4]
            mm = set(mm)
            aa = set(aa)
            dd = set(dd)

            # we might end up with files that were added between
            # qtip and the dirstate parent, but then changed in the
            # local dirstate. in this case, we want them to only
            # show up in the added section
            for x in m:
                if x not in aa:
                    mm.add(x)
            # we might end up with files added by the local dirstate that
            # were deleted by the patch. In this case, they should only
            # show up in the changed section.
            for x in a:
                if x in dd:
                    dd.remove(x)
                    mm.add(x)
                else:
                    aa.add(x)
            # make sure any files deleted in the local dirstate
            # are not in the add or change column of the patch
            forget = []
            for x in d + r:
                if x in aa:
                    aa.remove(x)
                    forget.append(x)
                    continue
                else:
                    mm.discard(x)
                dd.add(x)

            m = list(mm)
            r = list(dd)
            a = list(aa)

            # create 'match' that includes the files to be recommitted.
            # apply matchfn via repo.status to ensure correct case handling.
            cm, ca, cr, cd = repo.status(patchparent, match=matchfn)[:4]
            allmatches = set(cm + ca + cr + cd)
            refreshchanges = [x.intersection(allmatches) for x in (mm, aa, dd)]

            files = set(inclsubs)
            for x in refreshchanges:
                files.update([f for f in x if f != '.hgsubstate'])
            match = scmutil.matchfiles(repo, files)

            bmlist = repo[top].bookmarks()

            try:
                if diffopts.git or diffopts.upgrade:
                    copies = {}
                    for dst in a:
                        src = repo.dirstate.copied(dst)
                        # during qfold, the source file for copies may
                        # be removed. Treat this as a simple add.
                        if src is not None and src in repo.dirstate:
                            copies.setdefault(src, []).append(dst)
                        repo.dirstate.add(dst)
                    # remember the copies between patchparent and qtip
                    for dst in aaa:
                        f = repo.file(dst)
                        src = f.renamed(man[dst])
                        if src:
                            copies.setdefault(src[0], []).extend(
                                copies.get(dst, []))
                            if dst in a:
                                copies[src[0]].append(dst)
                        # we can't copy a file created by the patch itself
                        if dst in copies:
                            del copies[dst]
                    for src, dsts in copies.iteritems():
                        for dst in dsts:
                            repo.dirstate.copy(src, dst)
                else:
                    for dst in a:
                        repo.dirstate.add(dst)
                    # Drop useless copy information
                    for f in list(repo.dirstate.copies()):
                        repo.dirstate.copy(None, f)
                for f in r:
                    repo.dirstate.remove(f)
                # if the patch excludes a modified file, mark that
                # file with mtime=0 so status can see it.
                mm = []
                for i in xrange(len(m) - 1, -1, -1):
                    if not matchfn(m[i]):
                        mm.append(m[i])
                        del m[i]
                for f in m:
                    repo.dirstate.normal(f)
                for f in mm:
                    repo.dirstate.normallookup(f)
                for f in forget:
                    repo.dirstate.drop(f)

                if not msg:
                    if not ph.message:
                        message = "[mq]: %s\n" % patchfn
                    else:
                        message = "\n".join(ph.message)
                else:
                    message = msg

                user = ph.user or changes[1]

                oldphase = repo[top].phase()

                # assumes strip can roll itself back if interrupted
                repo.setparents(*cparents)
                self.applied.pop()
                self.applieddirty = True
                strip(self.ui, repo, [top], update=False, backup='strip')
            except: # re-raises
                repo.dirstate.invalidate()
                raise

            try:
                # might be nice to attempt to roll back strip after this

                # Ensure we create a new changeset in the same phase as
                # the old one.
                n = newcommit(repo, oldphase, message, user, ph.date,
                              match=match, force=True)
                # only write patch after a successful commit
                c = [list(x) for x in refreshchanges]
                if inclsubs:
                    self.putsubstate2changes(substatestate, c)
                chunks = patchmod.diff(repo, patchparent,
                                       changes=c, opts=diffopts)
                for chunk in chunks:
                    patchf.write(chunk)
                patchf.close()

                marks = repo._bookmarks
                for bm in bmlist:
                    marks[bm] = n
                marks.write()

                self.applied.append(statusentry(n, patchfn))
            except: # re-raises
                ctx = repo[cparents[0]]
                repo.dirstate.rebuild(ctx.node(), ctx.manifest())
                self.savedirty()
                self.ui.warn(_('refresh interrupted while patch was popped! '
                               '(revert --all, qpush to recover)\n'))
                raise
        finally:
            wlock.release()
            self.removeundo(repo)

    def init(self, repo, create=False):
        if not create and os.path.isdir(self.path):
            raise util.Abort(_("patch queue directory already exists"))
        try:
            os.mkdir(self.path)
        except OSError, inst:
            if inst.errno != errno.EEXIST or not create:
                raise
        if create:
            return self.qrepo(create=True)

    def unapplied(self, repo, patch=None):
        if patch and patch not in self.series:
            raise util.Abort(_("patch %s is not in series file") % patch)
        if not patch:
            start = self.seriesend()
        else:
            start = self.series.index(patch) + 1
        unapplied = []
        for i in xrange(start, len(self.series)):
            pushable, reason = self.pushable(i)
            if pushable:
                unapplied.append((i, self.series[i]))
            self.explainpushable(i)
        return unapplied

    def qseries(self, repo, missing=None, start=0, length=None, status=None,
                summary=False):
        def displayname(pfx, patchname, state):
            if pfx:
                self.ui.write(pfx)
            if summary:
                ph = patchheader(self.join(patchname), self.plainmode)
                msg = ph.message and ph.message[0] or ''
                if self.ui.formatted():
                    width = self.ui.termwidth() - len(pfx) - len(patchname) - 2
                    if width > 0:
                        msg = util.ellipsis(msg, width)
                    else:
                        msg = ''
                self.ui.write(patchname, label='qseries.' + state)
                self.ui.write(': ')
                self.ui.write(msg, label='qseries.message.' + state)
            else:
                self.ui.write(patchname, label='qseries.' + state)
            self.ui.write('\n')

        applied = set([p.name for p in self.applied])
        if length is None:
            length = len(self.series) - start
        if not missing:
            if self.ui.verbose:
                idxwidth = len(str(start + length - 1))
            for i in xrange(start, start + length):
                patch = self.series[i]
                if patch in applied:
                    char, state = 'A', 'applied'
                elif self.pushable(i)[0]:
                    char, state = 'U', 'unapplied'
                else:
                    char, state = 'G', 'guarded'
                pfx = ''
                if self.ui.verbose:
                    pfx = '%*d %s ' % (idxwidth, i, char)
                elif status and status != char:
                    continue
                displayname(pfx, patch, state)
        else:
            msng_list = []
            for root, dirs, files in os.walk(self.path):
                d = root[len(self.path) + 1:]
                for f in files:
                    fl = os.path.join(d, f)
                    if (fl not in self.series and
                        fl not in (self.statuspath, self.seriespath,
                                   self.guardspath)
                        and not fl.startswith('.')):
                        msng_list.append(fl)
            for x in sorted(msng_list):
                pfx = self.ui.verbose and ('D ') or ''
                displayname(pfx, x, 'missing')

    def issaveline(self, l):
        if l.name == '.hg.patches.save.line':
            return True

    def qrepo(self, create=False):
        ui = self.baseui.copy()
        if create or os.path.isdir(self.join(".hg")):
            return hg.repository(ui, path=self.path, create=create)

1778 1778 def restore(self, repo, rev, delete=None, qupdate=None):
1779 1779 desc = repo[rev].description().strip()
1780 1780 lines = desc.splitlines()
1781 1781 i = 0
1782 1782 datastart = None
1783 1783 series = []
1784 1784 applied = []
1785 1785 qpp = None
1786 1786 for i, line in enumerate(lines):
1787 1787 if line == 'Patch Data:':
1788 1788 datastart = i + 1
1789 1789 elif line.startswith('Dirstate:'):
1790 1790 l = line.rstrip()
1791 1791 l = l[10:].split(' ')
1792 1792 qpp = [bin(x) for x in l]
1793 1793 elif datastart is not None:
1794 1794 l = line.rstrip()
1795 1795 n, name = l.split(':', 1)
1796 1796 if n:
1797 1797 applied.append(statusentry(bin(n), name))
1798 1798 else:
1799 1799 series.append(l)
1800 1800 if datastart is None:
1801 1801 self.ui.warn(_("no saved patch data found\n"))
1802 1802 return 1
1803 1803 self.ui.warn(_("restoring status: %s\n") % lines[0])
1804 1804 self.fullseries = series
1805 1805 self.applied = applied
1806 1806 self.parseseries()
1807 1807 self.seriesdirty = True
1808 1808 self.applieddirty = True
1809 1809 heads = repo.changelog.heads()
1810 1810 if delete:
1811 1811 if rev not in heads:
1812 1812 self.ui.warn(_("save entry has children, leaving it alone\n"))
1813 1813 else:
1814 1814 self.ui.warn(_("removing save entry %s\n") % short(rev))
1815 1815 pp = repo.dirstate.parents()
1816 1816 if rev in pp:
1817 1817 update = True
1818 1818 else:
1819 1819 update = False
1820 1820 strip(self.ui, repo, [rev], update=update, backup='strip')
1821 1821 if qpp:
1822 1822 self.ui.warn(_("saved queue repository parents: %s %s\n") %
1823 1823 (short(qpp[0]), short(qpp[1])))
1824 1824 if qupdate:
1825 1825 self.ui.status(_("updating queue directory\n"))
1826 1826 r = self.qrepo()
1827 1827 if not r:
1828 1828 self.ui.warn(_("unable to load queue repository\n"))
1829 1829 return 1
1830 1830 hg.clean(r, qpp[0])
1831 1831
    def save(self, repo, msg=None):
        if not self.applied:
            self.ui.warn(_("save: no patches applied, exiting\n"))
            return 1
        if self.issaveline(self.applied[-1]):
            self.ui.warn(_("status is already saved\n"))
            return 1

        if not msg:
            msg = _("hg patches saved state")
        else:
            msg = "hg patches: " + msg.rstrip('\r\n')
        r = self.qrepo()
        if r:
            pp = r.dirstate.parents()
            msg += "\nDirstate: %s %s" % (hex(pp[0]), hex(pp[1]))
        msg += "\n\nPatch Data:\n"
        msg += ''.join('%s\n' % x for x in self.applied)
        msg += ''.join(':%s\n' % x for x in self.fullseries)
        n = repo.commit(msg, force=True)
        if not n:
            self.ui.warn(_("repo commit failed\n"))
            return 1
        self.applied.append(statusentry(n, '.hg.patches.save.line'))
        self.applieddirty = True
        self.removeundo(repo)

    def fullseriesend(self):
        if self.applied:
            p = self.applied[-1].name
            end = self.findseries(p)
            if end is None:
                return len(self.fullseries)
            return end + 1
        return 0

    def seriesend(self, all_patches=False):
        """If all_patches is False, return the index of the next pushable patch
        in the series, or the series length. If all_patches is True, return the
        index of the first patch past the last applied one.
        """
        end = 0
        def nextpatch(start):
            if all_patches or start >= len(self.series):
                return start
            for i in xrange(start, len(self.series)):
                p, reason = self.pushable(i)
                if p:
                    return i
                self.explainpushable(i)
            return len(self.series)
        if self.applied:
            p = self.applied[-1].name
            try:
                end = self.series.index(p)
            except ValueError:
                return 0
            return nextpatch(end + 1)
        return nextpatch(end)

    def appliedname(self, index):
        pname = self.applied[index].name
        if not self.ui.verbose:
            p = pname
        else:
            p = str(self.series.index(pname)) + " " + pname
        return p

    def qimport(self, repo, files, patchname=None, rev=None, existing=None,
                force=None, git=False):
        def checkseries(patchname):
            if patchname in self.series:
                raise util.Abort(_('patch %s is already in the series file')
                                 % patchname)

        if rev:
            if files:
                raise util.Abort(_('option "-r" not valid when importing '
                                   'files'))
            rev = scmutil.revrange(repo, rev)
            rev.sort(reverse=True)
        elif not files:
            raise util.Abort(_('no files or revisions specified'))
        if (len(files) > 1 or len(rev) > 1) and patchname:
            raise util.Abort(_('option "-n" not valid when importing multiple '
                               'patches'))
        imported = []
        if rev:
            # If mq patches are applied, we can only import revisions
            # that form a linear path to qbase.
            # Otherwise, they should form a linear path to a head.
            heads = repo.changelog.heads(repo.changelog.node(rev[-1]))
            if len(heads) > 1:
                raise util.Abort(_('revision %d is the root of more than one '
                                   'branch') % rev[-1])
            if self.applied:
                base = repo.changelog.node(rev[0])
                if base in [n.node for n in self.applied]:
                    raise util.Abort(_('revision %d is already managed')
                                     % rev[0])
                if heads != [self.applied[-1].node]:
                    raise util.Abort(_('revision %d is not the parent of '
                                       'the queue') % rev[0])
                base = repo.changelog.rev(self.applied[0].node)
                lastparent = repo.changelog.parentrevs(base)[0]
            else:
                if heads != [repo.changelog.node(rev[0])]:
                    raise util.Abort(_('revision %d has unmanaged children')
                                     % rev[0])
                lastparent = None

            diffopts = self.diffopts({'git': git})
            for r in rev:
                if not repo[r].mutable():
                    raise util.Abort(_('revision %d is not mutable') % r,
                                     hint=_('see "hg help phases" for details'))
                p1, p2 = repo.changelog.parentrevs(r)
                n = repo.changelog.node(r)
                if p2 != nullrev:
                    raise util.Abort(_('cannot import merge revision %d') % r)
                if lastparent and lastparent != r:
                    raise util.Abort(_('revision %d is not the parent of %d')
                                     % (r, lastparent))
                lastparent = p1

                if not patchname:
                    patchname = normname('%d.diff' % r)
                checkseries(patchname)
                self.checkpatchname(patchname, force)
                self.fullseries.insert(0, patchname)

                patchf = self.opener(patchname, "w")
                cmdutil.export(repo, [n], fp=patchf, opts=diffopts)
                patchf.close()

                se = statusentry(n, patchname)
                self.applied.insert(0, se)

                self.added.append(patchname)
                imported.append(patchname)
                patchname = None
            if rev and repo.ui.configbool('mq', 'secret', False):
                # if we added anything with --rev, we must move the secret root
                phases.retractboundary(repo, phases.secret, [n])
            self.parseseries()
            self.applieddirty = True
            self.seriesdirty = True

        for i, filename in enumerate(files):
            if existing:
                if filename == '-':
                    raise util.Abort(_('-e is incompatible with import from -'))
                filename = normname(filename)
                self.checkreservedname(filename)
                if util.url(filename).islocal():
                    originpath = self.join(filename)
                    if not os.path.isfile(originpath):
                        raise util.Abort(
                            _("patch %s does not exist") % filename)

                if patchname:
                    self.checkpatchname(patchname, force)

                    self.ui.write(_('renaming %s to %s\n')
                                  % (filename, patchname))
                    util.rename(originpath, self.join(patchname))
                else:
                    patchname = filename

            else:
                if filename == '-' and not patchname:
                    raise util.Abort(_('need --name to import a patch from -'))
                elif not patchname:
                    patchname = normname(os.path.basename(filename.rstrip('/')))
                self.checkpatchname(patchname, force)
                try:
                    if filename == '-':
                        text = self.ui.fin.read()
                    else:
                        fp = hg.openpath(self.ui, filename)
                        text = fp.read()
                        fp.close()
                except (OSError, IOError):
                    raise util.Abort(_("unable to read file %s") % filename)
                patchf = self.opener(patchname, "w")
                patchf.write(text)
                patchf.close()
            if not force:
                checkseries(patchname)
            if patchname not in self.series:
                index = self.fullseriesend() + i
                self.fullseries[index:index] = [patchname]
            self.parseseries()
            self.seriesdirty = True
            self.ui.warn(_("adding %s to series file\n") % patchname)
            self.added.append(patchname)
            imported.append(patchname)
            patchname = None

        self.removeundo(repo)
        return imported

def fixkeepchangesopts(ui, opts):
    if (not ui.configbool('mq', 'keepchanges') or opts.get('force')
        or opts.get('exact')):
        return opts
    opts = dict(opts)
    opts['keep_changes'] = True
    return opts

@command("qdelete|qremove|qrm",
         [('k', 'keep', None, _('keep patch file')),
          ('r', 'rev', [],
           _('stop managing a revision (DEPRECATED)'), _('REV'))],
         _('hg qdelete [-k] [PATCH]...'))
def delete(ui, repo, *patches, **opts):
    """remove patches from queue

    The patches must not be applied, and at least one patch is required. Exact
    patch identifiers must be given. With -k/--keep, the patch files are
    preserved in the patch directory.

    To stop managing a patch and move it into permanent history,
    use the :hg:`qfinish` command."""
    q = repo.mq
    q.delete(repo, patches, opts)
    q.savedirty()
    return 0

@command("qapplied",
         [('1', 'last', None, _('show only the preceding applied patch'))
          ] + seriesopts,
         _('hg qapplied [-1] [-s] [PATCH]'))
def applied(ui, repo, patch=None, **opts):
    """print the patches already applied

    Returns 0 on success."""

    q = repo.mq

    if patch:
        if patch not in q.series:
            raise util.Abort(_("patch %s is not in series file") % patch)
        end = q.series.index(patch) + 1
    else:
        end = q.seriesend(True)

    if opts.get('last') and not end:
        ui.write(_("no patches applied\n"))
        return 1
    elif opts.get('last') and end == 1:
        ui.write(_("only one patch applied\n"))
        return 1
    elif opts.get('last'):
        start = end - 2
        end = 1
    else:
        start = 0

    q.qseries(repo, length=end, start=start, status='A',
              summary=opts.get('summary'))


@command("qunapplied",
         [('1', 'first', None, _('show only the first patch'))] + seriesopts,
         _('hg qunapplied [-1] [-s] [PATCH]'))
def unapplied(ui, repo, patch=None, **opts):
    """print the patches not yet applied

    Returns 0 on success."""

    q = repo.mq
    if patch:
        if patch not in q.series:
            raise util.Abort(_("patch %s is not in series file") % patch)
        start = q.series.index(patch) + 1
    else:
        start = q.seriesend(True)

    if start == len(q.series) and opts.get('first'):
        ui.write(_("all patches applied\n"))
        return 1

    length = opts.get('first') and 1 or None
    q.qseries(repo, start=start, length=length, status='U',
              summary=opts.get('summary'))

@command("qimport",
         [('e', 'existing', None, _('import file in patch directory')),
          ('n', 'name', '',
           _('name of patch file'), _('NAME')),
          ('f', 'force', None, _('overwrite existing files')),
          ('r', 'rev', [],
           _('place existing revisions under mq control'), _('REV')),
          ('g', 'git', None, _('use git extended diff format')),
          ('P', 'push', None, _('qpush after importing'))],
         _('hg qimport [-e] [-n NAME] [-f] [-g] [-P] [-r REV]... [FILE]...'))
def qimport(ui, repo, *filename, **opts):
    """import a patch or existing changeset

    The patch is inserted into the series after the last applied
    patch. If no patches have been applied, qimport prepends the patch
    to the series.

    The patch will have the same name as its source file unless you
    give it a new one with -n/--name.

    You can register an existing patch inside the patch directory with
    the -e/--existing flag.

    With -f/--force, an existing patch of the same name will be
    overwritten.

    An existing changeset may be placed under mq control with -r/--rev
    (e.g. qimport --rev . -n patch will place the current revision
    under mq control). With -g/--git, patches imported with --rev will
    use the git diff format. See the diffs help topic for information
    on why this is important for preserving rename/copy information
    and permission changes. Use :hg:`qfinish` to remove changesets
    from mq control.

    To import a patch from standard input, pass - as the patch file.
    When importing from standard input, a patch name must be specified
    using the --name flag.

    To import an existing patch while renaming it::

      hg qimport -e existing-patch -n new-name

    Returns 0 if import succeeded.
    """
    lock = repo.lock() # because qimport may move phases
    try:
        q = repo.mq
        try:
            imported = q.qimport(
                repo, filename, patchname=opts.get('name'),
                existing=opts.get('existing'), force=opts.get('force'),
                rev=opts.get('rev'), git=opts.get('git'))
        finally:
            q.savedirty()
    finally:
        lock.release()

    if imported and opts.get('push') and not opts.get('rev'):
        return q.push(repo, imported[-1])
    return 0

def qinit(ui, repo, create):
    """initialize a new queue repository

    This command also creates a series file for ordering patches, and
    an mq-specific .hgignore file in the queue repository, to exclude
    the status and guards files (these contain mostly transient state).

    Returns 0 if initialization succeeded."""
    q = repo.mq
    r = q.init(repo, create)
    q.savedirty()
    if r:
        if not os.path.exists(r.wjoin('.hgignore')):
            fp = r.wopener('.hgignore', 'w')
            fp.write('^\\.hg\n')
            fp.write('^\\.mq\n')
            fp.write('syntax: glob\n')
            fp.write('status\n')
            fp.write('guards\n')
            fp.close()
        if not os.path.exists(r.wjoin('series')):
            r.wopener('series', 'w').close()
        r[None].add(['.hgignore', 'series'])
        commands.add(ui, r)
    return 0

@command("^qinit",
         [('c', 'create-repo', None, _('create queue repository'))],
         _('hg qinit [-c]'))
def init(ui, repo, **opts):
    """init a new queue repository (DEPRECATED)

    The queue repository is unversioned by default. If
    -c/--create-repo is specified, qinit will create a separate nested
    repository for patches (qinit -c may also be run later to convert
    an unversioned patch repository into a versioned one). You can use
    qcommit to commit changes to this queue repository.

    This command is deprecated. Without -c, it's implied by other relevant
    commands. With -c, use :hg:`init --mq` instead."""
    return qinit(ui, repo, create=opts.get('create_repo'))

@command("qclone",
         [('', 'pull', None, _('use pull protocol to copy metadata')),
          ('U', 'noupdate', None,
           _('do not update the new working directories')),
          ('', 'uncompressed', None,
           _('use uncompressed transfer (fast over LAN)')),
          ('p', 'patches', '',
           _('location of source patch repository'), _('REPO')),
         ] + commands.remoteopts,
         _('hg qclone [OPTION]... SOURCE [DEST]'))
def clone(ui, source, dest=None, **opts):
    '''clone main and patch repository at same time

    If the source is local, the destination will have no patches
    applied. If the source is remote, this command cannot check
    whether patches are applied in the source, so it cannot guarantee
    that patches are not applied in the destination. If you clone a
    remote repository, make sure it has no patches applied before
    cloning.

    The source patch repository is looked for in <src>/.hg/patches by
    default. Use -p <url> to change.

    The patch directory must be a nested Mercurial repository, as
    would be created by :hg:`init --mq`.

    Returns 0 on success.
    '''
    def patchdir(repo):
        """compute a patch repo url from a repo object"""
        url = repo.url()
        if url.endswith('/'):
            url = url[:-1]
        return url + '/.hg/patches'

    # main repo (destination and sources)
    if dest is None:
        dest = hg.defaultdest(source)
    sr = hg.peer(ui, opts, ui.expandpath(source))

    # patches repo (source only)
    if opts.get('patches'):
        patchespath = ui.expandpath(opts.get('patches'))
    else:
        patchespath = patchdir(sr)
    try:
        hg.peer(ui, opts, patchespath)
    except error.RepoError:
        raise util.Abort(_('versioned patch repository not found'
                           ' (see init --mq)'))
    qbase, destrev = None, None
    if sr.local():
        repo = sr.local()
        if repo.mq.applied and repo[qbase].phase() != phases.secret:
            qbase = repo.mq.applied[0].node
            if not hg.islocal(dest):
                heads = set(repo.heads())
                destrev = list(heads.difference(repo.heads(qbase)))
                destrev.append(repo.changelog.parents(qbase)[0])
    elif sr.capable('lookup'):
        try:
            qbase = sr.lookup('qbase')
        except error.RepoError:
            pass

    ui.note(_('cloning main repository\n'))
    sr, dr = hg.clone(ui, opts, sr.url(), dest,
                      pull=opts.get('pull'),
                      rev=destrev,
                      update=False,
                      stream=opts.get('uncompressed'))

    ui.note(_('cloning patch repository\n'))
    hg.clone(ui, opts, opts.get('patches') or patchdir(sr), patchdir(dr),
             pull=opts.get('pull'), update=not opts.get('noupdate'),
             stream=opts.get('uncompressed'))

    if dr.local():
        repo = dr.local()
        if qbase:
            ui.note(_('stripping applied patches from destination '
                      'repository\n'))
            strip(ui, repo, [qbase], update=False, backup=None)
        if not opts.get('noupdate'):
            ui.note(_('updating destination repository\n'))
            hg.update(repo, repo.changelog.tip())

@command("qcommit|qci",
         commands.table["^commit|ci"][1],
         _('hg qcommit [OPTION]... [FILE]...'))
def commit(ui, repo, *pats, **opts):
    """commit changes in the queue repository (DEPRECATED)

    This command is deprecated; use :hg:`commit --mq` instead."""
    q = repo.mq
    r = q.qrepo()
    if not r:
        raise util.Abort('no queue repository')
    commands.commit(r.ui, r, *pats, **opts)

@command("qseries",
         [('m', 'missing', None, _('print patches not in series')),
         ] + seriesopts,
         _('hg qseries [-ms]'))
def series(ui, repo, **opts):
    """print the entire series file

    Returns 0 on success."""
    repo.mq.qseries(repo, missing=opts.get('missing'),
                    summary=opts.get('summary'))
    return 0

@command("qtop", seriesopts, _('hg qtop [-s]'))
def top(ui, repo, **opts):
    """print the name of the current patch

    Returns 0 on success."""
    q = repo.mq
    t = q.applied and q.seriesend(True) or 0
    if t:
        q.qseries(repo, start=t - 1, length=1, status='A',
                  summary=opts.get('summary'))
    else:
        ui.write(_("no patches applied\n"))
        return 1

@command("qnext", seriesopts, _('hg qnext [-s]'))
def next(ui, repo, **opts):
    """print the name of the next pushable patch

    Returns 0 on success."""
    q = repo.mq
    end = q.seriesend()
    if end == len(q.series):
        ui.write(_("all patches applied\n"))
        return 1
    q.qseries(repo, start=end, length=1, summary=opts.get('summary'))

@command("qprev", seriesopts, _('hg qprev [-s]'))
def prev(ui, repo, **opts):
    """print the name of the preceding applied patch

    Returns 0 on success."""
    q = repo.mq
    l = len(q.applied)
    if l == 1:
        ui.write(_("only one patch applied\n"))
        return 1
    if not l:
        ui.write(_("no patches applied\n"))
        return 1
    idx = q.series.index(q.applied[-2].name)
    q.qseries(repo, start=idx, length=1, status='A',
              summary=opts.get('summary'))

def setupheaderopts(ui, opts):
    if not opts.get('user') and opts.get('currentuser'):
        opts['user'] = ui.username()
    if not opts.get('date') and opts.get('currentdate'):
        opts['date'] = "%d %d" % util.makedate()

@command("^qnew",
         [('e', 'edit', None, _('edit commit message')),
          ('f', 'force', None, _('import uncommitted changes (DEPRECATED)')),
          ('g', 'git', None, _('use git extended diff format')),
          ('U', 'currentuser', None, _('add "From: <current user>" to patch')),
          ('u', 'user', '',
           _('add "From: <USER>" to patch'), _('USER')),
          ('D', 'currentdate', None, _('add "Date: <current date>" to patch')),
          ('d', 'date', '',
           _('add "Date: <DATE>" to patch'), _('DATE'))
         ] + commands.walkopts + commands.commitopts,
         _('hg qnew [-e] [-m TEXT] [-l FILE] PATCH [FILE]...'))
def new(ui, repo, patch, *args, **opts):
    """create a new patch

    qnew creates a new patch on top of the currently-applied patch (if
    any). The patch will be initialized with any outstanding changes
    in the working directory. You may also use -I/--include,
    -X/--exclude, and/or a list of files after the patch name to add
    only changes to matching files to the new patch, leaving the rest
    as uncommitted modifications.

    -u/--user and -d/--date can be used to set the (given) user and
    date, respectively. -U/--currentuser and -D/--currentdate set user
    to current user and date to current date.

    -e/--edit, -m/--message or -l/--logfile set the patch header as
    well as the commit message. If none is specified, the header is
    empty and the commit message is '[mq]: PATCH'.

    Use the -g/--git option to keep the patch in the git extended diff
    format. Read the diffs help topic for more information on why this
    is important for preserving permission changes and copy/rename
    information.

    Returns 0 on successful creation of a new patch.
    """
    msg = cmdutil.logmessage(ui, opts)
    def getmsg():
        return ui.edit(msg, opts.get('user') or ui.username())
    q = repo.mq
    if opts.get('edit'):
        opts['msg'] = getmsg
    else:
        opts['msg'] = msg
    setupheaderopts(ui, opts)
    q.new(repo, patch, *args, **opts)
    q.savedirty()
    return 0

@command("^qrefresh",
         [('e', 'edit', None, _('edit commit message')),
          ('g', 'git', None, _('use git extended diff format')),
          ('s', 'short', None,
           _('refresh only files already in the patch and specified files')),
          ('U', 'currentuser', None,
           _('add/update author field in patch with current user')),
          ('u', 'user', '',
           _('add/update author field in patch with given user'), _('USER')),
          ('D', 'currentdate', None,
           _('add/update date field in patch with current date')),
          ('d', 'date', '',
           _('add/update date field in patch with given date'), _('DATE'))
         ] + commands.walkopts + commands.commitopts,
         _('hg qrefresh [-I] [-X] [-e] [-m TEXT] [-l FILE] [-s] [FILE]...'))
def refresh(ui, repo, *pats, **opts):
    """update the current patch

    If any file patterns are provided, the refreshed patch will
    contain only the modifications that match those patterns; the
    remaining modifications will remain in the working directory.

    If -s/--short is specified, files currently included in the patch
    will be refreshed just like matched files and remain in the patch.

    If -e/--edit is specified, Mercurial will start your configured editor for
    you to enter a message. In case qrefresh fails, you will find a backup of
    your message in ``.hg/last-message.txt``.

    hg add/remove/copy/rename work as usual, though you might want to
    use git-style patches (-g/--git or [diff] git=1) to track copies
    and renames. See the diffs help topic for more information on the
    git diff format.

    Returns 0 on success.
    """
    q = repo.mq
    message = cmdutil.logmessage(ui, opts)
    if opts.get('edit'):
        if not q.applied:
            ui.write(_("no patches applied\n"))
            return 1
        if message:
            raise util.Abort(_('option "-e" incompatible with "-m" or "-l"'))
        patch = q.applied[-1].name
        ph = patchheader(q.join(patch), q.plainmode)
        message = ui.edit('\n'.join(ph.message), ph.user or ui.username())
        # We don't want to lose the patch message if qrefresh fails (issue2062)
        repo.savecommitmessage(message)
    setupheaderopts(ui, opts)
    wlock = repo.wlock()
    try:
        ret = q.refresh(repo, pats, msg=message, **opts)
        q.savedirty()
        return ret
    finally:
        wlock.release()

@command("^qdiff",
         commands.diffopts + commands.diffopts2 + commands.walkopts,
         _('hg qdiff [OPTION]... [FILE]...'))
def diff(ui, repo, *pats, **opts):
    """diff of the current patch and subsequent modifications

    Shows a diff which includes the current patch as well as any
    changes which have been made in the working directory since the
    last refresh (thus showing what the current patch would become
    after a qrefresh).

    Use :hg:`diff` if you only want to see the changes made since the
    last qrefresh, or :hg:`export qtip` if you want to see changes
    made by the current patch without including changes made since the
    qrefresh.

    Returns 0 on success.
    """
    repo.mq.diff(repo, pats, opts)
    return 0

@command('qfold',
         [('e', 'edit', None, _('edit patch header')),
          ('k', 'keep', None, _('keep folded patch files')),
         ] + commands.commitopts,
         _('hg qfold [-e] [-k] [-m TEXT] [-l FILE] PATCH...'))
def fold(ui, repo, *files, **opts):
    """fold the named patches into the current patch

    Patches must not yet be applied. Each patch will be successively
    applied to the current patch in the order given. If all the
    patches apply successfully, the current patch will be refreshed
    with the new cumulative patch, and the folded patches will be
    deleted. With -k/--keep, the folded patch files will not be
    removed afterwards.

    The header for each folded patch will be concatenated with the
    current patch header, separated by a line of ``* * *``.

    Returns 0 on success."""
    q = repo.mq
    if not files:
        raise util.Abort(_('qfold requires at least one patch name'))
    if not q.checktoppatch(repo)[0]:
        raise util.Abort(_('no patches applied'))
    q.checklocalchanges(repo)

    message = cmdutil.logmessage(ui, opts)
    if opts.get('edit'):
        if message:
            raise util.Abort(_('option "-e" incompatible with "-m" or "-l"'))

    parent = q.lookup('qtip')
    patches = []
    messages = []
    for f in files:
        p = q.lookup(f)
        if p in patches or p == parent:
            ui.warn(_('skipping already folded patch %s\n') % p)
        if q.isapplied(p):
            raise util.Abort(_('qfold cannot fold already applied patch %s')
                             % p)
        patches.append(p)

    for p in patches:
        if not message:
            ph = patchheader(q.join(p), q.plainmode)
            if ph.message:
                messages.append(ph.message)
        pf = q.join(p)
        (patchsuccess, files, fuzz) = q.patch(repo, pf)
        if not patchsuccess:
            raise util.Abort(_('error folding patch %s') % p)

    if not message:
        ph = patchheader(q.join(parent), q.plainmode)
        message, user = ph.message, ph.user
        for msg in messages:
            if msg:
                if message:
                    message.append('* * *')
                message.extend(msg)
        message = '\n'.join(message)

    if opts.get('edit'):
        message = ui.edit(message, user or ui.username())
        repo.savecommitmessage(message)

    diffopts = q.patchopts(q.diffopts(), *patches)
    wlock = repo.wlock()
    try:
        q.refresh(repo, msg=message, git=diffopts.git)
        q.delete(repo, patches, opts)
        q.savedirty()
    finally:
        wlock.release()

@command("qgoto",
         [('', 'keep-changes', None,
           _('tolerate non-conflicting local changes')),
          ('f', 'force', None, _('overwrite any local changes')),
          ('', 'no-backup', None, _('do not save backup copies of files'))],
         _('hg qgoto [OPTION]... PATCH'))
def goto(ui, repo, patch, **opts):
    '''push or pop patches until named patch is at top of stack

    Returns 0 on success.'''
    opts = fixkeepchangesopts(ui, opts)
    q = repo.mq
    patch = q.lookup(patch)
    nobackup = opts.get('no_backup')
    keepchanges = opts.get('keep_changes')
    if q.isapplied(patch):
        ret = q.pop(repo, patch, force=opts.get('force'), nobackup=nobackup,
                    keepchanges=keepchanges)
    else:
        ret = q.push(repo, patch, force=opts.get('force'), nobackup=nobackup,
                     keepchanges=keepchanges)
    q.savedirty()
    return ret

2612 2612 @command("qguard",
2613 2613 [('l', 'list', None, _('list all patches and guards')),
2614 2614 ('n', 'none', None, _('drop all guards'))],
2615 2615 _('hg qguard [-l] [-n] [PATCH] [-- [+GUARD]... [-GUARD]...]'))
2616 2616 def guard(ui, repo, *args, **opts):
2617 2617 '''set or print guards for a patch
2618 2618
2619 2619 Guards control whether a patch can be pushed. A patch with no
2620 2620 guards is always pushed. A patch with a positive guard ("+foo") is
2621 2621 pushed only if the :hg:`qselect` command has activated it. A patch with
2622 2622 a negative guard ("-foo") is never pushed if the :hg:`qselect` command
2623 2623 has activated it.
2624 2624
2625 2625 With no arguments, print the currently active guards.
2626 2626 With arguments, set guards for the named patch.
2627 2627
2628 2628 .. note::
2629 2629
2630 2630 Specifying negative guards now requires '--'.
2631 2631
2632 2632 To set guards on another patch::
2633 2633
2634 2634 hg qguard other.patch -- +2.6.17 -stable
2635 2635
2636 2636 Returns 0 on success.
2637 2637 '''
2638 2638 def status(idx):
2639 2639 guards = q.seriesguards[idx] or ['unguarded']
2640 2640 if q.series[idx] in applied:
2641 2641 state = 'applied'
2642 2642 elif q.pushable(idx)[0]:
2643 2643 state = 'unapplied'
2644 2644 else:
2645 2645 state = 'guarded'
2646 2646 label = 'qguard.patch qguard.%s qseries.%s' % (state, state)
2647 2647 ui.write('%s: ' % ui.label(q.series[idx], label))
2648 2648
2649 2649 for i, guard in enumerate(guards):
2650 2650 if guard.startswith('+'):
2651 2651 ui.write(guard, label='qguard.positive')
2652 2652 elif guard.startswith('-'):
2653 2653 ui.write(guard, label='qguard.negative')
2654 2654 else:
2655 2655 ui.write(guard, label='qguard.unguarded')
2656 2656 if i != len(guards) - 1:
2657 2657 ui.write(' ')
2658 2658 ui.write('\n')
2659 2659 q = repo.mq
2660 2660 applied = set(p.name for p in q.applied)
2661 2661 patch = None
2662 2662 args = list(args)
2663 2663 if opts.get('list'):
2664 2664 if args or opts.get('none'):
2665 2665 raise util.Abort(_('cannot mix -l/--list with options or '
2666 2666 'arguments'))
2667 2667 for i in xrange(len(q.series)):
2668 2668 status(i)
2669 2669 return
2670 2670 if not args or args[0][0:1] in '-+':
2671 2671 if not q.applied:
2672 2672 raise util.Abort(_('no patches applied'))
2673 2673 patch = q.applied[-1].name
2674 2674 if patch is None and args[0][0:1] not in '-+':
2675 2675 patch = args.pop(0)
2676 2676 if patch is None:
2677 2677 raise util.Abort(_('no patch to work with'))
2678 2678 if args or opts.get('none'):
2679 2679 idx = q.findseries(patch)
2680 2680 if idx is None:
2681 2681 raise util.Abort(_('no patch named %s') % patch)
2682 2682 q.setguards(idx, args)
2683 2683 q.savedirty()
2684 2684 else:
2685 2685 status(q.series.index(q.lookup(patch)))
2686 2686
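The guard semantics documented in the qguard and qselect docstrings (a patch with no guards is always pushed, a matching negative guard always blocks, and positive guards require a matching active guard) can be sketched as a small standalone function. This is an illustrative reimplementation of the documented rules, not mq's actual `pushable` code:

```python
def pushable(guards, active):
    """Decide whether a patch may be pushed, given its guard list
    (strings like '+stable' or '-stable') and the set of currently
    active guard names."""
    if not guards:
        return True  # unguarded patches are always pushed
    # a matching negative guard always blocks the push
    if any(g[1:] in active for g in guards if g.startswith('-')):
        return False
    positives = [g[1:] for g in guards if g.startswith('+')]
    if positives:
        # at least one positive guard must be active
        return any(p in active for p in positives)
    return True

# mirrors the docstring example after "qselect stable":
print(pushable(['-stable'], {'stable'}))  # False: foo.patch is skipped
print(pushable(['+stable'], {'stable'}))  # True:  bar.patch is pushed
print(pushable(['+stable'], set()))       # False: no active guards, positive skipped
```

Note the asymmetry when no guards are active: positively guarded patches are skipped, while negatively guarded ones are still pushed.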
2687 2687 @command("qheader", [], _('hg qheader [PATCH]'))
2688 2688 def header(ui, repo, patch=None):
2689 2689 """print the header of the topmost or specified patch
2690 2690
2691 2691 Returns 0 on success."""
2692 2692 q = repo.mq
2693 2693
2694 2694 if patch:
2695 2695 patch = q.lookup(patch)
2696 2696 else:
2697 2697 if not q.applied:
2698 2698 ui.write(_('no patches applied\n'))
2699 2699 return 1
2700 2700 patch = q.lookup('qtip')
2701 2701 ph = patchheader(q.join(patch), q.plainmode)
2702 2702
2703 2703 ui.write('\n'.join(ph.message) + '\n')
2704 2704
2705 2705 def lastsavename(path):
2706 2706 (directory, base) = os.path.split(path)
2707 2707 names = os.listdir(directory)
2708 2708 namere = re.compile("%s.([0-9]+)" % base)
2709 2709 maxindex = None
2710 2710 maxname = None
2711 2711 for f in names:
2712 2712 m = namere.match(f)
2713 2713 if m:
2714 2714 index = int(m.group(1))
2715 2715 if maxindex is None or index > maxindex:
2716 2716 maxindex = index
2717 2717 maxname = f
2718 2718 if maxname:
2719 2719 return (os.path.join(directory, maxname), maxindex)
2720 2720 return (None, None)
2721 2721
2722 2722 def savename(path):
2723 2723 (last, index) = lastsavename(path)
2724 2724 if last is None:
2725 2725 index = 0
2726 2726 newpath = path + ".%d" % (index + 1)
2727 2727 return newpath
2728 2728
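The `lastsavename`/`savename` pair above picks the next numbered backup path (`path.1`, `path.2`, ...) by scanning the directory for the highest existing suffix. A filesystem-free sketch of that numbering, taking the existing names as a list instead of calling `os.listdir`:

```python
import re

def nextsavename(path, existing):
    """Pure sketch of lastsavename/savename: given a base path and the
    file names already present in its directory, return 'path.N' for the
    next unused index (starting at 1)."""
    namere = re.compile(r"%s\.([0-9]+)$" % re.escape(path))
    indexes = [int(m.group(1)) for m in map(namere.match, existing) if m]
    return "%s.%d" % (path, max(indexes) + 1 if indexes else 1)

print(nextsavename("patches", ["patches.1", "patches.3", "series"]))  # patches.4
```

Unlike the original regex, this sketch escapes the base name and anchors the match, so "patchesX3" would not be mistaken for a numbered save.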
2729 2729 @command("^qpush",
2730 2730 [('', 'keep-changes', None,
2731 2731 _('tolerate non-conflicting local changes')),
2732 2732 ('f', 'force', None, _('apply on top of local changes')),
2733 2733 ('e', 'exact', None,
2734 2734 _('apply the target patch to its recorded parent')),
2735 2735 ('l', 'list', None, _('list patch name in commit text')),
2736 2736 ('a', 'all', None, _('apply all patches')),
2737 2737 ('m', 'merge', None, _('merge from another queue (DEPRECATED)')),
2738 2738 ('n', 'name', '',
2739 2739 _('merge queue name (DEPRECATED)'), _('NAME')),
2740 2740 ('', 'move', None,
2741 2741 _('reorder patch series and apply only the patch')),
2742 2742 ('', 'no-backup', None, _('do not save backup copies of files'))],
2743 2743 _('hg qpush [-f] [-l] [-a] [--move] [PATCH | INDEX]'))
2744 2744 def push(ui, repo, patch=None, **opts):
2745 2745 """push the next patch onto the stack
2746 2746
2747 2747 By default, abort if the working directory contains uncommitted
2748 2748 changes. With --keep-changes, abort only if the uncommitted files
2749 2749 overlap with patched files. With -f/--force, backup and patch over
2750 2750 uncommitted changes.
2751 2751
2752 2752 Return 0 on success.
2753 2753 """
2754 2754 q = repo.mq
2755 2755 mergeq = None
2756 2756
2757 2757 opts = fixkeepchangesopts(ui, opts)
2758 2758 if opts.get('merge'):
2759 2759 if opts.get('name'):
2760 2760 newpath = repo.join(opts.get('name'))
2761 2761 else:
2762 2762 newpath, i = lastsavename(q.path)
2763 2763 if not newpath:
2764 2764 ui.warn(_("no saved queues found, please use -n\n"))
2765 2765 return 1
2766 2766 mergeq = queue(ui, repo.baseui, repo.path, newpath)
2767 2767 ui.warn(_("merging with queue at: %s\n") % mergeq.path)
2768 2768 ret = q.push(repo, patch, force=opts.get('force'), list=opts.get('list'),
2769 2769 mergeq=mergeq, all=opts.get('all'), move=opts.get('move'),
2770 2770 exact=opts.get('exact'), nobackup=opts.get('no_backup'),
2771 2771 keepchanges=opts.get('keep_changes'))
2772 2772 return ret
2773 2773
2774 2774 @command("^qpop",
2775 2775 [('a', 'all', None, _('pop all patches')),
2776 2776 ('n', 'name', '',
2777 2777 _('queue name to pop (DEPRECATED)'), _('NAME')),
2778 2778 ('', 'keep-changes', None,
2779 2779 _('tolerate non-conflicting local changes')),
2780 2780 ('f', 'force', None, _('forget any local changes to patched files')),
2781 2781 ('', 'no-backup', None, _('do not save backup copies of files'))],
2782 2782 _('hg qpop [-a] [-f] [PATCH | INDEX]'))
2783 2783 def pop(ui, repo, patch=None, **opts):
2784 2784 """pop the current patch off the stack
2785 2785
2786 2786 Without argument, pops off the top of the patch stack. If given a
2787 2787 patch name, keeps popping off patches until the named patch is at
2788 2788 the top of the stack.
2789 2789
2790 2790 By default, abort if the working directory contains uncommitted
2791 2791 changes. With --keep-changes, abort only if the uncommitted files
2792 2792 overlap with patched files. With -f/--force, backup and discard
2793 2793 changes made to such files.
2794 2794
2795 2795 Return 0 on success.
2796 2796 """
2797 2797 opts = fixkeepchangesopts(ui, opts)
2798 2798 localupdate = True
2799 2799 if opts.get('name'):
2800 2800 q = queue(ui, repo.baseui, repo.path, repo.join(opts.get('name')))
2801 2801 ui.warn(_('using patch queue: %s\n') % q.path)
2802 2802 localupdate = False
2803 2803 else:
2804 2804 q = repo.mq
2805 2805 ret = q.pop(repo, patch, force=opts.get('force'), update=localupdate,
2806 2806 all=opts.get('all'), nobackup=opts.get('no_backup'),
2807 2807 keepchanges=opts.get('keep_changes'))
2808 2808 q.savedirty()
2809 2809 return ret
2810 2810
2811 2811 @command("qrename|qmv", [], _('hg qrename PATCH1 [PATCH2]'))
2812 2812 def rename(ui, repo, patch, name=None, **opts):
2813 2813 """rename a patch
2814 2814
2815 2815 With one argument, renames the current patch to PATCH1.
2816 2816 With two arguments, renames PATCH1 to PATCH2.
2817 2817
2818 2818 Returns 0 on success."""
2819 2819 q = repo.mq
2820 2820 if not name:
2821 2821 name = patch
2822 2822 patch = None
2823 2823
2824 2824 if patch:
2825 2825 patch = q.lookup(patch)
2826 2826 else:
2827 2827 if not q.applied:
2828 2828 ui.write(_('no patches applied\n'))
2829 2829 return
2830 2830 patch = q.lookup('qtip')
2831 2831 absdest = q.join(name)
2832 2832 if os.path.isdir(absdest):
2833 2833 name = normname(os.path.join(name, os.path.basename(patch)))
2834 2834 absdest = q.join(name)
2835 2835 q.checkpatchname(name)
2836 2836
2837 2837 ui.note(_('renaming %s to %s\n') % (patch, name))
2838 2838 i = q.findseries(patch)
2839 2839 guards = q.guard_re.findall(q.fullseries[i])
2840 2840 q.fullseries[i] = name + ''.join([' #' + g for g in guards])
2841 2841 q.parseseries()
2842 2842 q.seriesdirty = True
2843 2843
2844 2844 info = q.isapplied(patch)
2845 2845 if info:
2846 2846 q.applied[info[0]] = statusentry(info[1], name)
2847 2847 q.applieddirty = True
2848 2848
2849 2849 destdir = os.path.dirname(absdest)
2850 2850 if not os.path.isdir(destdir):
2851 2851 os.makedirs(destdir)
2852 2852 util.rename(q.join(patch), absdest)
2853 2853 r = q.qrepo()
2854 2854 if r and patch in r.dirstate:
2855 2855 wctx = r[None]
2856 2856 wlock = r.wlock()
2857 2857 try:
2858 2858 if r.dirstate[patch] == 'a':
2859 2859 r.dirstate.drop(patch)
2860 2860 r.dirstate.add(name)
2861 2861 else:
2862 2862 wctx.copy(patch, name)
2863 2863 wctx.forget([patch])
2864 2864 finally:
2865 2865 wlock.release()
2866 2866
2867 2867 q.savedirty()
2868 2868
2869 2869 @command("qrestore",
2870 2870 [('d', 'delete', None, _('delete save entry')),
2871 2871 ('u', 'update', None, _('update queue working directory'))],
2872 2872 _('hg qrestore [-d] [-u] REV'))
2873 2873 def restore(ui, repo, rev, **opts):
2874 2874 """restore the queue state saved by a revision (DEPRECATED)
2875 2875
2876 2876 This command is deprecated, use :hg:`rebase` instead."""
2877 2877 rev = repo.lookup(rev)
2878 2878 q = repo.mq
2879 2879 q.restore(repo, rev, delete=opts.get('delete'),
2880 2880 qupdate=opts.get('update'))
2881 2881 q.savedirty()
2882 2882 return 0
2883 2883
2884 2884 @command("qsave",
2885 2885 [('c', 'copy', None, _('copy patch directory')),
2886 2886 ('n', 'name', '',
2887 2887 _('copy directory name'), _('NAME')),
2888 2888 ('e', 'empty', None, _('clear queue status file')),
2889 2889 ('f', 'force', None, _('force copy'))] + commands.commitopts,
2890 2890 _('hg qsave [-m TEXT] [-l FILE] [-c] [-n NAME] [-e] [-f]'))
2891 2891 def save(ui, repo, **opts):
2892 2892 """save current queue state (DEPRECATED)
2893 2893
2894 2894 This command is deprecated, use :hg:`rebase` instead."""
2895 2895 q = repo.mq
2896 2896 message = cmdutil.logmessage(ui, opts)
2897 2897 ret = q.save(repo, msg=message)
2898 2898 if ret:
2899 2899 return ret
2900 2900 q.savedirty() # save to .hg/patches before copying
2901 2901 if opts.get('copy'):
2902 2902 path = q.path
2903 2903 if opts.get('name'):
2904 2904 newpath = os.path.join(q.basepath, opts.get('name'))
2905 2905 if os.path.exists(newpath):
2906 2906 if not os.path.isdir(newpath):
2907 2907 raise util.Abort(_('destination %s exists and is not '
2908 2908 'a directory') % newpath)
2909 2909 if not opts.get('force'):
2910 2910 raise util.Abort(_('destination %s exists, '
2911 2911 'use -f to force') % newpath)
2912 2912 else:
2913 2913 newpath = savename(path)
2914 2914 ui.warn(_("copy %s to %s\n") % (path, newpath))
2915 2915 util.copyfiles(path, newpath)
2916 2916 if opts.get('empty'):
2917 2917 del q.applied[:]
2918 2918 q.applieddirty = True
2919 2919 q.savedirty()
2920 2920 return 0
2921 2921
2922 2922
2923 2923 @command("qselect",
2924 2924 [('n', 'none', None, _('disable all guards')),
2925 2925 ('s', 'series', None, _('list all guards in series file')),
2926 2926 ('', 'pop', None, _('pop to before first guarded applied patch')),
2927 2927 ('', 'reapply', None, _('pop, then reapply patches'))],
2928 2928 _('hg qselect [OPTION]... [GUARD]...'))
2929 2929 def select(ui, repo, *args, **opts):
2930 2930 '''set or print guarded patches to push
2931 2931
2932 2932 Use the :hg:`qguard` command to set or print guards on patch, then use
2933 2933 qselect to tell mq which guards to use. A patch will be pushed if
2934 2934 it has no guards or any positive guards match the currently
2935 2935 selected guard, but will not be pushed if any negative guards
2936 2936 match the current guard. For example::
2937 2937
2938 2938 qguard foo.patch -- -stable (negative guard)
2939 2939 qguard bar.patch +stable (positive guard)
2940 2940 qselect stable
2941 2941
2942 2942 This activates the "stable" guard. mq will skip foo.patch (because
2943 2943 it has a negative match) but push bar.patch (because it has a
2944 2944 positive match).
2945 2945
2946 2946 With no arguments, prints the currently active guards.
2947 2947 With one argument, sets the active guard.
2948 2948
2949 2949 Use -n/--none to deactivate guards (no other arguments needed).
2950 2950 When no guards are active, patches with positive guards are
2951 2951 skipped and patches with negative guards are pushed.
2952 2952
2953 2953 qselect can change the guards on applied patches. It does not pop
2954 2954 guarded patches by default. Use --pop to pop back to the last
2955 2955 applied patch that is not guarded. Use --reapply (which implies
2956 2956 --pop) to push back to the current patch afterwards, but skip
2957 2957 guarded patches.
2958 2958
2959 2959 Use -s/--series to print a list of all guards in the series file
2960 2960 (no other arguments needed). Use -v for more information.
2961 2961
2962 2962 Returns 0 on success.'''
2963 2963
2964 2964 q = repo.mq
2965 2965 guards = q.active()
2966 2966 if args or opts.get('none'):
2967 2967 old_unapplied = q.unapplied(repo)
2968 2968 old_guarded = [i for i in xrange(len(q.applied)) if
2969 2969 not q.pushable(i)[0]]
2970 2970 q.setactive(args)
2971 2971 q.savedirty()
2972 2972 if not args:
2973 2973 ui.status(_('guards deactivated\n'))
2974 2974 if not opts.get('pop') and not opts.get('reapply'):
2975 2975 unapplied = q.unapplied(repo)
2976 2976 guarded = [i for i in xrange(len(q.applied))
2977 2977 if not q.pushable(i)[0]]
2978 2978 if len(unapplied) != len(old_unapplied):
2979 2979 ui.status(_('number of unguarded, unapplied patches has '
2980 2980 'changed from %d to %d\n') %
2981 2981 (len(old_unapplied), len(unapplied)))
2982 2982 if len(guarded) != len(old_guarded):
2983 2983 ui.status(_('number of guarded, applied patches has changed '
2984 2984 'from %d to %d\n') %
2985 2985 (len(old_guarded), len(guarded)))
2986 2986 elif opts.get('series'):
2987 2987 guards = {}
2988 2988 noguards = 0
2989 2989 for gs in q.seriesguards:
2990 2990 if not gs:
2991 2991 noguards += 1
2992 2992 for g in gs:
2993 2993 guards.setdefault(g, 0)
2994 2994 guards[g] += 1
2995 2995 if ui.verbose:
2996 2996 guards['NONE'] = noguards
2997 2997 guards = guards.items()
2998 2998 guards.sort(key=lambda x: x[0][1:])
2999 2999 if guards:
3000 3000 ui.note(_('guards in series file:\n'))
3001 3001 for guard, count in guards:
3002 3002 ui.note('%2d ' % count)
3003 3003 ui.write(guard, '\n')
3004 3004 else:
3005 3005 ui.note(_('no guards in series file\n'))
3006 3006 else:
3007 3007 if guards:
3008 3008 ui.note(_('active guards:\n'))
3009 3009 for g in guards:
3010 3010 ui.write(g, '\n')
3011 3011 else:
3012 3012 ui.write(_('no active guards\n'))
3013 3013 reapply = opts.get('reapply') and q.applied and q.appliedname(-1)
3014 3014 popped = False
3015 3015 if opts.get('pop') or opts.get('reapply'):
3016 3016 for i in xrange(len(q.applied)):
3017 3017 pushable, reason = q.pushable(i)
3018 3018 if not pushable:
3019 3019 ui.status(_('popping guarded patches\n'))
3020 3020 popped = True
3021 3021 if i == 0:
3022 3022 q.pop(repo, all=True)
3023 3023 else:
3024 3024 q.pop(repo, str(i - 1))
3025 3025 break
3026 3026 if popped:
3027 3027 try:
3028 3028 if reapply:
3029 3029 ui.status(_('reapplying unguarded patches\n'))
3030 3030 q.push(repo, reapply)
3031 3031 finally:
3032 3032 q.savedirty()
3033 3033
3034 3034 @command("qfinish",
3035 3035 [('a', 'applied', None, _('finish all applied changesets'))],
3036 3036 _('hg qfinish [-a] [REV]...'))
3037 3037 def finish(ui, repo, *revrange, **opts):
3038 3038 """move applied patches into repository history
3039 3039
3040 3040 Finishes the specified revisions (corresponding to applied
3041 3041 patches) by moving them out of mq control into regular repository
3042 3042 history.
3043 3043
3044 3044 Accepts a revision range or the -a/--applied option. If --applied
3045 3045 is specified, all applied mq revisions are removed from mq
3046 3046 control. Otherwise, the given revisions must be at the base of the
3047 3047 stack of applied patches.
3048 3048
3049 3049 This can be especially useful if your changes have been applied to
3050 3050 an upstream repository, or if you are about to push your changes
3051 3051 to upstream.
3052 3052
3053 3053 Returns 0 on success.
3054 3054 """
3055 3055 if not opts.get('applied') and not revrange:
3056 3056 raise util.Abort(_('no revisions specified'))
3057 3057 elif opts.get('applied'):
3058 3058 revrange = ('qbase::qtip',) + revrange
3059 3059
3060 3060 q = repo.mq
3061 3061 if not q.applied:
3062 3062 ui.status(_('no patches applied\n'))
3063 3063 return 0
3064 3064
3065 3065 revs = scmutil.revrange(repo, revrange)
3066 3066 if repo['.'].rev() in revs and repo[None].files():
3067 3067 ui.warn(_('warning: uncommitted changes in the working directory\n'))
3068 3068 # queue.finish may change phases but leaves the responsibility of
3069 3069 # locking the repo to the caller to avoid deadlock with wlock. This
3070 3070 # command code is responsible for that locking.
3071 3071 lock = repo.lock()
3072 3072 try:
3073 3073 q.finish(repo, revs)
3074 3074 q.savedirty()
3075 3075 finally:
3076 3076 lock.release()
3077 3077 return 0
3078 3078
3079 3079 @command("qqueue",
3080 3080 [('l', 'list', False, _('list all available queues')),
3081 3081 ('', 'active', False, _('print name of active queue')),
3082 3082 ('c', 'create', False, _('create new queue')),
3083 3083 ('', 'rename', False, _('rename active queue')),
3084 3084 ('', 'delete', False, _('delete reference to queue')),
3085 3085 ('', 'purge', False, _('delete queue, and remove patch dir')),
3086 3086 ],
3087 3087 _('[OPTION] [QUEUE]'))
3088 3088 def qqueue(ui, repo, name=None, **opts):
3089 3089 '''manage multiple patch queues
3090 3090
3091 3091 Supports switching between different patch queues, as well as creating
3092 3092 new patch queues and deleting existing ones.
3093 3093
3094 3094 Omitting a queue name or specifying -l/--list will show you the registered
3095 3095 queues - by default the "normal" patches queue is registered. The currently
3096 3096 active queue will be marked with "(active)". Specifying --active will print
3097 3097 only the name of the active queue.
3098 3098
3099 3099 To create a new queue, use -c/--create. The queue is automatically made
3100 3100 active, except when there are applied patches from the currently
3101 3101 active queue in the repository; in that case the queue is only
3102 3102 created and switching to it fails.
3103 3103
3104 3104 To delete an existing queue, use --delete. You cannot delete the currently
3105 3105 active queue.
3106 3106
3107 3107 Returns 0 on success.
3108 3108 '''
3109 3109 q = repo.mq
3110 3110 _defaultqueue = 'patches'
3111 3111 _allqueues = 'patches.queues'
3112 3112 _activequeue = 'patches.queue'
3113 3113
3114 3114 def _getcurrent():
3115 3115 cur = os.path.basename(q.path)
3116 3116 if cur.startswith('patches-'):
3117 3117 cur = cur[8:]
3118 3118 return cur
3119 3119
3120 3120 def _noqueues():
3121 3121 try:
3122 3122 fh = repo.opener(_allqueues, 'r')
3123 3123 fh.close()
3124 3124 except IOError:
3125 3125 return True
3126 3126
3127 3127 return False
3128 3128
3129 3129 def _getqueues():
3130 3130 current = _getcurrent()
3131 3131
3132 3132 try:
3133 3133 fh = repo.opener(_allqueues, 'r')
3134 3134 queues = [queue.strip() for queue in fh if queue.strip()]
3135 3135 fh.close()
3136 3136 if current not in queues:
3137 3137 queues.append(current)
3138 3138 except IOError:
3139 3139 queues = [_defaultqueue]
3140 3140
3141 3141 return sorted(queues)
3142 3142
3143 3143 def _setactive(name):
3144 3144 if q.applied:
3145 3145 raise util.Abort(_('new queue created, but cannot make active '
3146 3146 'as patches are applied'))
3147 3147 _setactivenocheck(name)
3148 3148
3149 3149 def _setactivenocheck(name):
3150 3150 fh = repo.opener(_activequeue, 'w')
3151 3151 if name != 'patches':
3152 3152 fh.write(name)
3153 3153 fh.close()
3154 3154
3155 3155 def _addqueue(name):
3156 3156 fh = repo.opener(_allqueues, 'a')
3157 3157 fh.write('%s\n' % (name,))
3158 3158 fh.close()
3159 3159
3160 3160 def _queuedir(name):
3161 3161 if name == 'patches':
3162 3162 return repo.join('patches')
3163 3163 else:
3164 3164 return repo.join('patches-' + name)
3165 3165
3166 3166 def _validname(name):
3167 3167 for n in name:
3168 3168 if n in ':\\/.':
3169 3169 return False
3170 3170 return True
3171 3171
3172 3172 def _delete(name):
3173 3173 if name not in existing:
3174 3174 raise util.Abort(_('cannot delete queue that does not exist'))
3175 3175
3176 3176 current = _getcurrent()
3177 3177
3178 3178 if name == current:
3179 3179 raise util.Abort(_('cannot delete currently active queue'))
3180 3180
3181 3181 fh = repo.opener('patches.queues.new', 'w')
3182 3182 for queue in existing:
3183 3183 if queue == name:
3184 3184 continue
3185 3185 fh.write('%s\n' % (queue,))
3186 3186 fh.close()
3187 3187 util.rename(repo.join('patches.queues.new'), repo.join(_allqueues))
3188 3188
3189 3189 if not name or opts.get('list') or opts.get('active'):
3190 3190 current = _getcurrent()
3191 3191 if opts.get('active'):
3192 3192 ui.write('%s\n' % (current,))
3193 3193 return
3194 3194 for queue in _getqueues():
3195 3195 ui.write('%s' % (queue,))
3196 3196 if queue == current and not ui.quiet:
3197 3197 ui.write(_(' (active)\n'))
3198 3198 else:
3199 3199 ui.write('\n')
3200 3200 return
3201 3201
3202 3202 if not _validname(name):
3203 3203 raise util.Abort(
3204 3204 _('invalid queue name, may not contain the characters ":\\/."'))
3205 3205
3206 3206 existing = _getqueues()
3207 3207
3208 3208 if opts.get('create'):
3209 3209 if name in existing:
3210 3210 raise util.Abort(_('queue "%s" already exists') % name)
3211 3211 if _noqueues():
3212 3212 _addqueue(_defaultqueue)
3213 3213 _addqueue(name)
3214 3214 _setactive(name)
3215 3215 elif opts.get('rename'):
3216 3216 current = _getcurrent()
3217 3217 if name == current:
3218 3218 raise util.Abort(_('can\'t rename "%s" to its current name') % name)
3219 3219 if name in existing:
3220 3220 raise util.Abort(_('queue "%s" already exists') % name)
3221 3221
3222 3222 olddir = _queuedir(current)
3223 3223 newdir = _queuedir(name)
3224 3224
3225 3225 if os.path.exists(newdir):
3226 3226 raise util.Abort(_('non-queue directory "%s" already exists') %
3227 3227 newdir)
3228 3228
3229 3229 fh = repo.opener('patches.queues.new', 'w')
3230 3230 for queue in existing:
3231 3231 if queue == current:
3232 3232 fh.write('%s\n' % (name,))
3233 3233 if os.path.exists(olddir):
3234 3234 util.rename(olddir, newdir)
3235 3235 else:
3236 3236 fh.write('%s\n' % (queue,))
3237 3237 fh.close()
3238 3238 util.rename(repo.join('patches.queues.new'), repo.join(_allqueues))
3239 3239 _setactivenocheck(name)
3240 3240 elif opts.get('delete'):
3241 3241 _delete(name)
3242 3242 elif opts.get('purge'):
3243 3243 if name in existing:
3244 3244 _delete(name)
3245 3245 qdir = _queuedir(name)
3246 3246 if os.path.exists(qdir):
3247 3247 shutil.rmtree(qdir)
3248 3248 else:
3249 3249 if name not in existing:
3250 3250 raise util.Abort(_('use --create to create a new queue'))
3251 3251 _setactive(name)
3252 3252
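qqueue's helpers `_validname` and `_queuedir` encode two simple rules: queue names may not contain `:`, `\`, `/` or `.`, and every queue except the default one lives in a `patches-<name>` directory under `.hg`. A minimal sketch of just those two rules, detached from the repository object:

```python
def validname(name):
    """Sketch of qqueue's name check: reject names containing any of
    the characters ':', '\\', '/' or '.'."""
    return not any(c in ':\\/.' for c in name)

def queuedir(name):
    # the default queue lives in .hg/patches, all others in .hg/patches-<name>
    return 'patches' if name == 'patches' else 'patches-' + name

print(validname('stable'), validname('a.b'))  # True False
print(queuedir('stable'))                     # patches-stable
print(queuedir('patches'))                    # patches
```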
3253 3253 def mqphasedefaults(repo, roots):
3254 3254 """callback used to set mq changeset as secret when no phase data exists"""
3255 3255 if repo.mq.applied:
3256 3256 if repo.ui.configbool('mq', 'secret', False):
3257 3257 mqphase = phases.secret
3258 3258 else:
3259 3259 mqphase = phases.draft
3260 3260 qbase = repo[repo.mq.applied[0].node]
3261 3261 roots[mqphase].add(qbase.node())
3262 3262 return roots
3263 3263
3264 3264 def reposetup(ui, repo):
3265 3265 class mqrepo(repo.__class__):
3266 3266 @localrepo.unfilteredpropertycache
3267 3267 def mq(self):
3268 3268 return queue(self.ui, self.baseui, self.path)
3269 3269
3270 3270 def invalidateall(self):
3271 3271 super(mqrepo, self).invalidateall()
3272 3272 if localrepo.hasunfilteredcache(self, 'mq'):
3273 3273 # recreate mq in case queue path was changed
3274 3274 delattr(self.unfiltered(), 'mq')
3275 3275
3276 3276 def abortifwdirpatched(self, errmsg, force=False):
3277 3277 if self.mq.applied and self.mq.checkapplied and not force:
3278 3278 parents = self.dirstate.parents()
3279 3279 patches = [s.node for s in self.mq.applied]
3280 3280 if parents[0] in patches or parents[1] in patches:
3281 3281 raise util.Abort(errmsg)
3282 3282
3283 3283 def commit(self, text="", user=None, date=None, match=None,
3284 3284 force=False, editor=False, extra={}):
3285 3285 self.abortifwdirpatched(
3286 3286 _('cannot commit over an applied mq patch'),
3287 3287 force)
3288 3288
3289 3289 return super(mqrepo, self).commit(text, user, date, match, force,
3290 3290 editor, extra)
3291 3291
3292 3292 def checkpush(self, force, revs):
3293 3293 if self.mq.applied and self.mq.checkapplied and not force:
3294 3294 outapplied = [e.node for e in self.mq.applied]
3295 3295 if revs:
3296 3296 # Assume applied patches have no non-patch descendants and
3297 3297 # are not on remote already. Filter out any changeset that
3298 3298 # is not being pushed.
3299 3299 heads = set(revs)
3300 3300 for node in reversed(outapplied):
3301 3301 if node in heads:
3302 3302 break
3303 3303 else:
3304 3304 outapplied.pop()
3305 3305 # looking for pushed and shared changeset
3306 3306 for node in outapplied:
3307 3307 if self[node].phase() < phases.secret:
3308 3308 raise util.Abort(_('source has mq patches applied'))
3309 3309 # no non-secret patches pushed
3310 3310 super(mqrepo, self).checkpush(force, revs)
3311 3311
3312 3312 def _findtags(self):
3313 3313 '''augment tags from base class with patch tags'''
3314 3314 result = super(mqrepo, self)._findtags()
3315 3315
3316 3316 q = self.mq
3317 3317 if not q.applied:
3318 3318 return result
3319 3319
3320 3320 mqtags = [(patch.node, patch.name) for patch in q.applied]
3321 3321
3322 3322 try:
3323 3323 # for now ignore filtering business
3324 3324 self.unfiltered().changelog.rev(mqtags[-1][0])
3325 3325 except error.LookupError:
3326 3326 self.ui.warn(_('mq status file refers to unknown node %s\n')
3327 3327 % short(mqtags[-1][0]))
3328 3328 return result
3329 3329
3330 3330 # do not add fake tags for filtered revisions
3331 3331 included = self.changelog.hasnode
3332 3332 mqtags = [mqt for mqt in mqtags if included(mqt[0])]
3333 3333 if not mqtags:
3334 3334 return result
3335 3335
3336 3336 mqtags.append((mqtags[-1][0], 'qtip'))
3337 3337 mqtags.append((mqtags[0][0], 'qbase'))
3338 3338 mqtags.append((self.changelog.parents(mqtags[0][0])[0], 'qparent'))
3339 3339 tags = result[0]
3340 3340 for patch in mqtags:
3341 3341 if patch[1] in tags:
3342 3342 self.ui.warn(_('tag %s overrides mq patch of the same '
3343 3343 'name\n') % patch[1])
3344 3344 else:
3345 3345 tags[patch[1]] = patch[0]
3346 3346
3347 3347 return result
3348 3348
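The `_findtags` override above turns the applied patch stack into fake tags, appending `qtip` and `qbase` and letting a real tag of the same name win. A standalone sketch of that construction, assuming `applied` is an oldest-first list of `(node, name)` pairs (the `qparent` entry is omitted here because it needs the changelog):

```python
def mqtagsdict(applied, realtags):
    """Sketch of _findtags: build a tag dict from the applied patch
    list, adding qtip/qbase aliases; real tags shadow patch names."""
    tags = dict(realtags)
    mqtags = list(applied)
    if not mqtags:
        return tags  # nothing applied: no fake tags
    mqtags.append((mqtags[-1][0], 'qtip'))   # newest applied patch
    mqtags.append((mqtags[0][0], 'qbase'))   # oldest applied patch
    for node, name in mqtags:
        if name in tags:
            continue  # a real tag overrides the mq patch of the same name
        tags[name] = node
    return tags

tags = mqtagsdict([('n1', 'a.patch'), ('n2', 'b.patch')], {'a.patch': 'real'})
print(tags['qtip'], tags['qbase'], tags['a.patch'])  # n2 n1 real
```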
3349 3349 if repo.local():
3350 3350 repo.__class__ = mqrepo
3351 3351
3352 3352 repo._phasedefaults.append(mqphasedefaults)
3353 3353
3354 3354 def mqimport(orig, ui, repo, *args, **kwargs):
3355 3355 if (util.safehasattr(repo, 'abortifwdirpatched')
3356 3356 and not kwargs.get('no_commit', False)):
3357 3357 repo.abortifwdirpatched(_('cannot import over an applied patch'),
3358 3358 kwargs.get('force'))
3359 3359 return orig(ui, repo, *args, **kwargs)
3360 3360
3361 3361 def mqinit(orig, ui, *args, **kwargs):
3362 3362 mq = kwargs.pop('mq', None)
3363 3363
3364 3364 if not mq:
3365 3365 return orig(ui, *args, **kwargs)
3366 3366
3367 3367 if args:
3368 3368 repopath = args[0]
3369 3369 if not hg.islocal(repopath):
3370 3370 raise util.Abort(_('only a local queue repository '
3371 3371 'may be initialized'))
3372 3372 else:
3373 3373 repopath = cmdutil.findrepo(os.getcwd())
3374 3374 if not repopath:
3375 3375 raise util.Abort(_('there is no Mercurial repository here '
3376 3376 '(.hg not found)'))
3377 3377 repo = hg.repository(ui, repopath)
3378 3378 return qinit(ui, repo, True)
3379 3379
3380 3380 def mqcommand(orig, ui, repo, *args, **kwargs):
3381 3381 """Add --mq option to operate on patch repository instead of main"""
3382 3382
3383 3383 # some commands do not like getting unknown options
3384 3384 mq = kwargs.pop('mq', None)
3385 3385
3386 3386 if not mq:
3387 3387 return orig(ui, repo, *args, **kwargs)
3388 3388
3389 3389 q = repo.mq
3390 3390 r = q.qrepo()
3391 3391 if not r:
3392 3392 raise util.Abort(_('no queue repository'))
3393 3393 return orig(r.ui, r, *args, **kwargs)
3394 3394
3395 3395 def summaryhook(ui, repo):
3396 3396 q = repo.mq
3397 3397 m = []
3398 3398 a, u = len(q.applied), len(q.unapplied(repo))
3399 3399 if a:
3400 3400 m.append(ui.label(_("%d applied"), 'qseries.applied') % a)
3401 3401 if u:
3402 3402 m.append(ui.label(_("%d unapplied"), 'qseries.unapplied') % u)
3403 3403 if m:
3404 3404 # i18n: column positioning for "hg summary"
3405 3405 ui.write(_("mq: %s\n") % ', '.join(m))
3406 3406 else:
3407 3407 # i18n: column positioning for "hg summary"
3408 3408 ui.note(_("mq: (empty queue)\n"))
3409 3409
3410 3410 def revsetmq(repo, subset, x):
3411 3411 """``mq()``
3412 3412 Changesets managed by MQ.
3413 3413 """
3414 3414 revset.getargs(x, 0, 0, _("mq takes no arguments"))
3415 3415 applied = set([repo[r.node].rev() for r in repo.mq.applied])
3416 3416 return revset.baseset([r for r in subset if r in applied])
3417 3417
3418 3418 # tell hggettext to extract docstrings from these functions:
3419 3419 i18nfunctions = [revsetmq]
3420 3420
3421 3421 def extsetup(ui):
3422 3422 # Ensure mq wrappers are called first, regardless of extension load order,
3423 3423 # by NOT wrapping in uisetup() and instead deferring to init stage two here.
3424 3424 mqopt = [('', 'mq', None, _("operate on patch repository"))]
3425 3425
3426 3426 extensions.wrapcommand(commands.table, 'import', mqimport)
3427 3427 cmdutil.summaryhooks.add('mq', summaryhook)
3428 3428
3429 3429 entry = extensions.wrapcommand(commands.table, 'init', mqinit)
3430 3430 entry[1].extend(mqopt)
3431 3431
3432 3432 nowrap = set(commands.norepo.split(" "))
3433 3433
3434 3434 def dotable(cmdtable):
3435 3435 for cmd in cmdtable.keys():
3436 3436 cmd = cmdutil.parsealiases(cmd)[0]
3437 3437 if cmd in nowrap:
3438 3438 continue
3439 3439 entry = extensions.wrapcommand(cmdtable, cmd, mqcommand)
3440 3440 entry[1].extend(mqopt)
3441 3441
3442 3442 dotable(commands.table)
3443 3443
3444 3444 for extname, extmodule in extensions.extensions():
3445 3445 if extmodule.__file__ != __file__:
3446 3446 dotable(getattr(extmodule, 'cmdtable', {}))
3447 3447
3448 3448 revset.symbols['mq'] = revsetmq
3449 3449
3450 3450 colortable = {'qguard.negative': 'red',
3451 3451 'qguard.positive': 'yellow',
3452 3452 'qguard.unguarded': 'green',
3453 3453 'qseries.applied': 'blue bold underline',
3454 3454 'qseries.guarded': 'black bold',
3455 3455 'qseries.missing': 'red bold',
3456 3456 'qseries.unapplied': 'black bold'}
3457 3457
3458 3458 commands.inferrepo += " qnew qrefresh qdiff qcommit"
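The extsetup() above wraps every entry in the command tables so each command grows an --mq flag. A self-contained sketch of that wrap-and-extend pattern, using plain dicts and stand-in functions rather than Mercurial's real extensions.wrapcommand() API, could look like:

```python
# Simplified stand-in for the wrap-and-extend pattern used in
# extsetup() above. Plain dicts replace Mercurial's command tables;
# wrapcommand() and mqcommand() here are illustrative, not the real API.

def wrapcommand(table, name, wrapper):
    """Replace table[name]'s function with one that calls wrapper(orig, ...)."""
    func, opts = table[name]

    def wrapped(*args, **kwargs):
        return wrapper(func, *args, **kwargs)

    table[name] = (wrapped, opts)
    return table[name]

def mqcommand(orig, *args, **kwargs):
    # Peel off the extension's own flag before calling the original.
    if kwargs.pop('mq', None):
        print('operating on patch repository')
    return orig(*args, **kwargs)

# A toy command table: name -> (function, option list)
table = {'status': (lambda: 'status output', [])}
entry = wrapcommand(table, 'status', mqcommand)
entry[1].append(('', 'mq', None, 'operate on patch repository'))
```

Calling `table['status'][0](mq=True)` now runs the hook before dispatching to the original command, which is the same shape as wrapping `commands.table` and extending `entry[1]` with `mqopt` above.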
@@ -1,148 +1,148 b''
1 1 # pager.py - display output using a pager
2 2 #
3 3 # Copyright 2008 David Soria Parra <dsp@php.net>
4 4 #
5 5 # This software may be used and distributed according to the terms of the
6 6 # GNU General Public License version 2 or any later version.
7 7 #
8 8 # To load the extension, add it to your configuration file:
9 9 #
10 10 # [extensions]
11 11 # pager =
12 12 #
13 13 # Run "hg help pager" to get info on configuration.
14 14
15 15 '''browse command output with an external pager
16 16
17 17 To set the pager that should be used, set the pager.pager option::
18 18
19 19 [pager]
20 20 pager = less -FRX
21 21
22 22 If no pager is set, the pager extension uses the environment variable
23 23 $PAGER. If neither pager.pager nor $PAGER is set, no pager is used.
24 24
25 25 You can disable the pager for certain commands by adding them to the
26 26 pager.ignore list::
27 27
28 28 [pager]
29 29 ignore = version, help, update
30 30
31 31 You can also enable the pager only for certain commands using
32 32 pager.attend. Below is the default list of commands to be paged::
33 33
34 34 [pager]
35 35 attend = annotate, cat, diff, export, glog, log, qdiff
36 36
37 37 Setting pager.attend to an empty value will cause all commands to be
38 38 paged.
39 39
40 40 If pager.attend is present, pager.ignore will be ignored.
41 41
42 42 To ignore global commands like :hg:`version` or :hg:`help`, you have
43 43 to specify them in your user configuration file.
44 44
45 45 The --pager=... option can also be used to control when the pager is
46 46 used. Use a boolean value like yes, no, on, off, or use auto for
47 47 normal behavior.
48 48 '''
49 49
50 50 import atexit, sys, os, signal, subprocess, errno, shlex
51 51 from mercurial import commands, dispatch, util, extensions, cmdutil
52 52 from mercurial.i18n import _
53 53
54 54 testedwith = 'internal'
55 55
56 56 def _pagerfork(ui, p):
57 57 if not util.safehasattr(os, 'fork'):
58 58 sys.stdout = util.popen(p, 'wb')
59 59 if ui._isatty(sys.stderr):
60 60 sys.stderr = sys.stdout
61 61 return
62 62 fdin, fdout = os.pipe()
63 63 pid = os.fork()
64 64 if pid == 0:
65 65 os.close(fdin)
66 66 os.dup2(fdout, sys.stdout.fileno())
67 67 if ui._isatty(sys.stderr):
68 68 os.dup2(fdout, sys.stderr.fileno())
69 69 os.close(fdout)
70 70 return
71 71 os.dup2(fdin, sys.stdin.fileno())
72 72 os.close(fdin)
73 73 os.close(fdout)
74 74 try:
75 75 os.execvp('/bin/sh', ['/bin/sh', '-c', p])
76 76 except OSError, e:
77 77 if e.errno == errno.ENOENT:
78 78 # no /bin/sh, try executing the pager directly
79 79 args = shlex.split(p)
80 80 os.execvp(args[0], args)
81 81 else:
82 82 raise
83 83
84 84 def _pagersubprocess(ui, p):
85 85 pager = subprocess.Popen(p, shell=True, bufsize=-1,
86 86 close_fds=util.closefds, stdin=subprocess.PIPE,
87 87 stdout=sys.stdout, stderr=sys.stderr)
88 88
89 89 stdout = os.dup(sys.stdout.fileno())
90 90 stderr = os.dup(sys.stderr.fileno())
91 91 os.dup2(pager.stdin.fileno(), sys.stdout.fileno())
92 92 if ui._isatty(sys.stderr):
93 93 os.dup2(pager.stdin.fileno(), sys.stderr.fileno())
94 94
95 95 @atexit.register
96 96 def killpager():
97 97 if util.safehasattr(signal, "SIGINT"):
98 98 signal.signal(signal.SIGINT, signal.SIG_IGN)
99 99 pager.stdin.close()
100 100 os.dup2(stdout, sys.stdout.fileno())
101 101 os.dup2(stderr, sys.stderr.fileno())
102 102 pager.wait()
103 103
104 104 def _runpager(ui, p):
105 105 # The subprocess module shipped with Python <= 2.4 is buggy (issue3533).
106 106 # The compat version is buggy on Windows (issue3225), but has been shipping
107 107 # with hg for a long time. Preserve existing functionality.
108 108 if sys.version_info >= (2, 5):
109 109 _pagersubprocess(ui, p)
110 110 else:
111 111 _pagerfork(ui, p)
112 112
113 113 def uisetup(ui):
114 114 if '--debugger' in sys.argv or not ui.formatted():
115 115 return
116 116
117 117 def pagecmd(orig, ui, options, cmd, cmdfunc):
118 118 p = ui.config("pager", "pager", os.environ.get("PAGER"))
119 119
120 120 if p:
121 121 attend = ui.configlist('pager', 'attend', attended)
122 122 auto = options['pager'] == 'auto'
123 123 always = util.parsebool(options['pager'])
124 124
125 125 cmds, _ = cmdutil.findcmd(cmd, commands.table)
126 126
127 127 ignore = ui.configlist('pager', 'ignore')
128 128 for cmd in cmds:
129 129 if (always or auto and
130 130 (cmd in attend or
131 131 (cmd not in ignore and not attend))):
132 ui.setconfig('ui', 'formatted', ui.formatted())
133 ui.setconfig('ui', 'interactive', False)
132 ui.setconfig('ui', 'formatted', ui.formatted(), 'pager')
133 ui.setconfig('ui', 'interactive', False, 'pager')
134 134 if util.safehasattr(signal, "SIGPIPE"):
135 135 signal.signal(signal.SIGPIPE, signal.SIG_DFL)
136 136 _runpager(ui, p)
137 137 break
138 138 return orig(ui, options, cmd, cmdfunc)
139 139
140 140 extensions.wrapfunction(dispatch, '_runcommand', pagecmd)
141 141
142 142 def extsetup(ui):
143 143 commands.globalopts.append(
144 144 ('', 'pager', 'auto',
145 145 _("when to paginate (boolean, always, auto, or never)"),
146 146 _('TYPE')))
147 147
148 148 attended = ['annotate', 'cat', 'diff', 'export', 'glog', 'log', 'qdiff']
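The attend/ignore condition inside pagecmd() can be read as a standalone predicate. The sketch below is an illustrative reimplementation of that boolean logic, not the extension's actual code; ATTENDED mirrors the default list above:

```python
# Sketch of the pager decision from pagecmd() above.
# `always` is the parsed boolean of --pager (None when the value is "auto").

ATTENDED = ['annotate', 'cat', 'diff', 'export', 'glog', 'log', 'qdiff']

def should_page(cmd, always=None, auto=True, attend=None, ignore=()):
    """Return True when the pager should be started for `cmd`."""
    if attend is None:
        attend = ATTENDED
    if always:
        return True
    if not auto:
        return False
    # With a non-empty attend list, only listed commands are paged;
    # with an empty attend list, everything not ignored is paged.
    return cmd in attend or (cmd not in ignore and not attend)
```

This mirrors the `always or auto and (cmd in attend or (cmd not in ignore and not attend))` expression, including the documented behaviour that an empty pager.attend pages all commands.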
@@ -1,565 +1,565 b''
1 1 # patchbomb.py - sending Mercurial changesets as patch emails
2 2 #
3 3 # Copyright 2005-2009 Matt Mackall <mpm@selenic.com> and others
4 4 #
5 5 # This software may be used and distributed according to the terms of the
6 6 # GNU General Public License version 2 or any later version.
7 7
8 8 '''command to send changesets as (a series of) patch emails
9 9
10 10 The series is started off with a "[PATCH 0 of N]" introduction, which
11 11 describes the series as a whole.
12 12
13 13 Each patch email has a Subject line of "[PATCH M of N] ...", using the
14 14 first line of the changeset description as the subject text. The
15 15 message contains two or three body parts:
16 16
17 17 - The changeset description.
18 18 - [Optional] The result of running diffstat on the patch.
19 19 - The patch itself, as generated by :hg:`export`.
20 20
21 21 Each message refers to the first in the series using the In-Reply-To
22 22 and References headers, so they will show up as a sequence in threaded
23 23 mail and news readers, and in mail archives.
24 24
25 25 To configure other defaults, add a section like this to your
26 26 configuration file::
27 27
28 28 [email]
29 29 from = My Name <my@email>
30 30 to = recipient1, recipient2, ...
31 31 cc = cc1, cc2, ...
32 32 bcc = bcc1, bcc2, ...
33 33 reply-to = address1, address2, ...
34 34
35 35 Use ``[patchbomb]`` as configuration section name if you need to
36 36 override global ``[email]`` address settings.
37 37
38 38 Then you can use the :hg:`email` command to mail a series of
39 39 changesets as a patchbomb.
40 40
41 41 You can also either configure the method option in the email section
42 42 to be a sendmail compatible mailer or fill out the [smtp] section so
43 43 that the patchbomb extension can automatically send patchbombs
44 44 directly from the commandline. See the [email] and [smtp] sections in
45 45 hgrc(5) for details.
46 46 '''
47 47
48 48 import os, errno, socket, tempfile, cStringIO
49 49 import email
50 50 # On Python 2.4 you have to import these by name or they fail to
51 51 # load. This was not a problem on Python 2.7.
52 52 import email.Generator
53 53 import email.MIMEMultipart
54 54
55 55 from mercurial import cmdutil, commands, hg, mail, patch, util
56 56 from mercurial import scmutil
57 57 from mercurial.i18n import _
58 58 from mercurial.node import bin
59 59
60 60 cmdtable = {}
61 61 command = cmdutil.command(cmdtable)
62 62 testedwith = 'internal'
63 63
64 64 def prompt(ui, prompt, default=None, rest=':'):
65 65 if default:
66 66 prompt += ' [%s]' % default
67 67 return ui.prompt(prompt + rest, default)
68 68
69 69 def introwanted(opts, number):
70 70 '''is an introductory message apparently wanted?'''
71 71 return number > 1 or opts.get('intro') or opts.get('desc')
72 72
73 73 def makepatch(ui, repo, patchlines, opts, _charsets, idx, total, numbered,
74 74 patchname=None):
75 75
76 76 desc = []
77 77 node = None
78 78 body = ''
79 79
80 80 for line in patchlines:
81 81 if line.startswith('#'):
82 82 if line.startswith('# Node ID'):
83 83 node = line.split()[-1]
84 84 continue
85 85 if line.startswith('diff -r') or line.startswith('diff --git'):
86 86 break
87 87 desc.append(line)
88 88
89 89 if not patchname and not node:
90 90 raise ValueError
91 91
92 92 if opts.get('attach') and not opts.get('body'):
93 93 body = ('\n'.join(desc[1:]).strip() or
94 94 'Patch subject is complete summary.')
95 95 body += '\n\n\n'
96 96
97 97 if opts.get('plain'):
98 98 while patchlines and patchlines[0].startswith('# '):
99 99 patchlines.pop(0)
100 100 if patchlines:
101 101 patchlines.pop(0)
102 102 while patchlines and not patchlines[0].strip():
103 103 patchlines.pop(0)
104 104
105 105 ds = patch.diffstat(patchlines, git=opts.get('git'))
106 106 if opts.get('diffstat'):
107 107 body += ds + '\n\n'
108 108
109 109 addattachment = opts.get('attach') or opts.get('inline')
110 110 if not addattachment or opts.get('body'):
111 111 body += '\n'.join(patchlines)
112 112
113 113 if addattachment:
114 114 msg = email.MIMEMultipart.MIMEMultipart()
115 115 if body:
116 116 msg.attach(mail.mimeencode(ui, body, _charsets, opts.get('test')))
117 117 p = mail.mimetextpatch('\n'.join(patchlines), 'x-patch',
118 118 opts.get('test'))
119 119 binnode = bin(node)
120 120 # if node is mq patch, it will have the patch file's name as a tag
121 121 if not patchname:
122 122 patchtags = [t for t in repo.nodetags(binnode)
123 123 if t.endswith('.patch') or t.endswith('.diff')]
124 124 if patchtags:
125 125 patchname = patchtags[0]
126 126 elif total > 1:
127 127 patchname = cmdutil.makefilename(repo, '%b-%n.patch',
128 128 binnode, seqno=idx,
129 129 total=total)
130 130 else:
131 131 patchname = cmdutil.makefilename(repo, '%b.patch', binnode)
132 132 disposition = 'inline'
133 133 if opts.get('attach'):
134 134 disposition = 'attachment'
135 135 p['Content-Disposition'] = disposition + '; filename=' + patchname
136 136 msg.attach(p)
137 137 else:
138 138 msg = mail.mimetextpatch(body, display=opts.get('test'))
139 139
140 140 flag = ' '.join(opts.get('flag'))
141 141 if flag:
142 142 flag = ' ' + flag
143 143
144 144 subj = desc[0].strip().rstrip('. ')
145 145 if not numbered:
146 146 subj = '[PATCH%s] %s' % (flag, opts.get('subject') or subj)
147 147 else:
148 148 tlen = len(str(total))
149 149 subj = '[PATCH %0*d of %d%s] %s' % (tlen, idx, total, flag, subj)
150 150 msg['Subject'] = mail.headencode(ui, subj, _charsets, opts.get('test'))
151 151 msg['X-Mercurial-Node'] = node
152 152 return msg, subj, ds
153 153
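The subject-prefix numbering that makepatch() applies (zero-padded index, series total, optional flags) can be sketched on its own; patch_subject() below is an illustrative helper, not part of patchbomb:

```python
# Sketch of the "[PATCH M of N]" subject construction in makepatch()
# above: zero-padded index, series total, optional flag suffix.

def patch_subject(idx, total, desc, flags=(), numbered=True):
    flag = ' '.join(flags)
    if flag:
        flag = ' ' + flag
    # First description line, with any trailing period stripped.
    subj = desc.strip().rstrip('. ')
    if not numbered:
        return '[PATCH%s] %s' % (flag, subj)
    tlen = len(str(total))
    return '[PATCH %0*d of %d%s] %s' % (tlen, idx, total, flag, subj)
```

The `%0*d` width specifier is what keeps, e.g., "[PATCH 01 of 10]" and "[PATCH 10 of 10]" aligned in a mail reader's subject column.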
154 154 emailopts = [
155 155 ('', 'body', None, _('send patches as inline message text (default)')),
156 156 ('a', 'attach', None, _('send patches as attachments')),
157 157 ('i', 'inline', None, _('send patches as inline attachments')),
158 158 ('', 'bcc', [], _('email addresses of blind carbon copy recipients')),
159 159 ('c', 'cc', [], _('email addresses of copy recipients')),
160 160 ('', 'confirm', None, _('ask for confirmation before sending')),
161 161 ('d', 'diffstat', None, _('add diffstat output to messages')),
162 162 ('', 'date', '', _('use the given date as the sending date')),
163 163 ('', 'desc', '', _('use the given file as the series description')),
164 164 ('f', 'from', '', _('email address of sender')),
165 165 ('n', 'test', None, _('print messages that would be sent')),
166 166 ('m', 'mbox', '', _('write messages to mbox file instead of sending them')),
167 167 ('', 'reply-to', [], _('email addresses replies should be sent to')),
168 168 ('s', 'subject', '', _('subject of first message (intro or single patch)')),
169 169 ('', 'in-reply-to', '', _('message identifier to reply to')),
170 170 ('', 'flag', [], _('flags to add in subject prefixes')),
171 171 ('t', 'to', [], _('email addresses of recipients'))]
172 172
173 173 @command('email',
174 174 [('g', 'git', None, _('use git extended diff format')),
175 175 ('', 'plain', None, _('omit hg patch header')),
176 176 ('o', 'outgoing', None,
177 177 _('send changes not found in the target repository')),
178 178 ('b', 'bundle', None, _('send changes not in target as a binary bundle')),
179 179 ('', 'bundlename', 'bundle',
180 180 _('name of the bundle attachment file'), _('NAME')),
181 181 ('r', 'rev', [], _('a revision to send'), _('REV')),
182 182 ('', 'force', None, _('run even when remote repository is unrelated '
183 183 '(with -b/--bundle)')),
184 184 ('', 'base', [], _('a base changeset to specify instead of a destination '
185 185 '(with -b/--bundle)'), _('REV')),
186 186 ('', 'intro', None, _('send an introduction email for a single patch')),
187 187 ] + emailopts + commands.remoteopts,
188 188 _('hg email [OPTION]... [DEST]...'))
189 189 def patchbomb(ui, repo, *revs, **opts):
190 190 '''send changesets by email
191 191
192 192 By default, diffs are sent in the format generated by
193 193 :hg:`export`, one per message. The series starts with a "[PATCH 0
194 194 of N]" introduction, which describes the series as a whole.
195 195
196 196 Each patch email has a Subject line of "[PATCH M of N] ...", using
197 197 the first line of the changeset description as the subject text.
198 198 The message contains two or three parts. First, the changeset
199 199 description.
200 200
201 201 With the -d/--diffstat option, if the diffstat program is
202 202 installed, the result of running diffstat on the patch is inserted.
203 203
204 204 Finally, the patch itself, as generated by :hg:`export`.
205 205
206 206 With the -d/--diffstat or --confirm options, you will be presented
207 207 with a final summary of all messages and asked for confirmation before
208 208 the messages are sent.
209 209
210 210 By default the patch is included as text in the email body for
211 211 easy reviewing. Using the -a/--attach option will instead create
212 212 an attachment for the patch. With -i/--inline an inline attachment
213 213 will be created. You can include a patch both as text in the email
214 214 body and as a regular or an inline attachment by combining the
215 215 -a/--attach or -i/--inline with the --body option.
216 216
217 217 With -o/--outgoing, emails will be generated for patches not found
218 218 in the destination repository (or only those which are ancestors
219 219 of the specified revisions, if any are provided).
220 220
221 221 With -b/--bundle, changesets are selected as for --outgoing, but a
222 222 single email containing a binary Mercurial bundle as an attachment
223 223 will be sent.
224 224
225 225 With -m/--mbox, instead of previewing each patchbomb message in a
226 226 pager or sending the messages directly, it will create a UNIX
227 227 mailbox file with the patch emails. This mailbox file can be
228 228 previewed with any mail user agent which supports UNIX mbox
229 229 files.
230 230
231 231 With -n/--test, all steps will run, but mail will not be sent.
232 232 You will be prompted for an email recipient address, a subject and
233 233 an introductory message describing the patches of your patchbomb.
234 234 Then when all is done, patchbomb messages are displayed. If the
235 235 PAGER environment variable is set, your pager will be fired up once
236 236 for each patchbomb message, so you can verify everything is alright.
237 237
238 238 In case email sending fails, you will find a backup of your series
239 239 introductory message in ``.hg/last-email.txt``.
240 240
241 241 Examples::
242 242
243 243 hg email -r 3000 # send patch 3000 only
244 244 hg email -r 3000 -r 3001 # send patches 3000 and 3001
245 245 hg email -r 3000:3005 # send patches 3000 through 3005
246 246 hg email 3000 # send patch 3000 (deprecated)
247 247
248 248 hg email -o # send all patches not in default
249 249 hg email -o DEST # send all patches not in DEST
250 250 hg email -o -r 3000 # send all ancestors of 3000 not in default
251 251 hg email -o -r 3000 DEST # send all ancestors of 3000 not in DEST
252 252
253 253 hg email -b # send bundle of all patches not in default
254 254 hg email -b DEST # send bundle of all patches not in DEST
255 255 hg email -b -r 3000 # bundle of all ancestors of 3000 not in default
256 256 hg email -b -r 3000 DEST # bundle of all ancestors of 3000 not in DEST
257 257
258 258 hg email -o -m mbox && # generate an mbox file...
259 259 mutt -R -f mbox # ... and view it with mutt
260 260 hg email -o -m mbox && # generate an mbox file ...
261 261 formail -s sendmail \\ # ... and use formail to send from the mbox
262 262 -bm -t < mbox # ... using sendmail
263 263
264 264 Before using this command, you will need to enable email in your
265 265 hgrc. See the [email] section in hgrc(5) for details.
266 266 '''
267 267
268 268 _charsets = mail._charsets(ui)
269 269
270 270 bundle = opts.get('bundle')
271 271 date = opts.get('date')
272 272 mbox = opts.get('mbox')
273 273 outgoing = opts.get('outgoing')
274 274 rev = opts.get('rev')
275 275 # internal option used by pbranches
276 276 patches = opts.get('patches')
277 277
278 278 def getoutgoing(dest, revs):
279 279 '''Return the revisions present locally but not in dest'''
280 280 url = ui.expandpath(dest or 'default-push', dest or 'default')
281 281 url = hg.parseurl(url)[0]
282 282 ui.status(_('comparing with %s\n') % util.hidepassword(url))
283 283
284 284 revs = [r for r in scmutil.revrange(repo, revs) if r >= 0]
285 285 if not revs:
286 286 revs = [len(repo) - 1]
287 287 revs = repo.revs('outgoing(%s) and ::%ld', dest or '', revs)
288 288 if not revs:
289 289 ui.status(_("no changes found\n"))
290 290 return []
291 291 return [str(r) for r in revs]
292 292
293 293 def getpatches(revs):
294 294 for r in scmutil.revrange(repo, revs):
295 295 output = cStringIO.StringIO()
296 296 cmdutil.export(repo, [r], fp=output,
297 297 opts=patch.diffopts(ui, opts))
298 298 yield output.getvalue().split('\n')
299 299
300 300 def getbundle(dest):
301 301 tmpdir = tempfile.mkdtemp(prefix='hg-email-bundle-')
302 302 tmpfn = os.path.join(tmpdir, 'bundle')
303 303 try:
304 304 commands.bundle(ui, repo, tmpfn, dest, **opts)
305 305 fp = open(tmpfn, 'rb')
306 306 data = fp.read()
307 307 fp.close()
308 308 return data
309 309 finally:
310 310 try:
311 311 os.unlink(tmpfn)
312 312 except OSError:
313 313 pass
314 314 os.rmdir(tmpdir)
315 315
316 316 if not (opts.get('test') or mbox):
317 317 # really sending
318 318 mail.validateconfig(ui)
319 319
320 320 if not (revs or rev or outgoing or bundle or patches):
321 321 raise util.Abort(_('specify at least one changeset with -r or -o'))
322 322
323 323 if outgoing and bundle:
324 324 raise util.Abort(_("--outgoing mode always on with --bundle;"
325 325 " do not re-specify --outgoing"))
326 326
327 327 if outgoing or bundle:
328 328 if len(revs) > 1:
329 329 raise util.Abort(_("too many destinations"))
330 330 dest = revs and revs[0] or None
331 331 revs = []
332 332
333 333 if rev:
334 334 if revs:
335 335 raise util.Abort(_('use only one form to specify the revision'))
336 336 revs = rev
337 337
338 338 if outgoing:
339 339 revs = getoutgoing(dest, rev)
340 340 if bundle:
341 341 opts['revs'] = revs
342 342
343 343 # start
344 344 if date:
345 345 start_time = util.parsedate(date)
346 346 else:
347 347 start_time = util.makedate()
348 348
349 349 def genmsgid(id):
350 350 return '<%s.%s@%s>' % (id[:20], int(start_time[0]), socket.getfqdn())
351 351
352 352 def getdescription(body, sender):
353 353 if opts.get('desc'):
354 354 body = open(opts.get('desc')).read()
355 355 else:
356 356 ui.write(_('\nWrite the introductory message for the '
357 357 'patch series.\n\n'))
358 358 body = ui.edit(body, sender)
359 359 # Save series description in case sendmail fails
360 360 msgfile = repo.opener('last-email.txt', 'wb')
361 361 msgfile.write(body)
362 362 msgfile.close()
363 363 return body
364 364
365 365 def getpatchmsgs(patches, patchnames=None):
366 366 msgs = []
367 367
368 368 ui.write(_('this patch series consists of %d patches.\n\n')
369 369 % len(patches))
370 370
371 371 # build the intro message, or skip it if the user declines
372 372 if introwanted(opts, len(patches)):
373 373 msg = makeintro(patches)
374 374 if msg:
375 375 msgs.append(msg)
376 376
377 377 # are we going to send more than one message?
378 378 numbered = len(msgs) + len(patches) > 1
379 379
380 380 # now generate the actual patch messages
381 381 name = None
382 382 for i, p in enumerate(patches):
383 383 if patchnames:
384 384 name = patchnames[i]
385 385 msg = makepatch(ui, repo, p, opts, _charsets, i + 1,
386 386 len(patches), numbered, name)
387 387 msgs.append(msg)
388 388
389 389 return msgs
390 390
391 391 def makeintro(patches):
392 392 tlen = len(str(len(patches)))
393 393
394 394 flag = opts.get('flag') or ''
395 395 if flag:
396 396 flag = ' ' + ' '.join(flag)
397 397 prefix = '[PATCH %0*d of %d%s]' % (tlen, 0, len(patches), flag)
398 398
399 399 subj = (opts.get('subject') or
400 400 prompt(ui, '(optional) Subject: ', rest=prefix, default=''))
401 401 if not subj:
402 402 return None # skip intro if the user doesn't bother
403 403
404 404 subj = prefix + ' ' + subj
405 405
406 406 body = ''
407 407 if opts.get('diffstat'):
408 408 # generate a cumulative diffstat of the whole patch series
409 409 diffstat = patch.diffstat(sum(patches, []))
410 410 body = '\n' + diffstat
411 411 else:
412 412 diffstat = None
413 413
414 414 body = getdescription(body, sender)
415 415 msg = mail.mimeencode(ui, body, _charsets, opts.get('test'))
416 416 msg['Subject'] = mail.headencode(ui, subj, _charsets,
417 417 opts.get('test'))
418 418 return (msg, subj, diffstat)
419 419
420 420 def getbundlemsgs(bundle):
421 421 subj = (opts.get('subject')
422 422 or prompt(ui, 'Subject:', 'A bundle for your repository'))
423 423
424 424 body = getdescription('', sender)
425 425 msg = email.MIMEMultipart.MIMEMultipart()
426 426 if body:
427 427 msg.attach(mail.mimeencode(ui, body, _charsets, opts.get('test')))
428 428 datapart = email.MIMEBase.MIMEBase('application', 'x-mercurial-bundle')
429 429 datapart.set_payload(bundle)
430 430 bundlename = '%s.hg' % opts.get('bundlename', 'bundle')
431 431 datapart.add_header('Content-Disposition', 'attachment',
432 432 filename=bundlename)
433 433 email.Encoders.encode_base64(datapart)
434 434 msg.attach(datapart)
435 435 msg['Subject'] = mail.headencode(ui, subj, _charsets, opts.get('test'))
436 436 return [(msg, subj, None)]
437 437
438 438 sender = (opts.get('from') or ui.config('email', 'from') or
439 439 ui.config('patchbomb', 'from') or
440 440 prompt(ui, 'From', ui.username()))
441 441
442 442 if patches:
443 443 msgs = getpatchmsgs(patches, opts.get('patchnames'))
444 444 elif bundle:
445 445 msgs = getbundlemsgs(getbundle(dest))
446 446 else:
447 447 msgs = getpatchmsgs(list(getpatches(revs)))
448 448
449 449 showaddrs = []
450 450
451 451 def getaddrs(header, ask=False, default=None):
452 452 configkey = header.lower()
453 453 opt = header.replace('-', '_').lower()
454 454 addrs = opts.get(opt)
455 455 if addrs:
456 456 showaddrs.append('%s: %s' % (header, ', '.join(addrs)))
457 457 return mail.addrlistencode(ui, addrs, _charsets, opts.get('test'))
458 458
459 459 # not on the command line: fall back to config and then maybe ask
460 460 addr = (ui.config('email', configkey) or
461 461 ui.config('patchbomb', configkey) or
462 462 '')
463 463 if not addr and ask:
464 464 addr = prompt(ui, header, default=default)
465 465 if addr:
466 466 showaddrs.append('%s: %s' % (header, addr))
467 467 return mail.addrlistencode(ui, [addr], _charsets, opts.get('test'))
468 468 else:
469 469 return default
470 470
471 471 to = getaddrs('To', ask=True)
472 472 if not to:
473 473 # we can get here in non-interactive mode
474 474 raise util.Abort(_('no recipient addresses provided'))
475 475 cc = getaddrs('Cc', ask=True, default='') or []
476 476 bcc = getaddrs('Bcc') or []
477 477 replyto = getaddrs('Reply-To')
478 478
479 479 if opts.get('diffstat') or opts.get('confirm'):
480 480 ui.write(_('\nFinal summary:\n\n'))
481 481 ui.write(('From: %s\n' % sender))
482 482 for addr in showaddrs:
483 483 ui.write('%s\n' % addr)
484 484 for m, subj, ds in msgs:
485 485 ui.write(('Subject: %s\n' % subj))
486 486 if ds:
487 487 ui.write(ds)
488 488 ui.write('\n')
489 489 if ui.promptchoice(_('are you sure you want to send (yn)?'
490 490 '$$ &Yes $$ &No')):
491 491 raise util.Abort(_('patchbomb canceled'))
492 492
493 493 ui.write('\n')
494 494
495 495 parent = opts.get('in_reply_to') or None
496 496 # angle brackets may be omitted, they're not semantically part of the msg-id
497 497 if parent is not None:
498 498 if not parent.startswith('<'):
499 499 parent = '<' + parent
500 500 if not parent.endswith('>'):
501 501 parent += '>'
502 502
503 503 sender_addr = email.Utils.parseaddr(sender)[1]
504 504 sender = mail.addressencode(ui, sender, _charsets, opts.get('test'))
505 505 sendmail = None
506 506 for i, (m, subj, ds) in enumerate(msgs):
507 507 try:
508 508 m['Message-Id'] = genmsgid(m['X-Mercurial-Node'])
509 509 except TypeError:
510 510 m['Message-Id'] = genmsgid('patchbomb')
511 511 if parent:
512 512 m['In-Reply-To'] = parent
513 513 m['References'] = parent
514 514 if not parent or 'X-Mercurial-Node' not in m:
515 515 parent = m['Message-Id']
516 516
517 517 m['User-Agent'] = 'Mercurial-patchbomb/%s' % util.version()
518 518 m['Date'] = email.Utils.formatdate(start_time[0], localtime=True)
519 519
520 520 start_time = (start_time[0] + 1, start_time[1])
521 521 m['From'] = sender
522 522 m['To'] = ', '.join(to)
523 523 if cc:
524 524 m['Cc'] = ', '.join(cc)
525 525 if bcc:
526 526 m['Bcc'] = ', '.join(bcc)
527 527 if replyto:
528 528 m['Reply-To'] = ', '.join(replyto)
529 529 if opts.get('test'):
530 530 ui.status(_('displaying '), subj, ' ...\n')
531 531 ui.flush()
532 532 if 'PAGER' in os.environ and not ui.plain():
533 533 fp = util.popen(os.environ['PAGER'], 'w')
534 534 else:
535 535 fp = ui
536 536 generator = email.Generator.Generator(fp, mangle_from_=False)
537 537 try:
538 538 generator.flatten(m, 0)
539 539 fp.write('\n')
540 540 except IOError, inst:
541 541 if inst.errno != errno.EPIPE:
542 542 raise
543 543 if fp is not ui:
544 544 fp.close()
545 545 else:
546 546 if not sendmail:
547 547 verifycert = ui.config('smtp', 'verifycert')
548 548 if opts.get('insecure'):
549 ui.setconfig('smtp', 'verifycert', 'loose')
549 ui.setconfig('smtp', 'verifycert', 'loose', 'patchbomb')
550 550 try:
551 551 sendmail = mail.connect(ui, mbox=mbox)
552 552 finally:
553 ui.setconfig('smtp', 'verifycert', verifycert)
553 ui.setconfig('smtp', 'verifycert', verifycert, 'patchbomb')
554 554 ui.status(_('sending '), subj, ' ...\n')
555 555 ui.progress(_('sending'), i, item=subj, total=len(msgs))
556 556 if not mbox:
557 557 # Exim does not remove the Bcc field
558 558 del m['Bcc']
559 559 fp = cStringIO.StringIO()
560 560 generator = email.Generator.Generator(fp, mangle_from_=False)
561 561 generator.flatten(m, 0)
562 562 sendmail(sender_addr, to + bcc + cc, fp.getvalue())
563 563
564 564 ui.progress(_('writing'), None)
565 565 ui.progress(_('sending'), None)
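The Message-Id / In-Reply-To / References handling in the send loop above can be sketched with the standard library (Python 3's email.message is shown here, though the extension itself targets Python 2, and the message IDs are made up):

```python
# Sketch of the patchbomb threading scheme: every message after the
# first replies to the series' first message, so threaded mail readers
# show the whole series as one conversation.
from email.message import EmailMessage

def thread_series(subjects):
    msgs = []
    parent = None
    for i, subj in enumerate(subjects):
        m = EmailMessage()
        m['Subject'] = subj
        m['Message-Id'] = '<patch-%d@example.invalid>' % i
        if parent:
            m['In-Reply-To'] = parent
            m['References'] = parent
        else:
            parent = m['Message-Id']  # first message anchors the thread
        msgs.append(m)
    return msgs

series = thread_series(['[PATCH 0 of 2] intro',
                        '[PATCH 1 of 2] first',
                        '[PATCH 2 of 2] second'])
```

This is a simplification: the real loop also falls back to re-anchoring `parent` when a message lacks an X-Mercurial-Node header.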
@@ -1,955 +1,956 b''
1 1 # rebase.py - rebasing feature for mercurial
2 2 #
3 3 # Copyright 2008 Stefano Tortarolo <stefano.tortarolo at gmail dot com>
4 4 #
5 5 # This software may be used and distributed according to the terms of the
6 6 # GNU General Public License version 2 or any later version.
7 7
8 8 '''command to move sets of revisions to a different ancestor
9 9
10 10 This extension lets you rebase changesets in an existing Mercurial
11 11 repository.
12 12
13 13 For more information:
14 14 http://mercurial.selenic.com/wiki/RebaseExtension
15 15 '''
16 16
17 17 from mercurial import hg, util, repair, merge, cmdutil, commands, bookmarks
18 18 from mercurial import extensions, patch, scmutil, phases, obsolete, error
19 19 from mercurial.commands import templateopts
20 20 from mercurial.node import nullrev
21 21 from mercurial.lock import release
22 22 from mercurial.i18n import _
23 23 import os, errno
24 24
25 25 nullmerge = -2
26 26 revignored = -3
27 27
28 28 cmdtable = {}
29 29 command = cmdutil.command(cmdtable)
30 30 testedwith = 'internal'
31 31
32 32 def _savegraft(ctx, extra):
33 33 s = ctx.extra().get('source', None)
34 34 if s is not None:
35 35 extra['source'] = s
36 36
37 37 def _savebranch(ctx, extra):
38 38 extra['branch'] = ctx.branch()
39 39
40 40 def _makeextrafn(copiers):
41 41 """make an extrafn out of the given copy-functions.
42 42
43 43 A copy function takes a context and an extra dict, and mutates the
44 44 extra dict as needed based on the given context.
45 45 """
46 46 def extrafn(ctx, extra):
47 47 for c in copiers:
48 48 c(ctx, extra)
49 49 return extrafn
50 50
51 51 @command('rebase',
52 52 [('s', 'source', '',
53 53 _('rebase from the specified changeset'), _('REV')),
54 54 ('b', 'base', '',
55 55 _('rebase from the base of the specified changeset '
56 56 '(up to greatest common ancestor of base and dest)'),
57 57 _('REV')),
58 58 ('r', 'rev', [],
59 59 _('rebase these revisions'),
60 60 _('REV')),
61 61 ('d', 'dest', '',
62 62 _('rebase onto the specified changeset'), _('REV')),
63 63 ('', 'collapse', False, _('collapse the rebased changesets')),
64 64 ('m', 'message', '',
65 65 _('use text as collapse commit message'), _('TEXT')),
66 66 ('e', 'edit', False, _('invoke editor on commit messages')),
67 67 ('l', 'logfile', '',
68 68 _('read collapse commit message from file'), _('FILE')),
69 69 ('', 'keep', False, _('keep original changesets')),
70 70 ('', 'keepbranches', False, _('keep original branch names')),
71 71 ('D', 'detach', False, _('(DEPRECATED)')),
72 72 ('t', 'tool', '', _('specify merge tool')),
73 73 ('c', 'continue', False, _('continue an interrupted rebase')),
74 74 ('a', 'abort', False, _('abort an interrupted rebase'))] +
75 75 templateopts,
76 76 _('[-s REV | -b REV] [-d REV] [OPTION]'))
77 77 def rebase(ui, repo, **opts):
78 78 """move changeset (and descendants) to a different branch
79 79
80 80 Rebase uses repeated merging to graft changesets from one part of
81 81 history (the source) onto another (the destination). This can be
82 82 useful for linearizing *local* changes relative to a master
83 83 development tree.
84 84
85 85 You should not rebase changesets that have already been shared
86 86 with others. Doing so will force everybody else to perform the
87 87 same rebase or they will end up with duplicated changesets after
88 88 pulling in your rebased changesets.
89 89
90 90 In its default configuration, Mercurial will prevent you from
91 91 rebasing published changes. See :hg:`help phases` for details.
92 92
93 93 If you don't specify a destination changeset (``-d/--dest``),
94 94 rebase uses the current branch tip as the destination. (The
95 95 destination changeset is not modified by rebasing, but new
96 96 changesets are added as its descendants.)
97 97
98 98 You can specify which changesets to rebase in two ways: as a
99 99 "source" changeset or as a "base" changeset. Both are shorthand
100 100 for a topologically related set of changesets (the "source
101 101 branch"). If you specify source (``-s/--source``), rebase will
102 102 rebase that changeset and all of its descendants onto dest. If you
103 103 specify base (``-b/--base``), rebase will select ancestors of base
104 104 back to but not including the common ancestor with dest. Thus,
105 105 ``-b`` is less precise but more convenient than ``-s``: you can
106 106 specify any changeset in the source branch, and rebase will select
107 107 the whole branch. If you specify neither ``-s`` nor ``-b``, rebase
108 108 uses the parent of the working directory as the base.
109 109
110 110 For advanced usage, a third way is available through the ``--rev``
111 111 option. It allows you to specify an arbitrary set of changesets to
112 112 rebase. Descendants of revs you specify with this option are not
113 113 automatically included in the rebase.
114 114
115 115 By default, rebase recreates the changesets in the source branch
116 116 as descendants of dest and then destroys the originals. Use
117 117 ``--keep`` to preserve the original source changesets. Some
118 118 changesets in the source branch (e.g. merges from the destination
119 119 branch) may be dropped if they no longer contribute any change.
120 120
121 121 One result of the rules for selecting the destination changeset
122 122 and source branch is that, unlike ``merge``, rebase will do
123 123 nothing if you are at the branch tip of a named branch
124 124 with two heads. You need to explicitly specify source and/or
125 125 destination (or ``update`` to the other head, if it's the head of
126 126 the intended source branch).
127 127
128 128 If a rebase is interrupted to manually resolve a merge, it can be
129 129 continued with --continue/-c or aborted with --abort/-a.
130 130
131 131 Returns 0 on success, 1 if nothing to rebase or there are
132 132 unresolved conflicts.
133 133 """
134 134 originalwd = target = None
135 135 activebookmark = None
136 136 external = nullrev
137 137 state = {}
138 138 skipped = set()
139 139 targetancestors = set()
140 140
141 141 editor = None
142 142 if opts.get('edit'):
143 143 editor = cmdutil.commitforceeditor
144 144
145 145 lock = wlock = None
146 146 try:
147 147 wlock = repo.wlock()
148 148 lock = repo.lock()
149 149
150 150 # Validate input and define rebasing points
151 151 destf = opts.get('dest', None)
152 152 srcf = opts.get('source', None)
153 153 basef = opts.get('base', None)
154 154 revf = opts.get('rev', [])
155 155 contf = opts.get('continue')
156 156 abortf = opts.get('abort')
157 157 collapsef = opts.get('collapse', False)
158 158 collapsemsg = cmdutil.logmessage(ui, opts)
159 159 e = opts.get('extrafn') # internal, used by e.g. hgsubversion
160 160 extrafns = [_savegraft]
161 161 if e:
162 162 extrafns = [e]
163 163 keepf = opts.get('keep', False)
164 164 keepbranchesf = opts.get('keepbranches', False)
165 165 # keepopen is not meant for use on the command line, but by
166 166 # other extensions
167 167 keepopen = opts.get('keepopen', False)
168 168
169 169 if collapsemsg and not collapsef:
170 170 raise util.Abort(
171 171 _('message can only be specified with collapse'))
172 172
173 173 if contf or abortf:
174 174 if contf and abortf:
175 175 raise util.Abort(_('cannot use both abort and continue'))
176 176 if collapsef:
177 177 raise util.Abort(
178 178 _('cannot use collapse with continue or abort'))
179 179 if srcf or basef or destf:
180 180 raise util.Abort(
181 181 _('abort and continue do not allow specifying revisions'))
182 182 if opts.get('tool', False):
183 183 ui.warn(_('tool option will be ignored\n'))
184 184
185 185 try:
186 186 (originalwd, target, state, skipped, collapsef, keepf,
187 187 keepbranchesf, external, activebookmark) = restorestatus(repo)
188 188 except error.RepoLookupError:
189 189 if abortf:
190 190 clearstatus(repo)
191 191 repo.ui.warn(_('rebase aborted (no revision is removed,'
192 192 ' only broken state is cleared)\n'))
193 193 return 0
194 194 else:
195 195 msg = _('cannot continue inconsistent rebase')
196 196 hint = _('use "hg rebase --abort" to clear broken state')
197 197 raise util.Abort(msg, hint=hint)
198 198 if abortf:
199 199 return abort(repo, originalwd, target, state)
200 200 else:
201 201 if srcf and basef:
202 202 raise util.Abort(_('cannot specify both a '
203 203 'source and a base'))
204 204 if revf and basef:
205 205 raise util.Abort(_('cannot specify both a '
206 206 'revision and a base'))
207 207 if revf and srcf:
208 208 raise util.Abort(_('cannot specify both a '
209 209 'revision and a source'))
210 210
211 211 cmdutil.checkunfinished(repo)
212 212 cmdutil.bailifchanged(repo)
213 213
214 214 if not destf:
215 215 # Destination defaults to the latest revision in the
216 216 # current branch
217 217 branch = repo[None].branch()
218 218 dest = repo[branch]
219 219 else:
220 220 dest = scmutil.revsingle(repo, destf)
221 221
222 222 if revf:
223 223 rebaseset = scmutil.revrange(repo, revf)
224 224 if not rebaseset:
225 225 raise util.Abort(_('empty "rev" revision set - '
226 226 'nothing to rebase'))
227 227 elif srcf:
228 228 src = scmutil.revrange(repo, [srcf])
229 229 if not src:
230 230 raise util.Abort(_('empty "source" revision set - '
231 231 'nothing to rebase'))
232 232 rebaseset = repo.revs('(%ld)::', src)
233 233 assert rebaseset
234 234 else:
235 235 base = scmutil.revrange(repo, [basef or '.'])
236 236 if not base:
237 237 raise util.Abort(_('empty "base" revision set - '
238 238 "can't compute rebase set"))
239 239 rebaseset = repo.revs(
240 240 '(children(ancestor(%ld, %d)) and ::(%ld))::',
241 241 base, dest, base)
242 242 if not rebaseset:
243 243 if base == [dest.rev()]:
244 244 if basef:
245 245 ui.status(_('nothing to rebase - %s is both "base"'
246 246 ' and destination\n') % dest)
247 247 else:
248 248 ui.status(_('nothing to rebase - working directory '
249 249 'parent is also destination\n'))
250 250 elif not repo.revs('%ld - ::%d', base, dest):
251 251 if basef:
252 252 ui.status(_('nothing to rebase - "base" %s is '
253 253 'already an ancestor of destination '
254 254 '%s\n') %
255 255 ('+'.join(str(repo[r]) for r in base),
256 256 dest))
257 257 else:
258 258 ui.status(_('nothing to rebase - working '
259 259 'directory parent is already an '
260 260 'ancestor of destination %s\n') % dest)
261 261 else: # can it happen?
262 262 ui.status(_('nothing to rebase from %s to %s\n') %
263 263 ('+'.join(str(repo[r]) for r in base), dest))
264 264 return 1
265 265
266 266 if (not (keepf or obsolete._enabled)
267 267 and repo.revs('first(children(%ld) - %ld)',
268 268 rebaseset, rebaseset)):
269 269 raise util.Abort(
270 270 _("can't remove original changesets with"
271 271 " unrebased descendants"),
272 272 hint=_('use --keep to keep original changesets'))
273 273
274 274 result = buildstate(repo, dest, rebaseset, collapsef)
275 275 if not result:
276 276 # Empty state built, nothing to rebase
277 277 ui.status(_('nothing to rebase\n'))
278 278 return 1
279 279
280 280 root = min(rebaseset)
281 281 if not keepf and not repo[root].mutable():
282 282 raise util.Abort(_("can't rebase immutable changeset %s")
283 283 % repo[root],
284 284 hint=_('see hg help phases for details'))
285 285
286 286 originalwd, target, state = result
287 287 if collapsef:
288 288 targetancestors = repo.changelog.ancestors([target],
289 289 inclusive=True)
290 290 external = externalparent(repo, state, targetancestors)
291 291
292 292 if keepbranchesf:
293 294 # insert _savebranch at the start of extrafns so that if
294 295 # there's a user-provided extrafn it can clobber the branch if
295 296 # desired
296 296 extrafns.insert(0, _savebranch)
297 297 if collapsef:
298 298 branches = set()
299 299 for rev in state:
300 300 branches.add(repo[rev].branch())
301 301 if len(branches) > 1:
302 302 raise util.Abort(_('cannot collapse multiple named '
303 303 'branches'))
304 304
305 305 # Rebase
306 306 if not targetancestors:
307 307 targetancestors = repo.changelog.ancestors([target], inclusive=True)
308 308
309 309 # Keep track of the current bookmarks in order to reset them later
310 310 currentbookmarks = repo._bookmarks.copy()
311 311 activebookmark = activebookmark or repo._bookmarkcurrent
312 312 if activebookmark:
313 313 bookmarks.unsetcurrent(repo)
314 314
315 315 extrafn = _makeextrafn(extrafns)
316 316
317 317 sortedstate = sorted(state)
318 318 total = len(sortedstate)
319 319 pos = 0
320 320 for rev in sortedstate:
321 321 pos += 1
322 322 if state[rev] == -1:
323 323 ui.progress(_("rebasing"), pos, ("%d:%s" % (rev, repo[rev])),
324 324 _('changesets'), total)
325 325 p1, p2 = defineparents(repo, rev, target, state,
326 326 targetancestors)
327 327 storestatus(repo, originalwd, target, state, collapsef, keepf,
328 328 keepbranchesf, external, activebookmark)
329 329 if len(repo.parents()) == 2:
330 330 repo.ui.debug('resuming interrupted rebase\n')
331 331 else:
332 332 try:
333 ui.setconfig('ui', 'forcemerge', opts.get('tool', ''))
333 ui.setconfig('ui', 'forcemerge', opts.get('tool', ''),
334 'rebase')
334 335 stats = rebasenode(repo, rev, p1, state, collapsef)
335 336 if stats and stats[3] > 0:
336 337 raise error.InterventionRequired(
337 338 _('unresolved conflicts (see hg '
338 339 'resolve, then hg rebase --continue)'))
339 340 finally:
340 ui.setconfig('ui', 'forcemerge', '')
341 ui.setconfig('ui', 'forcemerge', '', 'rebase')
341 342 cmdutil.duplicatecopies(repo, rev, target)
342 343 if not collapsef:
343 344 newrev = concludenode(repo, rev, p1, p2, extrafn=extrafn,
344 345 editor=editor)
345 346 else:
346 347 # Skip commit if we are collapsing
347 348 repo.setparents(repo[p1].node())
348 349 newrev = None
349 350 # Update the state
350 351 if newrev is not None:
351 352 state[rev] = repo[newrev].rev()
352 353 else:
353 354 if not collapsef:
354 355 ui.note(_('no changes, revision %d skipped\n') % rev)
355 356 ui.debug('next revision set to %s\n' % p1)
356 357 skipped.add(rev)
357 358 state[rev] = p1
358 359
359 360 ui.progress(_('rebasing'), None)
360 361 ui.note(_('rebase merging completed\n'))
361 362
362 363 if collapsef and not keepopen:
363 364 p1, p2 = defineparents(repo, min(state), target,
364 365 state, targetancestors)
365 366 if collapsemsg:
366 367 commitmsg = collapsemsg
367 368 else:
368 369 commitmsg = 'Collapsed revision'
369 370 for rebased in state:
370 371 if rebased not in skipped and state[rebased] > nullmerge:
371 372 commitmsg += '\n* %s' % repo[rebased].description()
372 373 editor = cmdutil.commitforceeditor
373 374 newrev = concludenode(repo, rev, p1, external, commitmsg=commitmsg,
374 375 extrafn=extrafn, editor=editor)
375 376 for oldrev in state.iterkeys():
376 377 if state[oldrev] > nullmerge:
377 378 state[oldrev] = newrev
378 379
379 380 if 'qtip' in repo.tags():
380 381 updatemq(repo, state, skipped, **opts)
381 382
382 383 if currentbookmarks:
383 384 # Nodeids are needed to reset bookmarks
384 385 nstate = {}
385 386 for k, v in state.iteritems():
386 387 if v > nullmerge:
387 388 nstate[repo[k].node()] = repo[v].node()
388 389 # XXX this is the same as dest.node() for the non-continue path --
389 390 # this should probably be cleaned up
390 391 targetnode = repo[target].node()
391 392
392 393 # restore original working directory
393 394 # (we do this before stripping)
394 395 newwd = state.get(originalwd, originalwd)
395 396 if newwd not in [c.rev() for c in repo[None].parents()]:
396 397 ui.note(_("update back to initial working directory parent\n"))
397 398 hg.updaterepo(repo, newwd, False)
398 399
399 400 if not keepf:
400 401 collapsedas = None
401 402 if collapsef:
402 403 collapsedas = newrev
403 404 clearrebased(ui, repo, state, skipped, collapsedas)
404 405
405 406 if currentbookmarks:
406 407 updatebookmarks(repo, targetnode, nstate, currentbookmarks)
407 408 if activebookmark not in repo._bookmarks:
408 409 # active bookmark was divergent one and has been deleted
409 410 activebookmark = None
410 411
411 412 clearstatus(repo)
412 413 ui.note(_("rebase completed\n"))
413 414 util.unlinkpath(repo.sjoin('undo'), ignoremissing=True)
414 415 if skipped:
415 416 ui.note(_("%d revisions have been skipped\n") % len(skipped))
416 417
417 418 if (activebookmark and
418 419 repo['.'].node() == repo._bookmarks[activebookmark]):
419 420 bookmarks.setcurrent(repo, activebookmark)
420 421
421 422 finally:
422 423 release(lock, wlock)
423 424
424 425 def externalparent(repo, state, targetancestors):
425 426 """Return the revision that should be used as the second parent
426 427 when the revisions in state are collapsed on top of targetancestors.
427 428 Abort if there is more than one parent.
428 429 """
429 430 parents = set()
430 431 source = min(state)
431 432 for rev in state:
432 433 if rev == source:
433 434 continue
434 435 for p in repo[rev].parents():
435 436 if (p.rev() not in state
436 437 and p.rev() not in targetancestors):
437 438 parents.add(p.rev())
438 439 if not parents:
439 440 return nullrev
440 441 if len(parents) == 1:
441 442 return parents.pop()
442 443 raise util.Abort(_('unable to collapse on top of %s, there is more '
443 444 'than one external parent: %s') %
444 445 (max(targetancestors),
445 446 ', '.join(str(p) for p in sorted(parents))))
446 447
447 448 def concludenode(repo, rev, p1, p2, commitmsg=None, editor=None, extrafn=None):
448 449 'Commit the changes and store useful information in extra'
449 450 try:
450 451 repo.setparents(repo[p1].node(), repo[p2].node())
451 452 ctx = repo[rev]
452 453 if commitmsg is None:
453 454 commitmsg = ctx.description()
454 455 extra = {'rebase_source': ctx.hex()}
455 456 if extrafn:
456 457 extrafn(ctx, extra)
457 458 # Commit might fail if unresolved files exist
458 459 newrev = repo.commit(text=commitmsg, user=ctx.user(),
459 460 date=ctx.date(), extra=extra, editor=editor)
460 461 repo.dirstate.setbranch(repo[newrev].branch())
461 462 targetphase = max(ctx.phase(), phases.draft)
462 463 # retractboundary doesn't overwrite upper phase inherited from parent
463 464 newnode = repo[newrev].node()
464 465 if newnode:
465 466 phases.retractboundary(repo, targetphase, [newnode])
466 467 return newrev
467 468 except util.Abort:
468 469 # Invalidate the previous setparents
469 470 repo.dirstate.invalidate()
470 471 raise
471 472
472 473 def rebasenode(repo, rev, p1, state, collapse):
473 474 'Rebase a single revision'
474 475 # Merge phase
475 476 # Update to target and merge it with local
476 477 if repo['.'].rev() != repo[p1].rev():
477 478 repo.ui.debug(" update to %d:%s\n" % (repo[p1].rev(), repo[p1]))
478 479 merge.update(repo, p1, False, True, False)
479 480 else:
480 481 repo.ui.debug(" already in target\n")
481 482 repo.dirstate.write()
482 483 repo.ui.debug(" merge against %d:%s\n" % (repo[rev].rev(), repo[rev]))
483 484 if repo[rev].rev() == repo[min(state)].rev():
484 485 # Case (1) initial changeset of a non-detaching rebase.
485 486 # Let the merge mechanism find the base itself.
486 487 base = None
487 488 elif not repo[rev].p2():
488 489 # Case (2) detaching the node with a single parent, use this parent
489 490 base = repo[rev].p1().node()
490 491 else:
491 492 # In case of merge, we need to pick the right parent as merge base.
492 493 #
493 494 # Imagine we have:
494 495 # - M: currently rebase revision in this step
495 496 # - A: one parent of M
496 497 # - B: second parent of M
497 498 # - D: destination of this merge step (p1 var)
498 499 #
499 500 # If we are rebasing on D, D is the successor of A or B. The right
500 501 # merge base is the one that D succeeds. We pretend it is B for the
501 502 # rest of this comment.
502 503 #
503 504 # If we pick B as the base, the merge involves:
504 505 # - changes from B to M (actual changeset payload)
505 506 # - changes from B to D (induced by rebase, as D is a rebased
506 507 # version of B)
507 508 # Which exactly represents the rebase operation.
508 509 #
509 510 # If we pick A as the base, the merge involves:
510 511 # - changes from A to M (actual changeset payload)
511 512 # - changes from A to D (which include changes between unrelated A and B
512 513 # plus changes induced by rebase)
513 514 # Which does not represent anything sensible and creates a lot of
514 515 # conflicts.
515 516 for p in repo[rev].parents():
516 517 if state.get(p.rev()) == repo[p1].rev():
517 518 base = p.node()
518 519 break
519 520 else: # fallback when base not found
520 521 base = None
521 522
522 523 # Raise because this function was called incorrectly (see issue 4106)
523 524 raise AssertionError('no base found to rebase on '
524 525 '(rebasenode called wrong)')
525 526 if base is not None:
526 527 repo.ui.debug(" detach base %d:%s\n" % (repo[base].rev(), repo[base]))
527 528 # When collapsing in-place, the parent is the common ancestor, so we
528 529 # have to allow merging with it.
529 530 return merge.update(repo, rev, True, True, False, base, collapse)
530 531
531 532 def nearestrebased(repo, rev, state):
532 533 """return the nearest ancestors of rev in the rebase result"""
533 534 rebased = [r for r in state if state[r] > nullmerge]
534 535 candidates = repo.revs('max(%ld and (::%d))', rebased, rev)
535 536 if candidates:
536 537 return state[candidates[0]]
537 538 else:
538 539 return None
539 540
540 541 def defineparents(repo, rev, target, state, targetancestors):
541 542 'Return the new parent relationship of the revision that will be rebased'
542 543 parents = repo[rev].parents()
543 544 p1 = p2 = nullrev
544 545
545 546 P1n = parents[0].rev()
546 547 if P1n in targetancestors:
547 548 p1 = target
548 549 elif P1n in state:
549 550 if state[P1n] == nullmerge:
550 551 p1 = target
551 552 elif state[P1n] == revignored:
552 553 p1 = nearestrebased(repo, P1n, state)
553 554 if p1 is None:
554 555 p1 = target
555 556 else:
556 557 p1 = state[P1n]
557 558 else: # P1n external
558 559 p1 = target
559 560 p2 = P1n
560 561
561 562 if len(parents) == 2 and parents[1].rev() not in targetancestors:
562 563 P2n = parents[1].rev()
563 564 # interesting second parent
564 565 if P2n in state:
565 566 if p1 == target: # P1n in targetancestors or external
566 567 p1 = state[P2n]
567 568 elif state[P2n] == revignored:
568 569 p2 = nearestrebased(repo, P2n, state)
569 570 if p2 is None:
570 571 # no ancestors rebased yet, detach
571 572 p2 = target
572 573 else:
573 574 p2 = state[P2n]
574 575 else: # P2n external
575 576 if p2 != nullrev: # P1n external too => rev is a merged revision
576 577 raise util.Abort(_('cannot use revision %d as base, result '
577 578 'would have 3 parents') % rev)
578 579 p2 = P2n
579 580 repo.ui.debug(" future parents are %d and %d\n" %
580 581 (repo[p1].rev(), repo[p2].rev()))
581 582 return p1, p2
582 583
583 584 def isagitpatch(repo, patchname):
584 585 'Return true if the given patch is in git format'
585 586 mqpatch = os.path.join(repo.mq.path, patchname)
586 587 for line in patch.linereader(file(mqpatch, 'rb')):
587 588 if line.startswith('diff --git'):
588 589 return True
589 590 return False
590 591
591 592 def updatemq(repo, state, skipped, **opts):
592 593 'Update rebased mq patches - finalize and then import them'
593 594 mqrebase = {}
594 595 mq = repo.mq
595 596 original_series = mq.fullseries[:]
596 597 skippedpatches = set()
597 598
598 599 for p in mq.applied:
599 600 rev = repo[p.node].rev()
600 601 if rev in state:
601 602 repo.ui.debug('revision %d is an mq patch (%s), finalize it.\n' %
602 603 (rev, p.name))
603 604 mqrebase[rev] = (p.name, isagitpatch(repo, p.name))
604 605 else:
605 606 # Applied but not rebased, not sure this should happen
606 607 skippedpatches.add(p.name)
607 608
608 609 if mqrebase:
609 610 mq.finish(repo, mqrebase.keys())
610 611
611 612 # We must start the import from the newest revision
612 613 for rev in sorted(mqrebase, reverse=True):
613 614 if rev not in skipped:
614 615 name, isgit = mqrebase[rev]
615 616 repo.ui.debug('import mq patch %d (%s)\n' % (state[rev], name))
616 617 mq.qimport(repo, (), patchname=name, git=isgit,
617 618 rev=[str(state[rev])])
618 619 else:
619 620 # Rebased and skipped
620 621 skippedpatches.add(mqrebase[rev][0])
621 622
622 623 # Patches were either applied and rebased and imported in
623 624 # order, applied and removed or unapplied. Discard the removed
624 625 # ones while preserving the original series order and guards.
625 626 newseries = [s for s in original_series
626 627 if mq.guard_re.split(s, 1)[0] not in skippedpatches]
627 628 mq.fullseries[:] = newseries
628 629 mq.seriesdirty = True
629 630 mq.savedirty()
630 631
631 632 def updatebookmarks(repo, targetnode, nstate, originalbookmarks):
632 633 'Move bookmarks to their correct changesets, and delete divergent ones'
633 634 marks = repo._bookmarks
634 635 for k, v in originalbookmarks.iteritems():
635 636 if v in nstate:
636 637 # update the bookmarks for revs that have moved
637 638 marks[k] = nstate[v]
638 639 bookmarks.deletedivergent(repo, [targetnode], k)
639 640
640 641 marks.write()
641 642
642 643 def storestatus(repo, originalwd, target, state, collapse, keep, keepbranches,
643 644 external, activebookmark):
644 645 'Store the current status to allow recovery'
645 646 f = repo.opener("rebasestate", "w")
646 647 f.write(repo[originalwd].hex() + '\n')
647 648 f.write(repo[target].hex() + '\n')
648 649 f.write(repo[external].hex() + '\n')
649 650 f.write('%d\n' % int(collapse))
650 651 f.write('%d\n' % int(keep))
651 652 f.write('%d\n' % int(keepbranches))
652 653 f.write('%s\n' % (activebookmark or ''))
653 654 for d, v in state.iteritems():
654 655 oldrev = repo[d].hex()
655 656 if v > nullmerge:
656 657 newrev = repo[v].hex()
657 658 else:
658 659 newrev = v
659 660 f.write("%s:%s\n" % (oldrev, newrev))
660 661 f.close()
661 662 repo.ui.debug('rebase status stored\n')
662 663
663 664 def clearstatus(repo):
664 665 'Remove the status files'
665 666 util.unlinkpath(repo.join("rebasestate"), ignoremissing=True)
666 667
667 668 def restorestatus(repo):
668 669 'Restore a previously stored status'
669 670 try:
670 671 keepbranches = None
671 672 target = None
672 673 collapse = False
673 674 external = nullrev
674 675 activebookmark = None
675 676 state = {}
676 677 f = repo.opener("rebasestate")
677 678 for i, l in enumerate(f.read().splitlines()):
678 679 if i == 0:
679 680 originalwd = repo[l].rev()
680 681 elif i == 1:
681 682 target = repo[l].rev()
682 683 elif i == 2:
683 684 external = repo[l].rev()
684 685 elif i == 3:
685 686 collapse = bool(int(l))
686 687 elif i == 4:
687 688 keep = bool(int(l))
688 689 elif i == 5:
689 690 keepbranches = bool(int(l))
690 691 elif i == 6 and not (len(l) == 81 and ':' in l):
691 692 # line 6 is a recent addition, so for backwards compatibility
692 693 # check that the line doesn't look like the oldrev:newrev lines
693 694 activebookmark = l
694 695 else:
695 696 oldrev, newrev = l.split(':')
696 697 if newrev in (str(nullmerge), str(revignored)):
697 698 state[repo[oldrev].rev()] = int(newrev)
698 699 else:
699 700 state[repo[oldrev].rev()] = repo[newrev].rev()
700 701
701 702 if keepbranches is None:
702 703 raise util.Abort(_('.hg/rebasestate is incomplete'))
703 704
704 705 skipped = set()
705 706 # recompute the set of skipped revs
706 707 if not collapse:
707 708 seen = set([target])
708 709 for old, new in sorted(state.items()):
709 710 if new != nullrev and new in seen:
710 711 skipped.add(old)
711 712 seen.add(new)
712 713 repo.ui.debug('computed skipped revs: %s\n' %
713 714 (' '.join(str(r) for r in sorted(skipped)) or None))
714 715 repo.ui.debug('rebase status resumed\n')
715 716 return (originalwd, target, state, skipped,
716 717 collapse, keep, keepbranches, external, activebookmark)
717 718 except IOError, err:
718 719 if err.errno != errno.ENOENT:
719 720 raise
720 721 raise util.Abort(_('no rebase in progress'))
721 722
722 723 def inrebase(repo, originalwd, state):
723 724 '''check whether the working dir is in an interrupted rebase'''
724 725 parents = [p.rev() for p in repo.parents()]
725 726 if originalwd in parents:
726 727 return True
727 728
728 729 for newrev in state.itervalues():
729 730 if newrev in parents:
730 731 return True
731 732
732 733 return False
733 734
734 735 def abort(repo, originalwd, target, state):
735 736 'Restore the repository to its original state'
736 737 dstates = [s for s in state.values() if s > nullrev]
737 738 immutable = [d for d in dstates if not repo[d].mutable()]
738 739 cleanup = True
739 740 if immutable:
740 741 repo.ui.warn(_("warning: can't clean up immutable changesets %s\n")
741 742 % ', '.join(str(repo[r]) for r in immutable),
742 743 hint=_('see hg help phases for details'))
743 744 cleanup = False
744 745
745 746 descendants = set()
746 747 if dstates:
747 748 descendants = set(repo.changelog.descendants(dstates))
748 749 if descendants - set(dstates):
749 750 repo.ui.warn(_("warning: new changesets detected on target branch, "
750 751 "can't strip\n"))
751 752 cleanup = False
752 753
753 754 if cleanup:
754 755 # Update away from the rebase if necessary
755 756 if inrebase(repo, originalwd, state):
756 757 merge.update(repo, repo[originalwd].rev(), False, True, False)
757 758
758 759 # Strip from the first rebased revision
759 760 rebased = filter(lambda x: x > -1 and x != target, state.values())
760 761 if rebased:
761 762 strippoints = [c.node() for c in repo.set('roots(%ld)', rebased)]
762 763 # no backup of rebased cset versions needed
763 764 repair.strip(repo.ui, repo, strippoints)
764 765
765 766 clearstatus(repo)
766 767 repo.ui.warn(_('rebase aborted\n'))
767 768 return 0
768 769
769 770 def buildstate(repo, dest, rebaseset, collapse):
770 771 '''Define which revisions are going to be rebased and where
771 772
772 773 repo: repo
773 774 dest: context
774 775 rebaseset: set of rev
775 776 '''
776 777
777 778 # This check isn't strictly necessary, since mq detects commits over an
778 779 # applied patch. But it prevents messing up the working directory when
779 780 # a partially completed rebase is blocked by mq.
780 781 if 'qtip' in repo.tags() and (dest.node() in
781 782 [s.node for s in repo.mq.applied]):
782 783 raise util.Abort(_('cannot rebase onto an applied mq patch'))
783 784
784 785 roots = list(repo.set('roots(%ld)', rebaseset))
785 786 if not roots:
786 787 raise util.Abort(_('no matching revisions'))
787 788 roots.sort()
788 789 state = {}
789 790 detachset = set()
790 791 for root in roots:
791 792 commonbase = root.ancestor(dest)
792 793 if commonbase == root:
793 794 raise util.Abort(_('source is ancestor of destination'))
794 795 if commonbase == dest:
795 796 samebranch = root.branch() == dest.branch()
796 797 if not collapse and samebranch and root in dest.children():
797 798 repo.ui.debug('source is a child of destination\n')
798 799 return None
799 800
800 801 repo.ui.debug('rebase onto %d starting from %s\n' % (dest, root))
801 802 state.update(dict.fromkeys(rebaseset, nullrev))
802 803 # Rebase tries to turn <dest> into a parent of <root> while
803 804 # preserving the number of parents of rebased changesets:
804 805 #
805 806 # - A changeset with a single parent will always be rebased as a
806 807 # changeset with a single parent.
807 808 #
808 809 # - A merge will be rebased as a merge unless its parents are both
809 810 # ancestors of <dest> or are themselves in the rebased set and
810 811 # pruned while rebased.
811 812 #
812 813 # If one parent of <root> is an ancestor of <dest>, the rebased
813 814 # version of this parent will be <dest>. This is always true with
814 815 # the --base option.
815 816 #
816 817 # Otherwise, we need to *replace* the original parents with
817 818 # <dest>. This "detaches" the rebased set from its former location
818 819 # and rebases it onto <dest>. Changes introduced by ancestors of
819 820 # <root> not common with <dest> (the detachset, marked as
820 821 # nullmerge) are "removed" from the rebased changesets.
821 822 #
822 823 # - If <root> has a single parent, set it to <dest>.
823 824 #
824 825 # - If <root> is a merge, we cannot decide which parent to
825 826 # replace, the rebase operation is not clearly defined.
826 827 #
827 828 # The table below sums up this behavior:
828 829 #
829 830 # +------------------+----------------------+-------------------------+
830 831 # | | one parent | merge |
831 832 # +------------------+----------------------+-------------------------+
832 833 # | parent in | new parent is <dest> | parents in ::<dest> are |
833 834 # | ::<dest> | | remapped to <dest> |
834 835 # +------------------+----------------------+-------------------------+
835 836 # | unrelated source | new parent is <dest> | ambiguous, abort |
836 837 # +------------------+----------------------+-------------------------+
837 838 #
838 839 # The actual abort is handled by `defineparents`
839 840 if len(root.parents()) <= 1:
840 841 # ancestors of <root> not ancestors of <dest>
841 842 detachset.update(repo.changelog.findmissingrevs([commonbase.rev()],
842 843 [root.rev()]))
843 844 for r in detachset:
844 845 if r not in state:
845 846 state[r] = nullmerge
846 847 if len(roots) > 1:
847 848 # If we have multiple roots, we may have "holes" in the rebase set.
848 849 # Rebase roots that descend from those "holes" should not be detached
849 850 # as other roots are. We use the special `revignored` to inform rebase
850 851 # that the revision should be ignored but that `defineparents` should
851 852 # search for a rebase destination that makes sense regarding the
851 852 # rebased topology.
852 853 rebasedomain = set(repo.revs('%ld::%ld', rebaseset, rebaseset))
853 854 for ignored in set(rebasedomain) - set(rebaseset):
854 855 state[ignored] = revignored
855 856 return repo['.'].rev(), dest.rev(), state
856 857
857 858 def clearrebased(ui, repo, state, skipped, collapsedas=None):
858 859 """dispose of rebased revisions at the end of the rebase
859 860
860 861 If `collapsedas` is not None, the rebase was a collapse whose result is the
861 862 `collapsedas` node."""
862 863 if obsolete._enabled:
863 864 markers = []
864 865 for rev, newrev in sorted(state.items()):
865 866 if newrev >= 0:
866 867 if rev in skipped:
867 868 succs = ()
868 869 elif collapsedas is not None:
869 870 succs = (repo[collapsedas],)
870 871 else:
871 872 succs = (repo[newrev],)
872 873 markers.append((repo[rev], succs))
873 874 if markers:
874 875 obsolete.createmarkers(repo, markers)
875 876 else:
876 877 rebased = [rev for rev in state if state[rev] > nullmerge]
877 878 if rebased:
878 879 stripped = []
879 880 for root in repo.set('roots(%ld)', rebased):
880 881 if set(repo.changelog.descendants([root.rev()])) - set(state):
881 882 ui.warn(_("warning: new changesets detected "
882 883 "on source branch, not stripping\n"))
883 884 else:
884 885 stripped.append(root.node())
885 886 if stripped:
886 887 # backup the old csets by default
887 888 repair.strip(ui, repo, stripped, "all")
888 889
889 890
890 891 def pullrebase(orig, ui, repo, *args, **opts):
891 892 'Call rebase after pull if the latter has been invoked with --rebase'
892 893 if opts.get('rebase'):
893 894 if opts.get('update'):
894 895 del opts['update']
895 896 ui.debug('--update and --rebase are not compatible, ignoring '
896 897 'the update flag\n')
897 898
898 899 movemarkfrom = repo['.'].node()
899 900 revsprepull = len(repo)
900 901 origpostincoming = commands.postincoming
901 902 def _dummy(*args, **kwargs):
902 903 pass
903 904 commands.postincoming = _dummy
904 905 try:
905 906 orig(ui, repo, *args, **opts)
906 907 finally:
907 908 commands.postincoming = origpostincoming
908 909 revspostpull = len(repo)
909 910 if revspostpull > revsprepull:
910 911 # the --rev option from pull conflicts with rebase's own --rev,
911 912 # so drop it
912 913 if 'rev' in opts:
913 914 del opts['rev']
914 915 rebase(ui, repo, **opts)
915 916 branch = repo[None].branch()
916 917 dest = repo[branch].rev()
917 918 if dest != repo['.'].rev():
918 919 # there was nothing to rebase, so we force an update
919 920 hg.update(repo, dest)
920 921 if bookmarks.update(repo, [movemarkfrom], repo['.'].node()):
921 922 ui.status(_("updating bookmark %s\n")
922 923 % repo._bookmarkcurrent)
923 924 else:
924 925 if opts.get('tool'):
925 926 raise util.Abort(_('--tool can only be used with --rebase'))
926 927 orig(ui, repo, *args, **opts)
927 928
928 929 def summaryhook(ui, repo):
929 930 if not os.path.exists(repo.join('rebasestate')):
930 931 return
931 932 try:
932 933 state = restorestatus(repo)[2]
933 934 except error.RepoLookupError:
934 935 # i18n: column positioning for "hg summary"
935 936 msg = _('rebase: (use "hg rebase --abort" to clear broken state)\n')
936 937 ui.write(msg)
937 938 return
938 939 numrebased = len([i for i in state.itervalues() if i != -1])
939 940 # i18n: column positioning for "hg summary"
940 941 ui.write(_('rebase: %s, %s (rebase --continue)\n') %
941 942 (ui.label(_('%d rebased'), 'rebase.rebased') % numrebased,
942 943 ui.label(_('%d remaining'), 'rebase.remaining') %
943 944 (len(state) - numrebased)))
944 945
945 946 def uisetup(ui):
946 947 'Replace pull with a decorator to provide the --rebase option'
947 948 entry = extensions.wrapcommand(commands.table, 'pull', pullrebase)
948 949 entry[1].append(('', 'rebase', None,
949 950 _("rebase working directory to branch head")))
950 951 entry[1].append(('t', 'tool', '',
951 952 _("specify merge tool for rebase")))
952 953 cmdutil.summaryhooks.add('rebase', summaryhook)
953 954 cmdutil.unfinishedstates.append(
954 955 ['rebasestate', False, False, _('rebase in progress'),
955 956 _("use 'hg rebase --continue' or 'hg rebase --abort'")])
@@ -1,400 +1,400 b''
1 1 # bundlerepo.py - repository class for viewing uncompressed bundles
2 2 #
3 3 # Copyright 2006, 2007 Benoit Boissinot <bboissin@gmail.com>
4 4 #
5 5 # This software may be used and distributed according to the terms of the
6 6 # GNU General Public License version 2 or any later version.
7 7
8 8 """Repository class for viewing uncompressed bundles.
9 9
10 10 This provides a read-only repository interface to bundles as if they
11 11 were part of the actual repository.
12 12 """
13 13
14 14 from node import nullid
15 15 from i18n import _
16 16 import os, tempfile, shutil
17 17 import changegroup, util, mdiff, discovery, cmdutil, scmutil
18 18 import localrepo, changelog, manifest, filelog, revlog, error
19 19
20 20 class bundlerevlog(revlog.revlog):
21 21 def __init__(self, opener, indexfile, bundle, linkmapper):
22 22 # How it works:
23 23 # To retrieve a revision, we need to know the offset of the revision in
24 24 # the bundle (an unbundle object). We store this offset in the index
25 25 # (start). The base of the delta is stored in the base field.
26 26 #
27 27 # To differentiate a rev in the bundle from a rev in the revlog, we
28 28 # check revision against repotiprev.
29 29 opener = scmutil.readonlyvfs(opener)
30 30 revlog.revlog.__init__(self, opener, indexfile)
31 31 self.bundle = bundle
32 32 n = len(self)
33 33 self.repotiprev = n - 1
34 34 chain = None
35 35 self.bundlerevs = set() # used by 'bundle()' revset expression
36 36 while True:
37 37 chunkdata = bundle.deltachunk(chain)
38 38 if not chunkdata:
39 39 break
40 40 node = chunkdata['node']
41 41 p1 = chunkdata['p1']
42 42 p2 = chunkdata['p2']
43 43 cs = chunkdata['cs']
44 44 deltabase = chunkdata['deltabase']
45 45 delta = chunkdata['delta']
46 46
47 47 size = len(delta)
48 48 start = bundle.tell() - size
49 49
50 50 link = linkmapper(cs)
51 51 if node in self.nodemap:
52 52 # this can happen if two branches make the same change
53 53 chain = node
54 54 self.bundlerevs.add(self.nodemap[node])
55 55 continue
56 56
57 57 for p in (p1, p2):
58 58 if p not in self.nodemap:
59 59 raise error.LookupError(p, self.indexfile,
60 60 _("unknown parent"))
61 61
62 62 if deltabase not in self.nodemap:
63 63 raise error.LookupError(deltabase, self.indexfile,
64 64 _('unknown delta base'))
65 65
66 66 baserev = self.rev(deltabase)
67 67 # start, size, full unc. size, base (unused), link, p1, p2, node
68 68 e = (revlog.offset_type(start, 0), size, -1, baserev, link,
69 69 self.rev(p1), self.rev(p2), node)
70 70 self.index.insert(-1, e)
71 71 self.nodemap[node] = n
72 72 self.bundlerevs.add(n)
73 73 chain = node
74 74 n += 1
75 75
76 76 def _chunk(self, rev):
77 77 # Warning: in case of bundle, the diff is against what we stored as
78 78 # delta base, not against rev - 1
79 79 # XXX: could use some caching
80 80 if rev <= self.repotiprev:
81 81 return revlog.revlog._chunk(self, rev)
82 82 self.bundle.seek(self.start(rev))
83 83 return self.bundle.read(self.length(rev))
84 84
85 85 def revdiff(self, rev1, rev2):
86 86 """return or calculate a delta between two revisions"""
87 87 if rev1 > self.repotiprev and rev2 > self.repotiprev:
88 88 # hot path for bundle
89 89 revb = self.index[rev2][3]
90 90 if revb == rev1:
91 91 return self._chunk(rev2)
92 92 elif rev1 <= self.repotiprev and rev2 <= self.repotiprev:
93 93 return revlog.revlog.revdiff(self, rev1, rev2)
94 94
95 95 return mdiff.textdiff(self.revision(self.node(rev1)),
96 96 self.revision(self.node(rev2)))
97 97
98 98 def revision(self, nodeorrev):
99 99 """return an uncompressed revision of a given node or revision
100 100 number.
101 101 """
102 102 if isinstance(nodeorrev, int):
103 103 rev = nodeorrev
104 104 node = self.node(rev)
105 105 else:
106 106 node = nodeorrev
107 107 rev = self.rev(node)
108 108
109 109 if node == nullid:
110 110 return ""
111 111
112 112 text = None
113 113 chain = []
114 114 iterrev = rev
115 115 # reconstruct the revision if it is from a changegroup
116 116 while iterrev > self.repotiprev:
117 117 if self._cache and self._cache[1] == iterrev:
118 118 text = self._cache[2]
119 119 break
120 120 chain.append(iterrev)
121 121 iterrev = self.index[iterrev][3]
122 122 if text is None:
123 123 text = self.baserevision(iterrev)
124 124
125 125 while chain:
126 126 delta = self._chunk(chain.pop())
127 127 text = mdiff.patches(text, [delta])
128 128
129 129 self._checkhash(text, node, rev)
130 130 self._cache = (node, rev, text)
131 131 return text
132 132
133 133 def baserevision(self, nodeorrev):
134 134 # Revlog subclasses may override 'revision' method to modify format of
135 135 # content retrieved from revlog. To use bundlerevlog with such a class, one
136 136 # needs to override 'baserevision' and make a more specific call here.
137 137 return revlog.revlog.revision(self, nodeorrev)
138 138
139 139 def addrevision(self, text, transaction, link, p1=None, p2=None, d=None):
140 140 raise NotImplementedError
141 141 def addgroup(self, revs, linkmapper, transaction):
142 142 raise NotImplementedError
143 143 def strip(self, rev, minlink):
144 144 raise NotImplementedError
145 145 def checksize(self):
146 146 raise NotImplementedError
147 147
148 148 class bundlechangelog(bundlerevlog, changelog.changelog):
149 149 def __init__(self, opener, bundle):
150 150 changelog.changelog.__init__(self, opener)
151 151 linkmapper = lambda x: x
152 152 bundlerevlog.__init__(self, opener, self.indexfile, bundle,
153 153 linkmapper)
154 154
155 155 def baserevision(self, nodeorrev):
156 156 # Although changelog doesn't override 'revision' method, some extensions
157 157 # may replace this class with another that does. Same story with
158 158 # manifest and filelog classes.
159 159 return changelog.changelog.revision(self, nodeorrev)
160 160
161 161 class bundlemanifest(bundlerevlog, manifest.manifest):
162 162 def __init__(self, opener, bundle, linkmapper):
163 163 manifest.manifest.__init__(self, opener)
164 164 bundlerevlog.__init__(self, opener, self.indexfile, bundle,
165 165 linkmapper)
166 166
167 167 def baserevision(self, nodeorrev):
168 168 return manifest.manifest.revision(self, nodeorrev)
169 169
170 170 class bundlefilelog(bundlerevlog, filelog.filelog):
171 171 def __init__(self, opener, path, bundle, linkmapper, repo):
172 172 filelog.filelog.__init__(self, opener, path)
173 173 bundlerevlog.__init__(self, opener, self.indexfile, bundle,
174 174 linkmapper)
175 175 self._repo = repo
176 176
177 177 def baserevision(self, nodeorrev):
178 178 return filelog.filelog.revision(self, nodeorrev)
179 179
180 180 def _file(self, f):
181 181 return self._repo.file(f)
182 182
183 183 class bundlepeer(localrepo.localpeer):
184 184 def canpush(self):
185 185 return False
186 186
187 187 class bundlerepository(localrepo.localrepository):
188 188 def __init__(self, ui, path, bundlename):
189 189 self._tempparent = None
190 190 try:
191 191 localrepo.localrepository.__init__(self, ui, path)
192 192 except error.RepoError:
193 193 self._tempparent = tempfile.mkdtemp()
194 194 localrepo.instance(ui, self._tempparent, 1)
195 195 localrepo.localrepository.__init__(self, ui, self._tempparent)
196 self.ui.setconfig('phases', 'publish', False)
196 self.ui.setconfig('phases', 'publish', False, 'bundlerepo')
197 197
198 198 if path:
199 199 self._url = 'bundle:' + util.expandpath(path) + '+' + bundlename
200 200 else:
201 201 self._url = 'bundle:' + bundlename
202 202
203 203 self.tempfile = None
204 204 f = util.posixfile(bundlename, "rb")
205 205 self.bundle = changegroup.readbundle(f, bundlename)
206 206 if self.bundle.compressed():
207 207 fdtemp, temp = tempfile.mkstemp(prefix="hg-bundle-",
208 208 suffix=".hg10un", dir=self.path)
209 209 self.tempfile = temp
210 210 fptemp = os.fdopen(fdtemp, 'wb')
211 211
212 212 try:
213 213 fptemp.write("HG10UN")
214 214 while True:
215 215 chunk = self.bundle.read(2**18)
216 216 if not chunk:
217 217 break
218 218 fptemp.write(chunk)
219 219 finally:
220 220 fptemp.close()
221 221
222 222 f = util.posixfile(self.tempfile, "rb")
223 223 self.bundle = changegroup.readbundle(f, bundlename)
224 224
225 225 # dict with the mapping 'filename' -> position in the bundle
226 226 self.bundlefilespos = {}
227 227
228 228 @localrepo.unfilteredpropertycache
229 229 def changelog(self):
230 230 # consume the header if it exists
231 231 self.bundle.changelogheader()
232 232 c = bundlechangelog(self.sopener, self.bundle)
233 233 self.manstart = self.bundle.tell()
234 234 return c
235 235
236 236 @localrepo.unfilteredpropertycache
237 237 def manifest(self):
238 238 self.bundle.seek(self.manstart)
239 239 # consume the header if it exists
240 240 self.bundle.manifestheader()
241 241 m = bundlemanifest(self.sopener, self.bundle, self.changelog.rev)
242 242 self.filestart = self.bundle.tell()
243 243 return m
244 244
245 245 @localrepo.unfilteredpropertycache
246 246 def manstart(self):
247 247 self.changelog
248 248 return self.manstart
249 249
250 250 @localrepo.unfilteredpropertycache
251 251 def filestart(self):
252 252 self.manifest
253 253 return self.filestart
254 254
255 255 def url(self):
256 256 return self._url
257 257
258 258 def file(self, f):
259 259 if not self.bundlefilespos:
260 260 self.bundle.seek(self.filestart)
261 261 while True:
262 262 chunkdata = self.bundle.filelogheader()
263 263 if not chunkdata:
264 264 break
265 265 fname = chunkdata['filename']
266 266 self.bundlefilespos[fname] = self.bundle.tell()
267 267 while True:
268 268 c = self.bundle.deltachunk(None)
269 269 if not c:
270 270 break
271 271
272 272 if f in self.bundlefilespos:
273 273 self.bundle.seek(self.bundlefilespos[f])
274 274 return bundlefilelog(self.sopener, f, self.bundle,
275 275 self.changelog.rev, self)
276 276 else:
277 277 return filelog.filelog(self.sopener, f)
278 278
279 279 def close(self):
280 280 """Close assigned bundle file immediately."""
281 281 self.bundle.close()
282 282 if self.tempfile is not None:
283 283 os.unlink(self.tempfile)
284 284 if self._tempparent:
285 285 shutil.rmtree(self._tempparent, True)
286 286
287 287 def cancopy(self):
288 288 return False
289 289
290 290 def peer(self):
291 291 return bundlepeer(self)
292 292
293 293 def getcwd(self):
294 294 return os.getcwd() # always outside the repo
295 295
296 296
297 297 def instance(ui, path, create):
298 298 if create:
299 299 raise util.Abort(_('cannot create new bundle repository'))
300 300 parentpath = ui.config("bundle", "mainreporoot", "")
301 301 if not parentpath:
302 302 # try to find the correct path to the working directory repo
303 303 parentpath = cmdutil.findrepo(os.getcwd())
304 304 if parentpath is None:
305 305 parentpath = ''
306 306 if parentpath:
307 307 # Try to make the full path relative so we get a nice, short URL.
308 308 # In particular, we don't want temp dir names in test outputs.
309 309 cwd = os.getcwd()
310 310 if parentpath == cwd:
311 311 parentpath = ''
312 312 else:
313 313 cwd = os.path.join(cwd,'')
314 314 if parentpath.startswith(cwd):
315 315 parentpath = parentpath[len(cwd):]
316 316 u = util.url(path)
317 317 path = u.localpath()
318 318 if u.scheme == 'bundle':
319 319 s = path.split("+", 1)
320 320 if len(s) == 1:
321 321 repopath, bundlename = parentpath, s[0]
322 322 else:
323 323 repopath, bundlename = s
324 324 else:
325 325 repopath, bundlename = parentpath, path
326 326 return bundlerepository(ui, repopath, bundlename)
327 327
328 328 def getremotechanges(ui, repo, other, onlyheads=None, bundlename=None,
329 329 force=False):
330 330 '''obtains a bundle of changes incoming from other
331 331
332 332 "onlyheads" restricts the returned changes to those reachable from the
333 333 specified heads.
334 334 "bundlename", if given, stores the bundle to this file path permanently;
335 335 otherwise it's stored to a temp file and gets deleted again when you call
336 336 the returned "cleanupfn".
337 337 "force" indicates whether to proceed on unrelated repos.
338 338
339 339 Returns a tuple (local, csets, cleanupfn):
340 340
341 341 "local" is a local repo from which to obtain the actual incoming
342 342 changesets; it is a bundlerepo for the obtained bundle when the
343 343 original "other" is remote.
344 344 "csets" lists the incoming changeset node ids.
345 345 "cleanupfn" must be called without arguments when you're done processing
346 346 the changes; it closes both the original "other" and the one returned
347 347 here.
348 348 '''
349 349 tmp = discovery.findcommonincoming(repo, other, heads=onlyheads,
350 350 force=force)
351 351 common, incoming, rheads = tmp
352 352 if not incoming:
353 353 try:
354 354 if bundlename:
355 355 os.unlink(bundlename)
356 356 except OSError:
357 357 pass
358 358 return repo, [], other.close
359 359
360 360 bundle = None
361 361 bundlerepo = None
362 362 localrepo = other.local()
363 363 if bundlename or not localrepo:
364 364 # create a bundle (uncompressed if other repo is not local)
365 365
366 366 if other.capable('getbundle'):
367 367 cg = other.getbundle('incoming', common=common, heads=rheads)
368 368 elif onlyheads is None and not other.capable('changegroupsubset'):
369 369 # compat with older servers when pulling all remote heads
370 370 cg = other.changegroup(incoming, "incoming")
371 371 rheads = None
372 372 else:
373 373 cg = other.changegroupsubset(incoming, rheads, 'incoming')
374 374 bundletype = localrepo and "HG10BZ" or "HG10UN"
375 375 fname = bundle = changegroup.writebundle(cg, bundlename, bundletype)
376 376 # keep written bundle?
377 377 if bundlename:
378 378 bundle = None
379 379 if not localrepo:
380 380 # use the created uncompressed bundlerepo
381 381 localrepo = bundlerepo = bundlerepository(repo.baseui, repo.root,
382 382 fname)
383 383 # this repo contains local and other now, so filter out local again
384 384 common = repo.heads()
385 385 if localrepo:
386 386 # Part of common may be remotely filtered
387 387 # So use an unfiltered version
388 388 # The discovery process probably needs cleanup to avoid that
389 389 localrepo = localrepo.unfiltered()
390 390
391 391 csets = localrepo.changelog.findmissing(common, rheads)
392 392
393 393 def cleanup():
394 394 if bundlerepo:
395 395 bundlerepo.close()
396 396 if bundle:
397 397 os.unlink(bundle)
398 398 other.close()
399 399
400 400 return (localrepo, csets, cleanup)
@@ -1,2361 +1,2361 b''
1 1 # cmdutil.py - help for command processing in mercurial
2 2 #
3 3 # Copyright 2005-2007 Matt Mackall <mpm@selenic.com>
4 4 #
5 5 # This software may be used and distributed according to the terms of the
6 6 # GNU General Public License version 2 or any later version.
7 7
8 8 from node import hex, nullid, nullrev, short
9 9 from i18n import _
10 10 import os, sys, errno, re, tempfile
11 11 import util, scmutil, templater, patch, error, templatekw, revlog, copies
12 12 import match as matchmod
13 13 import context, repair, graphmod, revset, phases, obsolete, pathutil
14 14 import changelog
15 15 import bookmarks
16 16 import lock as lockmod
17 17
18 18 def parsealiases(cmd):
19 19 return cmd.lstrip("^").split("|")
20 20
21 21 def findpossible(cmd, table, strict=False):
22 22 """
23 23 Return cmd -> (aliases, command table entry)
24 24 for each matching command.
25 25 Return debug commands (or their aliases) only if no normal command matches.
26 26 """
27 27 choice = {}
28 28 debugchoice = {}
29 29
30 30 if cmd in table:
31 31 # short-circuit exact matches, "log" alias beats "^log|history"
32 32 keys = [cmd]
33 33 else:
34 34 keys = table.keys()
35 35
36 36 for e in keys:
37 37 aliases = parsealiases(e)
38 38 found = None
39 39 if cmd in aliases:
40 40 found = cmd
41 41 elif not strict:
42 42 for a in aliases:
43 43 if a.startswith(cmd):
44 44 found = a
45 45 break
46 46 if found is not None:
47 47 if aliases[0].startswith("debug") or found.startswith("debug"):
48 48 debugchoice[found] = (aliases, table[e])
49 49 else:
50 50 choice[found] = (aliases, table[e])
51 51
52 52 if not choice and debugchoice:
53 53 choice = debugchoice
54 54
55 55 return choice
56 56
57 57 def findcmd(cmd, table, strict=True):
58 58 """Return (aliases, command table entry) for command string."""
59 59 choice = findpossible(cmd, table, strict)
60 60
61 61 if cmd in choice:
62 62 return choice[cmd]
63 63
64 64 if len(choice) > 1:
65 65 clist = choice.keys()
66 66 clist.sort()
67 67 raise error.AmbiguousCommand(cmd, clist)
68 68
69 69 if choice:
70 70 return choice.values()[0]
71 71
72 72 raise error.UnknownCommand(cmd)
73 73
74 74 def findrepo(p):
75 75 while not os.path.isdir(os.path.join(p, ".hg")):
76 76 oldp, p = p, os.path.dirname(p)
77 77 if p == oldp:
78 78 return None
79 79
80 80 return p
81 81
82 82 def bailifchanged(repo):
83 83 if repo.dirstate.p2() != nullid:
84 84 raise util.Abort(_('outstanding uncommitted merge'))
85 85 modified, added, removed, deleted = repo.status()[:4]
86 86 if modified or added or removed or deleted:
87 87 raise util.Abort(_('uncommitted changes'))
88 88 ctx = repo[None]
89 89 for s in sorted(ctx.substate):
90 90 if ctx.sub(s).dirty():
91 91 raise util.Abort(_("uncommitted changes in subrepo %s") % s)
92 92
93 93 def logmessage(ui, opts):
94 94 """ get the log message according to -m and -l option """
95 95 message = opts.get('message')
96 96 logfile = opts.get('logfile')
97 97
98 98 if message and logfile:
99 99 raise util.Abort(_('options --message and --logfile are mutually '
100 100 'exclusive'))
101 101 if not message and logfile:
102 102 try:
103 103 if logfile == '-':
104 104 message = ui.fin.read()
105 105 else:
106 106 message = '\n'.join(util.readfile(logfile).splitlines())
107 107 except IOError, inst:
108 108 raise util.Abort(_("can't read commit message '%s': %s") %
109 109 (logfile, inst.strerror))
110 110 return message
111 111
112 112 def loglimit(opts):
113 113 """get the log limit according to option -l/--limit"""
114 114 limit = opts.get('limit')
115 115 if limit:
116 116 try:
117 117 limit = int(limit)
118 118 except ValueError:
119 119 raise util.Abort(_('limit must be a positive integer'))
120 120 if limit <= 0:
121 121 raise util.Abort(_('limit must be positive'))
122 122 else:
123 123 limit = None
124 124 return limit
125 125
126 126 def makefilename(repo, pat, node, desc=None,
127 127 total=None, seqno=None, revwidth=None, pathname=None):
128 128 node_expander = {
129 129 'H': lambda: hex(node),
130 130 'R': lambda: str(repo.changelog.rev(node)),
131 131 'h': lambda: short(node),
132 132 'm': lambda: re.sub('[^\w]', '_', str(desc))
133 133 }
134 134 expander = {
135 135 '%': lambda: '%',
136 136 'b': lambda: os.path.basename(repo.root),
137 137 }
138 138
139 139 try:
140 140 if node:
141 141 expander.update(node_expander)
142 142 if node:
143 143 expander['r'] = (lambda:
144 144 str(repo.changelog.rev(node)).zfill(revwidth or 0))
145 145 if total is not None:
146 146 expander['N'] = lambda: str(total)
147 147 if seqno is not None:
148 148 expander['n'] = lambda: str(seqno)
149 149 if total is not None and seqno is not None:
150 150 expander['n'] = lambda: str(seqno).zfill(len(str(total)))
151 151 if pathname is not None:
152 152 expander['s'] = lambda: os.path.basename(pathname)
153 153 expander['d'] = lambda: os.path.dirname(pathname) or '.'
154 154 expander['p'] = lambda: pathname
155 155
156 156 newname = []
157 157 patlen = len(pat)
158 158 i = 0
159 159 while i < patlen:
160 160 c = pat[i]
161 161 if c == '%':
162 162 i += 1
163 163 c = pat[i]
164 164 c = expander[c]()
165 165 newname.append(c)
166 166 i += 1
167 167 return ''.join(newname)
168 168 except KeyError, inst:
169 169 raise util.Abort(_("invalid format spec '%%%s' in output filename") %
170 170 inst.args[0])
171 171
172 172 def makefileobj(repo, pat, node=None, desc=None, total=None,
173 173 seqno=None, revwidth=None, mode='wb', modemap=None,
174 174 pathname=None):
175 175
176 176 writable = mode not in ('r', 'rb')
177 177
178 178 if not pat or pat == '-':
179 179 fp = writable and repo.ui.fout or repo.ui.fin
180 180 if util.safehasattr(fp, 'fileno'):
181 181 return os.fdopen(os.dup(fp.fileno()), mode)
182 182 else:
183 183 # if this fp can't be duped properly, return
184 184 # a dummy object that can be closed
185 185 class wrappedfileobj(object):
186 186 noop = lambda x: None
187 187 def __init__(self, f):
188 188 self.f = f
189 189 def __getattr__(self, attr):
190 190 if attr == 'close':
191 191 return self.noop
192 192 else:
193 193 return getattr(self.f, attr)
194 194
195 195 return wrappedfileobj(fp)
196 196 if util.safehasattr(pat, 'write') and writable:
197 197 return pat
198 198 if util.safehasattr(pat, 'read') and 'r' in mode:
199 199 return pat
200 200 fn = makefilename(repo, pat, node, desc, total, seqno, revwidth, pathname)
201 201 if modemap is not None:
202 202 mode = modemap.get(fn, mode)
203 203 if mode == 'wb':
204 204 modemap[fn] = 'ab'
205 205 return open(fn, mode)
206 206
207 207 def openrevlog(repo, cmd, file_, opts):
208 208 """opens the changelog, manifest, a filelog or a given revlog"""
209 209 cl = opts['changelog']
210 210 mf = opts['manifest']
211 211 msg = None
212 212 if cl and mf:
213 213 msg = _('cannot specify --changelog and --manifest at the same time')
214 214 elif cl or mf:
215 215 if file_:
216 216 msg = _('cannot specify filename with --changelog or --manifest')
217 217 elif not repo:
218 218 msg = _('cannot specify --changelog or --manifest '
219 219 'without a repository')
220 220 if msg:
221 221 raise util.Abort(msg)
222 222
223 223 r = None
224 224 if repo:
225 225 if cl:
226 226 r = repo.changelog
227 227 elif mf:
228 228 r = repo.manifest
229 229 elif file_:
230 230 filelog = repo.file(file_)
231 231 if len(filelog):
232 232 r = filelog
233 233 if not r:
234 234 if not file_:
235 235 raise error.CommandError(cmd, _('invalid arguments'))
236 236 if not os.path.isfile(file_):
237 237 raise util.Abort(_("revlog '%s' not found") % file_)
238 238 r = revlog.revlog(scmutil.opener(os.getcwd(), audit=False),
239 239 file_[:-2] + ".i")
240 240 return r
241 241
242 242 def copy(ui, repo, pats, opts, rename=False):
243 243 # called with the repo lock held
244 244 #
245 245 # hgsep => pathname that uses "/" to separate directories
246 246 # ossep => pathname that uses os.sep to separate directories
247 247 cwd = repo.getcwd()
248 248 targets = {}
249 249 after = opts.get("after")
250 250 dryrun = opts.get("dry_run")
251 251 wctx = repo[None]
252 252
253 253 def walkpat(pat):
254 254 srcs = []
255 255 badstates = after and '?' or '?r'
256 256 m = scmutil.match(repo[None], [pat], opts, globbed=True)
257 257 for abs in repo.walk(m):
258 258 state = repo.dirstate[abs]
259 259 rel = m.rel(abs)
260 260 exact = m.exact(abs)
261 261 if state in badstates:
262 262 if exact and state == '?':
263 263 ui.warn(_('%s: not copying - file is not managed\n') % rel)
264 264 if exact and state == 'r':
265 265 ui.warn(_('%s: not copying - file has been marked for'
266 266 ' remove\n') % rel)
267 267 continue
268 268 # abs: hgsep
269 269 # rel: ossep
270 270 srcs.append((abs, rel, exact))
271 271 return srcs
272 272
273 273 # abssrc: hgsep
274 274 # relsrc: ossep
275 275 # otarget: ossep
276 276 def copyfile(abssrc, relsrc, otarget, exact):
277 277 abstarget = pathutil.canonpath(repo.root, cwd, otarget)
278 278 if '/' in abstarget:
279 279 # We cannot normalize abstarget itself, this would prevent
280 280 # case only renames, like a => A.
281 281 abspath, absname = abstarget.rsplit('/', 1)
282 282 abstarget = repo.dirstate.normalize(abspath) + '/' + absname
283 283 reltarget = repo.pathto(abstarget, cwd)
284 284 target = repo.wjoin(abstarget)
285 285 src = repo.wjoin(abssrc)
286 286 state = repo.dirstate[abstarget]
287 287
288 288 scmutil.checkportable(ui, abstarget)
289 289
290 290 # check for collisions
291 291 prevsrc = targets.get(abstarget)
292 292 if prevsrc is not None:
293 293 ui.warn(_('%s: not overwriting - %s collides with %s\n') %
294 294 (reltarget, repo.pathto(abssrc, cwd),
295 295 repo.pathto(prevsrc, cwd)))
296 296 return
297 297
298 298 # check for overwrites
299 299 exists = os.path.lexists(target)
300 300 samefile = False
301 301 if exists and abssrc != abstarget:
302 302 if (repo.dirstate.normalize(abssrc) ==
303 303 repo.dirstate.normalize(abstarget)):
304 304 if not rename:
305 305 ui.warn(_("%s: can't copy - same file\n") % reltarget)
306 306 return
307 307 exists = False
308 308 samefile = True
309 309
310 310 if not after and exists or after and state in 'mn':
311 311 if not opts['force']:
312 312 ui.warn(_('%s: not overwriting - file exists\n') %
313 313 reltarget)
314 314 return
315 315
316 316 if after:
317 317 if not exists:
318 318 if rename:
319 319 ui.warn(_('%s: not recording move - %s does not exist\n') %
320 320 (relsrc, reltarget))
321 321 else:
322 322 ui.warn(_('%s: not recording copy - %s does not exist\n') %
323 323 (relsrc, reltarget))
324 324 return
325 325 elif not dryrun:
326 326 try:
327 327 if exists:
328 328 os.unlink(target)
329 329 targetdir = os.path.dirname(target) or '.'
330 330 if not os.path.isdir(targetdir):
331 331 os.makedirs(targetdir)
332 332 if samefile:
333 333 tmp = target + "~hgrename"
334 334 os.rename(src, tmp)
335 335 os.rename(tmp, target)
336 336 else:
337 337 util.copyfile(src, target)
338 338 srcexists = True
339 339 except IOError, inst:
340 340 if inst.errno == errno.ENOENT:
341 341 ui.warn(_('%s: deleted in working copy\n') % relsrc)
342 342 srcexists = False
343 343 else:
344 344 ui.warn(_('%s: cannot copy - %s\n') %
345 345 (relsrc, inst.strerror))
346 346 return True # report a failure
347 347
348 348 if ui.verbose or not exact:
349 349 if rename:
350 350 ui.status(_('moving %s to %s\n') % (relsrc, reltarget))
351 351 else:
352 352 ui.status(_('copying %s to %s\n') % (relsrc, reltarget))
353 353
354 354 targets[abstarget] = abssrc
355 355
356 356 # fix up dirstate
357 357 scmutil.dirstatecopy(ui, repo, wctx, abssrc, abstarget,
358 358 dryrun=dryrun, cwd=cwd)
359 359 if rename and not dryrun:
360 360 if not after and srcexists and not samefile:
361 361 util.unlinkpath(repo.wjoin(abssrc))
362 362 wctx.forget([abssrc])
363 363
364 364 # pat: ossep
365 365 # dest ossep
366 366 # srcs: list of (hgsep, hgsep, ossep, bool)
367 367 # return: function that takes hgsep and returns ossep
368 368 def targetpathfn(pat, dest, srcs):
369 369 if os.path.isdir(pat):
370 370 abspfx = pathutil.canonpath(repo.root, cwd, pat)
371 371 abspfx = util.localpath(abspfx)
372 372 if destdirexists:
373 373 striplen = len(os.path.split(abspfx)[0])
374 374 else:
375 375 striplen = len(abspfx)
376 376 if striplen:
377 377 striplen += len(os.sep)
378 378 res = lambda p: os.path.join(dest, util.localpath(p)[striplen:])
379 379 elif destdirexists:
380 380 res = lambda p: os.path.join(dest,
381 381 os.path.basename(util.localpath(p)))
382 382 else:
383 383 res = lambda p: dest
384 384 return res
385 385
386 386 # pat: ossep
387 387 # dest ossep
388 388 # srcs: list of (hgsep, hgsep, ossep, bool)
389 389 # return: function that takes hgsep and returns ossep
390 390 def targetpathafterfn(pat, dest, srcs):
391 391 if matchmod.patkind(pat):
392 392 # a mercurial pattern
393 393 res = lambda p: os.path.join(dest,
394 394 os.path.basename(util.localpath(p)))
395 395 else:
396 396 abspfx = pathutil.canonpath(repo.root, cwd, pat)
397 397 if len(abspfx) < len(srcs[0][0]):
398 398 # A directory. Either the target path contains the last
399 399 # component of the source path or it does not.
400 400 def evalpath(striplen):
401 401 score = 0
402 402 for s in srcs:
403 403 t = os.path.join(dest, util.localpath(s[0])[striplen:])
404 404 if os.path.lexists(t):
405 405 score += 1
406 406 return score
407 407
408 408 abspfx = util.localpath(abspfx)
409 409 striplen = len(abspfx)
410 410 if striplen:
411 411 striplen += len(os.sep)
412 412 if os.path.isdir(os.path.join(dest, os.path.split(abspfx)[1])):
413 413 score = evalpath(striplen)
414 414 striplen1 = len(os.path.split(abspfx)[0])
415 415 if striplen1:
416 416 striplen1 += len(os.sep)
417 417 if evalpath(striplen1) > score:
418 418 striplen = striplen1
419 419 res = lambda p: os.path.join(dest,
420 420 util.localpath(p)[striplen:])
421 421 else:
422 422 # a file
423 423 if destdirexists:
424 424 res = lambda p: os.path.join(dest,
425 425 os.path.basename(util.localpath(p)))
426 426 else:
427 427 res = lambda p: dest
428 428 return res
429 429
430 430
431 431 pats = scmutil.expandpats(pats)
432 432 if not pats:
433 433 raise util.Abort(_('no source or destination specified'))
434 434 if len(pats) == 1:
435 435 raise util.Abort(_('no destination specified'))
436 436 dest = pats.pop()
437 437 destdirexists = os.path.isdir(dest) and not os.path.islink(dest)
438 438 if not destdirexists:
439 439 if len(pats) > 1 or matchmod.patkind(pats[0]):
440 440 raise util.Abort(_('with multiple sources, destination must be an '
441 441 'existing directory'))
442 442 if util.endswithsep(dest):
443 443 raise util.Abort(_('destination %s is not a directory') % dest)
444 444
445 445 tfn = targetpathfn
446 446 if after:
447 447 tfn = targetpathafterfn
448 448 copylist = []
449 449 for pat in pats:
450 450 srcs = walkpat(pat)
451 451 if not srcs:
452 452 continue
453 453 copylist.append((tfn(pat, dest, srcs), srcs))
454 454 if not copylist:
455 455 raise util.Abort(_('no files to copy'))
456 456
457 457 errors = 0
458 458 for targetpath, srcs in copylist:
459 459 for abssrc, relsrc, exact in srcs:
460 460 if copyfile(abssrc, relsrc, targetpath(abssrc), exact):
461 461 errors += 1
462 462
463 463 if errors:
464 464 ui.warn(_('(consider using --after)\n'))
465 465
466 466 return errors != 0
467 467
def service(opts, parentfn=None, initfn=None, runfn=None, logfile=None,
            runargs=None, appendpid=False):
    '''Run a command as a service.'''

    def writepid(pid):
        if opts['pid_file']:
            mode = appendpid and 'a' or 'w'
            fp = open(opts['pid_file'], mode)
            fp.write(str(pid) + '\n')
            fp.close()

    if opts['daemon'] and not opts['daemon_pipefds']:
        # Signal child process startup with file removal
        lockfd, lockpath = tempfile.mkstemp(prefix='hg-service-')
        os.close(lockfd)
        try:
            if not runargs:
                runargs = util.hgcmd() + sys.argv[1:]
            runargs.append('--daemon-pipefds=%s' % lockpath)
            # Don't pass --cwd to the child process, because we've already
            # changed directory.
            for i in xrange(1, len(runargs)):
                if runargs[i].startswith('--cwd='):
                    del runargs[i]
                    break
                elif runargs[i].startswith('--cwd'):
                    del runargs[i:i + 2]
                    break
            def condfn():
                return not os.path.exists(lockpath)
            pid = util.rundetached(runargs, condfn)
            if pid < 0:
                raise util.Abort(_('child process failed to start'))
            writepid(pid)
        finally:
            try:
                os.unlink(lockpath)
            except OSError, e:
                if e.errno != errno.ENOENT:
                    raise
        if parentfn:
            return parentfn(pid)
        else:
            return

    if initfn:
        initfn()

    if not opts['daemon']:
        writepid(os.getpid())

    if opts['daemon_pipefds']:
        lockpath = opts['daemon_pipefds']
        try:
            os.setsid()
        except AttributeError:
            pass
        os.unlink(lockpath)
        util.hidewindow()
        sys.stdout.flush()
        sys.stderr.flush()

    nullfd = os.open(os.devnull, os.O_RDWR)
    logfilefd = nullfd
    if logfile:
        logfilefd = os.open(logfile, os.O_RDWR | os.O_CREAT | os.O_APPEND)
    os.dup2(nullfd, 0)
    os.dup2(logfilefd, 1)
    os.dup2(logfilefd, 2)
    if nullfd not in (0, 1, 2):
        os.close(nullfd)
    if logfile and logfilefd not in (0, 1, 2):
        os.close(logfilefd)

    if runfn:
        return runfn()

def tryimportone(ui, repo, hunk, parents, opts, msgs, updatefunc):
    """Utility function used by commands.import to import a single patch

    This function is explicitly defined here to help the evolve extension to
    wrap this part of the import logic.

    The API is currently a bit ugly because it is a simple code translation
    from the import command. Feel free to make it better.

    :hunk: a patch (as a binary string)
    :parents: nodes that will be parents of the created commit
    :opts: the full dict of options passed to the import command
    :msgs: list to save commit message to.
           (used in case we need to save it when failing)
    :updatefunc: a function that updates a repo to a given node
                 updatefunc(<repo>, <node>)
    """
    tmpname, message, user, date, branch, nodeid, p1, p2 = \
        patch.extract(ui, hunk)

    editor = commiteditor
    if opts.get('edit'):
        editor = commitforceeditor
    update = not opts.get('bypass')
    strip = opts["strip"]
    sim = float(opts.get('similarity') or 0)
    if not tmpname:
        return (None, None)
    msg = _('applied to working directory')

    try:
        cmdline_message = logmessage(ui, opts)
        if cmdline_message:
            # pickup the cmdline msg
            message = cmdline_message
        elif message:
            # pickup the patch msg
            message = message.strip()
        else:
            # launch the editor
            message = None
        ui.debug('message:\n%s\n' % message)

        if len(parents) == 1:
            parents.append(repo[nullid])
        if opts.get('exact'):
            if not nodeid or not p1:
                raise util.Abort(_('not a Mercurial patch'))
            p1 = repo[p1]
            p2 = repo[p2 or nullid]
        elif p2:
            try:
                p1 = repo[p1]
                p2 = repo[p2]
                # Without any options, consider p2 only if the
                # patch is being applied on top of the recorded
                # first parent.
                if p1 != parents[0]:
                    p1 = parents[0]
                    p2 = repo[nullid]
            except error.RepoError:
                p1, p2 = parents
        else:
            p1, p2 = parents

        n = None
        if update:
            if p1 != parents[0]:
                updatefunc(repo, p1.node())
            if p2 != parents[1]:
                repo.setparents(p1.node(), p2.node())

            if opts.get('exact') or opts.get('import_branch'):
                repo.dirstate.setbranch(branch or 'default')

            files = set()
            patch.patch(ui, repo, tmpname, strip=strip, files=files,
                        eolmode=None, similarity=sim / 100.0)
            files = list(files)
            if opts.get('no_commit'):
                if message:
                    msgs.append(message)
            else:
                if opts.get('exact') or p2:
                    # If you got here, you either use --force and know what
                    # you are doing or used --exact or a merge patch while
                    # being updated to its first parent.
                    m = None
                else:
                    m = scmutil.matchfiles(repo, files or [])
                n = repo.commit(message, opts.get('user') or user,
                                opts.get('date') or date, match=m,
                                editor=editor)
        else:
            if opts.get('exact') or opts.get('import_branch'):
                branch = branch or 'default'
            else:
                branch = p1.branch()
            store = patch.filestore()
            try:
                files = set()
                try:
                    patch.patchrepo(ui, repo, p1, store, tmpname, strip,
                                    files, eolmode=None)
                except patch.PatchError, e:
                    raise util.Abort(str(e))
                memctx = context.makememctx(repo, (p1.node(), p2.node()),
                                            message,
                                            opts.get('user') or user,
                                            opts.get('date') or date,
                                            branch, files, store,
                                            editor=commiteditor)
                repo.savecommitmessage(memctx.description())
                n = memctx.commit()
            finally:
                store.close()
        if opts.get('exact') and hex(n) != nodeid:
            raise util.Abort(_('patch is damaged or loses information'))
        if n:
            # i18n: refers to a short changeset id
            msg = _('created %s') % short(n)
        return (msg, n)
    finally:
        os.unlink(tmpname)

def export(repo, revs, template='hg-%h.patch', fp=None, switch_parent=False,
           opts=None):
    '''export changesets as hg patches.'''

    total = len(revs)
    revwidth = max([len(str(rev)) for rev in revs])
    filemode = {}

    def single(rev, seqno, fp):
        ctx = repo[rev]
        node = ctx.node()
        parents = [p.node() for p in ctx.parents() if p]
        branch = ctx.branch()
        if switch_parent:
            parents.reverse()
        prev = (parents and parents[0]) or nullid

        shouldclose = False
        if not fp and len(template) > 0:
            desc_lines = ctx.description().rstrip().split('\n')
            desc = desc_lines[0] # Commit always has a first line.
            fp = makefileobj(repo, template, node, desc=desc, total=total,
                             seqno=seqno, revwidth=revwidth, mode='wb',
                             modemap=filemode)
            if fp != template:
                shouldclose = True
        if fp and fp != sys.stdout and util.safehasattr(fp, 'name'):
            repo.ui.note("%s\n" % fp.name)

        if not fp:
            write = repo.ui.write
        else:
            def write(s, **kw):
                fp.write(s)

        write("# HG changeset patch\n")
        write("# User %s\n" % ctx.user())
        write("# Date %d %d\n" % ctx.date())
        write("# %s\n" % util.datestr(ctx.date()))
        if branch and branch != 'default':
            write("# Branch %s\n" % branch)
        write("# Node ID %s\n" % hex(node))
        write("# Parent  %s\n" % hex(prev))
        if len(parents) > 1:
            write("# Parent  %s\n" % hex(parents[1]))
        write(ctx.description().rstrip())
        write("\n\n")

        for chunk, label in patch.diffui(repo, prev, node, opts=opts):
            write(chunk, label=label)

        if shouldclose:
            fp.close()

    for seqno, rev in enumerate(revs):
        single(rev, seqno + 1, fp)

def diffordiffstat(ui, repo, diffopts, node1, node2, match,
                   changes=None, stat=False, fp=None, prefix='',
                   listsubrepos=False):
    '''show diff or diffstat.'''
    if fp is None:
        write = ui.write
    else:
        def write(s, **kw):
            fp.write(s)

    if stat:
        diffopts = diffopts.copy(context=0)
        width = 80
        if not ui.plain():
            width = ui.termwidth()
        chunks = patch.diff(repo, node1, node2, match, changes, diffopts,
                            prefix=prefix)
        for chunk, label in patch.diffstatui(util.iterlines(chunks),
                                             width=width,
                                             git=diffopts.git):
            write(chunk, label=label)
    else:
        for chunk, label in patch.diffui(repo, node1, node2, match,
                                         changes, diffopts, prefix=prefix):
            write(chunk, label=label)

    if listsubrepos:
        ctx1 = repo[node1]
        ctx2 = repo[node2]
        for subpath, sub in scmutil.itersubrepos(ctx1, ctx2):
            tempnode2 = node2
            try:
                if node2 is not None:
                    tempnode2 = ctx2.substate[subpath][1]
            except KeyError:
                # A subrepo that existed in node1 was deleted between node1 and
                # node2 (inclusive). Thus, ctx2's substate won't contain that
                # subpath. The best we can do is to ignore it.
                tempnode2 = None
            submatch = matchmod.narrowmatcher(subpath, match)
            sub.diff(ui, diffopts, tempnode2, submatch, changes=changes,
                     stat=stat, fp=fp, prefix=prefix)

class changeset_printer(object):
    '''show changeset information when templating not requested.'''

    def __init__(self, ui, repo, patch, diffopts, buffered):
        self.ui = ui
        self.repo = repo
        self.buffered = buffered
        self.patch = patch
        self.diffopts = diffopts
        self.header = {}
        self.hunk = {}
        self.lastheader = None
        self.footer = None

    def flush(self, rev):
        if rev in self.header:
            h = self.header[rev]
            if h != self.lastheader:
                self.lastheader = h
                self.ui.write(h)
            del self.header[rev]
        if rev in self.hunk:
            self.ui.write(self.hunk[rev])
            del self.hunk[rev]
            return 1
        return 0

    def close(self):
        if self.footer:
            self.ui.write(self.footer)

    def show(self, ctx, copies=None, matchfn=None, **props):
        if self.buffered:
            self.ui.pushbuffer()
            self._show(ctx, copies, matchfn, props)
            self.hunk[ctx.rev()] = self.ui.popbuffer(labeled=True)
        else:
            self._show(ctx, copies, matchfn, props)

    def _show(self, ctx, copies, matchfn, props):
        '''show a single changeset or file revision'''
        changenode = ctx.node()
        rev = ctx.rev()

        if self.ui.quiet:
            self.ui.write("%d:%s\n" % (rev, short(changenode)),
                          label='log.node')
            return

        log = self.repo.changelog
        date = util.datestr(ctx.date())

        hexfunc = self.ui.debugflag and hex or short

        parents = [(p, hexfunc(log.node(p)))
                   for p in self._meaningful_parentrevs(log, rev)]

        # i18n: column positioning for "hg log"
        self.ui.write(_("changeset:   %d:%s\n") % (rev, hexfunc(changenode)),
                      label='log.changeset changeset.%s' % ctx.phasestr())

        branch = ctx.branch()
        # don't show the default branch name
        if branch != 'default':
            # i18n: column positioning for "hg log"
            self.ui.write(_("branch:      %s\n") % branch,
                          label='log.branch')
        for bookmark in self.repo.nodebookmarks(changenode):
            # i18n: column positioning for "hg log"
            self.ui.write(_("bookmark:    %s\n") % bookmark,
                          label='log.bookmark')
        for tag in self.repo.nodetags(changenode):
            # i18n: column positioning for "hg log"
            self.ui.write(_("tag:         %s\n") % tag,
                          label='log.tag')
        if self.ui.debugflag and ctx.phase():
            # i18n: column positioning for "hg log"
            self.ui.write(_("phase:       %s\n") % _(ctx.phasestr()),
                          label='log.phase')
        for parent in parents:
            # i18n: column positioning for "hg log"
            self.ui.write(_("parent:      %d:%s\n") % parent,
                          label='log.parent changeset.%s' % ctx.phasestr())

        if self.ui.debugflag:
            mnode = ctx.manifestnode()
            # i18n: column positioning for "hg log"
            self.ui.write(_("manifest:    %d:%s\n") %
                          (self.repo.manifest.rev(mnode), hex(mnode)),
                          label='ui.debug log.manifest')
        # i18n: column positioning for "hg log"
        self.ui.write(_("user:        %s\n") % ctx.user(),
                      label='log.user')
        # i18n: column positioning for "hg log"
        self.ui.write(_("date:        %s\n") % date,
                      label='log.date')

        if self.ui.debugflag:
            files = self.repo.status(log.parents(changenode)[0], changenode)[:3]
            for key, value in zip([# i18n: column positioning for "hg log"
                                   _("files:"),
                                   # i18n: column positioning for "hg log"
                                   _("files+:"),
                                   # i18n: column positioning for "hg log"
                                   _("files-:")], files):
                if value:
                    self.ui.write("%-12s %s\n" % (key, " ".join(value)),
                                  label='ui.debug log.files')
        elif ctx.files() and self.ui.verbose:
            # i18n: column positioning for "hg log"
            self.ui.write(_("files:       %s\n") % " ".join(ctx.files()),
                          label='ui.note log.files')
        if copies and self.ui.verbose:
            copies = ['%s (%s)' % c for c in copies]
            # i18n: column positioning for "hg log"
            self.ui.write(_("copies:      %s\n") % ' '.join(copies),
                          label='ui.note log.copies')

        extra = ctx.extra()
        if extra and self.ui.debugflag:
            for key, value in sorted(extra.items()):
                # i18n: column positioning for "hg log"
                self.ui.write(_("extra:       %s=%s\n")
                              % (key, value.encode('string_escape')),
                              label='ui.debug log.extra')

        description = ctx.description().strip()
        if description:
            if self.ui.verbose:
                self.ui.write(_("description:\n"),
                              label='ui.note log.description')
                self.ui.write(description,
                              label='ui.note log.description')
                self.ui.write("\n\n")
            else:
                # i18n: column positioning for "hg log"
                self.ui.write(_("summary:     %s\n") %
                              description.splitlines()[0],
                              label='log.summary')
        self.ui.write("\n")

        self.showpatch(changenode, matchfn)

    def showpatch(self, node, matchfn):
        if not matchfn:
            matchfn = self.patch
        if matchfn:
            stat = self.diffopts.get('stat')
            diff = self.diffopts.get('patch')
            diffopts = patch.diffopts(self.ui, self.diffopts)
            prev = self.repo.changelog.parents(node)[0]
            if stat:
                diffordiffstat(self.ui, self.repo, diffopts, prev, node,
                               match=matchfn, stat=True)
            if diff:
                if stat:
                    self.ui.write("\n")
                diffordiffstat(self.ui, self.repo, diffopts, prev, node,
                               match=matchfn, stat=False)
            self.ui.write("\n")

    def _meaningful_parentrevs(self, log, rev):
        """Return list of meaningful (or all if debug) parentrevs for rev.

        For merges (two non-nullrev revisions) both parents are meaningful.
        Otherwise the first parent revision is considered meaningful if it
        is not the preceding revision.
        """
        parents = log.parentrevs(rev)
        if not self.ui.debugflag and parents[1] == nullrev:
            if parents[0] >= rev - 1:
                parents = []
            else:
                parents = [parents[0]]
        return parents

class changeset_templater(changeset_printer):
    '''format changeset information.'''

    def __init__(self, ui, repo, patch, diffopts, tmpl, mapfile, buffered):
        changeset_printer.__init__(self, ui, repo, patch, diffopts, buffered)
        formatnode = ui.debugflag and (lambda x: x) or (lambda x: x[:12])
        defaulttempl = {
            'parent': '{rev}:{node|formatnode} ',
            'manifest': '{rev}:{node|formatnode}',
            'file_copy': '{name} ({source})',
            'extra': '{key}={value|stringescape}'
            }
        # filecopy is preserved for compatibility reasons
        defaulttempl['filecopy'] = defaulttempl['file_copy']
        self.t = templater.templater(mapfile, {'formatnode': formatnode},
                                     cache=defaulttempl)
        if tmpl:
            self.t.cache['changeset'] = tmpl

        self.cache = {}

    def _meaningful_parentrevs(self, ctx):
        """Return list of meaningful (or all if debug) parentrevs for rev.
        """
        parents = ctx.parents()
        if len(parents) > 1:
            return parents
        if self.ui.debugflag:
            return [parents[0], self.repo['null']]
        if parents[0].rev() >= ctx.rev() - 1:
            return []
        return parents

    def _show(self, ctx, copies, matchfn, props):
        '''show a single changeset or file revision'''

        showlist = templatekw.showlist

        # showparents() behaviour depends on ui trace level which
        # causes unexpected behaviours at templating level and makes
        # it harder to extract it in a standalone function. Its
        # behaviour cannot be changed so leave it here for now.
        def showparents(**args):
            ctx = args['ctx']
            parents = [[('rev', p.rev()), ('node', p.hex())]
                       for p in self._meaningful_parentrevs(ctx)]
            return showlist('parent', parents, **args)

        props = props.copy()
        props.update(templatekw.keywords)
        props['parents'] = showparents
        props['templ'] = self.t
        props['ctx'] = ctx
        props['repo'] = self.repo
        props['revcache'] = {'copies': copies}
        props['cache'] = self.cache

        # find correct templates for current mode

        tmplmodes = [
            (True, None),
            (self.ui.verbose, 'verbose'),
            (self.ui.quiet, 'quiet'),
            (self.ui.debugflag, 'debug'),
        ]

        types = {'header': '', 'footer': '', 'changeset': 'changeset'}
        for mode, postfix in tmplmodes:
            for type in types:
                cur = postfix and ('%s_%s' % (type, postfix)) or type
                if mode and cur in self.t:
                    types[type] = cur

        try:

            # write header
            if types['header']:
                h = templater.stringify(self.t(types['header'], **props))
                if self.buffered:
                    self.header[ctx.rev()] = h
                else:
                    if self.lastheader != h:
                        self.lastheader = h
                        self.ui.write(h)

            # write changeset metadata, then patch if requested
            key = types['changeset']
            self.ui.write(templater.stringify(self.t(key, **props)))
            self.showpatch(ctx.node(), matchfn)

            if types['footer']:
                if not self.footer:
                    self.footer = templater.stringify(self.t(types['footer'],
                                                             **props))

        except KeyError, inst:
            msg = _("%s: no key named '%s'")
            raise util.Abort(msg % (self.t.mapfile, inst.args[0]))
        except SyntaxError, inst:
            raise util.Abort('%s: %s' % (self.t.mapfile, inst.args[0]))

def gettemplate(ui, tmpl, style):
    """
    Find the template matching the given template spec or style.
    """

    # ui settings
    if not tmpl and not style:
        tmpl = ui.config('ui', 'logtemplate')
        if tmpl:
            try:
                tmpl = templater.parsestring(tmpl)
            except SyntaxError:
                tmpl = templater.parsestring(tmpl, quoted=False)
            return tmpl, None
        else:
            style = util.expandpath(ui.config('ui', 'style', ''))

    if style:
        mapfile = style
        if not os.path.split(mapfile)[0]:
            mapname = (templater.templatepath('map-cmdline.' + mapfile)
                       or templater.templatepath(mapfile))
            if mapname:
                mapfile = mapname
        return None, mapfile

    if not tmpl:
        return None, None

    # looks like a literal template?
    if '{' in tmpl:
        return tmpl, None

    # perhaps a stock style?
    if not os.path.split(tmpl)[0]:
        mapname = (templater.templatepath('map-cmdline.' + tmpl)
                   or templater.templatepath(tmpl))
        if mapname and os.path.isfile(mapname):
            return None, mapname

    # perhaps it's a reference to [templates]
    t = ui.config('templates', tmpl)
    if t:
        try:
            tmpl = templater.parsestring(t)
        except SyntaxError:
            tmpl = templater.parsestring(t, quoted=False)
        return tmpl, None

    # perhaps it's a path to a map or a template
    if ('/' in tmpl or '\\' in tmpl) and os.path.isfile(tmpl):
        # is it a mapfile for a style?
        if os.path.basename(tmpl).startswith("map-"):
            return None, os.path.realpath(tmpl)
        tmpl = open(tmpl).read()
        return tmpl, None

    # constant string?
    return tmpl, None

def show_changeset(ui, repo, opts, buffered=False):
    """show one changeset using template or regular display.

    Display format will be the first non-empty hit of:
    1. option 'template'
    2. option 'style'
    3. [ui] setting 'logtemplate'
    4. [ui] setting 'style'
    If all of these values are either unset or the empty string,
    regular display via changeset_printer() is done.
    """
    # options
    patch = None
    if opts.get('patch') or opts.get('stat'):
        patch = scmutil.matchall(repo)

    tmpl, mapfile = gettemplate(ui, opts.get('template'), opts.get('style'))

    if not tmpl and not mapfile:
        return changeset_printer(ui, repo, patch, opts, buffered)

    try:
        t = changeset_templater(ui, repo, patch, opts, tmpl, mapfile, buffered)
    except SyntaxError, inst:
        raise util.Abort(inst.args[0])
    return t

def showmarker(ui, marker):
    """utility function to display obsolescence marker in a readable way

    To be used by debug function."""
    ui.write(hex(marker.precnode()))
    for repl in marker.succnodes():
        ui.write(' ')
        ui.write(hex(repl))
    ui.write(' %X ' % marker._data[2])
    ui.write('{%s}' % (', '.join('%r: %r' % t for t in
                                 sorted(marker.metadata().items()))))
    ui.write('\n')

def finddate(ui, repo, date):
    """Find the tipmost changeset that matches the given date spec"""

    df = util.matchdate(date)
    m = scmutil.matchall(repo)
    results = {}

    def prep(ctx, fns):
        d = ctx.date()
        if df(d[0]):
            results[ctx.rev()] = d

    for ctx in walkchangerevs(repo, m, {'rev': None}, prep):
        rev = ctx.rev()
        if rev in results:
            ui.status(_("found revision %s from %s\n") %
                      (rev, util.datestr(results[rev])))
            return str(rev)

    raise util.Abort(_("revision matching date not found"))

def increasingwindows(windowsize=8, sizelimit=512):
    while True:
        yield windowsize
        if windowsize < sizelimit:
            windowsize *= 2

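# A standalone Python 3 sketch (not part of this module) of the doubling-window
# pattern above: window sizes double from the initial value until they reach the
# cap, then repeat the cap forever.

```python
def increasingwindows(windowsize=8, sizelimit=512):
    # Yield window sizes that double until reaching sizelimit,
    # then keep yielding sizelimit.
    while True:
        yield windowsize
        if windowsize < sizelimit:
            windowsize *= 2

# Take the first eight window sizes.
gen = increasingwindows()
sizes = [next(gen) for _ in range(8)]
# sizes == [8, 16, 32, 64, 128, 256, 512, 512]
```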
class FileWalkError(Exception):
    pass

def walkfilerevs(repo, match, follow, revs, fncache):
    '''Walks the file history for the matched files.

    Returns the changeset revs that are involved in the file history.

    Throws FileWalkError if the file history can't be walked using
    filelogs alone.
    '''
    wanted = set()
    copies = []
    minrev, maxrev = min(revs), max(revs)
    def filerevgen(filelog, last):
        """
        Only files, no patterns. Check the history of each file.

        Examines filelog entries within minrev, maxrev linkrev range
        Returns an iterator yielding (linkrev, parentlinkrevs, copied)
        tuples in backwards order
        """
        cl_count = len(repo)
        revs = []
        for j in xrange(0, last + 1):
            linkrev = filelog.linkrev(j)
            if linkrev < minrev:
                continue
            # only yield rev for which we have the changelog, it can
            # happen while doing "hg log" during a pull or commit
            if linkrev >= cl_count:
                break

            parentlinkrevs = []
            for p in filelog.parentrevs(j):
                if p != nullrev:
                    parentlinkrevs.append(filelog.linkrev(p))
            n = filelog.node(j)
            revs.append((linkrev, parentlinkrevs,
                         follow and filelog.renamed(n)))

        return reversed(revs)
    def iterfiles():
        pctx = repo['.']
        for filename in match.files():
            if follow:
                if filename not in pctx:
                    raise util.Abort(_('cannot follow file not in parent '
                                       'revision: "%s"') % filename)
                yield filename, pctx[filename].filenode()
            else:
                yield filename, None
        for filename_node in copies:
            yield filename_node

    for file_, node in iterfiles():
        filelog = repo.file(file_)
        if not len(filelog):
            if node is None:
                # A zero count may be a directory or deleted file, so
                # try to find matching entries on the slow path.
                if follow:
                    raise util.Abort(
                        _('cannot follow nonexistent file: "%s"') % file_)
                raise FileWalkError("Cannot walk via filelog")
            else:
                continue

        if node is None:
            last = len(filelog) - 1
        else:
            last = filelog.rev(node)

        # keep track of all ancestors of the file
        ancestors = set([filelog.linkrev(last)])

        # iterate from latest to oldest revision
        for rev, flparentlinkrevs, copied in filerevgen(filelog, last):
            if not follow:
                if rev > maxrev:
                    continue
            else:
                # Note that last might not be the first interesting
                # rev to us:
                # if the file has been changed after maxrev, we'll
                # have linkrev(last) > maxrev, and we still need
                # to explore the file graph
                if rev not in ancestors:
                    continue
                # XXX insert 1327 fix here
                if flparentlinkrevs:
                    ancestors.update(flparentlinkrevs)

            fncache.setdefault(rev, []).append(file_)
            wanted.add(rev)
            if copied:
                copies.append(copied)

    return wanted

1277 1277 def walkchangerevs(repo, match, opts, prepare):
1278 1278 '''Iterate over files and the revs in which they changed.
1279 1279
1280 1280 Callers most commonly need to iterate backwards over the history
1281 1281 in which they are interested. Doing so has awful (quadratic-looking)
1282 1282 performance, so we use iterators in a "windowed" way.
1283 1283
1284 1284 We walk a window of revisions in the desired order. Within the
1285 1285 window, we first walk forwards to gather data, then in the desired
1286 1286 order (usually backwards) to display it.
1287 1287
1288 1288 This function returns an iterator yielding contexts. Before
1289 1289 yielding each context, the iterator will first call the prepare
1290 1290 function on each context in the window in forward order.'''
1291 1291
1292 1292 follow = opts.get('follow') or opts.get('follow_first')
1293 1293
1294 1294 if opts.get('rev'):
1295 1295 revs = scmutil.revrange(repo, opts.get('rev'))
1296 1296 elif follow:
1297 1297 revs = repo.revs('reverse(:.)')
1298 1298 else:
1299 1299 revs = revset.spanset(repo)
1300 1300 revs.reverse()
1301 1301 if not revs:
1302 1302 return []
1303 1303 wanted = set()
1304 1304 slowpath = match.anypats() or (match.files() and opts.get('removed'))
1305 1305 fncache = {}
1306 1306 change = repo.changectx
1307 1307
1308 1308 # First step is to fill wanted, the set of revisions that we want to yield.
1309 1309 # When it does not induce extra cost, we also fill fncache for revisions in
1310 1310 # wanted: a cache of filenames that were changed (ctx.files()) and that
1311 1311 # match the file filtering conditions.
1312 1312
1313 1313 if not slowpath and not match.files():
1314 1314 # No files, no patterns. Display all revs.
1315 1315 wanted = revs
1316 1316
1317 1317 if not slowpath and match.files():
1318 1318 # We only have to read through the filelog to find wanted revisions
1319 1319
1320 1320 try:
1321 1321 wanted = walkfilerevs(repo, match, follow, revs, fncache)
1322 1322 except FileWalkError:
1323 1323 slowpath = True
1324 1324
1325 1325 # We decided to fall back to the slowpath because at least one
1326 1326 # of the paths was not a file. Check to see if at least one of them
1327 1327 # existed in history, otherwise simply return
1328 1328 for path in match.files():
1329 1329 if path == '.' or path in repo.store:
1330 1330 break
1331 1331 else:
1332 1332 return []
1333 1333
1334 1334 if slowpath:
1335 1335 # We have to read the changelog to match filenames against
1336 1336 # changed files
1337 1337
1338 1338 if follow:
1339 1339 raise util.Abort(_('can only follow copies/renames for explicit '
1340 1340 'filenames'))
1341 1341
1342 1342 # The slow path checks files modified in every changeset.
1343 1343 # This is really slow on large repos, so compute the set lazily.
1344 1344 class lazywantedset(object):
1345 1345 def __init__(self):
1346 1346 self.set = set()
1347 1347 self.revs = set(revs)
1348 1348
1349 1349 # No need to worry about locality here because it will be accessed
1350 1350 # in the same order as the increasing window below.
1351 1351 def __contains__(self, value):
1352 1352 if value in self.set:
1353 1353 return True
1354 1354 elif not value in self.revs:
1355 1355 return False
1356 1356 else:
1357 1357 self.revs.discard(value)
1358 1358 ctx = change(value)
1359 1359 matches = filter(match, ctx.files())
1360 1360 if matches:
1361 1361 fncache[value] = matches
1362 1362 self.set.add(value)
1363 1363 return True
1364 1364 return False
1365 1365
1366 1366 def discard(self, value):
1367 1367 self.revs.discard(value)
1368 1368 self.set.discard(value)
1369 1369
1370 1370 wanted = lazywantedset()
1371 1371
1372 1372 class followfilter(object):
1373 1373 def __init__(self, onlyfirst=False):
1374 1374 self.startrev = nullrev
1375 1375 self.roots = set()
1376 1376 self.onlyfirst = onlyfirst
1377 1377
1378 1378 def match(self, rev):
1379 1379 def realparents(rev):
1380 1380 if self.onlyfirst:
1381 1381 return repo.changelog.parentrevs(rev)[0:1]
1382 1382 else:
1383 1383 return filter(lambda x: x != nullrev,
1384 1384 repo.changelog.parentrevs(rev))
1385 1385
1386 1386 if self.startrev == nullrev:
1387 1387 self.startrev = rev
1388 1388 return True
1389 1389
1390 1390 if rev > self.startrev:
1391 1391 # forward: all descendants
1392 1392 if not self.roots:
1393 1393 self.roots.add(self.startrev)
1394 1394 for parent in realparents(rev):
1395 1395 if parent in self.roots:
1396 1396 self.roots.add(rev)
1397 1397 return True
1398 1398 else:
1399 1399 # backwards: all parents
1400 1400 if not self.roots:
1401 1401 self.roots.update(realparents(self.startrev))
1402 1402 if rev in self.roots:
1403 1403 self.roots.remove(rev)
1404 1404 self.roots.update(realparents(rev))
1405 1405 return True
1406 1406
1407 1407 return False
1408 1408
1409 1409 # it might be worthwhile to do this in the iterator if the rev range
1410 1410 # is descending and the prune args are all within that range
1411 1411 for rev in opts.get('prune', ()):
1412 1412 rev = repo[rev].rev()
1413 1413 ff = followfilter()
1414 1414 stop = min(revs[0], revs[-1])
1415 1415 for x in xrange(rev, stop - 1, -1):
1416 1416 if ff.match(x):
1417 1417 wanted = wanted - [x]
1418 1418
1419 1419 # Now that wanted is correctly initialized, we can iterate over the
1420 1420 # revision range, yielding only revisions in wanted.
1421 1421 def iterate():
1422 1422 if follow and not match.files():
1423 1423 ff = followfilter(onlyfirst=opts.get('follow_first'))
1424 1424 def want(rev):
1425 1425 return ff.match(rev) and rev in wanted
1426 1426 else:
1427 1427 def want(rev):
1428 1428 return rev in wanted
1429 1429
1430 1430 it = iter(revs)
1431 1431 stopiteration = False
1432 1432 for windowsize in increasingwindows():
1433 1433 nrevs = []
1434 1434 for i in xrange(windowsize):
1435 1435 try:
1436 1436 rev = it.next()
1437 1437 if want(rev):
1438 1438 nrevs.append(rev)
1439 1439 except StopIteration:
1440 1440 stopiteration = True
1441 1441 break
1442 1442 for rev in sorted(nrevs):
1443 1443 fns = fncache.get(rev)
1444 1444 ctx = change(rev)
1445 1445 if not fns:
1446 1446 def fns_generator():
1447 1447 for f in ctx.files():
1448 1448 if match(f):
1449 1449 yield f
1450 1450 fns = fns_generator()
1451 1451 prepare(ctx, fns)
1452 1452 for rev in nrevs:
1453 1453 yield change(rev)
1454 1454
1455 1455 if stopiteration:
1456 1456 break
1457 1457
1458 1458 return iterate()
1459 1459
1460 1460 def _makegraphfilematcher(repo, pats, followfirst):
1461 1461 # When displaying a revision with --patch --follow FILE, we have
1462 1462 # to know which file of the revision must be diffed. With
1463 1463 # --follow, we want the names of the ancestors of FILE in the
1464 1464 # revision, stored in "fcache". "fcache" is populated by
1465 1465 # reproducing the graph traversal already done by --follow revset
1466 1466 # and relating linkrevs to file names (which is not "correct" but
1467 1467 # good enough).
1468 1468 fcache = {}
1469 1469 fcacheready = [False]
1470 1470 pctx = repo['.']
1471 1471 wctx = repo[None]
1472 1472
1473 1473 def populate():
1474 1474 for fn in pats:
1475 1475 for i in ((pctx[fn],), pctx[fn].ancestors(followfirst=followfirst)):
1476 1476 for c in i:
1477 1477 fcache.setdefault(c.linkrev(), set()).add(c.path())
1478 1478
1479 1479 def filematcher(rev):
1480 1480 if not fcacheready[0]:
1481 1481 # Lazy initialization
1482 1482 fcacheready[0] = True
1483 1483 populate()
1484 1484 return scmutil.match(wctx, fcache.get(rev, []), default='path')
1485 1485
1486 1486 return filematcher
1487 1487
1488 1488 def _makegraphlogrevset(repo, pats, opts, revs):
1489 1489 """Return (expr, filematcher) where expr is a revset string built
1490 1490 from log options and file patterns or None. If --stat or --patch
1491 1491 are not passed, filematcher is None. Otherwise it is a callable
1492 1492 taking a revision number and returning a match object filtering
1493 1493 the files to be detailed when displaying the revision.
1494 1494 """
1495 1495 opt2revset = {
1496 1496 'no_merges': ('not merge()', None),
1497 1497 'only_merges': ('merge()', None),
1498 1498 '_ancestors': ('ancestors(%(val)s)', None),
1499 1499 '_fancestors': ('_firstancestors(%(val)s)', None),
1500 1500 '_descendants': ('descendants(%(val)s)', None),
1501 1501 '_fdescendants': ('_firstdescendants(%(val)s)', None),
1502 1502 '_matchfiles': ('_matchfiles(%(val)s)', None),
1503 1503 'date': ('date(%(val)r)', None),
1504 1504 'branch': ('branch(%(val)r)', ' or '),
1505 1505 '_patslog': ('filelog(%(val)r)', ' or '),
1506 1506 '_patsfollow': ('follow(%(val)r)', ' or '),
1507 1507 '_patsfollowfirst': ('_followfirst(%(val)r)', ' or '),
1508 1508 'keyword': ('keyword(%(val)r)', ' or '),
1509 1509 'prune': ('not (%(val)r or ancestors(%(val)r))', ' and '),
1510 1510 'user': ('user(%(val)r)', ' or '),
1511 1511 }
1512 1512
1513 1513 opts = dict(opts)
1514 1514 # follow or not follow?
1515 1515 follow = opts.get('follow') or opts.get('follow_first')
1516 1516 followfirst = opts.get('follow_first') and 1 or 0
1517 1517 # --follow with FILE behaviour depends on revs...
1518 1518 it = iter(revs)
1519 1519 startrev = it.next()
1520 1520 try:
1521 1521 followdescendants = startrev < it.next()
1522 1522 except StopIteration:
1523 1523 followdescendants = False
1524 1524
1525 1525 # branch and only_branch are really aliases and must be handled at
1526 1526 # the same time
1527 1527 opts['branch'] = opts.get('branch', []) + opts.get('only_branch', [])
1528 1528 opts['branch'] = [repo.lookupbranch(b) for b in opts['branch']]
1529 1529 # pats/include/exclude are passed to match.match() directly in
1530 1530 # _matchfiles() revset but walkchangerevs() builds its matcher with
1531 1531 # scmutil.match(). The difference is input pats are globbed on
1532 1532 # platforms without shell expansion (windows).
1533 1533 pctx = repo[None]
1534 1534 match, pats = scmutil.matchandpats(pctx, pats, opts)
1535 1535 slowpath = match.anypats() or (match.files() and opts.get('removed'))
1536 1536 if not slowpath:
1537 1537 for f in match.files():
1538 1538 if follow and f not in pctx:
1539 1539 raise util.Abort(_('cannot follow file not in parent '
1540 1540 'revision: "%s"') % f)
1541 1541 filelog = repo.file(f)
1542 1542 if not filelog:
1543 1543 # A zero count may be a directory or deleted file, so
1544 1544 # try to find matching entries on the slow path.
1545 1545 if follow:
1546 1546 raise util.Abort(
1547 1547 _('cannot follow nonexistent file: "%s"') % f)
1548 1548 slowpath = True
1549 1549
1550 1550 # We decided to fall back to the slowpath because at least one
1551 1551 # of the paths was not a file. Check to see if at least one of them
1552 1552 # existed in history - in that case, we'll continue down the
1553 1553 # slowpath; otherwise, we can turn off the slowpath
1554 1554 if slowpath:
1555 1555 for path in match.files():
1556 1556 if path == '.' or path in repo.store:
1557 1557 break
1558 1558 else:
1559 1559 slowpath = False
1560 1560
1561 1561 if slowpath:
1562 1562 # See walkchangerevs() slow path.
1563 1563 #
1564 1564 if follow:
1565 1565 raise util.Abort(_('can only follow copies/renames for explicit '
1566 1566 'filenames'))
1567 1567 # pats/include/exclude cannot be represented as separate
1568 1568 # revset expressions as their filtering logic applies at file
1569 1569 # level. For instance "-I a -X a" matches a revision touching
1570 1570 # "a" and "b" while "file(a) and not file(b)" does
1571 1571 # not. Besides, filesets are evaluated against the working
1572 1572 # directory.
1573 1573 matchargs = ['r:', 'd:relpath']
1574 1574 for p in pats:
1575 1575 matchargs.append('p:' + p)
1576 1576 for p in opts.get('include', []):
1577 1577 matchargs.append('i:' + p)
1578 1578 for p in opts.get('exclude', []):
1579 1579 matchargs.append('x:' + p)
1580 1580 matchargs = ','.join(('%r' % p) for p in matchargs)
1581 1581 opts['_matchfiles'] = matchargs
1582 1582 else:
1583 1583 if follow:
1584 1584 fpats = ('_patsfollow', '_patsfollowfirst')
1585 1585 fnopats = (('_ancestors', '_fancestors'),
1586 1586 ('_descendants', '_fdescendants'))
1587 1587 if pats:
1588 1588 # follow() revset interprets its file argument as a
1589 1589 # manifest entry, so use match.files(), not pats.
1590 1590 opts[fpats[followfirst]] = list(match.files())
1591 1591 else:
1592 1592 opts[fnopats[followdescendants][followfirst]] = str(startrev)
1593 1593 else:
1594 1594 opts['_patslog'] = list(pats)
1595 1595
1596 1596 filematcher = None
1597 1597 if opts.get('patch') or opts.get('stat'):
1598 1598 if follow:
1599 1599 filematcher = _makegraphfilematcher(repo, pats, followfirst)
1600 1600 else:
1601 1601 filematcher = lambda rev: match
1602 1602
1603 1603 expr = []
1604 1604 for op, val in opts.iteritems():
1605 1605 if not val:
1606 1606 continue
1607 1607 if op not in opt2revset:
1608 1608 continue
1609 1609 revop, andor = opt2revset[op]
1610 1610 if '%(val)' not in revop:
1611 1611 expr.append(revop)
1612 1612 else:
1613 1613 if not isinstance(val, list):
1614 1614 e = revop % {'val': val}
1615 1615 else:
1616 1616 e = '(' + andor.join((revop % {'val': v}) for v in val) + ')'
1617 1617 expr.append(e)
1618 1618
1619 1619 if expr:
1620 1620 expr = '(' + ' and '.join(expr) + ')'
1621 1621 else:
1622 1622 expr = None
1623 1623 return expr, filematcher
1624 1624
1625 1625 def getgraphlogrevs(repo, pats, opts):
1626 1626 """Return (revs, expr, filematcher) where revs is an iterable of
1627 1627 revision numbers, expr is a revset string built from log options
1628 1628 and file patterns or None, and used to filter 'revs'. If --stat or
1629 1629 --patch are not passed, filematcher is None. Otherwise it is a
1630 1630 callable taking a revision number and returning a match object
1631 1631 filtering the files to be detailed when displaying the revision.
1632 1632 """
1633 1633 if not len(repo):
1634 1634 return [], None, None
1635 1635 limit = loglimit(opts)
1636 1636 # Default --rev value depends on --follow but --follow behaviour
1637 1637 # depends on revisions resolved from --rev...
1638 1638 follow = opts.get('follow') or opts.get('follow_first')
1639 1639 possiblyunsorted = False # whether revs might need sorting
1640 1640 if opts.get('rev'):
1641 1641 revs = scmutil.revrange(repo, opts['rev'])
1642 1642 # Don't sort here because _makegraphlogrevset might depend on the
1643 1643 # order of revs
1644 1644 possiblyunsorted = True
1645 1645 else:
1646 1646 if follow and len(repo) > 0:
1647 1647 revs = repo.revs('reverse(:.)')
1648 1648 else:
1649 1649 revs = revset.spanset(repo)
1650 1650 revs.reverse()
1651 1651 if not revs:
1652 1652 return revset.baseset(), None, None
1653 1653 expr, filematcher = _makegraphlogrevset(repo, pats, opts, revs)
1654 1654 if possiblyunsorted:
1655 1655 revs.sort(reverse=True)
1656 1656 if expr:
1657 1657 # Revset matchers often operate faster on revisions in changelog
1658 1658 # order, because most filters deal with the changelog.
1659 1659 revs.reverse()
1660 1660 matcher = revset.match(repo.ui, expr)
1661 1661 # Revset matches can reorder revisions. "A or B" typically returns
1662 1662 # the revision matching A, then the revision matching B. Sort
1663 1663 # again to fix that.
1664 1664 revs = matcher(repo, revs)
1665 1665 revs.sort(reverse=True)
1666 1666 if limit is not None:
1667 1667 limitedrevs = revset.baseset()
1668 1668 for idx, rev in enumerate(revs):
1669 1669 if idx >= limit:
1670 1670 break
1671 1671 limitedrevs.append(rev)
1672 1672 revs = limitedrevs
1673 1673
1674 1674 return revs, expr, filematcher
1675 1675
1676 1676 def displaygraph(ui, dag, displayer, showparents, edgefn, getrenamed=None,
1677 1677 filematcher=None):
1678 1678 seen, state = [], graphmod.asciistate()
1679 1679 for rev, type, ctx, parents in dag:
1680 1680 char = 'o'
1681 1681 if ctx.node() in showparents:
1682 1682 char = '@'
1683 1683 elif ctx.obsolete():
1684 1684 char = 'x'
1685 1685 copies = None
1686 1686 if getrenamed and ctx.rev():
1687 1687 copies = []
1688 1688 for fn in ctx.files():
1689 1689 rename = getrenamed(fn, ctx.rev())
1690 1690 if rename:
1691 1691 copies.append((fn, rename[0]))
1692 1692 revmatchfn = None
1693 1693 if filematcher is not None:
1694 1694 revmatchfn = filematcher(ctx.rev())
1695 1695 displayer.show(ctx, copies=copies, matchfn=revmatchfn)
1696 1696 lines = displayer.hunk.pop(rev).split('\n')
1697 1697 if not lines[-1]:
1698 1698 del lines[-1]
1699 1699 displayer.flush(rev)
1700 1700 edges = edgefn(type, char, lines, seen, rev, parents)
1701 1701 for type, char, lines, coldata in edges:
1702 1702 graphmod.ascii(ui, state, type, char, lines, coldata)
1703 1703 displayer.close()
1704 1704
1705 1705 def graphlog(ui, repo, *pats, **opts):
1706 1706 # Parameters are identical to log command ones
1707 1707 revs, expr, filematcher = getgraphlogrevs(repo, pats, opts)
1708 1708 revdag = graphmod.dagwalker(repo, revs)
1709 1709
1710 1710 getrenamed = None
1711 1711 if opts.get('copies'):
1712 1712 endrev = None
1713 1713 if opts.get('rev'):
1714 1714 endrev = scmutil.revrange(repo, opts.get('rev')).max() + 1
1715 1715 getrenamed = templatekw.getrenamedfn(repo, endrev=endrev)
1716 1716 displayer = show_changeset(ui, repo, opts, buffered=True)
1717 1717 showparents = [ctx.node() for ctx in repo[None].parents()]
1718 1718 displaygraph(ui, revdag, displayer, showparents,
1719 1719 graphmod.asciiedges, getrenamed, filematcher)
1720 1720
1721 1721 def checkunsupportedgraphflags(pats, opts):
1722 1722 for op in ["newest_first"]:
1723 1723 if op in opts and opts[op]:
1724 1724 raise util.Abort(_("-G/--graph option is incompatible with --%s")
1725 1725 % op.replace("_", "-"))
1726 1726
1727 1727 def graphrevs(repo, nodes, opts):
1728 1728 limit = loglimit(opts)
1729 1729 nodes.reverse()
1730 1730 if limit is not None:
1731 1731 nodes = nodes[:limit]
1732 1732 return graphmod.nodes(repo, nodes)
1733 1733
1734 1734 def add(ui, repo, match, dryrun, listsubrepos, prefix, explicitonly):
1735 1735 join = lambda f: os.path.join(prefix, f)
1736 1736 bad = []
1737 1737 oldbad = match.bad
1738 1738 match.bad = lambda x, y: bad.append(x) or oldbad(x, y)
1739 1739 names = []
1740 1740 wctx = repo[None]
1741 1741 cca = None
1742 1742 abort, warn = scmutil.checkportabilityalert(ui)
1743 1743 if abort or warn:
1744 1744 cca = scmutil.casecollisionauditor(ui, abort, repo.dirstate)
1745 1745 for f in repo.walk(match):
1746 1746 exact = match.exact(f)
1747 1747 if exact or not explicitonly and f not in repo.dirstate:
1748 1748 if cca:
1749 1749 cca(f)
1750 1750 names.append(f)
1751 1751 if ui.verbose or not exact:
1752 1752 ui.status(_('adding %s\n') % match.rel(join(f)))
1753 1753
1754 1754 for subpath in sorted(wctx.substate):
1755 1755 sub = wctx.sub(subpath)
1756 1756 try:
1757 1757 submatch = matchmod.narrowmatcher(subpath, match)
1758 1758 if listsubrepos:
1759 1759 bad.extend(sub.add(ui, submatch, dryrun, listsubrepos, prefix,
1760 1760 False))
1761 1761 else:
1762 1762 bad.extend(sub.add(ui, submatch, dryrun, listsubrepos, prefix,
1763 1763 True))
1764 1764 except error.LookupError:
1765 1765 ui.status(_("skipping missing subrepository: %s\n")
1766 1766 % join(subpath))
1767 1767
1768 1768 if not dryrun:
1769 1769 rejected = wctx.add(names, prefix)
1770 1770 bad.extend(f for f in rejected if f in match.files())
1771 1771 return bad
1772 1772
1773 1773 def forget(ui, repo, match, prefix, explicitonly):
1774 1774 join = lambda f: os.path.join(prefix, f)
1775 1775 bad = []
1776 1776 oldbad = match.bad
1777 1777 match.bad = lambda x, y: bad.append(x) or oldbad(x, y)
1778 1778 wctx = repo[None]
1779 1779 forgot = []
1780 1780 s = repo.status(match=match, clean=True)
1781 1781 forget = sorted(s[0] + s[1] + s[3] + s[6])
1782 1782 if explicitonly:
1783 1783 forget = [f for f in forget if match.exact(f)]
1784 1784
1785 1785 for subpath in sorted(wctx.substate):
1786 1786 sub = wctx.sub(subpath)
1787 1787 try:
1788 1788 submatch = matchmod.narrowmatcher(subpath, match)
1789 1789 subbad, subforgot = sub.forget(ui, submatch, prefix)
1790 1790 bad.extend([subpath + '/' + f for f in subbad])
1791 1791 forgot.extend([subpath + '/' + f for f in subforgot])
1792 1792 except error.LookupError:
1793 1793 ui.status(_("skipping missing subrepository: %s\n")
1794 1794 % join(subpath))
1795 1795
1796 1796 if not explicitonly:
1797 1797 for f in match.files():
1798 1798 if f not in repo.dirstate and not os.path.isdir(match.rel(join(f))):
1799 1799 if f not in forgot:
1800 1800 if os.path.exists(match.rel(join(f))):
1801 1801 ui.warn(_('not removing %s: '
1802 1802 'file is already untracked\n')
1803 1803 % match.rel(join(f)))
1804 1804 bad.append(f)
1805 1805
1806 1806 for f in forget:
1807 1807 if ui.verbose or not match.exact(f):
1808 1808 ui.status(_('removing %s\n') % match.rel(join(f)))
1809 1809
1810 1810 rejected = wctx.forget(forget, prefix)
1811 1811 bad.extend(f for f in rejected if f in match.files())
1812 1812 forgot.extend(forget)
1813 1813 return bad, forgot
1814 1814
1815 1815 def duplicatecopies(repo, rev, fromrev):
1816 1816 '''reproduce copies from fromrev to rev in the dirstate'''
1817 1817 for dst, src in copies.pathcopies(repo[fromrev], repo[rev]).iteritems():
1818 1818 # copies.pathcopies returns backward renames, so dst might not
1819 1819 # actually be in the dirstate
1820 1820 if repo.dirstate[dst] in "nma":
1821 1821 repo.dirstate.copy(src, dst)
1822 1822
1823 1823 def commit(ui, repo, commitfunc, pats, opts):
1824 1824 '''commit the specified files or all outstanding changes'''
1825 1825 date = opts.get('date')
1826 1826 if date:
1827 1827 opts['date'] = util.parsedate(date)
1828 1828 message = logmessage(ui, opts)
1829 1829
1830 1830 # extract addremove carefully -- this function can be called from a command
1831 1831 # that doesn't support addremove
1832 1832 if opts.get('addremove'):
1833 1833 scmutil.addremove(repo, pats, opts)
1834 1834
1835 1835 return commitfunc(ui, repo, message,
1836 1836 scmutil.match(repo[None], pats, opts), opts)
1837 1837
1838 1838 def amend(ui, repo, commitfunc, old, extra, pats, opts):
1839 1839 ui.note(_('amending changeset %s\n') % old)
1840 1840 base = old.p1()
1841 1841
1842 1842 wlock = lock = newid = None
1843 1843 try:
1844 1844 wlock = repo.wlock()
1845 1845 lock = repo.lock()
1846 1846 tr = repo.transaction('amend')
1847 1847 try:
1848 1848 # See if we got a message from -m or -l, if not, open the editor
1849 1849 # with the message of the changeset to amend
1850 1850 message = logmessage(ui, opts)
1851 1851 # ensure logfile does not conflict with later enforcement of the
1852 1852 # message. potential logfile content has been processed by
1853 1853 # `logmessage` anyway.
1854 1854 opts.pop('logfile')
1855 1855 # First, do a regular commit to record all changes in the working
1856 1856 # directory (if there are any)
1857 1857 ui.callhooks = False
1858 1858 currentbookmark = repo._bookmarkcurrent
1859 1859 try:
1860 1860 repo._bookmarkcurrent = None
1861 1861 opts['message'] = 'temporary amend commit for %s' % old
1862 1862 node = commit(ui, repo, commitfunc, pats, opts)
1863 1863 finally:
1864 1864 repo._bookmarkcurrent = currentbookmark
1865 1865 ui.callhooks = True
1866 1866 ctx = repo[node]
1867 1867
1868 1868 # Participating changesets:
1869 1869 #
1870 1870 # node/ctx o - new (intermediate) commit that contains changes
1871 1871 # | from working dir to go into amending commit
1872 1872 # | (or a workingctx if there were no changes)
1873 1873 # |
1874 1874 # old o - changeset to amend
1875 1875 # |
1876 1876 # base o - parent of amending changeset
1877 1877
1878 1878 # Update extra dict from amended commit (e.g. to preserve graft
1879 1879 # source)
1880 1880 extra.update(old.extra())
1881 1881
1882 1882 # Also update it from the intermediate commit or from the wctx
1883 1883 extra.update(ctx.extra())
1884 1884
1885 1885 if len(old.parents()) > 1:
1886 1886 # ctx.files() isn't reliable for merges, so fall back to the
1887 1887 # slower repo.status() method
1888 1888 files = set([fn for st in repo.status(base, old)[:3]
1889 1889 for fn in st])
1890 1890 else:
1891 1891 files = set(old.files())
1892 1892
1893 1893 # Second, we use either the commit we just did, or if there were no
1894 1894 # changes the parent of the working directory as the version of the
1895 1895 # files in the final amend commit
1896 1896 if node:
1897 1897 ui.note(_('copying changeset %s to %s\n') % (ctx, base))
1898 1898
1899 1899 user = ctx.user()
1900 1900 date = ctx.date()
1901 1901 # Recompute copies (avoid recording a -> b -> a)
1902 1902 copied = copies.pathcopies(base, ctx)
1903 1903
1904 1904 # Prune files which were reverted by the updates: if old
1905 1905 # introduced file X and our intermediate commit, node,
1906 1906 # renamed that file, then those two files are the same and
1907 1907 # we can discard X from our list of files. Likewise if X
1908 1908 # was deleted, it's no longer relevant
1909 1909 files.update(ctx.files())
1910 1910
1911 1911 def samefile(f):
1912 1912 if f in ctx.manifest():
1913 1913 a = ctx.filectx(f)
1914 1914 if f in base.manifest():
1915 1915 b = base.filectx(f)
1916 1916 return (not a.cmp(b)
1917 1917 and a.flags() == b.flags())
1918 1918 else:
1919 1919 return False
1920 1920 else:
1921 1921 return f not in base.manifest()
1922 1922 files = [f for f in files if not samefile(f)]
1923 1923
1924 1924 def filectxfn(repo, ctx_, path):
1925 1925 try:
1926 1926 fctx = ctx[path]
1927 1927 flags = fctx.flags()
1928 1928 mctx = context.memfilectx(fctx.path(), fctx.data(),
1929 1929 islink='l' in flags,
1930 1930 isexec='x' in flags,
1931 1931 copied=copied.get(path))
1932 1932 return mctx
1933 1933 except KeyError:
1934 1934 raise IOError
1935 1935 else:
1936 1936 ui.note(_('copying changeset %s to %s\n') % (old, base))
1937 1937
1938 1938 # Use version of files as in the old cset
1939 1939 def filectxfn(repo, ctx_, path):
1940 1940 try:
1941 1941 return old.filectx(path)
1942 1942 except KeyError:
1943 1943 raise IOError
1944 1944
1945 1945 user = opts.get('user') or old.user()
1946 1946 date = opts.get('date') or old.date()
1947 1947 editmsg = False
1948 1948 if not message:
1949 1949 editmsg = True
1950 1950 message = old.description()
1951 1951
1952 1952 pureextra = extra.copy()
1953 1953 extra['amend_source'] = old.hex()
1954 1954
1955 1955 new = context.memctx(repo,
1956 1956 parents=[base.node(), old.p2().node()],
1957 1957 text=message,
1958 1958 files=files,
1959 1959 filectxfn=filectxfn,
1960 1960 user=user,
1961 1961 date=date,
1962 1962 extra=extra)
1963 1963 if editmsg:
1964 1964 new._text = commitforceeditor(repo, new, [])
1965 1965 repo.savecommitmessage(new.description())
1966 1966
1967 1967 newdesc = changelog.stripdesc(new.description())
1968 1968 if ((not node)
1969 1969 and newdesc == old.description()
1970 1970 and user == old.user()
1971 1971 and date == old.date()
1972 1972 and pureextra == old.extra()):
1973 1973 # nothing changed. continuing here would create a new node
1974 1974 # anyway because of the amend_source noise.
1975 1975 #
1976 1976 # This is not what we expect from amend.
1977 1977 return old.node()
1978 1978
1979 1979 ph = repo.ui.config('phases', 'new-commit', phases.draft)
1980 1980 try:
1981 1981 if opts.get('secret'):
1982 1982 commitphase = 'secret'
1983 1983 else:
1984 1984 commitphase = old.phase()
1985 repo.ui.setconfig('phases', 'new-commit', commitphase)
1985 repo.ui.setconfig('phases', 'new-commit', commitphase, 'amend')
1986 1986 newid = repo.commitctx(new)
1987 1987 finally:
1988 repo.ui.setconfig('phases', 'new-commit', ph)
1988 repo.ui.setconfig('phases', 'new-commit', ph, 'amend')
1989 1989 if newid != old.node():
1990 1990 # Reroute the working copy parent to the new changeset
1991 1991 repo.setparents(newid, nullid)
1992 1992
1993 1993 # Move bookmarks from old parent to amend commit
1994 1994 bms = repo.nodebookmarks(old.node())
1995 1995 if bms:
1996 1996 marks = repo._bookmarks
1997 1997 for bm in bms:
1998 1998 marks[bm] = newid
1999 1999 marks.write()
2000 2000 # commit the whole amend process
2001 2001 if obsolete._enabled and newid != old.node():
2002 2002 # mark the new changeset as successor of the rewritten one
2003 2003 new = repo[newid]
2004 2004 obs = [(old, (new,))]
2005 2005 if node:
2006 2006 obs.append((ctx, ()))
2007 2007
2008 2008 obsolete.createmarkers(repo, obs)
2009 2009 tr.close()
2010 2010 finally:
2011 2011 tr.release()
2012 2012 if (not obsolete._enabled) and newid != old.node():
2013 2013 # Strip the intermediate commit (if there was one) and the amended
2014 2014 # commit
2015 2015 if node:
2016 2016 ui.note(_('stripping intermediate changeset %s\n') % ctx)
2017 2017 ui.note(_('stripping amended changeset %s\n') % old)
2018 2018 repair.strip(ui, repo, old.node(), topic='amend-backup')
2019 2019 finally:
2020 2020 if newid is None:
2021 2021 repo.dirstate.invalidate()
2022 2022 lockmod.release(lock, wlock)
2023 2023 return newid
2024 2024
2025 2025 def commiteditor(repo, ctx, subs):
2026 2026 if ctx.description():
2027 2027 return ctx.description()
2028 2028 return commitforceeditor(repo, ctx, subs)
2029 2029
2030 2030 def commitforceeditor(repo, ctx, subs):
2031 2031 edittext = []
2032 2032 modified, added, removed = ctx.modified(), ctx.added(), ctx.removed()
2033 2033 if ctx.description():
2034 2034 edittext.append(ctx.description())
2035 2035 edittext.append("")
2036 2036 edittext.append("") # Empty line between message and comments.
2037 2037 edittext.append(_("HG: Enter commit message."
2038 2038 " Lines beginning with 'HG:' are removed."))
2039 2039 edittext.append(_("HG: Leave message empty to abort commit."))
2040 2040 edittext.append("HG: --")
2041 2041 edittext.append(_("HG: user: %s") % ctx.user())
2042 2042 if ctx.p2():
2043 2043 edittext.append(_("HG: branch merge"))
2044 2044 if ctx.branch():
2045 2045 edittext.append(_("HG: branch '%s'") % ctx.branch())
2046 2046 if bookmarks.iscurrent(repo):
2047 2047 edittext.append(_("HG: bookmark '%s'") % repo._bookmarkcurrent)
2048 2048 edittext.extend([_("HG: subrepo %s") % s for s in subs])
2049 2049 edittext.extend([_("HG: added %s") % f for f in added])
2050 2050 edittext.extend([_("HG: changed %s") % f for f in modified])
2051 2051 edittext.extend([_("HG: removed %s") % f for f in removed])
2052 2052 if not added and not modified and not removed:
2053 2053 edittext.append(_("HG: no files changed"))
2054 2054 edittext.append("")
2055 2055 # run editor in the repository root
2056 2056 olddir = os.getcwd()
2057 2057 os.chdir(repo.root)
2058 2058 text = repo.ui.edit("\n".join(edittext), ctx.user(), ctx.extra())
2059 2059 text = re.sub("(?m)^HG:.*(\n|$)", "", text)
2060 2060 os.chdir(olddir)
2061 2061
2062 2062 if not text.strip():
2063 2063 raise util.Abort(_("empty commit message"))
2064 2064
2065 2065 return text
2066 2066
2067 2067 def commitstatus(repo, node, branch, bheads=None, opts={}):
2068 2068 ctx = repo[node]
2069 2069 parents = ctx.parents()
2070 2070
2071 2071 if (not opts.get('amend') and bheads and node not in bheads and not
2072 2072 [x for x in parents if x.node() in bheads and x.branch() == branch]):
2073 2073 repo.ui.status(_('created new head\n'))
2074 2074 # The message is not printed for initial roots. For the other
2075 2075 # changesets, it is printed in the following situations:
2076 2076 #
2077 2077 # Par column: for the 2 parents with ...
2078 2078 # N: null or no parent
2079 2079 # B: parent is on another named branch
2080 2080 # C: parent is a regular non head changeset
2081 2081 # H: parent was a branch head of the current branch
2082 2082 # Msg column: whether we print "created new head" message
2083 2083 # In the following, it is assumed that there already exist some
2084 2084 # initial branch heads of the current branch, otherwise nothing is
2085 2085 # printed anyway.
2086 2086 #
2087 2087 # Par Msg Comment
2088 2088 # N N y additional topo root
2089 2089 #
2090 2090 # B N y additional branch root
2091 2091 # C N y additional topo head
2092 2092 # H N n usual case
2093 2093 #
2094 2094 # B B y weird additional branch root
2095 2095 # C B y branch merge
2096 2096 # H B n merge with named branch
2097 2097 #
2098 2098 # C C y additional head from merge
2099 2099 # C H n merge with a head
2100 2100 #
2101 2101 # H H n head merge: head count decreases
2102 2102
2103 2103 if not opts.get('close_branch'):
2104 2104 for r in parents:
2105 2105 if r.closesbranch() and r.branch() == branch:
2106 2106 repo.ui.status(_('reopening closed branch head %d\n') % r)
2107 2107
2108 2108 if repo.ui.debugflag:
2109 2109 repo.ui.write(_('committed changeset %d:%s\n') % (int(ctx), ctx.hex()))
2110 2110 elif repo.ui.verbose:
2111 2111 repo.ui.write(_('committed changeset %d:%s\n') % (int(ctx), ctx))
2112 2112
2113 2113 def revert(ui, repo, ctx, parents, *pats, **opts):
2114 2114 parent, p2 = parents
2115 2115 node = ctx.node()
2116 2116
2117 2117 mf = ctx.manifest()
2118 2118 if node == parent:
2119 2119 pmf = mf
2120 2120 else:
2121 2121 pmf = None
2122 2122
2123 2123 # need all matching names in dirstate and manifest of target rev,
2124 2124 # so have to walk both. do not print errors if files exist in one
2125 2125 # but not the other.
2126 2126
2127 2127 names = {}
2128 2128
2129 2129 wlock = repo.wlock()
2130 2130 try:
2131 2131 # walk dirstate.
2132 2132
2133 2133 m = scmutil.match(repo[None], pats, opts)
2134 2134 m.bad = lambda x, y: False
2135 2135 for abs in repo.walk(m):
2136 2136 names[abs] = m.rel(abs), m.exact(abs)
2137 2137
2138 2138 # walk target manifest.
2139 2139
2140 2140 def badfn(path, msg):
2141 2141 if path in names:
2142 2142 return
2143 2143 if path in ctx.substate:
2144 2144 return
2145 2145 path_ = path + '/'
2146 2146 for f in names:
2147 2147 if f.startswith(path_):
2148 2148 return
2149 2149 ui.warn("%s: %s\n" % (m.rel(path), msg))
2150 2150
2151 2151 m = scmutil.match(ctx, pats, opts)
2152 2152 m.bad = badfn
2153 2153 for abs in ctx.walk(m):
2154 2154 if abs not in names:
2155 2155 names[abs] = m.rel(abs), m.exact(abs)
2156 2156
2157 2157 # get the list of subrepos that must be reverted
2158 2158 targetsubs = sorted(s for s in ctx.substate if m(s))
2159 2159 m = scmutil.matchfiles(repo, names)
2160 2160 changes = repo.status(match=m)[:4]
2161 2161 modified, added, removed, deleted = map(set, changes)
2162 2162
2163 2163 # if f is a rename, also revert the source
2164 2164 cwd = repo.getcwd()
2165 2165 for f in added:
2166 2166 src = repo.dirstate.copied(f)
2167 2167 if src and src not in names and repo.dirstate[src] == 'r':
2168 2168 removed.add(src)
2169 2169 names[src] = (repo.pathto(src, cwd), True)
2170 2170
2171 2171 def removeforget(abs):
2172 2172 if repo.dirstate[abs] == 'a':
2173 2173 return _('forgetting %s\n')
2174 2174 return _('removing %s\n')
2175 2175
2176 2176 revert = ([], _('reverting %s\n'))
2177 2177 add = ([], _('adding %s\n'))
2178 2178 remove = ([], removeforget)
2179 2179 undelete = ([], _('undeleting %s\n'))
2180 2180
2181 2181 disptable = (
2182 2182 # dispatch table:
2183 2183 # file state
2184 2184 # action if in target manifest
2185 2185 # action if not in target manifest
2186 2186 # make backup if in target manifest
2187 2187 # make backup if not in target manifest
2188 2188 (modified, revert, remove, True, True),
2189 2189 (added, revert, remove, True, False),
2190 2190 (removed, undelete, None, True, False),
2191 2191 (deleted, revert, remove, False, False),
2192 2192 )
2193 2193
2194 2194 for abs, (rel, exact) in sorted(names.items()):
2195 2195 mfentry = mf.get(abs)
2196 2196 target = repo.wjoin(abs)
2197 2197 def handle(xlist, dobackup):
2198 2198 xlist[0].append(abs)
2199 2199 if (dobackup and not opts.get('no_backup') and
2200 2200 os.path.lexists(target) and
2201 2201 abs in ctx and repo[None][abs].cmp(ctx[abs])):
2202 2202 bakname = "%s.orig" % rel
2203 2203 ui.note(_('saving current version of %s as %s\n') %
2204 2204 (rel, bakname))
2205 2205 if not opts.get('dry_run'):
2206 2206 util.rename(target, bakname)
2207 2207 if ui.verbose or not exact:
2208 2208 msg = xlist[1]
2209 2209 if not isinstance(msg, basestring):
2210 2210 msg = msg(abs)
2211 2211 ui.status(msg % rel)
2212 2212 for table, hitlist, misslist, backuphit, backupmiss in disptable:
2213 2213 if abs not in table:
2214 2214 continue
2215 2215 # file has changed in dirstate
2216 2216 if mfentry:
2217 2217 handle(hitlist, backuphit)
2218 2218 elif misslist is not None:
2219 2219 handle(misslist, backupmiss)
2220 2220 break
2221 2221 else:
2222 2222 if abs not in repo.dirstate:
2223 2223 if mfentry:
2224 2224 handle(add, True)
2225 2225 elif exact:
2226 2226 ui.warn(_('file not managed: %s\n') % rel)
2227 2227 continue
2228 2228 # file has not changed in dirstate
2229 2229 if node == parent:
2230 2230 if exact:
2231 2231 ui.warn(_('no changes needed to %s\n') % rel)
2232 2232 continue
2233 2233 if pmf is None:
2234 2234 # only need parent manifest in this unlikely case,
2235 2235 # so do not read by default
2236 2236 pmf = repo[parent].manifest()
2237 2237 if abs in pmf and mfentry:
2238 2238 # if version of file is same in parent and target
2239 2239 # manifests, do nothing
2240 2240 if (pmf[abs] != mfentry or
2241 2241 pmf.flags(abs) != mf.flags(abs)):
2242 2242 handle(revert, False)
2243 2243 else:
2244 2244 handle(remove, False)
2245 2245 if not opts.get('dry_run'):
2246 2246 _performrevert(repo, parents, ctx, revert, add, remove, undelete)
2247 2247
2248 2248 if targetsubs:
2249 2249 # Revert the subrepos on the revert list
2250 2250 for sub in targetsubs:
2251 2251 ctx.sub(sub).revert(ui, ctx.substate[sub], *pats, **opts)
2252 2252 finally:
2253 2253 wlock.release()
2254 2254
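The revert loop above dispatches each file through a table of entries, taking the first entry whose file set contains the path and choosing a "hit" or "miss" action depending on whether the file exists in the target manifest. A minimal standalone sketch of that pattern follows; the names (`dispatch`, `on_hit`, `on_miss`) and the simplified entry shape are illustrative, not Mercurial's actual `disptable` tuples:

```python
# Sketch of the first-match dispatch-table pattern used by revert above.
# Each entry pairs a file set with an action for a manifest hit and an
# (optional) action for a miss; the first entry containing the file wins.
def dispatch(path, in_target_manifest, disptable):
    for fileset, on_hit, on_miss in disptable:
        if path not in fileset:
            continue
        if in_target_manifest:
            on_hit(path)
        elif on_miss is not None:
            on_miss(path)
        return True          # first match handled the file, stop scanning
    return False

actions = []
modified = {'a.txt'}
removed = {'b.txt'}
disptable = [
    (modified, lambda p: actions.append(('revert', p)),
               lambda p: actions.append(('remove', p))),
    (removed,  lambda p: actions.append(('undelete', p)), None),
]
dispatch('a.txt', True, disptable)    # in target manifest -> hit action
dispatch('b.txt', False, disptable)   # missing and no miss action -> no-op
```

Keeping the actions in a data table like this lets the loop body stay uniform while the per-category behaviour (backup or not, which message to print) lives in the table entries.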
2255 2255 def _performrevert(repo, parents, ctx, revert, add, remove, undelete):
2256 2256 """function that actually perform all the action computed for revert
2257 2257
2258 2258 This is an independent function to let extension to plug in and react to
2259 2259 the imminent revert.
2260 2260
2261 2261 Make sure you have the working directory locked when caling this function.
2262 2262 """
2263 2263 parent, p2 = parents
2264 2264 node = ctx.node()
2265 2265 def checkout(f):
2266 2266 fc = ctx[f]
2267 2267 repo.wwrite(f, fc.data(), fc.flags())
2268 2268
2269 2269 audit_path = pathutil.pathauditor(repo.root)
2270 2270 for f in remove[0]:
2271 2271 if repo.dirstate[f] == 'a':
2272 2272 repo.dirstate.drop(f)
2273 2273 continue
2274 2274 audit_path(f)
2275 2275 try:
2276 2276 util.unlinkpath(repo.wjoin(f))
2277 2277 except OSError:
2278 2278 pass
2279 2279 repo.dirstate.remove(f)
2280 2280
2281 2281 normal = None
2282 2282 if node == parent:
2283 2283 # We're reverting to our parent. If possible, we'd like status
2284 2284 # to report the file as clean. We have to use normallookup for
2285 2285 # merges to avoid losing information about merged/dirty files.
2286 2286 if p2 != nullid:
2287 2287 normal = repo.dirstate.normallookup
2288 2288 else:
2289 2289 normal = repo.dirstate.normal
2290 2290 for f in revert[0]:
2291 2291 checkout(f)
2292 2292 if normal:
2293 2293 normal(f)
2294 2294
2295 2295 for f in add[0]:
2296 2296 checkout(f)
2297 2297 repo.dirstate.add(f)
2298 2298
2299 2299 normal = repo.dirstate.normallookup
2300 2300 if node == parent and p2 == nullid:
2301 2301 normal = repo.dirstate.normal
2302 2302 for f in undelete[0]:
2303 2303 checkout(f)
2304 2304 normal(f)
2305 2305
2306 2306 copied = copies.pathcopies(repo[parent], ctx)
2307 2307
2308 2308 for f in add[0] + undelete[0] + revert[0]:
2309 2309 if f in copied:
2310 2310 repo.dirstate.copy(copied[f], f)
2311 2311
2312 2312 def command(table):
2313 2313 '''returns a function object bound to table which can be used as
2314 2314 a decorator for populating table as a command table'''
2315 2315
2316 2316 def cmd(name, options=(), synopsis=None):
2317 2317 def decorator(func):
2318 2318 if synopsis:
2319 2319 table[name] = func, list(options), synopsis
2320 2320 else:
2321 2321 table[name] = func, list(options)
2322 2322 return func
2323 2323 return decorator
2324 2324
2325 2325 return cmd
2326 2326
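The `command()` factory above is self-contained enough to run outside Mercurial. The sketch below copies its logic and registers a toy command in a table; the `hello` command, its option tuple, and the synopsis string are invented for illustration:

```python
# Standalone copy of the command() decorator factory: it returns a
# decorator that records each function, its option list, and an optional
# synopsis in the given table, keyed by command name.
def command(table):
    def cmd(name, options=(), synopsis=None):
        def decorator(func):
            if synopsis:
                table[name] = func, list(options), synopsis
            else:
                table[name] = func, list(options)
            return func
        return decorator
    return cmd

cmdtable = {}
cmd = command(cmdtable)

@cmd('hello',
     options=[('g', 'greeting', 'Hi', 'greeting to use')],  # hypothetical
     synopsis='hello [NAME]')
def hello(name='world'):
    return 'hello %s' % name
```

Binding the decorator to a specific table is what lets each extension (or the core command set) populate its own `cmdtable` without global state.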
2327 2327 # a list of (ui, repo) functions called by commands.summary
2328 2328 summaryhooks = util.hooks()
2329 2329
2330 2330 # A list of state files kept by multistep operations like graft.
2331 2331 # Since graft cannot be aborted, it is considered 'clearable' by update.
2332 2332 # note: bisect is intentionally excluded
2333 2333 # (state file, clearable, allowcommit, error, hint)
2334 2334 unfinishedstates = [
2335 2335 ('graftstate', True, False, _('graft in progress'),
2336 2336 _("use 'hg graft --continue' or 'hg update' to abort")),
2337 2337 ('updatestate', True, False, _('last update was interrupted'),
2338 2338 _("use 'hg update' to get a consistent checkout"))
2339 2339 ]
2340 2340
2341 2341 def checkunfinished(repo, commit=False):
2342 2342 '''Look for an unfinished multistep operation, like graft, and abort
2343 2343 if found. It's probably good to check this right before
2344 2344 bailifchanged().
2345 2345 '''
2346 2346 for f, clearable, allowcommit, msg, hint in unfinishedstates:
2347 2347 if commit and allowcommit:
2348 2348 continue
2349 2349 if repo.vfs.exists(f):
2350 2350 raise util.Abort(msg, hint=hint)
2351 2351
2352 2352 def clearunfinished(repo):
2353 2353 '''Check for unfinished operations (as above), and clear the ones
2354 2354 that are clearable.
2355 2355 '''
2356 2356 for f, clearable, allowcommit, msg, hint in unfinishedstates:
2357 2357 if not clearable and repo.vfs.exists(f):
2358 2358 raise util.Abort(msg, hint=hint)
2359 2359 for f, clearable, allowcommit, msg, hint in unfinishedstates:
2360 2360 if clearable and repo.vfs.exists(f):
2361 2361 util.unlink(repo.join(f))
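The two functions above share one scan pattern: a state file on disk marks an in-progress multistep operation; non-clearable states always abort, clearable ones are deleted by `clearunfinished`. A standalone sketch, with a plain directory standing in for `repo.vfs` and `RuntimeError` for `util.Abort` (hints omitted for brevity):

```python
import os
import tempfile

# (state file, clearable, allowcommit, error message) -- as in the source
unfinishedstates = [
    ('graftstate', True, False, 'graft in progress'),
    ('updatestate', True, False, 'last update was interrupted'),
]

def checkunfinished(root, commit=False):
    for f, clearable, allowcommit, msg in unfinishedstates:
        if commit and allowcommit:
            continue
        if os.path.exists(os.path.join(root, f)):
            raise RuntimeError(msg)

def clearunfinished(root):
    # non-clearable states still abort ...
    for f, clearable, allowcommit, msg in unfinishedstates:
        if not clearable and os.path.exists(os.path.join(root, f)):
            raise RuntimeError(msg)
    # ... clearable ones are simply deleted
    for f, clearable, allowcommit, msg in unfinishedstates:
        if clearable and os.path.exists(os.path.join(root, f)):
            os.unlink(os.path.join(root, f))

root = tempfile.mkdtemp()
open(os.path.join(root, 'graftstate'), 'w').close()
caught = None
try:
    checkunfinished(root)        # state file present -> abort
except RuntimeError as e:
    caught = str(e)
clearunfinished(root)            # graftstate is clearable, so it is removed
cleared = not os.path.exists(os.path.join(root, 'graftstate'))
```

Doing the full non-clearable scan before deleting anything means an aborting state leaves all clearable state files untouched, which matches the two-pass loop in the source.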