replace util.sort with sorted built-in...
Matt Mackall
r8209:a1a5a57e default

The requested changes are too big and content was truncated.
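The change itself is mechanical: Mercurial's old `util.sort` helper copied an iterable into a list, sorted it in place, and returned it, which is exactly what the `sorted()` built-in (available since Python 2.4) does. A minimal sketch of the equivalence; `util_sort` here is a stand-in reconstruction, not the actual removed helper:

```python
def util_sort(iterable):
    # Stand-in for the removed helper: copy to a list, sort in place, return.
    l = list(iterable)
    l.sort()
    return l

# sorted() accepts any iterable and returns a new sorted list, so every
# util.sort(x) call site can become sorted(x) verbatim, as the hunks below show.
assert util_sort([3, 1, 2]) == sorted([3, 1, 2]) == [1, 2, 3]

# Iterating a dict yields its keys, so sorted(d) replaces
# util.sort(d.keys()) as well.
assert sorted({'b': 1, 'a': 2}) == ['a', 'b']
```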

@@ -1,417 +1,417 b''
1 1 # bugzilla.py - bugzilla integration for mercurial
2 2 #
3 3 # Copyright 2006 Vadim Gelfer <vadim.gelfer@gmail.com>
4 4 #
5 5 # This software may be used and distributed according to the terms
6 6 # of the GNU General Public License, incorporated herein by reference.
7 7
8 8 '''Bugzilla integration
9 9
10 10 This hook extension adds comments on bugs in Bugzilla when changesets
11 11 that refer to bugs by Bugzilla ID are seen. The hook does not change
12 12 bug status.
13 13
14 14 The hook updates the Bugzilla database directly. Only Bugzilla
15 15 installations using MySQL are supported.
16 16
17 17 The hook relies on a Bugzilla script to send bug change notification
18 18 emails. That script changes between Bugzilla versions; the
19 19 'processmail' script used prior to 2.18 is replaced in 2.18 and
20 20 subsequent versions by 'config/sendbugmail.pl'. Note that these will
21 21 be run by Mercurial as the user pushing the change; you will need to
22 22 ensure the Bugzilla install file permissions are set appropriately.
23 23
24 24 Configuring the extension:
25 25
26 26 [bugzilla]
27 27
28 28 host Hostname of the MySQL server holding the Bugzilla
29 29 database.
30 30 db Name of the Bugzilla database in MySQL. Default 'bugs'.
31 31 user Username to use to access MySQL server. Default 'bugs'.
32 32 password Password to use to access MySQL server.
33 33 timeout Database connection timeout (seconds). Default 5.
34 34 version Bugzilla version. Specify '3.0' for Bugzilla versions
35 35 3.0 and later, '2.18' for Bugzilla versions from 2.18
36 36 and '2.16' for versions prior to 2.18.
37 37 bzuser Fallback Bugzilla user name to record comments with, if
38 38 changeset committer cannot be found as a Bugzilla user.
39 39 bzdir Bugzilla install directory. Used by default notify.
40 40 Default '/var/www/html/bugzilla'.
41 41 notify The command to run to get Bugzilla to send bug change
42 42 notification emails. Substitutes from a map with 3
43 43 keys, 'bzdir', 'id' (bug id) and 'user' (committer
44 44 bugzilla email). Default depends on version; from 2.18
45 45 it is "cd %(bzdir)s && perl -T contrib/sendbugmail.pl
46 46 %(id)s %(user)s".
47 47 regexp Regular expression to match bug IDs in changeset commit
48 48 message. Must contain one "()" group. The default
49 49 expression matches 'Bug 1234', 'Bug no. 1234', 'Bug
50 50 number 1234', 'Bugs 1234,5678', 'Bug 1234 and 5678' and
51 51 variations thereof. Matching is case insensitive.
52 52 style The style file to use when formatting comments.
53 53 template Template to use when formatting comments. Overrides
54 54 style if specified. In addition to the usual Mercurial
55 55 keywords, the extension specifies:
56 56 {bug} The Bugzilla bug ID.
57 57 {root} The full pathname of the Mercurial
58 58 repository.
59 59 {webroot} Stripped pathname of the Mercurial
60 60 repository.
61 61 {hgweb} Base URL for browsing Mercurial
62 62 repositories.
63 63 Default 'changeset {node|short} in repo {root} refers '
64 64 'to bug {bug}.\\ndetails:\\n\\t{desc|tabindent}'
65 65 strip The number of slashes to strip from the front of {root}
66 66 to produce {webroot}. Default 0.
67 67 usermap Path of file containing Mercurial committer ID to
68 68 Bugzilla user ID mappings. If specified, the file
69 69 should contain one mapping per line,
70 70 "committer"="Bugzilla user". See also the [usermap]
71 71 section.
72 72
73 73 [usermap]
74 74 Any entries in this section specify mappings of Mercurial
75 75 committer ID to Bugzilla user ID. See also [bugzilla].usermap.
76 76 "committer"="Bugzilla user"
77 77
78 78 [web]
79 79 baseurl Base URL for browsing Mercurial repositories. Reference
80 80 from templates as {hgweb}.
81 81
82 82 Activating the extension:
83 83
84 84 [extensions]
85 85 hgext.bugzilla =
86 86
87 87 [hooks]
88 88 # run bugzilla hook on every change pulled or pushed in here
89 89 incoming.bugzilla = python:hgext.bugzilla.hook
90 90
91 91 Example configuration:
92 92
93 93 This example configuration is for a collection of Mercurial
94 94 repositories in /var/local/hg/repos/ used with a local Bugzilla 3.2
95 95 installation in /opt/bugzilla-3.2.
96 96
97 97 [bugzilla]
98 98 host=localhost
99 99 password=XYZZY
100 100 version=3.0
101 101 bzuser=unknown@domain.com
102 102 bzdir=/opt/bugzilla-3.2
103 103 template=Changeset {node|short} in {root|basename}.\\n{hgweb}/{webroot}/rev/{node|short}\\n\\n{desc}\\n
104 104 strip=5
105 105
106 106 [web]
107 107 baseurl=http://dev.domain.com/hg
108 108
109 109 [usermap]
110 110 user@emaildomain.com=user.name@bugzilladomain.com
111 111
112 112 Commits add a comment to the Bugzilla bug record of the form:
113 113
114 114 Changeset 3b16791d6642 in repository-name.
115 115 http://dev.domain.com/hg/repository-name/rev/3b16791d6642
116 116
117 117 Changeset commit comment. Bug 1234.
118 118 '''
119 119
120 120 from mercurial.i18n import _
121 121 from mercurial.node import short
122 122 from mercurial import cmdutil, templater, util
123 123 import re, time
124 124
125 125 MySQLdb = None
126 126
127 127 def buglist(ids):
128 128 return '(' + ','.join(map(str, ids)) + ')'
129 129
130 130 class bugzilla_2_16(object):
131 131 '''support for bugzilla version 2.16.'''
132 132
133 133 def __init__(self, ui):
134 134 self.ui = ui
135 135 host = self.ui.config('bugzilla', 'host', 'localhost')
136 136 user = self.ui.config('bugzilla', 'user', 'bugs')
137 137 passwd = self.ui.config('bugzilla', 'password')
138 138 db = self.ui.config('bugzilla', 'db', 'bugs')
139 139 timeout = int(self.ui.config('bugzilla', 'timeout', 5))
140 140 usermap = self.ui.config('bugzilla', 'usermap')
141 141 if usermap:
142 142 self.ui.readconfig(usermap, sections=['usermap'])
143 143 self.ui.note(_('connecting to %s:%s as %s, password %s\n') %
144 144 (host, db, user, '*' * len(passwd)))
145 145 self.conn = MySQLdb.connect(host=host, user=user, passwd=passwd,
146 146 db=db, connect_timeout=timeout)
147 147 self.cursor = self.conn.cursor()
148 148 self.longdesc_id = self.get_longdesc_id()
149 149 self.user_ids = {}
150 150 self.default_notify = "cd %(bzdir)s && ./processmail %(id)s %(user)s"
151 151
152 152 def run(self, *args, **kwargs):
153 153 '''run a query.'''
154 154 self.ui.note(_('query: %s %s\n') % (args, kwargs))
155 155 try:
156 156 self.cursor.execute(*args, **kwargs)
157 157 except MySQLdb.MySQLError:
158 158 self.ui.note(_('failed query: %s %s\n') % (args, kwargs))
159 159 raise
160 160
161 161 def get_longdesc_id(self):
162 162 '''get identity of longdesc field'''
163 163 self.run('select fieldid from fielddefs where name = "longdesc"')
164 164 ids = self.cursor.fetchall()
165 165 if len(ids) != 1:
166 166 raise util.Abort(_('unknown database schema'))
167 167 return ids[0][0]
168 168
169 169 def filter_real_bug_ids(self, ids):
170 170 '''filter not-existing bug ids from list.'''
171 171 self.run('select bug_id from bugs where bug_id in %s' % buglist(ids))
172 return util.sort([c[0] for c in self.cursor.fetchall()])
172 return sorted([c[0] for c in self.cursor.fetchall()])
173 173
174 174 def filter_unknown_bug_ids(self, node, ids):
175 175 '''filter bug ids from list that already refer to this changeset.'''
176 176
177 177 self.run('''select bug_id from longdescs where
178 178 bug_id in %s and thetext like "%%%s%%"''' %
179 179 (buglist(ids), short(node)))
180 180 unknown = set(ids)
181 181 for (id,) in self.cursor.fetchall():
182 182 self.ui.status(_('bug %d already knows about changeset %s\n') %
183 183 (id, short(node)))
184 184 unknown.discard(id)
185 return util.sort(unknown)
185 return sorted(unknown)
186 186
187 187 def notify(self, ids, committer):
188 188 '''tell bugzilla to send mail.'''
189 189
190 190 self.ui.status(_('telling bugzilla to send mail:\n'))
191 191 (user, userid) = self.get_bugzilla_user(committer)
192 192 for id in ids:
193 193 self.ui.status(_(' bug %s\n') % id)
194 194 cmdfmt = self.ui.config('bugzilla', 'notify', self.default_notify)
195 195 bzdir = self.ui.config('bugzilla', 'bzdir', '/var/www/html/bugzilla')
196 196 try:
197 197 # Backwards-compatible with old notify string, which
198 198 # took one string. This will throw with a new format
199 199 # string.
200 200 cmd = cmdfmt % id
201 201 except TypeError:
202 202 cmd = cmdfmt % {'bzdir': bzdir, 'id': id, 'user': user}
203 203 self.ui.note(_('running notify command %s\n') % cmd)
204 204 fp = util.popen('(%s) 2>&1' % cmd)
205 205 out = fp.read()
206 206 ret = fp.close()
207 207 if ret:
208 208 self.ui.warn(out)
209 209 raise util.Abort(_('bugzilla notify command %s') %
210 210 util.explain_exit(ret)[0])
211 211 self.ui.status(_('done\n'))
212 212
213 213 def get_user_id(self, user):
214 214 '''look up numeric bugzilla user id.'''
215 215 try:
216 216 return self.user_ids[user]
217 217 except KeyError:
218 218 try:
219 219 userid = int(user)
220 220 except ValueError:
221 221 self.ui.note(_('looking up user %s\n') % user)
222 222 self.run('''select userid from profiles
223 223 where login_name like %s''', user)
224 224 all = self.cursor.fetchall()
225 225 if len(all) != 1:
226 226 raise KeyError(user)
227 227 userid = int(all[0][0])
228 228 self.user_ids[user] = userid
229 229 return userid
230 230
231 231 def map_committer(self, user):
232 232 '''map name of committer to bugzilla user name.'''
233 233 for committer, bzuser in self.ui.configitems('usermap'):
234 234 if committer.lower() == user.lower():
235 235 return bzuser
236 236 return user
237 237
238 238 def get_bugzilla_user(self, committer):
239 239 '''see if committer is a registered bugzilla user. Return
240 240 bugzilla username and userid if so. If not, return default
241 241 bugzilla username and userid.'''
242 242 user = self.map_committer(committer)
243 243 try:
244 244 userid = self.get_user_id(user)
245 245 except KeyError:
246 246 try:
247 247 defaultuser = self.ui.config('bugzilla', 'bzuser')
248 248 if not defaultuser:
249 249 raise util.Abort(_('cannot find bugzilla user id for %s') %
250 250 user)
251 251 userid = self.get_user_id(defaultuser)
252 252 user = defaultuser
253 253 except KeyError:
254 254 raise util.Abort(_('cannot find bugzilla user id for %s or %s') %
255 255 (user, defaultuser))
256 256 return (user, userid)
257 257
258 258 def add_comment(self, bugid, text, committer):
259 259 '''add comment to bug. try adding comment as committer of
260 260 changeset, otherwise as default bugzilla user.'''
261 261 (user, userid) = self.get_bugzilla_user(committer)
262 262 now = time.strftime('%Y-%m-%d %H:%M:%S')
263 263 self.run('''insert into longdescs
264 264 (bug_id, who, bug_when, thetext)
265 265 values (%s, %s, %s, %s)''',
266 266 (bugid, userid, now, text))
267 267 self.run('''insert into bugs_activity (bug_id, who, bug_when, fieldid)
268 268 values (%s, %s, %s, %s)''',
269 269 (bugid, userid, now, self.longdesc_id))
270 270 self.conn.commit()
271 271
272 272 class bugzilla_2_18(bugzilla_2_16):
273 273 '''support for bugzilla 2.18 series.'''
274 274
275 275 def __init__(self, ui):
276 276 bugzilla_2_16.__init__(self, ui)
277 277 self.default_notify = "cd %(bzdir)s && perl -T contrib/sendbugmail.pl %(id)s %(user)s"
278 278
279 279 class bugzilla_3_0(bugzilla_2_18):
280 280 '''support for bugzilla 3.0 series.'''
281 281
282 282 def __init__(self, ui):
283 283 bugzilla_2_18.__init__(self, ui)
284 284
285 285 def get_longdesc_id(self):
286 286 '''get identity of longdesc field'''
287 287 self.run('select id from fielddefs where name = "longdesc"')
288 288 ids = self.cursor.fetchall()
289 289 if len(ids) != 1:
290 290 raise util.Abort(_('unknown database schema'))
291 291 return ids[0][0]
292 292
293 293 class bugzilla(object):
294 294 # supported versions of bugzilla. different versions have
295 295 # different schemas.
296 296 _versions = {
297 297 '2.16': bugzilla_2_16,
298 298 '2.18': bugzilla_2_18,
299 299 '3.0': bugzilla_3_0
300 300 }
301 301
302 302 _default_bug_re = (r'bugs?\s*,?\s*(?:#|nos?\.?|num(?:ber)?s?)?\s*'
303 303 r'((?:\d+\s*(?:,?\s*(?:and)?)?\s*)+)')
304 304
305 305 _bz = None
306 306
307 307 def __init__(self, ui, repo):
308 308 self.ui = ui
309 309 self.repo = repo
310 310
311 311 def bz(self):
312 312 '''return object that knows how to talk to bugzilla version in
313 313 use.'''
314 314
315 315 if bugzilla._bz is None:
316 316 bzversion = self.ui.config('bugzilla', 'version')
317 317 try:
318 318 bzclass = bugzilla._versions[bzversion]
319 319 except KeyError:
320 320 raise util.Abort(_('bugzilla version %s not supported') %
321 321 bzversion)
322 322 bugzilla._bz = bzclass(self.ui)
323 323 return bugzilla._bz
324 324
325 325 def __getattr__(self, key):
326 326 return getattr(self.bz(), key)
327 327
328 328 _bug_re = None
329 329 _split_re = None
330 330
331 331 def find_bug_ids(self, ctx):
332 332 '''find valid bug ids that are referred to in changeset
333 333 comments and that do not already have references to this
334 334 changeset.'''
335 335
336 336 if bugzilla._bug_re is None:
337 337 bugzilla._bug_re = re.compile(
338 338 self.ui.config('bugzilla', 'regexp', bugzilla._default_bug_re),
339 339 re.IGNORECASE)
340 340 bugzilla._split_re = re.compile(r'\D+')
341 341 start = 0
342 342 ids = {}
343 343 while True:
344 344 m = bugzilla._bug_re.search(ctx.description(), start)
345 345 if not m:
346 346 break
347 347 start = m.end()
348 348 for id in bugzilla._split_re.split(m.group(1)):
349 349 if not id: continue
350 350 ids[int(id)] = 1
351 351 ids = ids.keys()
352 352 if ids:
353 353 ids = self.filter_real_bug_ids(ids)
354 354 if ids:
355 355 ids = self.filter_unknown_bug_ids(ctx.node(), ids)
356 356 return ids
357 357
358 358 def update(self, bugid, ctx):
359 359 '''update bugzilla bug with reference to changeset.'''
360 360
361 361 def webroot(root):
362 362 '''strip leading prefix of repo root and turn into
363 363 url-safe path.'''
364 364 count = int(self.ui.config('bugzilla', 'strip', 0))
365 365 root = util.pconvert(root)
366 366 while count > 0:
367 367 c = root.find('/')
368 368 if c == -1:
369 369 break
370 370 root = root[c+1:]
371 371 count -= 1
372 372 return root
373 373
374 374 mapfile = self.ui.config('bugzilla', 'style')
375 375 tmpl = self.ui.config('bugzilla', 'template')
376 376 t = cmdutil.changeset_templater(self.ui, self.repo,
377 377 False, None, mapfile, False)
378 378 if not mapfile and not tmpl:
379 379 tmpl = _('changeset {node|short} in repo {root} refers '
380 380 'to bug {bug}.\ndetails:\n\t{desc|tabindent}')
381 381 if tmpl:
382 382 tmpl = templater.parsestring(tmpl, quoted=False)
383 383 t.use_template(tmpl)
384 384 self.ui.pushbuffer()
385 385 t.show(ctx, changes=ctx.changeset(),
386 386 bug=str(bugid),
387 387 hgweb=self.ui.config('web', 'baseurl'),
388 388 root=self.repo.root,
389 389 webroot=webroot(self.repo.root))
390 390 data = self.ui.popbuffer()
391 391 self.add_comment(bugid, data, util.email(ctx.user()))
392 392
393 393 def hook(ui, repo, hooktype, node=None, **kwargs):
394 394 '''add comment to bugzilla for each changeset that refers to a
395 395 bugzilla bug id. only add a comment once per bug, so same change
396 396 seen multiple times does not fill bug with duplicate data.'''
397 397 try:
398 398 import MySQLdb as mysql
399 399 global MySQLdb
400 400 MySQLdb = mysql
401 401 except ImportError, err:
402 402 raise util.Abort(_('python mysql support not available: %s') % err)
403 403
404 404 if node is None:
405 405 raise util.Abort(_('hook type %s does not pass a changeset id') %
406 406 hooktype)
407 407 try:
408 408 bz = bugzilla(ui, repo)
409 409 ctx = repo[node]
410 410 ids = bz.find_bug_ids(ctx)
411 411 if ids:
412 412 for id in ids:
413 413 bz.update(id, ctx)
414 414 bz.notify(ids, util.email(ctx.user()))
415 415 except MySQLdb.MySQLError, err:
416 416 raise util.Abort(_('database error: %s') % err[1])
417 417
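The default bug-ID expression (`_default_bug_re` in the hunk above) can be exercised on its own. Below is a simplified, self-contained sketch of the scanning loop from `find_bug_ids`; the real method additionally filters the IDs against the Bugzilla database:

```python
import re

# The default bug-matching expression from bugzilla.py, verbatim:
bug_re = re.compile(r'bugs?\s*,?\s*(?:#|nos?\.?|num(?:ber)?s?)?\s*'
                    r'((?:\d+\s*(?:,?\s*(?:and)?)?\s*)+)', re.IGNORECASE)
split_re = re.compile(r'\D+')

def find_ids(text):
    """Mimic find_bug_ids' scan over a commit message (no DB filtering)."""
    ids, start = set(), 0
    while True:
        m = bug_re.search(text, start)
        if not m:
            break
        start = m.end()
        # The single "()" group may hold several IDs; split on non-digits.
        for i in split_re.split(m.group(1)):
            if i:
                ids.add(int(i))
    return sorted(ids)

assert find_ids('Fix crash. Bug 1234.') == [1234]
assert find_ids('Bugs 1234, 5678 and 9012') == [1234, 5678, 9012]
```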
@@ -1,362 +1,362 b''
1 1 # CVS conversion code inspired by hg-cvs-import and git-cvsimport
2 2
3 3 import os, locale, re, socket, errno
4 4 from cStringIO import StringIO
5 5 from mercurial import util
6 6 from mercurial.i18n import _
7 7
8 8 from common import NoRepo, commit, converter_source, checktool
9 9 import cvsps
10 10
11 11 class convert_cvs(converter_source):
12 12 def __init__(self, ui, path, rev=None):
13 13 super(convert_cvs, self).__init__(ui, path, rev=rev)
14 14
15 15 cvs = os.path.join(path, "CVS")
16 16 if not os.path.exists(cvs):
17 17 raise NoRepo("%s does not look like a CVS checkout" % path)
18 18
19 19 checktool('cvs')
20 20 self.cmd = ui.config('convert', 'cvsps', 'builtin')
21 21 cvspsexe = self.cmd.split(None, 1)[0]
22 22 self.builtin = cvspsexe == 'builtin'
23 23
24 24 if not self.builtin:
25 25 checktool(cvspsexe)
26 26
27 27 self.changeset = None
28 28 self.files = {}
29 29 self.tags = {}
30 30 self.lastbranch = {}
31 31 self.parent = {}
32 32 self.socket = None
33 33 self.cvsroot = file(os.path.join(cvs, "Root")).read()[:-1]
34 34 self.cvsrepo = file(os.path.join(cvs, "Repository")).read()[:-1]
35 35 self.encoding = locale.getpreferredencoding()
36 36
37 37 self._connect()
38 38
39 39 def _parse(self):
40 40 if self.changeset is not None:
41 41 return
42 42 self.changeset = {}
43 43
44 44 maxrev = 0
45 45 cmd = self.cmd
46 46 if self.rev:
47 47 # TODO: handle tags
48 48 try:
49 49 # patchset number?
50 50 maxrev = int(self.rev)
51 51 except ValueError:
52 52 try:
53 53 # date
54 54 util.parsedate(self.rev, ['%Y/%m/%d %H:%M:%S'])
55 55 cmd = '%s -d "1970/01/01 00:00:01" -d "%s"' % (cmd, self.rev)
56 56 except util.Abort:
57 57 raise util.Abort(_('revision %s is not a patchset number or date') % self.rev)
58 58
59 59 d = os.getcwd()
60 60 try:
61 61 os.chdir(self.path)
62 62 id = None
63 63 state = 0
64 64 filerevids = {}
65 65
66 66 if self.builtin:
67 67 # builtin cvsps code
68 68 self.ui.status(_('using builtin cvsps\n'))
69 69
70 70 cache = 'update'
71 71 if not self.ui.configbool('convert', 'cvsps.cache', True):
72 72 cache = None
73 73 db = cvsps.createlog(self.ui, cache=cache)
74 74 db = cvsps.createchangeset(self.ui, db,
75 75 fuzz=int(self.ui.config('convert', 'cvsps.fuzz', 60)),
76 76 mergeto=self.ui.config('convert', 'cvsps.mergeto', None),
77 77 mergefrom=self.ui.config('convert', 'cvsps.mergefrom', None))
78 78
79 79 for cs in db:
80 80 if maxrev and cs.id>maxrev:
81 81 break
82 82 id = str(cs.id)
83 83 cs.author = self.recode(cs.author)
84 84 self.lastbranch[cs.branch] = id
85 85 cs.comment = self.recode(cs.comment)
86 86 date = util.datestr(cs.date)
87 87 self.tags.update(dict.fromkeys(cs.tags, id))
88 88
89 89 files = {}
90 90 for f in cs.entries:
91 91 files[f.file] = "%s%s" % ('.'.join([str(x) for x in f.revision]),
92 92 ['', '(DEAD)'][f.dead])
93 93
94 94 # add current commit to set
95 95 c = commit(author=cs.author, date=date,
96 96 parents=[str(p.id) for p in cs.parents],
97 97 desc=cs.comment, branch=cs.branch or '')
98 98 self.changeset[id] = c
99 99 self.files[id] = files
100 100 else:
101 101 # external cvsps
102 102 for l in util.popen(cmd):
103 103 if state == 0: # header
104 104 if l.startswith("PatchSet"):
105 105 id = l[9:-2]
106 106 if maxrev and int(id) > maxrev:
107 107 # ignore everything
108 108 state = 3
109 109 elif l.startswith("Date:"):
110 110 date = util.parsedate(l[6:-1], ["%Y/%m/%d %H:%M:%S"])
111 111 date = util.datestr(date)
112 112 elif l.startswith("Branch:"):
113 113 branch = l[8:-1]
114 114 self.parent[id] = self.lastbranch.get(branch, 'bad')
115 115 self.lastbranch[branch] = id
116 116 elif l.startswith("Ancestor branch:"):
117 117 ancestor = l[17:-1]
118 118 # figure out the parent later
119 119 self.parent[id] = self.lastbranch[ancestor]
120 120 elif l.startswith("Author:"):
121 121 author = self.recode(l[8:-1])
122 122 elif l.startswith("Tag:") or l.startswith("Tags:"):
123 123 t = l[l.index(':')+1:]
124 124 t = [ut.strip() for ut in t.split(',')]
125 125 if (len(t) > 1) or (t[0] and (t[0] != "(none)")):
126 126 self.tags.update(dict.fromkeys(t, id))
127 127 elif l.startswith("Log:"):
128 128 # switch to gathering log
129 129 state = 1
130 130 log = ""
131 131 elif state == 1: # log
132 132 if l == "Members: \n":
133 133 # switch to gathering members
134 134 files = {}
135 135 oldrevs = []
136 136 log = self.recode(log[:-1])
137 137 state = 2
138 138 else:
139 139 # gather log
140 140 log += l
141 141 elif state == 2: # members
142 142 if l == "\n": # start of next entry
143 143 state = 0
144 144 p = [self.parent[id]]
145 145 if id == "1":
146 146 p = []
147 147 if branch == "HEAD":
148 148 branch = ""
149 149 if branch:
150 150 latest = 0
151 151 # the last changeset that contains a base
152 152 # file is our parent
153 153 for r in oldrevs:
154 154 latest = max(filerevids.get(r, 0), latest)
155 155 if latest:
156 156 p = [latest]
157 157
158 158 # add current commit to set
159 159 c = commit(author=author, date=date, parents=p,
160 160 desc=log, branch=branch)
161 161 self.changeset[id] = c
162 162 self.files[id] = files
163 163 else:
164 164 colon = l.rfind(':')
165 165 file = l[1:colon]
166 166 rev = l[colon+1:-2]
167 167 oldrev, rev = rev.split("->")
168 168 files[file] = rev
169 169
170 170 # save some information for identifying branch points
171 171 oldrevs.append("%s:%s" % (oldrev, file))
172 172 filerevids["%s:%s" % (rev, file)] = id
173 173 elif state == 3:
174 174 # swallow all input
175 175 continue
176 176
177 177 self.heads = self.lastbranch.values()
178 178 finally:
179 179 os.chdir(d)
180 180
181 181 def _connect(self):
182 182 root = self.cvsroot
183 183 conntype = None
184 184 user, host = None, None
185 185 cmd = ['cvs', 'server']
186 186
187 187 self.ui.status(_("connecting to %s\n") % root)
188 188
189 189 if root.startswith(":pserver:"):
190 190 root = root[9:]
191 191 m = re.match(r'(?:(.*?)(?::(.*?))?@)?([^:\/]*)(?::(\d*))?(.*)',
192 192 root)
193 193 if m:
194 194 conntype = "pserver"
195 195 user, passw, serv, port, root = m.groups()
196 196 if not user:
197 197 user = "anonymous"
198 198 if not port:
199 199 port = 2401
200 200 else:
201 201 port = int(port)
202 202 format0 = ":pserver:%s@%s:%s" % (user, serv, root)
203 203 format1 = ":pserver:%s@%s:%d%s" % (user, serv, port, root)
204 204
205 205 if not passw:
206 206 passw = "A"
207 207 cvspass = os.path.expanduser("~/.cvspass")
208 208 try:
209 209 pf = open(cvspass)
210 210 for line in pf.read().splitlines():
211 211 part1, part2 = line.split(' ', 1)
212 212 if part1 == '/1':
213 213 # /1 :pserver:user@example.com:2401/cvsroot/foo Ah<Z
214 214 part1, part2 = part2.split(' ', 1)
215 215 format = format1
216 216 else:
217 217 # :pserver:user@example.com:/cvsroot/foo Ah<Z
218 218 format = format0
219 219 if part1 == format:
220 220 passw = part2
221 221 break
222 222 pf.close()
223 223 except IOError, inst:
224 224 if inst.errno != errno.ENOENT:
225 225 if not getattr(inst, 'filename', None):
226 226 inst.filename = cvspass
227 227 raise
228 228
229 229 sck = socket.socket()
230 230 sck.connect((serv, port))
231 231 sck.send("\n".join(["BEGIN AUTH REQUEST", root, user, passw,
232 232 "END AUTH REQUEST", ""]))
233 233 if sck.recv(128) != "I LOVE YOU\n":
234 234 raise util.Abort(_("CVS pserver authentication failed"))
235 235
236 236 self.writep = self.readp = sck.makefile('r+')
237 237
238 238 if not conntype and root.startswith(":local:"):
239 239 conntype = "local"
240 240 root = root[7:]
241 241
242 242 if not conntype:
243 243 # :ext:user@host/home/user/path/to/cvsroot
244 244 if root.startswith(":ext:"):
245 245 root = root[5:]
246 246 m = re.match(r'(?:([^@:/]+)@)?([^:/]+):?(.*)', root)
247 247 # Do not take Windows path "c:\foo\bar" for a connection strings
248 248 if os.path.isdir(root) or not m:
249 249 conntype = "local"
250 250 else:
251 251 conntype = "rsh"
252 252 user, host, root = m.group(1), m.group(2), m.group(3)
253 253
254 254 if conntype != "pserver":
255 255 if conntype == "rsh":
256 256 rsh = os.environ.get("CVS_RSH") or "ssh"
257 257 if user:
258 258 cmd = [rsh, '-l', user, host] + cmd
259 259 else:
260 260 cmd = [rsh, host] + cmd
261 261
262 262 # popen2 does not support argument lists under Windows
263 263 cmd = [util.shellquote(arg) for arg in cmd]
264 264 cmd = util.quotecommand(' '.join(cmd))
265 265 self.writep, self.readp = util.popen2(cmd, 'b')
266 266
267 267 self.realroot = root
268 268
269 269 self.writep.write("Root %s\n" % root)
270 270 self.writep.write("Valid-responses ok error Valid-requests Mode"
271 271 " M Mbinary E Checked-in Created Updated"
272 272 " Merged Removed\n")
273 273 self.writep.write("valid-requests\n")
274 274 self.writep.flush()
275 275 r = self.readp.readline()
276 276 if not r.startswith("Valid-requests"):
277 277 raise util.Abort(_("server sucks"))
278 278 if "UseUnchanged" in r:
279 279 self.writep.write("UseUnchanged\n")
280 280 self.writep.flush()
281 281 r = self.readp.readline()
282 282
283 283 def getheads(self):
284 284 self._parse()
285 285 return self.heads
286 286
287 287 def _getfile(self, name, rev):
288 288
289 289 def chunkedread(fp, count):
290 290 # file-objects returned by socked.makefile() do not handle
291 291 # large read() requests very well.
292 292 chunksize = 65536
293 293 output = StringIO()
294 294 while count > 0:
295 295 data = fp.read(min(count, chunksize))
296 296 if not data:
297 297 raise util.Abort(_("%d bytes missing from remote file") % count)
298 298 count -= len(data)
299 299 output.write(data)
300 300 return output.getvalue()
301 301
302 302 if rev.endswith("(DEAD)"):
303 303 raise IOError
304 304
305 305 args = ("-N -P -kk -r %s --" % rev).split()
306 306 args.append(self.cvsrepo + '/' + name)
307 307 for x in args:
308 308 self.writep.write("Argument %s\n" % x)
309 309 self.writep.write("Directory .\n%s\nco\n" % self.realroot)
310 310 self.writep.flush()
311 311
312 312 data = ""
313 313 while 1:
314 314 line = self.readp.readline()
315 315 if line.startswith("Created ") or line.startswith("Updated "):
316 316 self.readp.readline() # path
317 317 self.readp.readline() # entries
318 318 mode = self.readp.readline()[:-1]
319 319 count = int(self.readp.readline()[:-1])
320 320 data = chunkedread(self.readp, count)
321 321 elif line.startswith(" "):
322 322 data += line[1:]
323 323 elif line.startswith("M "):
324 324 pass
325 325 elif line.startswith("Mbinary "):
326 326 count = int(self.readp.readline()[:-1])
327 327 data = chunkedread(self.readp, count)
328 328 else:
329 329 if line == "ok\n":
330 330 return (data, "x" in mode and "x" or "")
331 331 elif line.startswith("E "):
332 332 self.ui.warn(_("cvs server: %s\n") % line[2:])
333 333 elif line.startswith("Remove"):
334 334 self.readp.readline()
335 335 else:
336 336 raise util.Abort(_("unknown CVS response: %s") % line)
337 337
338 338 def getfile(self, file, rev):
339 339 self._parse()
340 340 data, mode = self._getfile(file, rev)
341 341 self.modecache[(file, rev)] = mode
342 342 return data
343 343
344 344 def getmode(self, file, rev):
345 345 return self.modecache[(file, rev)]
346 346
347 347 def getchanges(self, rev):
348 348 self._parse()
349 349 self.modecache = {}
350 return util.sort(self.files[rev].items()), {}
350 return sorted(self.files[rev].iteritems()), {}
351 351
352 352 def getcommit(self, rev):
353 353 self._parse()
354 354 return self.changeset[rev]
355 355
356 356 def gettags(self):
357 357 self._parse()
358 358 return self.tags
359 359
360 360 def getchangedfiles(self, rev, i):
361 361 self._parse()
362 return util.sort(self.files[rev].keys())
362 return sorted(self.files[rev])
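Note that the `getchanges` change above also swaps `items()` for `iteritems()` inside `sorted()`: in Python 2, `iteritems()` yields key/value pairs lazily instead of building an intermediate list, and `sorted()` materializes its own list either way, so the result is identical with one copy saved. A sketch of the two return values, using `items()` (already a lazy view in Python 3):

```python
# Hypothetical per-revision file map in the cvs.py format: name -> revision,
# with dead revisions marked "(DEAD)".
files = {'b.txt': '1.1', 'a.txt': '1.4', 'c.txt': '1.2(DEAD)'}

pairs = sorted(files.items())  # what getchanges() now returns (plus {})
names = sorted(files)          # what getchangedfiles() now returns

assert pairs == [('a.txt', '1.4'), ('b.txt', '1.1'), ('c.txt', '1.2(DEAD)')]
assert names == ['a.txt', 'b.txt', 'c.txt']
```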
@@ -1,782 +1,782 b''
1 1 #
2 2 # Mercurial built-in replacement for cvsps.
3 3 #
4 4 # Copyright 2008, Frank Kingswood <frank@kingswood-consulting.co.uk>
5 5 #
6 6 # This software may be used and distributed according to the terms
7 7 # of the GNU General Public License, incorporated herein by reference.
8 8
9 9 import os
10 10 import re
11 11 import cPickle as pickle
12 12 from mercurial import util
13 13 from mercurial.i18n import _
14 14
15 15 def listsort(list, key):
16 16 "helper to sort by key in Python 2.3"
17 17 try:
18 18 list.sort(key=key)
19 19 except TypeError:
20 20 list.sort(lambda l, r: cmp(key(l), key(r)))
21 21
22 22 class logentry(object):
23 23 '''Class logentry has the following attributes:
24 24 .author - author name as CVS knows it
25 25 .branch - name of branch this revision is on
26 26 .branches - revision tuple of branches starting at this revision
27 27 .comment - commit message
28 28 .date - the commit date as a (time, tz) tuple
29 29 .dead - true if file revision is dead
30 30 .file - Name of file
31 31 .lines - a tuple (+lines, -lines) or None
32 32 .parent - Previous revision of this entry
33 33 .rcs - name of file as returned from CVS
34 34 .revision - revision number as tuple
35 35 .tags - list of tags on the file
36 36 .synthetic - is this a synthetic "file ... added on ..." revision?
37 37 .mergepoint- the branch that has been merged from (if present in rlog output)
38 38 '''
39 39 def __init__(self, **entries):
40 40 self.__dict__.update(entries)
41 41
42 42 def __repr__(self):
43 43 return "<%s at 0x%x: %s %s>" % (self.__class__.__name__,
44 44 id(self),
45 45 self.file,
46 46 ".".join(map(str, self.revision)))
47 47
48 48 class logerror(Exception):
49 49 pass
50 50
51 51 def getrepopath(cvspath):
52 52 """Return the repository path from a CVS path.
53 53
54 54 >>> getrepopath('/foo/bar')
55 55 '/foo/bar'
56 56 >>> getrepopath('c:/foo/bar')
57 57 'c:/foo/bar'
58 58 >>> getrepopath(':pserver:10/foo/bar')
59 59 '/foo/bar'
60 60 >>> getrepopath(':pserver:10c:/foo/bar')
61 61 '/foo/bar'
62 62 >>> getrepopath(':pserver:/foo/bar')
63 63 '/foo/bar'
64 64 >>> getrepopath(':pserver:c:/foo/bar')
65 65 'c:/foo/bar'
66 66 >>> getrepopath(':pserver:truc@foo.bar:/foo/bar')
67 67 '/foo/bar'
68 68 >>> getrepopath(':pserver:truc@foo.bar:c:/foo/bar')
69 69 'c:/foo/bar'
70 70 """
71 71 # According to CVS manual, CVS paths are expressed like:
72 72 # [:method:][[user][:password]@]hostname[:[port]]/path/to/repository
73 73 #
74 74 # Unfortunately, Windows absolute paths start with a drive letter
75 75 # like 'c:' making it harder to parse. Here we assume that drive
76 76 # letters are only one character long and any CVS component before
77 77 # the repository path is at least 2 characters long, and use this
78 78 # to disambiguate.
79 79 parts = cvspath.split(':')
80 80 if len(parts) == 1:
81 81 return parts[0]
82 82 # Here there is an ambiguous case if we have a port number
83 83 # immediately followed by a Windows driver letter. We assume this
84 84 # never happens and decide it must be CVS path component,
85 85 # therefore ignoring it.
86 86 if len(parts[-2]) > 1:
87 87 return parts[-1].lstrip('0123456789')
88 88 return parts[-2] + ':' + parts[-1]
89 89
90 90 def createlog(ui, directory=None, root="", rlog=True, cache=None):
91 91 '''Collect the CVS rlog'''
92 92
93 93 # Because we store many duplicate commit log messages, reusing strings
94 94 # saves a lot of memory and pickle storage space.
95 95 _scache = {}
96 96 def scache(s):
97 97 "return a shared version of a string"
98 98 return _scache.setdefault(s, s)
99 99
100 100 ui.status(_('collecting CVS rlog\n'))
101 101
102 102 log = [] # list of logentry objects containing the CVS state
103 103
104 104 # patterns to match in CVS (r)log output, by state of use
105 105 re_00 = re.compile('RCS file: (.+)$')
106 106 re_01 = re.compile('cvs \\[r?log aborted\\]: (.+)$')
107 107 re_02 = re.compile('cvs (r?log|server): (.+)\n$')
108 108 re_03 = re.compile("(Cannot access.+CVSROOT)|(can't create temporary directory.+)$")
109 109 re_10 = re.compile('Working file: (.+)$')
110 110 re_20 = re.compile('symbolic names:')
111 111 re_30 = re.compile('\t(.+): ([\\d.]+)$')
112 112 re_31 = re.compile('----------------------------$')
113 113 re_32 = re.compile('=============================================================================$')
114 114 re_50 = re.compile('revision ([\\d.]+)(\s+locked by:\s+.+;)?$')
115 115 re_60 = re.compile(r'date:\s+(.+);\s+author:\s+(.+);\s+state:\s+(.+?);(\s+lines:\s+(\+\d+)?\s+(-\d+)?;)?(.*mergepoint:\s+([^;]+);)?')
116 116 re_70 = re.compile('branches: (.+);$')
117 117
118 118 file_added_re = re.compile(r'file [^/]+ was (initially )?added on branch')
119 119
120 120 prefix = '' # leading path to strip off what we get from CVS
121 121
122 122 if directory is None:
123 123 # Current working directory
124 124
125 125 # Get the real directory in the repository
126 126 try:
127 127 prefix = file(os.path.join('CVS','Repository')).read().strip()
128 128 if prefix == ".":
129 129 prefix = ""
130 130 directory = prefix
131 131 except IOError:
132 132 raise logerror('Not a CVS sandbox')
133 133
134 134 if prefix and not prefix.endswith(os.sep):
135 135 prefix += os.sep
136 136
137 137 # Use the Root file in the sandbox, if it exists
138 138 try:
139 139 root = file(os.path.join('CVS','Root')).read().strip()
140 140 except IOError:
141 141 pass
142 142
143 143 if not root:
144 144 root = os.environ.get('CVSROOT', '')
145 145
146 146 # read log cache if one exists
147 147 oldlog = []
148 148 date = None
149 149
150 150 if cache:
151 151 cachedir = os.path.expanduser('~/.hg.cvsps')
152 152 if not os.path.exists(cachedir):
153 153 os.mkdir(cachedir)
154 154
155 155 # The cvsps cache pickle needs a uniquified name, based on the
156 156 # repository location. The address may have all sorts of nasties
157 157 # in it, slashes, colons and such. So here we take just the
158 158 # alphanumerics, concatenated in a way that does not mix up the
159 159 # various components, so that
160 160 # :pserver:user@server:/path
161 161 # and
162 162 # /pserver/user/server/path
163 163 # are mapped to different cache file names.
164 164 cachefile = root.split(":") + [directory, "cache"]
165 165 cachefile = ['-'.join(re.findall(r'\w+', s)) for s in cachefile if s]
166 166 cachefile = os.path.join(cachedir,
167 167 '.'.join([s for s in cachefile if s]))
168 168
169 169 if cache == 'update':
170 170 try:
171 171 ui.note(_('reading cvs log cache %s\n') % cachefile)
172 172 oldlog = pickle.load(file(cachefile))
173 173 ui.note(_('cache has %d log entries\n') % len(oldlog))
174 174 except Exception, e:
175 175 ui.note(_('error reading cache: %r\n') % e)
176 176
177 177 if oldlog:
178 178 date = oldlog[-1].date # last commit date as a (time,tz) tuple
179 179 date = util.datestr(date, '%Y/%m/%d %H:%M:%S %1%2')
180 180
181 181 # build the CVS commandline
182 182 cmd = ['cvs', '-q']
183 183 if root:
184 184 cmd.append('-d%s' % root)
185 185 p = util.normpath(getrepopath(root))
186 186 if not p.endswith('/'):
187 187 p += '/'
188 188 prefix = p + util.normpath(prefix)
189 189 cmd.append(['log', 'rlog'][rlog])
190 190 if date:
191 191 # no space between option and date string
192 192 cmd.append('-d>%s' % date)
193 193 cmd.append(directory)
194 194
195 195 # state machine begins here
196 196 tags = {} # dictionary of revisions on current file with their tags
197 197 branchmap = {} # mapping between branch names and revision numbers
198 198 state = 0
199 199 store = False # set when a new record can be appended
200 200
201 201 cmd = [util.shellquote(arg) for arg in cmd]
202 202 ui.note(_("running %s\n") % (' '.join(cmd)))
203 203 ui.debug(_("prefix=%r directory=%r root=%r\n") % (prefix, directory, root))
204 204
205 205 pfp = util.popen(' '.join(cmd))
206 206 peek = pfp.readline()
207 207 while True:
208 208 line = peek
209 209 if line == '':
210 210 break
211 211 peek = pfp.readline()
212 212 if line.endswith('\n'):
213 213 line = line[:-1]
214 214 #ui.debug('state=%d line=%r\n' % (state, line))
215 215
216 216 if state == 0:
217 217 # initial state, consume input until we see 'RCS file'
218 218 match = re_00.match(line)
219 219 if match:
220 220 rcs = match.group(1)
221 221 tags = {}
222 222 if rlog:
223 223 filename = util.normpath(rcs[:-2])
224 224 if filename.startswith(prefix):
225 225 filename = filename[len(prefix):]
226 226 if filename.startswith('/'):
227 227 filename = filename[1:]
228 228 if filename.startswith('Attic/'):
229 229 filename = filename[6:]
230 230 else:
231 231 filename = filename.replace('/Attic/', '/')
232 232 state = 2
233 233 continue
234 234 state = 1
235 235 continue
236 236 match = re_01.match(line)
237 237 if match:
238 238 raise Exception(match.group(1))
239 239 match = re_02.match(line)
240 240 if match:
241 241 raise Exception(match.group(2))
242 242 if re_03.match(line):
243 243 raise Exception(line)
244 244
245 245 elif state == 1:
246 246 # expect 'Working file' (only when using log instead of rlog)
247 247 match = re_10.match(line)
248 248 assert match, _('RCS file must be followed by working file')
249 249 filename = util.normpath(match.group(1))
250 250 state = 2
251 251
252 252 elif state == 2:
253 253 # expect 'symbolic names'
254 254 if re_20.match(line):
255 255 branchmap = {}
256 256 state = 3
257 257
258 258 elif state == 3:
259 259 # read the symbolic names and store as tags
260 260 match = re_30.match(line)
261 261 if match:
262 262 rev = [int(x) for x in match.group(2).split('.')]
263 263
264 264 # Convert magic branch number to an odd-numbered one
265 265 revn = len(rev)
266 266 if revn > 3 and (revn % 2) == 0 and rev[-2] == 0:
267 267 rev = rev[:-2] + rev[-1:]
268 268 rev = tuple(rev)
269 269
270 270 if rev not in tags:
271 271 tags[rev] = []
272 272 tags[rev].append(match.group(1))
273 273 branchmap[match.group(1)] = match.group(2)
274 274
275 275 elif re_31.match(line):
276 276 state = 5
277 277 elif re_32.match(line):
278 278 state = 0
279 279
280 280 elif state == 4:
281 281 # expecting '------' separator before first revision
282 282 if re_31.match(line):
283 283 state = 5
284 284 else:
285 285 assert not re_32.match(line), _('must have at least some revisions')
286 286
287 287 elif state == 5:
288 288 # expecting revision number and possibly (ignored) lock indication
289 289 # we create the logentry here from values stored in states 0 to 4,
290 290 # as this state is re-entered for subsequent revisions of a file.
291 291 match = re_50.match(line)
292 292 assert match, _('expected revision number')
293 293 e = logentry(rcs=scache(rcs), file=scache(filename),
294 294 revision=tuple([int(x) for x in match.group(1).split('.')]),
295 295 branches=[], parent=None,
296 296 synthetic=False)
297 297 state = 6
298 298
299 299 elif state == 6:
300 300 # expecting date, author, state, lines changed
301 301 match = re_60.match(line)
302 302 assert match, _('revision must be followed by date line')
303 303 d = match.group(1)
304 304 if d[2] == '/':
305 305 # Y2K
306 306 d = '19' + d
307 307
308 308 if len(d.split()) != 3:
309 309 # cvs log dates always in GMT
310 310 d = d + ' UTC'
311 311 e.date = util.parsedate(d, ['%y/%m/%d %H:%M:%S', '%Y/%m/%d %H:%M:%S', '%Y-%m-%d %H:%M:%S'])
312 312 e.author = scache(match.group(2))
313 313 e.dead = match.group(3).lower() == 'dead'
314 314
315 315 if match.group(5):
316 316 if match.group(6):
317 317 e.lines = (int(match.group(5)), int(match.group(6)))
318 318 else:
319 319 e.lines = (int(match.group(5)), 0)
320 320 elif match.group(6):
321 321 e.lines = (0, int(match.group(6)))
322 322 else:
323 323 e.lines = None
324 324
325 325 if match.group(7): # cvsnt mergepoint
326 326 myrev = match.group(8).split('.')
327 327 if len(myrev) == 2: # head
328 328 e.mergepoint = 'HEAD'
329 329 else:
330 330 myrev = '.'.join(myrev[:-2] + ['0', myrev[-2]])
331 331 branches = [b for b in branchmap if branchmap[b] == myrev]
332 332 assert len(branches) == 1, 'unknown branch: %s' % e.mergepoint
333 333 e.mergepoint = branches[0]
334 334 else:
335 335 e.mergepoint = None
336 336 e.comment = []
337 337 state = 7
338 338
339 339 elif state == 7:
340 340 # read the revision numbers of branches that start at this revision
341 341 # or store the commit log message otherwise
342 342 m = re_70.match(line)
343 343 if m:
344 344 e.branches = [tuple([int(y) for y in x.strip().split('.')])
345 345 for x in m.group(1).split(';')]
346 346 state = 8
347 347 elif re_31.match(line) and re_50.match(peek):
348 348 state = 5
349 349 store = True
350 350 elif re_32.match(line):
351 351 state = 0
352 352 store = True
353 353 else:
354 354 e.comment.append(line)
355 355
356 356 elif state == 8:
357 357 # store commit log message
358 358 if re_31.match(line):
359 359 state = 5
360 360 store = True
361 361 elif re_32.match(line):
362 362 state = 0
363 363 store = True
364 364 else:
365 365 e.comment.append(line)
366 366
367 367 # When a file is added on a branch B1, CVS creates a synthetic
368 368 # dead trunk revision 1.1 so that the branch has a root.
369 369 # Likewise, if you merge such a file to a later branch B2 (one
370 370 # that already existed when the file was added on B1), CVS
371 371 # creates a synthetic dead revision 1.1.x.1 on B2. Don't drop
372 372 # these revisions now, but mark them synthetic so
373 373 # createchangeset() can take care of them.
374 374 if (store and
375 375 e.dead and
376 376 e.revision[-1] == 1 and # 1.1 or 1.1.x.1
377 377 len(e.comment) == 1 and
378 378 file_added_re.match(e.comment[0])):
379 379 ui.debug(_('found synthetic revision in %s: %r\n')
380 380 % (e.rcs, e.comment[0]))
381 381 e.synthetic = True
382 382
383 383 if store:
384 384 # clean up the results and save in the log.
385 385 store = False
386 e.tags = util.sort([scache(x) for x in tags.get(e.revision, [])])
386 e.tags = sorted([scache(x) for x in tags.get(e.revision, [])])
387 387 e.comment = scache('\n'.join(e.comment))
388 388
389 389 revn = len(e.revision)
390 390 if revn > 3 and (revn % 2) == 0:
391 391 e.branch = tags.get(e.revision[:-1], [None])[0]
392 392 else:
393 393 e.branch = None
394 394
395 395 log.append(e)
396 396
397 397 if len(log) % 100 == 0:
398 398 ui.status(util.ellipsis('%d %s' % (len(log), e.file), 80)+'\n')
399 399
400 400 listsort(log, key=lambda x:(x.rcs, x.revision))
401 401
402 402 # find parent revisions of individual files
403 403 versions = {}
404 404 for e in log:
405 405 branch = e.revision[:-1]
406 406 p = versions.get((e.rcs, branch), None)
407 407 if p is None:
408 408 p = e.revision[:-2]
409 409 e.parent = p
410 410 versions[(e.rcs, branch)] = e.revision
411 411
412 412 # update the log cache
413 413 if cache:
414 414 if log:
415 415 # join up the old and new logs
416 416 listsort(log, key=lambda x:x.date)
417 417
418 418 if oldlog and oldlog[-1].date >= log[0].date:
419 419 raise logerror('Log cache overlaps with new log entries,'
420 420 ' re-run without cache.')
421 421
422 422 log = oldlog + log
423 423
424 424 # write the new cachefile
425 425 ui.note(_('writing cvs log cache %s\n') % cachefile)
426 426 pickle.dump(log, file(cachefile, 'w'))
427 427 else:
428 428 log = oldlog
429 429
430 430 ui.status(_('%d log entries\n') % len(log))
431 431
432 432 return log
433 433
434 434
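
State 3 above converts CVS "magic" branch numbers (the extra `0` component CVS inserts when recording a branch tag) into real branch revisions. A hedged sketch of just that conversion, extracted from the even-length check in createlog():

```python
def unmagic(revstr):
    # CVS stores a branch tag for branch 1.2.4 as the magic number
    # 1.2.0.4; dropping the penultimate 0 recovers the branch number.
    # Non-magic revisions pass through unchanged.
    rev = [int(x) for x in revstr.split('.')]
    if len(rev) > 3 and len(rev) % 2 == 0 and rev[-2] == 0:
        rev = rev[:-2] + rev[-1:]
    return tuple(rev)
```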
435 435 class changeset(object):
436 436 '''Class changeset has the following attributes:
437 437 .id - integer identifying this changeset (list index)
438 438 .author - author name as CVS knows it
439 439 .branch - name of branch this changeset is on, or None
440 440 .comment - commit message
441 441 .date - the commit date as a (time,tz) tuple
442 442 .entries - list of logentry objects in this changeset
443 443 .parents - list of one or two parent changesets
444 444 .tags - list of tags on this changeset
445 445 .synthetic - from synthetic revision "file ... added on branch ..."
446 446 .mergepoint - the branch that has been merged from (if present in rlog output)
447 447 '''
448 448 def __init__(self, **entries):
449 449 self.__dict__.update(entries)
450 450
451 451 def __repr__(self):
452 452 return "<%s at 0x%x: %s>" % (self.__class__.__name__,
453 453 id(self),
454 454 getattr(self, 'id', "(no id)"))
455 455
456 456 def createchangeset(ui, log, fuzz=60, mergefrom=None, mergeto=None):
457 457 '''Convert log into changesets.'''
458 458
459 459 ui.status(_('creating changesets\n'))
460 460
461 461 # Merge changesets
462 462
463 463 listsort(log, key=lambda x:(x.comment, x.author, x.branch, x.date))
464 464
465 465 changesets = []
466 466 files = {}
467 467 c = None
468 468 for i, e in enumerate(log):
469 469
470 470 # Check if log entry belongs to the current changeset or not.
471 471 if not (c and
472 472 e.comment == c.comment and
473 473 e.author == c.author and
474 474 e.branch == c.branch and
475 475 ((c.date[0] + c.date[1]) <=
476 476 (e.date[0] + e.date[1]) <=
477 477 (c.date[0] + c.date[1]) + fuzz) and
478 478 e.file not in files):
479 479 c = changeset(comment=e.comment, author=e.author,
480 480 branch=e.branch, date=e.date, entries=[],
481 481 mergepoint=getattr(e, 'mergepoint', None))
482 482 changesets.append(c)
483 483 files = {}
484 484 if len(changesets) % 100 == 0:
485 485 t = '%d %s' % (len(changesets), repr(e.comment)[1:-1])
486 486 ui.status(util.ellipsis(t, 80) + '\n')
487 487
488 488 c.entries.append(e)
489 489 files[e.file] = True
490 490 c.date = e.date # changeset date is date of latest commit in it
491 491
492 492 # Mark synthetic changesets
493 493
494 494 for c in changesets:
495 495 # Synthetic revisions always get their own changeset, because
496 496 # the log message includes the filename. E.g. if you add file3
497 497 # and file4 on a branch, you get four log entries and three
498 498 # changesets:
499 499 # "File file3 was added on branch ..." (synthetic, 1 entry)
500 500 # "File file4 was added on branch ..." (synthetic, 1 entry)
501 501 # "Add file3 and file4 to fix ..." (real, 2 entries)
502 502 # Hence the check for 1 entry here.
503 503 synth = getattr(c.entries[0], 'synthetic', None)
504 504 c.synthetic = (len(c.entries) == 1 and synth)
505 505
506 506 # Sort files in each changeset
507 507
508 508 for c in changesets:
509 509 def pathcompare(l, r):
510 510 'Mimic cvsps sorting order'
511 511 l = l.split('/')
512 512 r = r.split('/')
513 513 nl = len(l)
514 514 nr = len(r)
515 515 n = min(nl, nr)
516 516 for i in range(n):
517 517 if i + 1 == nl and nl < nr:
518 518 return -1
519 519 elif i + 1 == nr and nl > nr:
520 520 return +1
521 521 elif l[i] < r[i]:
522 522 return -1
523 523 elif l[i] > r[i]:
524 524 return +1
525 525 return 0
526 526 def entitycompare(l, r):
527 527 return pathcompare(l.file, r.file)
528 528
529 529 c.entries.sort(entitycompare)
530 530
531 531 # Sort changesets by date
532 532
533 533 def cscmp(l, r):
534 534 d = sum(l.date) - sum(r.date)
535 535 if d:
536 536 return d
537 537
538 538 # detect vendor branches and initial commits on a branch
539 539 le = {}
540 540 for e in l.entries:
541 541 le[e.rcs] = e.revision
542 542 re = {}
543 543 for e in r.entries:
544 544 re[e.rcs] = e.revision
545 545
546 546 d = 0
547 547 for e in l.entries:
548 548 if re.get(e.rcs, None) == e.parent:
549 549 assert not d
550 550 d = 1
551 551 break
552 552
553 553 for e in r.entries:
554 554 if le.get(e.rcs, None) == e.parent:
555 555 assert not d
556 556 d = -1
557 557 break
558 558
559 559 return d
560 560
561 561 changesets.sort(cscmp)
562 562
563 563 # Collect tags
564 564
565 565 globaltags = {}
566 566 for c in changesets:
567 567 tags = {}
568 568 for e in c.entries:
569 569 for tag in e.tags:
570 570 # remember which is the latest changeset to have this tag
571 571 globaltags[tag] = c
572 572
573 573 for c in changesets:
574 574 tags = {}
575 575 for e in c.entries:
576 576 for tag in e.tags:
577 577 tags[tag] = True
578 578 # remember tags only if this is the latest changeset to have it
579 c.tags = util.sort([tag for tag in tags if globaltags[tag] is c])
579 c.tags = sorted([tag for tag in tags if globaltags[tag] is c])
580 580
581 581 # Find parent changesets, handle {{mergetobranch BRANCHNAME}}
582 582 # by inserting dummy changesets with two parents, and handle
583 583 # {{mergefrombranch BRANCHNAME}} by setting two parents.
584 584
585 585 if mergeto is None:
586 586 mergeto = r'{{mergetobranch ([-\w]+)}}'
587 587 if mergeto:
588 588 mergeto = re.compile(mergeto)
589 589
590 590 if mergefrom is None:
591 591 mergefrom = r'{{mergefrombranch ([-\w]+)}}'
592 592 if mergefrom:
593 593 mergefrom = re.compile(mergefrom)
594 594
595 595 versions = {} # changeset index where we saw any particular file version
596 596 branches = {} # changeset index where we saw a branch
597 597 n = len(changesets)
598 598 i = 0
599 599 while i<n:
600 600 c = changesets[i]
601 601
602 602 for f in c.entries:
603 603 versions[(f.rcs, f.revision)] = i
604 604
605 605 p = None
606 606 if c.branch in branches:
607 607 p = branches[c.branch]
608 608 else:
609 609 for f in c.entries:
610 610 p = max(p, versions.get((f.rcs, f.parent), None))
611 611
612 612 c.parents = []
613 613 if p is not None:
614 614 p = changesets[p]
615 615
616 616 # Ensure no changeset has a synthetic changeset as a parent.
617 617 while p.synthetic:
618 618 assert len(p.parents) <= 1, \
619 619 _('synthetic changeset cannot have multiple parents')
620 620 if p.parents:
621 621 p = p.parents[0]
622 622 else:
623 623 p = None
624 624 break
625 625
626 626 if p is not None:
627 627 c.parents.append(p)
628 628
629 629 if c.mergepoint:
630 630 if c.mergepoint == 'HEAD':
631 631 c.mergepoint = None
632 632 c.parents.append(changesets[branches[c.mergepoint]])
633 633
634 634 if mergefrom:
635 635 m = mergefrom.search(c.comment)
636 636 if m:
637 637 m = m.group(1)
638 638 if m == 'HEAD':
639 639 m = None
640 640 try:
641 641 candidate = changesets[branches[m]]
642 642 except KeyError:
643 643 ui.warn(_("warning: CVS commit message references "
644 644 "non-existent branch %r:\n%s\n")
645 645 % (m, c.comment))
646 646 if m in branches and c.branch != m and not candidate.synthetic:
647 647 c.parents.append(candidate)
648 648
649 649 if mergeto:
650 650 m = mergeto.search(c.comment)
651 651 if m:
652 652 try:
653 653 m = m.group(1)
654 654 if m == 'HEAD':
655 655 m = None
656 656 except:
657 657 m = None # if no group found then merge to HEAD
658 658 if m in branches and c.branch != m:
659 659 # insert empty changeset for merge
660 660 cc = changeset(author=c.author, branch=m, date=c.date,
661 661 comment='convert-repo: CVS merge from branch %s' % c.branch,
662 662 entries=[], tags=[], parents=[changesets[branches[m]], c])
663 663 changesets.insert(i + 1, cc)
664 664 branches[m] = i + 1
665 665
666 666 # adjust our loop counters now we have inserted a new entry
667 667 n += 1
668 668 i += 2
669 669 continue
670 670
671 671 branches[c.branch] = i
672 672 i += 1
673 673
674 674 # Drop synthetic changesets (safe now that we have ensured no other
675 675 # changesets can have them as parents).
676 676 i = 0
677 677 while i < len(changesets):
678 678 if changesets[i].synthetic:
679 679 del changesets[i]
680 680 else:
681 681 i += 1
682 682
683 683 # Number changesets
684 684
685 685 for i, c in enumerate(changesets):
686 686 c.id = i + 1
687 687
688 688 ui.status(_('%d changeset entries\n') % len(changesets))
689 689
690 690 return changesets
691 691
692 692
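
The grouping rule in createchangeset() (same comment, author and branch, commit dates within `fuzz` seconds, and no file appearing twice) can be sketched in isolation. The flat tuple layout below stands in for the real logentry objects and is only illustrative:

```python
def group_entries(entries, fuzz=60):
    # entries: (comment, author, date, filename) tuples, pre-sorted as
    # createchangeset() sorts them. A new group starts whenever the
    # metadata differs, the date falls outside the fuzz window, or the
    # file was already seen in the current group.
    groups = []
    cur, seen = None, set()
    for comment, author, date, filename in entries:
        if not (cur and cur['comment'] == comment and cur['author'] == author
                and cur['date'] <= date <= cur['date'] + fuzz
                and filename not in seen):
            cur = {'comment': comment, 'author': author, 'date': date, 'files': []}
            seen = set()
            groups.append(cur)
        cur['files'].append(filename)
        seen.add(filename)
        cur['date'] = date  # group date is that of its latest entry
    return groups
```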
693 693 def debugcvsps(ui, *args, **opts):
694 694 '''Read CVS rlog for current directory or named path in repository, and
695 695 convert the log to changesets based on matching commit log entries and dates.'''
696 696
697 697 if opts["new_cache"]:
698 698 cache = "write"
699 699 elif opts["update_cache"]:
700 700 cache = "update"
701 701 else:
702 702 cache = None
703 703
704 704 revisions = opts["revisions"]
705 705
706 706 try:
707 707 if args:
708 708 log = []
709 709 for d in args:
710 710 log += createlog(ui, d, root=opts["root"], cache=cache)
711 711 else:
712 712 log = createlog(ui, root=opts["root"], cache=cache)
713 713 except logerror, e:
714 714 ui.write("%r\n"%e)
715 715 return
716 716
717 717 changesets = createchangeset(ui, log, opts["fuzz"])
718 718 del log
719 719
720 720 # Print changesets (optionally filtered)
721 721
722 722 off = len(revisions)
723 723 branches = {} # latest version number in each branch
724 724 ancestors = {} # parent branch
725 725 for cs in changesets:
726 726
727 727 if opts["ancestors"]:
728 728 if cs.branch not in branches and cs.parents and cs.parents[0].id:
729 729 ancestors[cs.branch] = changesets[cs.parents[0].id-1].branch, cs.parents[0].id
730 730 branches[cs.branch] = cs.id
731 731
732 732 # limit by branches
733 733 if opts["branches"] and (cs.branch or 'HEAD') not in opts["branches"]:
734 734 continue
735 735
736 736 if not off:
737 737 # Note: trailing spaces on several lines here are needed to have
738 738 # bug-for-bug compatibility with cvsps.
739 739 ui.write('---------------------\n')
740 740 ui.write('PatchSet %d \n' % cs.id)
741 741 ui.write('Date: %s\n' % util.datestr(cs.date, '%Y/%m/%d %H:%M:%S %1%2'))
742 742 ui.write('Author: %s\n' % cs.author)
743 743 ui.write('Branch: %s\n' % (cs.branch or 'HEAD'))
744 744 ui.write('Tag%s: %s \n' % (['', 's'][len(cs.tags)>1],
745 745 ','.join(cs.tags) or '(none)'))
746 746 if opts["parents"] and cs.parents:
747 747 if len(cs.parents)>1:
748 748 ui.write('Parents: %s\n' % (','.join([str(p.id) for p in cs.parents])))
749 749 else:
750 750 ui.write('Parent: %d\n' % cs.parents[0].id)
751 751
752 752 if opts["ancestors"]:
753 753 b = cs.branch
754 754 r = []
755 755 while b:
756 756 b, c = ancestors[b]
757 757 r.append('%s:%d:%d' % (b or "HEAD", c, branches[b]))
758 758 if r:
759 759 ui.write('Ancestors: %s\n' % (','.join(r)))
760 760
761 761 ui.write('Log:\n')
762 762 ui.write('%s\n\n' % cs.comment)
763 763 ui.write('Members: \n')
764 764 for f in cs.entries:
765 765 fn = f.file
766 766 if fn.startswith(opts["prefix"]):
767 767 fn = fn[len(opts["prefix"]):]
768 768 ui.write('\t%s:%s->%s%s \n' % (fn, '.'.join([str(x) for x in f.parent]) or 'INITIAL',
769 769 '.'.join([str(x) for x in f.revision]), ['', '(DEAD)'][f.dead]))
770 770 ui.write('\n')
771 771
772 772 # have we seen the start tag?
773 773 if revisions and off:
774 774 if revisions[0] == str(cs.id) or \
775 775 revisions[0] in cs.tags:
776 776 off = False
777 777
778 778 # see if we reached the end tag
779 779 if len(revisions)>1 and not off:
780 780 if revisions[1] == str(cs.id) or \
781 781 revisions[1] in cs.tags:
782 782 break
@@ -1,126 +1,126 b''
1 1 # darcs support for the convert extension
2 2
3 3 from common import NoRepo, checktool, commandline, commit, converter_source
4 4 from mercurial.i18n import _
5 5 from mercurial import util
6 6 import os, shutil, tempfile
7 7
8 8 # The naming drift of ElementTree is fun!
9 9
10 10 try: from xml.etree.cElementTree import ElementTree
11 11 except ImportError:
12 12 try: from xml.etree.ElementTree import ElementTree
13 13 except ImportError:
14 14 try: from elementtree.cElementTree import ElementTree
15 15 except ImportError:
16 16 try: from elementtree.ElementTree import ElementTree
17 17 except ImportError: ElementTree = None
18 18
19 19
20 20 class darcs_source(converter_source, commandline):
21 21 def __init__(self, ui, path, rev=None):
22 22 converter_source.__init__(self, ui, path, rev=rev)
23 23 commandline.__init__(self, ui, 'darcs')
24 24
25 25 # check for _darcs, ElementTree, _darcs/inventory so that we can
26 26 # easily skip test-convert-darcs if ElementTree is not around
27 27 if not os.path.exists(os.path.join(path, '_darcs', 'inventories')):
28 28 raise NoRepo("%s does not look like a darcs repo" % path)
29 29
30 30 if not os.path.exists(os.path.join(path, '_darcs')):
31 31 raise NoRepo("%s does not look like a darcs repo" % path)
32 32
33 33 checktool('darcs')
34 34
35 35 if ElementTree is None:
36 36 raise util.Abort(_("Python ElementTree module is not available"))
37 37
38 38 self.path = os.path.realpath(path)
39 39
40 40 self.lastrev = None
41 41 self.changes = {}
42 42 self.parents = {}
43 43 self.tags = {}
44 44
45 45 def before(self):
46 46 self.tmppath = tempfile.mkdtemp(
47 47 prefix='convert-' + os.path.basename(self.path) + '-')
48 48 output, status = self.run('init', repodir=self.tmppath)
49 49 self.checkexit(status)
50 50
51 51 tree = self.xml('changes', xml_output=True, summary=True,
52 52 repodir=self.path)
53 53 tagname = None
54 54 child = None
55 55 for elt in tree.findall('patch'):
56 56 node = elt.get('hash')
57 57 name = elt.findtext('name', '')
58 58 if name.startswith('TAG '):
59 59 tagname = name[4:].strip()
60 60 elif tagname is not None:
61 61 self.tags[tagname] = node
62 62 tagname = None
63 63 self.changes[node] = elt
64 64 self.parents[child] = [node]
65 65 child = node
66 66 self.parents[child] = []
67 67
68 68 def after(self):
69 69 self.ui.debug(_('cleaning up %s\n') % self.tmppath)
70 70 shutil.rmtree(self.tmppath, ignore_errors=True)
71 71
72 72 def xml(self, cmd, **kwargs):
73 73 etree = ElementTree()
74 74 fp = self._run(cmd, **kwargs)
75 75 etree.parse(fp)
76 76 self.checkexit(fp.close())
77 77 return etree.getroot()
78 78
79 79 def getheads(self):
80 80 return self.parents[None]
81 81
82 82 def getcommit(self, rev):
83 83 elt = self.changes[rev]
84 84 date = util.strdate(elt.get('local_date'), '%a %b %d %H:%M:%S %Z %Y')
85 85 desc = elt.findtext('name') + '\n' + elt.findtext('comment', '')
86 86 return commit(author=elt.get('author'), date=util.datestr(date),
87 87 desc=desc.strip(), parents=self.parents[rev])
88 88
89 89 def pull(self, rev):
90 90 output, status = self.run('pull', self.path, all=True,
91 91 match='hash %s' % rev,
92 92 no_test=True, no_posthook=True,
93 93 external_merge='/bin/false',
94 94 repodir=self.tmppath)
95 95 if status:
96 96 if output.find('We have conflicts in') == -1:
97 97 self.checkexit(status, output)
98 98 output, status = self.run('revert', all=True, repodir=self.tmppath)
99 99 self.checkexit(status, output)
100 100
101 101 def getchanges(self, rev):
102 102 self.pull(rev)
103 103 copies = {}
104 104 changes = []
105 105 for elt in self.changes[rev].find('summary').getchildren():
106 106 if elt.tag in ('add_directory', 'remove_directory'):
107 107 continue
108 108 if elt.tag == 'move':
109 109 changes.append((elt.get('from'), rev))
110 110 copies[elt.get('from')] = elt.get('to')
111 111 else:
112 112 changes.append((elt.text.strip(), rev))
113 113 self.lastrev = rev
114 return util.sort(changes), copies
114 return sorted(changes), copies
115 115
116 116 def getfile(self, name, rev):
117 117 if rev != self.lastrev:
118 118 raise util.Abort(_('internal calling inconsistency'))
119 119 return open(os.path.join(self.tmppath, name), 'rb').read()
120 120
121 121 def getmode(self, name, rev):
122 122 mode = os.lstat(os.path.join(self.tmppath, name)).st_mode
123 123 return (mode & 0111) and 'x' or ''
124 124
125 125 def gettags(self):
126 126 return self.tags
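
The TAG-handling loop in before() above relies on darcs listing patches newest-first, so a `TAG x` patch names the *following* (older) patch as the tagged revision. A self-contained reproduction of that loop, using a made-up XML snippet in the shape of `darcs changes --xml-output`:

```python
import xml.etree.ElementTree as ET

sample = '''<changelog>
<patch author="joe@example.com" hash="h1"><name>TAG 1.0</name></patch>
<patch author="joe@example.com" hash="h2"><name>add feature</name></patch>
</changelog>'''

# As in darcs_source.before(): a patch whose name starts with 'TAG '
# records the hash of the *next* patch in the listing under that tag.
tags, tagname = {}, None
for elt in ET.fromstring(sample).findall('patch'):
    node = elt.get('hash')
    name = elt.findtext('name', '')
    if name.startswith('TAG '):
        tagname = name[4:].strip()
    elif tagname is not None:
        tags[tagname] = node
        tagname = None
```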
@@ -1,335 +1,335 b''
1 1 # GNU Arch support for the convert extension
2 2
3 3 from common import NoRepo, commandline, commit, converter_source
4 4 from mercurial.i18n import _
5 5 from mercurial import util
6 6 import os, shutil, tempfile, stat, locale
7 7 from email.Parser import Parser
8 8
9 9 class gnuarch_source(converter_source, commandline):
10 10
11 11 class gnuarch_rev:
12 12 def __init__(self, rev):
13 13 self.rev = rev
14 14 self.summary = ''
15 15 self.date = None
16 16 self.author = ''
17 17 self.continuationof = None
18 18 self.add_files = []
19 19 self.mod_files = []
20 20 self.del_files = []
21 21 self.ren_files = {}
22 22 self.ren_dirs = {}
23 23
24 24 def __init__(self, ui, path, rev=None):
25 25 super(gnuarch_source, self).__init__(ui, path, rev=rev)
26 26
27 27 if not os.path.exists(os.path.join(path, '{arch}')):
28 28 raise NoRepo(_("%s does not look like a GNU Arch repo") % path)
29 29
30 30 # Could use checktool, but we want to check for baz or tla.
31 31 self.execmd = None
32 32 if util.find_exe('baz'):
33 33 self.execmd = 'baz'
34 34 else:
35 35 if util.find_exe('tla'):
36 36 self.execmd = 'tla'
37 37 else:
38 38 raise util.Abort(_('cannot find a GNU Arch tool'))
39 39
40 40 commandline.__init__(self, ui, self.execmd)
41 41
42 42 self.path = os.path.realpath(path)
43 43 self.tmppath = None
44 44
45 45 self.treeversion = None
46 46 self.lastrev = None
47 47 self.changes = {}
48 48 self.parents = {}
49 49 self.tags = {}
50 50 self.modecache = {}
51 51 self.catlogparser = Parser()
52 52 self.locale = locale.getpreferredencoding()
53 53 self.archives = []
54 54
55 55 def before(self):
56 56 # Get registered archives
57 57 self.archives = [i.rstrip('\n')
58 58 for i in self.runlines0('archives', '-n')]
59 59
60 60 if self.execmd == 'tla':
61 61 output = self.run0('tree-version', self.path)
62 62 else:
63 63 output = self.run0('tree-version', '-d', self.path)
64 64 self.treeversion = output.strip()
65 65
66 66 # Get name of temporary directory
67 67 version = self.treeversion.split('/')
68 68 self.tmppath = os.path.join(tempfile.gettempdir(),
69 69 'hg-%s' % version[1])
70 70
71 71 # Generate parents dictionary
72 72 self.parents[None] = []
73 73 treeversion = self.treeversion
74 74 child = None
75 75 while treeversion:
76 76 self.ui.status(_('analyzing tree version %s...\n') % treeversion)
77 77
78 78 archive = treeversion.split('/')[0]
79 79 if archive not in self.archives:
80 80 self.ui.status(_('tree analysis stopped because it points to an unregistered archive %s...\n') % archive)
81 81 break
82 82
83 83 # Get the complete list of revisions for that tree version
84 84 output, status = self.runlines('revisions', '-r', '-f', treeversion)
85 85 self.checkexit(status, 'failed retrieving revisions for %s' % treeversion)
86 86
87 87 # No new iteration unless a revision has a continuation-of header
88 88 treeversion = None
89 89
90 90 for l in output:
91 91 rev = l.strip()
92 92 self.changes[rev] = self.gnuarch_rev(rev)
93 93 self.parents[rev] = []
94 94
95 95 # Read author, date and summary
96 96 catlog, status = self.run('cat-log', '-d', self.path, rev)
97 97 if status:
98 98 catlog = self.run0('cat-archive-log', rev)
99 99 self._parsecatlog(catlog, rev)
100 100
101 101 # Populate the parents map
102 102 self.parents[child].append(rev)
103 103
104 104 # Keep track of the current revision as the child of the next
105 105 # revision scanned
106 106 child = rev
107 107
108 108 # Check if we have to follow the usual incremental history
109 109 # or if we have to 'jump' to a different treeversion given
110 110 # by the continuation-of header.
111 111 if self.changes[rev].continuationof:
112 112 treeversion = '--'.join(self.changes[rev].continuationof.split('--')[:-1])
113 113 break
114 114
115 115 # If we reached a base-0 revision w/o any continuation-of
116 116 # header, it means the tree history ends here.
117 117 if rev[-6:] == 'base-0':
118 118 break
119 119
120 120 def after(self):
121 121 self.ui.debug(_('cleaning up %s\n') % self.tmppath)
122 122 shutil.rmtree(self.tmppath, ignore_errors=True)
123 123
124 124 def getheads(self):
125 125 return self.parents[None]
126 126
127 127 def getfile(self, name, rev):
128 128 if rev != self.lastrev:
129 129 raise util.Abort(_('internal calling inconsistency'))
130 130
131 131 # Raise IOError if necessary (i.e. deleted files).
132 132 if not os.path.exists(os.path.join(self.tmppath, name)):
133 133 raise IOError
134 134
135 135 data, mode = self._getfile(name, rev)
136 136 self.modecache[(name, rev)] = mode
137 137
138 138 return data
139 139
140 140 def getmode(self, name, rev):
141 141 return self.modecache[(name, rev)]
142 142
143 143 def getchanges(self, rev):
144 144 self.modecache = {}
145 145 self._update(rev)
146 146 changes = []
147 147 copies = {}
148 148
149 149 for f in self.changes[rev].add_files:
150 150 changes.append((f, rev))
151 151
152 152 for f in self.changes[rev].mod_files:
153 153 changes.append((f, rev))
154 154
155 155 for f in self.changes[rev].del_files:
156 156 changes.append((f, rev))
157 157
158 158 for src in self.changes[rev].ren_files:
159 159 to = self.changes[rev].ren_files[src]
160 160 changes.append((src, rev))
161 161 changes.append((to, rev))
162 162 copies[to] = src
163 163
164 164 for src in self.changes[rev].ren_dirs:
165 165 to = self.changes[rev].ren_dirs[src]
166 166 chgs, cps = self._rendirchanges(src, to)
167 167 changes += [(f, rev) for f in chgs]
168 168 copies.update(cps)
169 169
170 170 self.lastrev = rev
171 return util.sort(set(changes)), copies
171 return sorted(set(changes)), copies
172 172
173 173 def getcommit(self, rev):
174 174 changes = self.changes[rev]
175 175 return commit(author = changes.author, date = changes.date,
176 176 desc = changes.summary, parents = self.parents[rev], rev=rev)
177 177
178 178 def gettags(self):
179 179 return self.tags
180 180
181 181 def _execute(self, cmd, *args, **kwargs):
182 182 cmdline = [self.execmd, cmd]
183 183 cmdline += args
184 184 cmdline = [util.shellquote(arg) for arg in cmdline]
185 185 cmdline += ['>', util.nulldev, '2>', util.nulldev]
186 186 cmdline = util.quotecommand(' '.join(cmdline))
187 187 self.ui.debug(cmdline, '\n')
188 188 return os.system(cmdline)
189 189
190 190 def _update(self, rev):
191 191 self.ui.debug(_('applying revision %s...\n') % rev)
192 192 changeset, status = self.runlines('replay', '-d', self.tmppath,
193 193 rev)
194 194 if status:
195 195 # Something went wrong while merging (baz or tla
196 196 # issue?), get latest revision and try from there
197 197 shutil.rmtree(self.tmppath, ignore_errors=True)
198 198 self._obtainrevision(rev)
199 199 else:
200 200 old_rev = self.parents[rev][0]
201 201 self.ui.debug(_('computing changeset between %s and %s...\n')
202 202 % (old_rev, rev))
203 203 self._parsechangeset(changeset, rev)
204 204
205 205 def _getfile(self, name, rev):
206 206 mode = os.lstat(os.path.join(self.tmppath, name)).st_mode
207 207 if stat.S_ISLNK(mode):
208 208 data = os.readlink(os.path.join(self.tmppath, name))
209 209 mode = mode and 'l' or ''
210 210 else:
211 211 data = open(os.path.join(self.tmppath, name), 'rb').read()
212 212 mode = (mode & 0111) and 'x' or ''
213 213 return data, mode
214 214
215 215 def _exclude(self, name):
216 216 exclude = [ '{arch}', '.arch-ids', '.arch-inventory' ]
217 217 for exc in exclude:
218 218 if name.find(exc) != -1:
219 219 return True
220 220 return False
221 221
222 222 def _readcontents(self, path):
223 223 files = []
224 224 contents = os.listdir(path)
225 225 while len(contents) > 0:
226 226 c = contents.pop()
227 227 p = os.path.join(path, c)
228 228 # os.walk could be used, but here we avoid internal GNU
229 229 # Arch files and directories, thus saving a lot of time.
230 230 if not self._exclude(p):
231 231 if os.path.isdir(p):
232 232 contents += [os.path.join(c, f) for f in os.listdir(p)]
233 233 else:
234 234 files.append(c)
235 235 return files
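`_readcontents` above does an explicit stack-based walk so GNU Arch control entries (`{arch}`, `.arch-ids`, `.arch-inventory`) are pruned before descent, which is why it avoids `os.walk`. A filesystem-free sketch of the same prune-while-walking idea; the dict-as-tree shape here is an assumption for illustration, not the converter's API:

```python
EXCLUDE = ('{arch}', '.arch-ids', '.arch-inventory')

def walk(tree, prefix=''):
    # tree maps name -> subtree dict (directory) or None (file).
    files = []
    stack = [(prefix, tree)]
    while stack:
        base, node = stack.pop()
        for name, child in node.items():
            path = base + name if not base else base + '/' + name
            if any(exc in path for exc in EXCLUDE):
                continue  # prune Arch control entries, like _exclude() does
            if isinstance(child, dict):
                stack.append((path, child))
            else:
                files.append(path)
    return files

tree = {'src': {'main.c': None, '.arch-ids': {'x': None}}, '{arch}': {'y': None}}
```

Because excluded directories are never pushed onto the stack, their contents are never listed at all, which is the time saving the comment above refers to.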
236 236
237 237 def _rendirchanges(self, src, dest):
238 238 changes = []
239 239 copies = {}
240 240 files = self._readcontents(os.path.join(self.tmppath, dest))
241 241 for f in files:
242 242 s = os.path.join(src, f)
243 243 d = os.path.join(dest, f)
244 244 changes.append(s)
245 245 changes.append(d)
246 246 copies[d] = s
247 247 return changes, copies
248 248
249 249 def _obtainrevision(self, rev):
250 250 self.ui.debug(_('obtaining revision %s...\n') % rev)
251 251 output = self._execute('get', rev, self.tmppath)
252 252 self.checkexit(output)
253 253 self.ui.debug(_('analysing revision %s...\n') % rev)
254 254 files = self._readcontents(self.tmppath)
255 255 self.changes[rev].add_files += files
256 256
257 257 def _stripbasepath(self, path):
258 258 if path.startswith('./'):
259 259 return path[2:]
260 260 return path
261 261
262 262 def _parsecatlog(self, data, rev):
263 263 try:
264 264 catlog = self.catlogparser.parsestr(data)
265 265
266 266 # Commit date
267 267 self.changes[rev].date = util.datestr(
268 268 util.strdate(catlog['Standard-date'],
269 269 '%Y-%m-%d %H:%M:%S'))
270 270
271 271 # Commit author
272 272 self.changes[rev].author = self.recode(catlog['Creator'])
273 273
274 274 # Commit description
275 275 self.changes[rev].summary = '\n\n'.join((catlog['Summary'],
276 276 catlog.get_payload()))
277 277 self.changes[rev].summary = self.recode(self.changes[rev].summary)
278 278
279 279 # Commit revision origin when dealing with a branch or tag
280 280 if catlog.has_key('Continuation-of'):
281 281 self.changes[rev].continuationof = self.recode(catlog['Continuation-of'])
282 282 except Exception:
283 283 raise util.Abort(_('could not parse cat-log of %s') % rev)
284 284
285 285 def _parsechangeset(self, data, rev):
286 286 for l in data:
287 287 l = l.strip()
288 288 # Added file (ignore added directory)
289 289 if l.startswith('A') and not l.startswith('A/'):
290 290 file = self._stripbasepath(l[1:].strip())
291 291 if not self._exclude(file):
292 292 self.changes[rev].add_files.append(file)
293 293 # Deleted file (ignore deleted directory)
294 294 elif l.startswith('D') and not l.startswith('D/'):
295 295 file = self._stripbasepath(l[1:].strip())
296 296 if not self._exclude(file):
297 297 self.changes[rev].del_files.append(file)
298 298 # Modified binary file
299 299 elif l.startswith('Mb'):
300 300 file = self._stripbasepath(l[2:].strip())
301 301 if not self._exclude(file):
302 302 self.changes[rev].mod_files.append(file)
303 303 # Modified link
304 304 elif l.startswith('M->'):
305 305 file = self._stripbasepath(l[3:].strip())
306 306 if not self._exclude(file):
307 307 self.changes[rev].mod_files.append(file)
308 308 # Modified file
309 309 elif l.startswith('M'):
310 310 file = self._stripbasepath(l[1:].strip())
311 311 if not self._exclude(file):
312 312 self.changes[rev].mod_files.append(file)
313 313 # Renamed file (or link)
314 314 elif l.startswith('=>'):
315 315 files = l[2:].strip().split(' ')
316 316 if len(files) == 1:
317 317 files = l[2:].strip().split('\t')
318 318 src = self._stripbasepath(files[0])
319 319 dst = self._stripbasepath(files[1])
320 320 if not self._exclude(src) and not self._exclude(dst):
321 321 self.changes[rev].ren_files[src] = dst
322 322 # Conversion from file to link or from link to file (modified)
323 323 elif l.startswith('ch'):
324 324 file = self._stripbasepath(l[2:].strip())
325 325 if not self._exclude(file):
326 326 self.changes[rev].mod_files.append(file)
327 327 # Renamed directory
328 328 elif l.startswith('/>'):
329 329 dirs = l[2:].strip().split(' ')
330 330 if len(dirs) == 1:
331 331 dirs = l[2:].strip().split('\t')
332 332 src = self._stripbasepath(dirs[0])
333 333 dst = self._stripbasepath(dirs[1])
334 334 if not self._exclude(src) and not self._exclude(dst):
335 335 self.changes[rev].ren_dirs[src] = dst
@@ -1,333 +1,333 b''
1 1 # hg backend for convert extension
2 2
3 3 # Notes for hg->hg conversion:
4 4 #
5 5 # * Old versions of Mercurial didn't trim the whitespace from the ends
6 6 # of commit messages, but new versions do. Changesets created by
7 7 # those older versions, then converted, may thus have different
8 8 # hashes for changesets that are otherwise identical.
9 9 #
10 10 # * By default, the source revision is stored in the converted
11 11 # revision. This will cause the converted revision to have a
12 12 # different identity than the source. To avoid this, use the
13 13 # following option: "--config convert.hg.saverev=false"
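The note above says that storing the source revision in the converted changeset gives it a different identity. That effect can be illustrated outside Mercurial: hashing the same content with and without an extra metadata field yields different digests. This is an illustrative sketch only, not Mercurial's actual changeset-hashing code, and `fake_changeset_id` is a made-up helper:

```python
import hashlib

def fake_changeset_id(desc, extra):
    # Illustrative only: mimics how folding extra metadata (such as the
    # 'convert_revision' field saverev adds) into the hashed payload
    # changes the resulting identifier.
    payload = desc.encode() + b"\0".join(
        b"%s:%s" % (k.encode(), v.encode()) for k, v in sorted(extra.items()))
    return hashlib.sha1(payload).hexdigest()

plain = fake_changeset_id("fix bug", {})
saved = fake_changeset_id("fix bug", {"convert_revision": "abc123"})
```

With `--config convert.hg.saverev=false` no such extra field is recorded, so an otherwise-identical conversion keeps the source's identity.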
14 14
15 15
16 16 import os, time
17 17 from mercurial.i18n import _
18 18 from mercurial.node import bin, hex, nullid
19 19 from mercurial import hg, util, context, error
20 20
21 21 from common import NoRepo, commit, converter_source, converter_sink
22 22
23 23 class mercurial_sink(converter_sink):
24 24 def __init__(self, ui, path):
25 25 converter_sink.__init__(self, ui, path)
26 26 self.branchnames = ui.configbool('convert', 'hg.usebranchnames', True)
27 27 self.clonebranches = ui.configbool('convert', 'hg.clonebranches', False)
28 28 self.tagsbranch = ui.config('convert', 'hg.tagsbranch', 'default')
29 29 self.lastbranch = None
30 30 if os.path.isdir(path) and len(os.listdir(path)) > 0:
31 31 try:
32 32 self.repo = hg.repository(self.ui, path)
33 33 if not self.repo.local():
34 34 raise NoRepo(_('%s is not a local Mercurial repo') % path)
35 35 except error.RepoError, err:
36 36 ui.traceback()
37 37 raise NoRepo(err.args[0])
38 38 else:
39 39 try:
40 40 ui.status(_('initializing destination %s repository\n') % path)
41 41 self.repo = hg.repository(self.ui, path, create=True)
42 42 if not self.repo.local():
43 43 raise NoRepo(_('%s is not a local Mercurial repo') % path)
44 44 self.created.append(path)
45 45 except error.RepoError:
46 46 ui.traceback()
47 47 raise NoRepo("could not create hg repo %s as sink" % path)
48 48 self.lock = None
49 49 self.wlock = None
50 50 self.filemapmode = False
51 51
52 52 def before(self):
53 53 self.ui.debug(_('run hg sink pre-conversion action\n'))
54 54 self.wlock = self.repo.wlock()
55 55 self.lock = self.repo.lock()
56 56
57 57 def after(self):
58 58 self.ui.debug(_('run hg sink post-conversion action\n'))
59 59 self.lock.release()
60 60 self.wlock.release()
61 61
62 62 def revmapfile(self):
63 63 return os.path.join(self.path, ".hg", "shamap")
64 64
65 65 def authorfile(self):
66 66 return os.path.join(self.path, ".hg", "authormap")
67 67
68 68 def getheads(self):
69 69 h = self.repo.changelog.heads()
70 70 return [ hex(x) for x in h ]
71 71
72 72 def setbranch(self, branch, pbranches):
73 73 if not self.clonebranches:
74 74 return
75 75
76 76 setbranch = (branch != self.lastbranch)
77 77 self.lastbranch = branch
78 78 if not branch:
79 79 branch = 'default'
80 80 pbranches = [(b[0], b[1] and b[1] or 'default') for b in pbranches]
81 81 pbranch = pbranches and pbranches[0][1] or 'default'
82 82
83 83 branchpath = os.path.join(self.path, branch)
84 84 if setbranch:
85 85 self.after()
86 86 try:
87 87 self.repo = hg.repository(self.ui, branchpath)
88 88 except:
89 89 self.repo = hg.repository(self.ui, branchpath, create=True)
90 90 self.before()
91 91
92 92 # pbranches may bring revisions from other branches (merge parents)
93 93 # Make sure we have them, or pull them.
94 94 missings = {}
95 95 for b in pbranches:
96 96 try:
97 97 self.repo.lookup(b[0])
98 98 except:
99 99 missings.setdefault(b[1], []).append(b[0])
100 100
101 101 if missings:
102 102 self.after()
103 103 for pbranch, heads in missings.iteritems():
104 104 pbranchpath = os.path.join(self.path, pbranch)
105 105 prepo = hg.repository(self.ui, pbranchpath)
106 106 self.ui.note(_('pulling from %s into %s\n') % (pbranch, branch))
107 107 self.repo.pull(prepo, [prepo.lookup(h) for h in heads])
108 108 self.before()
109 109
110 110 def putcommit(self, files, copies, parents, commit, source):
111 111
112 112 files = dict(files)
113 113 def getfilectx(repo, memctx, f):
114 114 v = files[f]
115 115 data = source.getfile(f, v)
116 116 e = source.getmode(f, v)
117 117 return context.memfilectx(f, data, 'l' in e, 'x' in e, copies.get(f))
118 118
119 119 pl = []
120 120 for p in parents:
121 121 if p not in pl:
122 122 pl.append(p)
123 123 parents = pl
124 124 nparents = len(parents)
125 125 if self.filemapmode and nparents == 1:
126 126 m1node = self.repo.changelog.read(bin(parents[0]))[0]
127 127 parent = parents[0]
128 128
129 129 if len(parents) < 2: parents.append("0" * 40)
130 130 if len(parents) < 2: parents.append("0" * 40)
131 131 p2 = parents.pop(0)
132 132
133 133 text = commit.desc
134 134 extra = commit.extra.copy()
135 135 if self.branchnames and commit.branch:
136 136 extra['branch'] = commit.branch
137 137 if commit.rev:
138 138 extra['convert_revision'] = commit.rev
139 139
140 140 while parents:
141 141 p1 = p2
142 142 p2 = parents.pop(0)
143 143 ctx = context.memctx(self.repo, (p1, p2), text, files.keys(), getfilectx,
144 144 commit.author, commit.date, extra)
145 145 self.repo.commitctx(ctx)
146 146 text = "(octopus merge fixup)\n"
147 147 p2 = hex(self.repo.changelog.tip())
148 148
149 149 if self.filemapmode and nparents == 1:
150 150 man = self.repo.manifest
151 151 mnode = self.repo.changelog.read(bin(p2))[0]
152 152 if not man.cmp(m1node, man.revision(mnode)):
153 153 self.repo.rollback()
154 154 return parent
155 155 return p2
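The `while parents:` loop in `putcommit` above folds an N-parent (octopus) commit into a chain of two-parent merges, committing a "(octopus merge fixup)" changeset for each extra parent. The pairing logic in isolation, as a pure sketch; the `"merge%d"` placeholders stand in for the `hex(tip)` of each intermediate commit, and `"0" * 40` is the null parent exactly as in the code above:

```python
NULLID = "0" * 40

def fold_parents(parents):
    # Return the (p1, p2) pairs the loop above would commit: the first
    # two real parents, then each extra parent merged against the
    # previous (synthetic) merge result.
    parents = list(parents)
    while len(parents) < 2:
        parents.append(NULLID)
    pairs = []
    p2 = parents.pop(0)
    n = 0
    while parents:
        p1, p2 = p2, parents.pop(0)
        pairs.append((p1, p2))
        p2 = "merge%d" % n  # placeholder for hex(tip) after the commit
        n += 1
    return pairs
```

For one or two parents the chain degenerates to a single commit, matching the common case.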
156 156
157 157 def puttags(self, tags):
158 158 try:
159 159 parentctx = self.repo[self.tagsbranch]
160 160 tagparent = parentctx.node()
161 161 except error.RepoError:
162 162 parentctx = None
163 163 tagparent = nullid
164 164
165 165 try:
166 oldlines = util.sort(parentctx['.hgtags'].data().splitlines(1))
166 oldlines = sorted(parentctx['.hgtags'].data().splitlines(1))
167 167 except:
168 168 oldlines = []
169 169
170 newlines = util.sort([("%s %s\n" % (tags[tag], tag)) for tag in tags])
170 newlines = sorted([("%s %s\n" % (tags[tag], tag)) for tag in tags])
171 171 if newlines == oldlines:
172 172 return None
173 173 data = "".join(newlines)
174 174 def getfilectx(repo, memctx, f):
175 175 return context.memfilectx(f, data, False, False, None)
176 176
177 177 self.ui.status(_("updating tags\n"))
178 178 date = "%s 0" % int(time.mktime(time.gmtime()))
179 179 extra = {'branch': self.tagsbranch}
180 180 ctx = context.memctx(self.repo, (tagparent, None), "update tags",
181 181 [".hgtags"], getfilectx, "convert-repo", date,
182 182 extra)
183 183 self.repo.commitctx(ctx)
184 184 return hex(self.repo.changelog.tip())
185 185
186 186 def setfilemapmode(self, active):
187 187 self.filemapmode = active
188 188
189 189 class mercurial_source(converter_source):
190 190 def __init__(self, ui, path, rev=None):
191 191 converter_source.__init__(self, ui, path, rev)
192 192 self.ignoreerrors = ui.configbool('convert', 'hg.ignoreerrors', False)
193 193 self.ignored = {}
194 194 self.saverev = ui.configbool('convert', 'hg.saverev', False)
195 195 try:
196 196 self.repo = hg.repository(self.ui, path)
197 197 # try to provoke an exception if this isn't really a hg
198 198 # repo, but some other bogus compatible-looking url
199 199 if not self.repo.local():
200 200 raise error.RepoError()
201 201 except error.RepoError:
202 202 ui.traceback()
203 203 raise NoRepo("%s is not a local Mercurial repo" % path)
204 204 self.lastrev = None
205 205 self.lastctx = None
206 206 self._changescache = None
207 207 self.convertfp = None
208 208 # Restrict converted revisions to startrev descendants
209 209 startnode = ui.config('convert', 'hg.startrev')
210 210 if startnode is not None:
211 211 try:
212 212 startnode = self.repo.lookup(startnode)
213 213 except error.RepoError:
214 214 raise util.Abort(_('%s is not a valid start revision')
215 215 % startnode)
216 216 startrev = self.repo.changelog.rev(startnode)
217 217 children = {startnode: 1}
218 218 for rev in self.repo.changelog.descendants(startrev):
219 219 children[self.repo.changelog.node(rev)] = 1
220 220 self.keep = children.__contains__
221 221 else:
222 222 self.keep = util.always
223 223
224 224 def changectx(self, rev):
225 225 if self.lastrev != rev:
226 226 self.lastctx = self.repo[rev]
227 227 self.lastrev = rev
228 228 return self.lastctx
229 229
230 230 def parents(self, ctx):
231 231 return [p.node() for p in ctx.parents()
232 232 if p and self.keep(p.node())]
233 233
234 234 def getheads(self):
235 235 if self.rev:
236 236 heads = [self.repo[self.rev].node()]
237 237 else:
238 238 heads = self.repo.heads()
239 239 return [hex(h) for h in heads if self.keep(h)]
240 240
241 241 def getfile(self, name, rev):
242 242 try:
243 243 return self.changectx(rev)[name].data()
244 244 except error.LookupError, err:
245 245 raise IOError(err)
246 246
247 247 def getmode(self, name, rev):
248 248 return self.changectx(rev).manifest().flags(name)
249 249
250 250 def getchanges(self, rev):
251 251 ctx = self.changectx(rev)
252 252 parents = self.parents(ctx)
253 253 if not parents:
254 files = util.sort(ctx.manifest().keys())
254 files = sorted(ctx.manifest())
255 255 if self.ignoreerrors:
256 256 # calling getcopies() is a simple way to detect missing
257 257 # revlogs and populate self.ignored
258 258 self.getcopies(ctx, files)
259 259 return [(f, rev) for f in files if f not in self.ignored], {}
260 260 if self._changescache and self._changescache[0] == rev:
261 261 m, a, r = self._changescache[1]
262 262 else:
263 263 m, a, r = self.repo.status(parents[0], ctx.node())[:3]
264 264 # getcopies() detects missing revlogs early, run it before
265 265 # filtering the changes.
266 266 copies = self.getcopies(ctx, m + a)
267 267 changes = [(name, rev) for name in m + a + r
268 268 if name not in self.ignored]
269 return util.sort(changes), copies
269 return sorted(changes), copies
270 270
271 271 def getcopies(self, ctx, files):
272 272 copies = {}
273 273 for name in files:
274 274 if name in self.ignored:
275 275 continue
276 276 try:
277 277 copysource, copynode = ctx.filectx(name).renamed()
278 278 if copysource in self.ignored or not self.keep(copynode):
279 279 continue
280 280 copies[name] = copysource
281 281 except TypeError:
282 282 pass
283 283 except error.LookupError, e:
284 284 if not self.ignoreerrors:
285 285 raise
286 286 self.ignored[name] = 1
287 287 self.ui.warn(_('ignoring: %s\n') % e)
288 288 return copies
289 289
290 290 def getcommit(self, rev):
291 291 ctx = self.changectx(rev)
292 292 parents = [hex(p) for p in self.parents(ctx)]
293 293 if self.saverev:
294 294 crev = rev
295 295 else:
296 296 crev = None
297 297 return commit(author=ctx.user(), date=util.datestr(ctx.date()),
298 298 desc=ctx.description(), rev=crev, parents=parents,
299 299 branch=ctx.branch(), extra=ctx.extra())
300 300
301 301 def gettags(self):
302 302 tags = [t for t in self.repo.tagslist() if t[0] != 'tip']
303 303 return dict([(name, hex(node)) for name, node in tags
304 304 if self.keep(node)])
305 305
306 306 def getchangedfiles(self, rev, i):
307 307 ctx = self.changectx(rev)
308 308 parents = self.parents(ctx)
309 309 if not parents and i is None:
310 310 i = 0
311 311 changes = [], ctx.manifest().keys(), []
312 312 else:
313 313 i = i or 0
314 314 changes = self.repo.status(parents[i], ctx.node())[:3]
315 315 changes = [[f for f in l if f not in self.ignored] for l in changes]
316 316
317 317 if i == 0:
318 318 self._changescache = (rev, changes)
319 319
320 320 return changes[0] + changes[1] + changes[2]
321 321
322 322 def converted(self, rev, destrev):
323 323 if self.convertfp is None:
324 324 self.convertfp = open(os.path.join(self.path, '.hg', 'shamap'),
325 325 'a')
326 326 self.convertfp.write('%s %s\n' % (destrev, rev))
327 327 self.convertfp.flush()
328 328
329 329 def before(self):
330 330 self.ui.debug(_('run hg source pre-conversion action\n'))
331 331
332 332 def after(self):
333 333 self.ui.debug(_('run hg source post-conversion action\n'))
@@ -1,179 +1,179 b''
1 1 #
2 2 # Perforce source for convert extension.
3 3 #
4 4 # Copyright 2009, Frank Kingswood <frank@kingswood-consulting.co.uk>
5 5 #
6 6 # This software may be used and distributed according to the terms
7 7 # of the GNU General Public License, incorporated herein by reference.
8 8 #
9 9
10 10 from mercurial import util
11 11 from mercurial.i18n import _
12 12
13 13 from common import commit, converter_source, checktool, NoRepo
14 14 import marshal
15 15
16 16 def loaditer(f):
17 17 "Yield the dictionary objects generated by p4"
18 18 try:
19 19 while True:
20 20 d = marshal.load(f)
21 21 if not d:
22 22 break
23 23 yield d
24 24 except EOFError:
25 25 pass
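`loaditer` above consumes a stream of marshal-encoded dictionaries, which is the format `p4 -G` emits on stdout. A self-contained sketch of the same pattern against an in-memory stream, so no Perforce installation is needed; the function body mirrors the one above, and the sample records are made up:

```python
import io
import marshal

def loaditer(f):
    "Yield the dictionary objects read from stream f until EOF or an empty dict"
    try:
        while True:
            d = marshal.load(f)
            if not d:
                break
            yield d
    except EOFError:
        pass

# Simulate p4 -G output: consecutive marshalled dictionaries.
buf = io.BytesIO()
for rec in ({"change": "101"}, {"change": "102"}):
    marshal.dump(rec, buf)
buf.seek(0)
changes = [d["change"] for d in loaditer(buf)]
```

The `EOFError` handler is what lets the iterator terminate cleanly when the subprocess output simply ends rather than sending a sentinel.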
26 26
27 27 class p4_source(converter_source):
28 28 def __init__(self, ui, path, rev=None):
29 29 super(p4_source, self).__init__(ui, path, rev=rev)
30 30
31 31 if not path.startswith('//'):
32 32 raise NoRepo('%s does not look like a P4 repo' % path)
33 33
34 34 checktool('p4', abort=False)
35 35
36 36 self.p4changes = {}
37 37 self.heads = {}
38 38 self.changeset = {}
39 39 self.files = {}
40 40 self.tags = {}
41 41 self.lastbranch = {}
42 42 self.parent = {}
43 43 self.encoding = "latin_1"
44 44 self.depotname = {} # mapping from local name to depot name
45 45 self.modecache = {}
46 46
47 47 self._parse(ui, path)
48 48
49 49 def _parse_view(self, path):
50 50 "Read changes affecting the path"
51 51 cmd = 'p4 -G changes -s submitted "%s"' % path
52 52 stdout = util.popen(cmd)
53 53 for d in loaditer(stdout):
54 54 c = d.get("change", None)
55 55 if c:
56 56 self.p4changes[c] = True
57 57
58 58 def _parse(self, ui, path):
59 59 "Prepare list of P4 filenames and revisions to import"
60 60 ui.status(_('reading p4 views\n'))
61 61
62 62 # read client spec or view
63 63 if "/" in path:
64 64 self._parse_view(path)
65 65 if path.startswith("//") and path.endswith("/..."):
66 66 views = {path[:-3]:""}
67 67 else:
68 68 views = {"//": ""}
69 69 else:
70 70 cmd = 'p4 -G client -o "%s"' % path
71 71 clientspec = marshal.load(util.popen(cmd))
72 72
73 73 views = {}
74 74 for client in clientspec:
75 75 if client.startswith("View"):
76 76 sview, cview = clientspec[client].split()
77 77 self._parse_view(sview)
78 78 if sview.endswith("...") and cview.endswith("..."):
79 79 sview = sview[:-3]
80 80 cview = cview[:-3]
81 81 cview = cview[2:]
82 82 cview = cview[cview.find("/") + 1:]
83 83 views[sview] = cview
84 84
85 85 # list of changes that affect our source files
86 86 self.p4changes = self.p4changes.keys()
87 87 self.p4changes.sort(key=int)
88 88
89 89 # list with depot pathnames, longest first
90 90 vieworder = views.keys()
91 91 vieworder.sort(key=lambda x: -len(x))
92 92
93 93 # handle revision limiting
94 94 startrev = self.ui.config('convert', 'p4.startrev', default=0)
95 95 self.p4changes = [x for x in self.p4changes
96 96 if ((not startrev or int(x) >= int(startrev)) and
97 97 (not self.rev or int(x) <= int(self.rev)))]
98 98
99 99 # now read the full changelists to get the list of file revisions
100 100 ui.status(_('collecting p4 changelists\n'))
101 101 lastid = None
102 102 for change in self.p4changes:
103 103 cmd = "p4 -G describe %s" % change
104 104 stdout = util.popen(cmd)
105 105 d = marshal.load(stdout)
106 106
107 107 desc = self.recode(d["desc"])
108 108 shortdesc = desc.split("\n", 1)[0]
109 109 t = '%s %s' % (d["change"], repr(shortdesc)[1:-1])
110 110 ui.status(util.ellipsis(t, 80) + '\n')
111 111
112 112 if lastid:
113 113 parents = [lastid]
114 114 else:
115 115 parents = []
116 116
117 117 date = (int(d["time"]), 0) # timezone not set
118 118 c = commit(author=self.recode(d["user"]), date=util.datestr(date),
119 119 parents=parents, desc=desc, branch='', extra={"p4": change})
120 120
121 121 files = []
122 122 i = 0
123 123 while ("depotFile%d" % i) in d and ("rev%d" % i) in d:
124 124 oldname = d["depotFile%d" % i]
125 125 filename = None
126 126 for v in vieworder:
127 127 if oldname.startswith(v):
128 128 filename = views[v] + oldname[len(v):]
129 129 break
130 130 if filename:
131 131 files.append((filename, d["rev%d" % i]))
132 132 self.depotname[filename] = oldname
133 133 i += 1
134 134 self.changeset[change] = c
135 135 self.files[change] = files
136 136 lastid = change
137 137
138 138 if lastid:
139 139 self.heads = [lastid]
140 140
141 141 def getheads(self):
142 142 return self.heads
143 143
144 144 def getfile(self, name, rev):
145 145 cmd = 'p4 -G print "%s#%s"' % (self.depotname[name], rev)
146 146 stdout = util.popen(cmd)
147 147
148 148 mode = None
149 149 data = ""
150 150
151 151 for d in loaditer(stdout):
152 152 if d["code"] == "stat":
153 153 if "+x" in d["type"]:
154 154 mode = "x"
155 155 else:
156 156 mode = ""
157 157 elif d["code"] == "text":
158 158 data += d["data"]
159 159
160 160 if mode is None:
161 161 raise IOError()
162 162
163 163 self.modecache[(name, rev)] = mode
164 164 return data
165 165
166 166 def getmode(self, name, rev):
167 167 return self.modecache[(name, rev)]
168 168
169 169 def getchanges(self, rev):
170 170 return self.files[rev], {}
171 171
172 172 def getcommit(self, rev):
173 173 return self.changeset[rev]
174 174
175 175 def gettags(self):
176 176 return self.tags
177 177
178 178 def getchangedfiles(self, rev, i):
179 return util.sort([x[0] for x in self.files[rev]])
179 return sorted([x[0] for x in self.files[rev]])
@@ -1,1205 +1,1203 b''
1 1 # Subversion 1.4/1.5 Python API backend
2 2 #
3 3 # Copyright(C) 2007 Daniel Holth et al
4 4 #
5 5 # Configuration options:
6 6 #
7 7 # convert.svn.trunk
8 8 # Relative path to the trunk (default: "trunk")
9 9 # convert.svn.branches
10 10 # Relative path to tree of branches (default: "branches")
11 11 # convert.svn.tags
12 12 # Relative path to tree of tags (default: "tags")
13 13 #
14 14 # Set these in a hgrc, or on the command line as follows:
15 15 #
16 16 # hg convert --config convert.svn.trunk=wackoname [...]
17 17
18 18 import locale
19 19 import os
20 20 import re
21 21 import sys
22 22 import cPickle as pickle
23 23 import tempfile
24 24 import urllib
25 25
26 26 from mercurial import strutil, util
27 27 from mercurial.i18n import _
28 28
29 29 # Subversion stuff. Works best with very recent Python SVN bindings
30 30 # e.g. SVN 1.5 or backports. Thanks to the bzr folks for enhancing
31 31 # these bindings.
32 32
33 33 from cStringIO import StringIO
34 34
35 35 from common import NoRepo, MissingTool, commit, encodeargs, decodeargs
36 36 from common import commandline, converter_source, converter_sink, mapfile
37 37
38 38 try:
39 39 from svn.core import SubversionException, Pool
40 40 import svn
41 41 import svn.client
42 42 import svn.core
43 43 import svn.ra
44 44 import svn.delta
45 45 import transport
46 46 except ImportError:
47 47 pass
48 48
49 49 class SvnPathNotFound(Exception):
50 50 pass
51 51
52 52 def geturl(path):
53 53 try:
54 54 return svn.client.url_from_path(svn.core.svn_path_canonicalize(path))
55 55 except SubversionException:
56 56 pass
57 57 if os.path.isdir(path):
58 58 path = os.path.normpath(os.path.abspath(path))
59 59 if os.name == 'nt':
60 60 path = '/' + util.normpath(path)
61 61 return 'file://%s' % urllib.quote(path)
62 62 return path
63 63
64 64 def optrev(number):
65 65 optrev = svn.core.svn_opt_revision_t()
66 66 optrev.kind = svn.core.svn_opt_revision_number
67 67 optrev.value.number = number
68 68 return optrev
69 69
70 70 class changedpath(object):
71 71 def __init__(self, p):
72 72 self.copyfrom_path = p.copyfrom_path
73 73 self.copyfrom_rev = p.copyfrom_rev
74 74 self.action = p.action
75 75
76 76 def get_log_child(fp, url, paths, start, end, limit=0, discover_changed_paths=True,
77 77 strict_node_history=False):
78 78 protocol = -1
79 79 def receiver(orig_paths, revnum, author, date, message, pool):
80 80 if orig_paths is not None:
81 81 for k, v in orig_paths.iteritems():
82 82 orig_paths[k] = changedpath(v)
83 83 pickle.dump((orig_paths, revnum, author, date, message),
84 84 fp, protocol)
85 85
86 86 try:
87 87 # Use an ra of our own so that our parent can consume
88 88 # our results without confusing the server.
89 89 t = transport.SvnRaTransport(url=url)
90 90 svn.ra.get_log(t.ra, paths, start, end, limit,
91 91 discover_changed_paths,
92 92 strict_node_history,
93 93 receiver)
94 94 except SubversionException, (inst, num):
95 95 pickle.dump(num, fp, protocol)
96 96 except IOError:
97 97 # Caller may interrupt the iteration
98 98 pickle.dump(None, fp, protocol)
99 99 else:
100 100 pickle.dump(None, fp, protocol)
101 101 fp.close()
102 102 # With a large history, the cleanup process goes crazy and suddenly
103 103 # consumes a *huge* amount of memory. The output file being closed,
104 104 # there is no need for clean termination.
105 105 os._exit(0)
106 106
107 107 def debugsvnlog(ui, **opts):
108 108 """Fetch SVN log in a subprocess and channel them back to parent to
109 109 avoid memory collection issues.
110 110 """
111 111 util.set_binary(sys.stdin)
112 112 util.set_binary(sys.stdout)
113 113 args = decodeargs(sys.stdin.read())
114 114 get_log_child(sys.stdout, *args)
115 115
116 116 class logstream:
117 117 """Interruptible revision log iterator."""
118 118 def __init__(self, stdout):
119 119 self._stdout = stdout
120 120
121 121 def __iter__(self):
122 122 while True:
123 123 entry = pickle.load(self._stdout)
124 124 try:
125 125 orig_paths, revnum, author, date, message = entry
126 126 except:
127 127 if entry is None:
128 128 break
129 129 raise SubversionException("child raised exception", entry)
130 130 yield entry
131 131
132 132 def close(self):
133 133 if self._stdout:
134 134 self._stdout.close()
135 135 self._stdout = None
136 136
137 137
138 138 # Check to see if the given path is a local Subversion repo. Verify this by
139 139 # looking for several svn-specific files and directories in the given
140 140 # directory.
141 141 def filecheck(path, proto):
142 142 for x in ('locks', 'hooks', 'format', 'db', ):
143 143 if not os.path.exists(os.path.join(path, x)):
144 144 return False
145 145 return True
146 146
147 147 # Check to see if a given path is the root of an svn repo over http. We verify
148 148 # this by requesting a version-controlled URL we know can't exist and looking
149 149 # for the svn-specific "not found" XML.
150 150 def httpcheck(path, proto):
151 151 return ('<m:human-readable errcode="160013">' in
152 152 urllib.urlopen('%s://%s/!svn/ver/0/.svn' % (proto, path)).read())
153 153
154 154 protomap = {'http': httpcheck,
155 155 'https': httpcheck,
156 156 'file': filecheck,
157 157 }
158 158 def issvnurl(url):
159 159 if not '://' in url:
160 160 return False
161 161 proto, path = url.split('://', 1)
162 162 check = protomap.get(proto, lambda p, p2: False)
163 163 while '/' in path:
164 164 if check(path, proto):
165 165 return True
166 166 path = path.rsplit('/', 1)[0]
167 167 return False
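`issvnurl` above walks the URL path upward one component at a time, probing each prefix with the protocol's checker until one accepts it as a repository root. The same `rsplit` walk in isolation, returning the candidate paths in the order the loop would probe them (pure sketch, no network access):

```python
def candidate_roots(url):
    # Enumerate the paths that issvnurl-style probing would try,
    # from the full path up to (but not including) the bare host.
    if '://' not in url:
        return []
    proto, path = url.split('://', 1)
    out = []
    while '/' in path:
        out.append(path)
        path = path.rsplit('/', 1)[0]
    return out

roots = candidate_roots("http://host/svn/repo/trunk")
```

Walking upward matters because the URL handed to convert usually points inside the repository (e.g. at trunk), not at its root.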
168 168
169 169 # SVN conversion code stolen from bzr-svn and tailor
170 170 #
171 171 # Subversion looks like a versioned filesystem where branch structures
172 172 # are defined by convention and not enforced by the tool. First,
173 173 # we define the potential branches (modules) as "trunk" and "branches"
174 174 # children directories. Revisions are then identified by their
175 175 # module and revision number (and a repository identifier).
176 176 #
177 177 # The revision graph is really a tree (or a forest). By default, a
178 178 # revision parent is the previous revision in the same module. If the
179 179 # module directory is copied/moved from another module then the
180 180 # revision is the module root and its parent the source revision in
181 181 # the parent module. A revision has at most one parent.
182 182 #
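The parentage rule described above (a revision's parent is the previous revision in the same module, unless the module root was copied or moved from another module) can be sketched as a small pure function. The `(module, revnum)` tuples and the `copyinfo` dict shape are assumptions made for illustration; they are not the SVN bindings' API:

```python
def svn_parent(module, revnum, copyinfo):
    # copyinfo maps (module, revnum) -> (source_module, source_revnum)
    # for revisions where the module directory itself was copied/moved.
    if (module, revnum) in copyinfo:
        return copyinfo[(module, revnum)]
    if revnum <= 1:
        return None  # no earlier revision: root of this module's history
    # Default case: previous revision of the same module.
    return (module, revnum - 1)

copies = {("/branches/1.0", 10): ("/trunk", 9)}
```

Since each revision yields at most one result, following `svn_parent` repeatedly traces a simple path, which is why the comment above describes the revision graph as a tree (or forest).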
183 183 class svn_source(converter_source):
184 184 def __init__(self, ui, url, rev=None):
185 185 super(svn_source, self).__init__(ui, url, rev=rev)
186 186
187 187 if not (url.startswith('svn://') or url.startswith('svn+ssh://') or
188 188 (os.path.exists(url) and
189 189 os.path.exists(os.path.join(url, '.svn'))) or
190 190 issvnurl(url)):
191 191 raise NoRepo("%s does not look like a Subversion repo" % url)
192 192
193 193 try:
194 194 SubversionException
195 195 except NameError:
196 196 raise MissingTool(_('Subversion python bindings could not be loaded'))
197 197
198 198 try:
199 199 version = svn.core.SVN_VER_MAJOR, svn.core.SVN_VER_MINOR
200 200 if version < (1, 4):
201 201 raise MissingTool(_('Subversion python bindings %d.%d found, '
202 202 '1.4 or later required') % version)
203 203 except AttributeError:
204 204 raise MissingTool(_('Subversion python bindings are too old, 1.4 '
205 205 'or later required'))
206 206
207 207 self.encoding = locale.getpreferredencoding()
208 208 self.lastrevs = {}
209 209
210 210 latest = None
211 211 try:
212 212 # Support file://path@rev syntax. Useful e.g. to convert
213 213 # deleted branches.
214 214 at = url.rfind('@')
215 215 if at >= 0:
216 216 latest = int(url[at+1:])
217 217 url = url[:at]
218 218 except ValueError:
219 219 pass
220 220 self.url = geturl(url)
221 221 self.encoding = 'UTF-8' # Subversion is always nominal UTF-8
222 222 try:
223 223 self.transport = transport.SvnRaTransport(url=self.url)
224 224 self.ra = self.transport.ra
225 225 self.ctx = self.transport.client
226 226 self.baseurl = svn.ra.get_repos_root(self.ra)
227 227 # Module is either empty or a repository path starting with
228 228 # a slash and not ending with a slash.
229 229 self.module = urllib.unquote(self.url[len(self.baseurl):])
230 230 self.prevmodule = None
231 231 self.rootmodule = self.module
232 232 self.commits = {}
233 233 self.paths = {}
234 234 self.uuid = svn.ra.get_uuid(self.ra).decode(self.encoding)
235 235 except SubversionException:
236 236 ui.traceback()
237 237 raise NoRepo("%s does not look like a Subversion repo" % self.url)
238 238
239 239 if rev:
240 240 try:
241 241 latest = int(rev)
242 242 except ValueError:
243 243 raise util.Abort(_('svn: revision %s is not an integer') % rev)
244 244
245 245 self.startrev = self.ui.config('convert', 'svn.startrev', default=0)
246 246 try:
247 247 self.startrev = int(self.startrev)
248 248 if self.startrev < 0:
249 249 self.startrev = 0
250 250 except ValueError:
251 251 raise util.Abort(_('svn: start revision %s is not an integer')
252 252 % self.startrev)
253 253
254 254 try:
255 255 self.get_blacklist()
256 256 except IOError:
257 257 pass
258 258
259 259 self.head = self.latest(self.module, latest)
260 260 if not self.head:
261 261 raise util.Abort(_('no revision found in module %s') %
262 262 self.module.encode(self.encoding))
263 263 self.last_changed = self.revnum(self.head)
264 264
265 265 self._changescache = None
266 266
267 267 if os.path.exists(os.path.join(url, '.svn/entries')):
268 268 self.wc = url
269 269 else:
270 270 self.wc = None
271 271 self.convertfp = None
272 272
273 273 def setrevmap(self, revmap):
274 274 lastrevs = {}
275 275 for revid in revmap.iterkeys():
276 276 uuid, module, revnum = self.revsplit(revid)
277 277 lastrevnum = lastrevs.setdefault(module, revnum)
278 278 if revnum > lastrevnum:
279 279 lastrevs[module] = revnum
280 280 self.lastrevs = lastrevs
281 281
282 282 def exists(self, path, optrev):
283 283 try:
284 284 svn.client.ls(self.url.rstrip('/') + '/' + urllib.quote(path),
285 285 optrev, False, self.ctx)
286 286 return True
287 287 except SubversionException:
288 288 return False
289 289
290 290 def getheads(self):
291 291
292 292 def isdir(path, revnum):
293 293 kind = self._checkpath(path, revnum)
294 294 return kind == svn.core.svn_node_dir
295 295
296 296 def getcfgpath(name, rev):
297 297 cfgpath = self.ui.config('convert', 'svn.' + name)
298 298 if cfgpath is not None and cfgpath.strip() == '':
299 299 return None
300 300 path = (cfgpath or name).strip('/')
301 301 if not self.exists(path, rev):
302 302 if cfgpath:
303 303 raise util.Abort(_('expected %s to be at %r, but not found')
304 304 % (name, path))
305 305 return None
306 306 self.ui.note(_('found %s at %r\n') % (name, path))
307 307 return path
308 308
309 309 rev = optrev(self.last_changed)
310 310 oldmodule = ''
311 311 trunk = getcfgpath('trunk', rev)
312 312 self.tags = getcfgpath('tags', rev)
313 313 branches = getcfgpath('branches', rev)
314 314
315 315 # If the project has a trunk or branches, we will extract heads
316 316 # from them. We keep the project root otherwise.
317 317 if trunk:
318 318 oldmodule = self.module or ''
319 319 self.module += '/' + trunk
320 320 self.head = self.latest(self.module, self.last_changed)
321 321 if not self.head:
322 322 raise util.Abort(_('no revision found in module %s') %
323 323 self.module.encode(self.encoding))
324 324
325 325 # First head in the list is the module's head
326 326 self.heads = [self.head]
327 327 if self.tags is not None:
328 328 self.tags = '%s/%s' % (oldmodule, (self.tags or 'tags'))
329 329
330 330 # Check if branches bring a few more heads to the list
331 331 if branches:
332 332 rpath = self.url.strip('/')
333 333 branchnames = svn.client.ls(rpath + '/' + urllib.quote(branches),
334 334 rev, False, self.ctx)
335 335 for branch in branchnames.keys():
336 336 module = '%s/%s/%s' % (oldmodule, branches, branch)
337 337 if not isdir(module, self.last_changed):
338 338 continue
339 339 brevid = self.latest(module, self.last_changed)
340 340 if not brevid:
341 341 self.ui.note(_('ignoring empty branch %s\n') %
342 342 branch.encode(self.encoding))
343 343 continue
344 344 self.ui.note(_('found branch %s at %d\n') %
345 345 (branch, self.revnum(brevid)))
346 346 self.heads.append(brevid)
347 347
348 348 if self.startrev and self.heads:
349 349 if len(self.heads) > 1:
350 350 raise util.Abort(_('svn: start revision is not supported '
351 351 'with more than one branch'))
352 352 revnum = self.revnum(self.heads[0])
353 353 if revnum < self.startrev:
354 354 raise util.Abort(_('svn: no revision found after start revision %d')
355 355 % self.startrev)
356 356
357 357 return self.heads
358 358
359 359 def getfile(self, file, rev):
360 360 data, mode = self._getfile(file, rev)
361 361 self.modecache[(file, rev)] = mode
362 362 return data
363 363
364 364 def getmode(self, file, rev):
365 365 return self.modecache[(file, rev)]
366 366
367 367 def getchanges(self, rev):
368 368 if self._changescache and self._changescache[0] == rev:
369 369 return self._changescache[1]
370 370 self._changescache = None
371 371 self.modecache = {}
372 372 (paths, parents) = self.paths[rev]
373 373 if parents:
374 374 files, copies = self.expandpaths(rev, paths, parents)
375 375 else:
376 376 # Perform a full checkout on roots
377 377 uuid, module, revnum = self.revsplit(rev)
378 378 entries = svn.client.ls(self.baseurl + urllib.quote(module),
379 379 optrev(revnum), True, self.ctx)
380 380 files = [n for n,e in entries.iteritems()
381 381 if e.kind == svn.core.svn_node_file]
382 382 copies = {}
383 383
384 384 files.sort()
385 385 files = zip(files, [rev] * len(files))
386 386
387 387 # caller caches the result, so free it here to release memory
388 388 del self.paths[rev]
389 389 return (files, copies)
390 390
391 391 def getchangedfiles(self, rev, i):
392 392 changes = self.getchanges(rev)
393 393 self._changescache = (rev, changes)
394 394 return [f[0] for f in changes[0]]
395 395
396 396 def getcommit(self, rev):
397 397 if rev not in self.commits:
398 398 uuid, module, revnum = self.revsplit(rev)
399 399 self.module = module
400 400 self.reparent(module)
401 401 # We assume that:
402 402 # - requests for revisions after "stop" come from the
403 403 # revision graph backward traversal. Cache all of them
404 404 # down to stop, they will be used eventually.
405 405 # - requests for revisions before "stop" come to get
406 406 # isolated branches parents. Just fetch what is needed.
407 407 stop = self.lastrevs.get(module, 0)
408 408 if revnum < stop:
409 409 stop = revnum + 1
410 410 self._fetch_revisions(revnum, stop)
411 411 commit = self.commits[rev]
412 412 # caller caches the result, so free it here to release memory
413 413 del self.commits[rev]
414 414 return commit
415 415
416 416 def gettags(self):
417 417 tags = {}
418 418 if self.tags is None:
419 419 return tags
420 420
421 421 # svn tags are just a convention, project branches left in a
422 422 # 'tags' directory. There is no other relationship than
423 423 # ancestry, which is expensive to discover and makes them hard
424 424 # to update incrementally. Worse, past revisions may be
425 425 # referenced by tags far away in the future, requiring a deep
426 426 # history traversal on every calculation. Current code
427 427 # performs a single backward traversal, tracking moves within
428 428 # the tags directory (tag renaming) and recording a new tag
429 429 # every time a project is copied from outside the tags
430 430 # directory. It also lists deleted tags; this behaviour may
431 431 # change in the future.
432 432 pendings = []
433 433 tagspath = self.tags
434 434 start = svn.ra.get_latest_revnum(self.ra)
435 435 try:
436 436 for entry in self._getlog([self.tags], start, self.startrev):
437 437 origpaths, revnum, author, date, message = entry
438 438 copies = [(e.copyfrom_path, e.copyfrom_rev, p) for p, e
439 439 in origpaths.iteritems() if e.copyfrom_path]
440 440 copies.sort()
441 441 # Apply moves/copies from more specific to general
442 442 copies.reverse()
443 443
444 444 srctagspath = tagspath
445 445 if copies and copies[-1][2] == tagspath:
446 446 # Track tags directory moves
447 447 srctagspath = copies.pop()[0]
448 448
449 449 for source, sourcerev, dest in copies:
450 450 if not dest.startswith(tagspath + '/'):
451 451 continue
452 452 for tag in pendings:
453 453 if tag[0].startswith(dest):
454 454 tagpath = source + tag[0][len(dest):]
455 455 tag[:2] = [tagpath, sourcerev]
456 456 break
457 457 else:
458 458 pendings.append([source, sourcerev, dest.split('/')[-1]])
459 459
460 460 # Tell tag renamings from tag creations
461 461 remainings = []
462 462 for source, sourcerev, tagname in pendings:
463 463 if source.startswith(srctagspath):
464 464 remainings.append([source, sourcerev, tagname])
465 465 continue
466 466 # From revision may be fake, get one with changes
467 467 try:
468 468 tagid = self.latest(source, sourcerev)
469 469 if tagid:
470 470 tags[tagname] = tagid
471 471 except SvnPathNotFound:
472 472 # This happens when we are following directories we assumed
473 473 # were copied with their parents but were really created
474 474 # in the tag directory.
475 475 pass
476 476 pendings = remainings
477 477 tagspath = srctagspath
478 478
479 479 except SubversionException:
480 480 self.ui.note(_('no tags found at revision %d\n') % start)
481 481 return tags
482 482
483 483 def converted(self, rev, destrev):
484 484 if not self.wc:
485 485 return
486 486 if self.convertfp is None:
487 487 self.convertfp = open(os.path.join(self.wc, '.svn', 'hg-shamap'),
488 488 'a')
489 489 self.convertfp.write('%s %d\n' % (destrev, self.revnum(rev)))
490 490 self.convertfp.flush()
491 491
492 492 # -- helper functions --
493 493
494 494 def revid(self, revnum, module=None):
495 495 if not module:
496 496 module = self.module
497 497 return u"svn:%s%s@%s" % (self.uuid, module.decode(self.encoding),
498 498 revnum)
499 499
500 500 def revnum(self, rev):
501 501 return int(rev.split('@')[-1])
502 502
503 503 def revsplit(self, rev):
504 504 url, revnum = rev.encode(self.encoding).rsplit('@', 1)
505 505 revnum = int(revnum)
506 506 parts = url.split('/', 1)
507 507 uuid = parts.pop(0)[4:]
508 508 mod = ''
509 509 if parts:
510 510 mod = '/' + parts[0]
511 511 return uuid, mod, revnum
512 512
513 513 def latest(self, path, stop=0):
514 514 """Find the latest revid affecting path, up to stop. It may return
515 515 a revision in a different module, since a branch may be moved without
516 516 a change being reported. Return None if the computed module
517 517 does not belong to the rootmodule subtree.
518 518 """
519 519 if not path.startswith(self.rootmodule):
520 520 # Requests on foreign branches may be forbidden at server level
521 521 self.ui.debug(_('ignoring foreign branch %r\n') % path)
522 522 return None
523 523
524 524 if not stop:
525 525 stop = svn.ra.get_latest_revnum(self.ra)
526 526 try:
527 527 prevmodule = self.reparent('')
528 528 dirent = svn.ra.stat(self.ra, path.strip('/'), stop)
529 529 self.reparent(prevmodule)
530 530 except SubversionException:
531 531 dirent = None
532 532 if not dirent:
533 533 raise SvnPathNotFound(_('%s not found up to revision %d') % (path, stop))
534 534
535 535 # stat() gives us the previous revision on this line of development, but
536 536 # it might be in *another module*. Fetch the log and detect renames down
537 537 # to the latest revision.
538 538 stream = self._getlog([path], stop, dirent.created_rev)
539 539 try:
540 540 for entry in stream:
541 541 paths, revnum, author, date, message = entry
542 542 if revnum <= dirent.created_rev:
543 543 break
544 544
545 545 for p in paths:
546 546 if not path.startswith(p) or not paths[p].copyfrom_path:
547 547 continue
548 548 newpath = paths[p].copyfrom_path + path[len(p):]
549 549 self.ui.debug(_("branch renamed from %s to %s at %d\n") %
550 550 (path, newpath, revnum))
551 551 path = newpath
552 552 break
553 553 finally:
554 554 stream.close()
555 555
556 556 if not path.startswith(self.rootmodule):
557 557 self.ui.debug(_('ignoring foreign branch %r\n') % path)
558 558 return None
559 559 return self.revid(dirent.created_rev, path)
560 560
561 561 def get_blacklist(self):
562 562 """Avoid certain revision numbers.
563 563 It is not uncommon for two nearby revisions to cancel each other
564 564 out, e.g. 'I copied trunk into a subdirectory of itself instead
565 565 of making a branch'. The converted repository is significantly
566 566 smaller if we ignore such revisions."""
567 567 self.blacklist = set()
568 568 blacklist = self.blacklist
569 569 for line in file("blacklist.txt", "r"):
570 570 if not line.startswith("#"):
571 571 try:
572 572 svn_rev = int(line.strip())
573 573 blacklist.add(svn_rev)
574 574 except ValueError:
575 575 pass # not an integer or a comment
576 576
577 577 def is_blacklisted(self, svn_rev):
578 578 return svn_rev in self.blacklist
579 579
580 580 def reparent(self, module):
581 581 """Reparent the svn transport and return the previous parent."""
582 582 if self.prevmodule == module:
583 583 return module
584 584 svnurl = self.baseurl + urllib.quote(module)
585 585 prevmodule = self.prevmodule
586 586 if prevmodule is None:
587 587 prevmodule = ''
588 588 self.ui.debug(_("reparent to %s\n") % svnurl)
589 589 svn.ra.reparent(self.ra, svnurl)
590 590 self.prevmodule = module
591 591 return prevmodule
592 592
593 593 def expandpaths(self, rev, paths, parents):
594 594 entries = []
595 595 copyfrom = {} # Map of entrypath, revision for finding source of deleted revisions.
596 596 copies = {}
597 597
598 598 new_module, revnum = self.revsplit(rev)[1:]
599 599 if new_module != self.module:
600 600 self.module = new_module
601 601 self.reparent(self.module)
602 602
603 603 for path, ent in paths:
604 604 entrypath = self.getrelpath(path)
605 605 entry = entrypath.decode(self.encoding)
606 606
607 607 kind = self._checkpath(entrypath, revnum)
608 608 if kind == svn.core.svn_node_file:
609 609 entries.append(self.recode(entry))
610 610 if not ent.copyfrom_path or not parents:
611 611 continue
612 612 # Copy sources not in parent revisions cannot be represented;
613 613 # ignore their origin for now
614 614 pmodule, prevnum = self.revsplit(parents[0])[1:]
615 615 if ent.copyfrom_rev < prevnum:
616 616 continue
617 617 copyfrom_path = self.getrelpath(ent.copyfrom_path, pmodule)
618 618 if not copyfrom_path:
619 619 continue
620 620 self.ui.debug(_("copied to %s from %s@%s\n") %
621 621 (entrypath, copyfrom_path, ent.copyfrom_rev))
622 622 copies[self.recode(entry)] = self.recode(copyfrom_path)
623 623 elif kind == 0: # gone, but had better be a deleted *file*
624 624 self.ui.debug(_("gone from %s\n") % ent.copyfrom_rev)
625 625
626 626 # if a branch is created but entries are removed in the same
627 627 # changeset, get the right fromrev
628 628 # parents cannot be empty here: you cannot remove things from
629 629 # a root revision.
630 630 uuid, old_module, fromrev = self.revsplit(parents[0])
631 631
632 632 basepath = old_module + "/" + self.getrelpath(path)
633 633 entrypath = basepath
634 634
635 635 def lookup_parts(p):
636 636 rc = None
637 637 parts = p.split("/")
638 638 for i in range(len(parts)):
639 639 part = "/".join(parts[:i])
640 640 info = part, copyfrom.get(part, None)
641 641 if info[1] is not None:
642 642 self.ui.debug(_("found parent directory %s\n") % info[1])
643 643 rc = info
644 644 return rc
645 645
646 646 self.ui.debug(_("base, entry %s %s\n") % (basepath, entrypath))
647 647
648 648 frompath, froment = lookup_parts(entrypath) or (None, revnum - 1)
649 649
650 650 # need to remove fragment from lookup_parts and replace with copyfrom_path
651 651 if frompath is not None:
652 652 self.ui.debug(_("munge-o-matic\n"))
653 653 self.ui.debug(entrypath + '\n')
654 654 self.ui.debug(entrypath[len(frompath):] + '\n')
655 655 entrypath = froment.copyfrom_path + entrypath[len(frompath):]
656 656 fromrev = froment.copyfrom_rev
657 657 self.ui.debug(_("info: %s %s %s %s\n") % (frompath, froment, ent, entrypath))
658 658
659 659 # We can avoid the reparent calls if the module has not changed
660 660 # but it is probably not worth the pain.
661 661 prevmodule = self.reparent('')
662 662 fromkind = svn.ra.check_path(self.ra, entrypath.strip('/'), fromrev)
663 663 self.reparent(prevmodule)
664 664
665 665 if fromkind == svn.core.svn_node_file: # a deleted file
666 666 entries.append(self.recode(entry))
667 667 elif fromkind == svn.core.svn_node_dir:
668 668 # print "Deleted/moved non-file:", revnum, path, ent
669 669 # children = self._find_children(path, revnum - 1)
670 670 # print "find children %s@%d from %d action %s" % (path, revnum, ent.copyfrom_rev, ent.action)
671 671 # Sometimes this is tricky. For example: in
672 672 # The Subversion Repository revision 6940 a dir
673 673 # was copied and one of its files was deleted
674 674 # from the new location in the same commit. This
675 675 # code can't deal with that yet.
676 676 if ent.action == 'C':
677 677 children = self._find_children(path, fromrev)
678 678 else:
679 679 oroot = entrypath.strip('/')
680 680 nroot = path.strip('/')
681 681 children = self._find_children(oroot, fromrev)
682 682 children = [s.replace(oroot, nroot) for s in children]
683 683 # Mark all files (not directories) as deleted.
684 684 for child in children:
685 685 # Can we move a child directory and its
686 686 # parent in the same commit? (probably can). Could
687 687 # cause problems if instead of revnum -1,
688 688 # we have to look in (copyfrom_path, revnum - 1)
689 689 entrypath = self.getrelpath("/" + child, module=old_module)
690 690 if entrypath:
691 691 entry = self.recode(entrypath.decode(self.encoding))
692 692 if entry in copies:
693 693 # deleted file within a copy
694 694 del copies[entry]
695 695 else:
696 696 entries.append(entry)
697 697 else:
698 698 self.ui.debug(_('unknown path in revision %d: %s\n') % \
699 699 (revnum, path))
700 700 elif kind == svn.core.svn_node_dir:
701 701 # Should probably synthesize normal file entries
702 702 # and handle as above to clean up copy/rename handling.
703 703
704 704 # If the directory just had a prop change,
705 705 # then we shouldn't need to look for its children.
706 706 if ent.action == 'M':
707 707 continue
708 708
709 709 # Also this could create duplicate entries. Not sure
710 710 # whether this will matter. Maybe should make entries a set.
711 711 # print "Changed directory", revnum, path, ent.action, ent.copyfrom_path, ent.copyfrom_rev
712 712 # This will fail if a directory was copied
713 713 # from another branch and then some of its files
714 714 # were deleted in the same transaction.
715 children = util.sort(self._find_children(path, revnum))
715 children = sorted(self._find_children(path, revnum))
716 716 for child in children:
717 717 # Can we move a child directory and its
718 718 # parent in the same commit? (probably can). Could
719 719 # cause problems if instead of revnum -1,
720 720 # we have to look in (copyfrom_path, revnum - 1)
721 721 entrypath = self.getrelpath("/" + child)
722 722 # print child, self.module, entrypath
723 723 if entrypath:
724 724 # Need to filter out directories here...
725 725 kind = self._checkpath(entrypath, revnum)
726 726 if kind != svn.core.svn_node_dir:
727 727 entries.append(self.recode(entrypath))
728 728
729 729 # Copies here (must copy all from source)
730 730 # Probably not a real problem for us if
731 731 # source does not exist
732 732 if not ent.copyfrom_path or not parents:
733 733 continue
734 734 # Copy sources not in parent revisions cannot be represented;
735 735 # ignore their origin for now
736 736 pmodule, prevnum = self.revsplit(parents[0])[1:]
737 737 if ent.copyfrom_rev < prevnum:
738 738 continue
739 739 copyfrompath = ent.copyfrom_path.decode(self.encoding)
740 740 copyfrompath = self.getrelpath(copyfrompath, pmodule)
741 741 if not copyfrompath:
742 742 continue
743 743 copyfrom[path] = ent
744 744 self.ui.debug(_("mark %s came from %s:%d\n")
745 745 % (path, copyfrompath, ent.copyfrom_rev))
746 746 children = self._find_children(ent.copyfrom_path, ent.copyfrom_rev)
747 747 children.sort()
748 748 for child in children:
749 749 entrypath = self.getrelpath("/" + child, pmodule)
750 750 if not entrypath:
751 751 continue
752 752 entry = entrypath.decode(self.encoding)
753 753 copytopath = path + entry[len(copyfrompath):]
754 754 copytopath = self.getrelpath(copytopath)
755 755 copies[self.recode(copytopath)] = self.recode(entry, pmodule)
756 756
757 757 return (list(set(entries)), copies)
758 758
759 759 def _fetch_revisions(self, from_revnum, to_revnum):
760 760 if from_revnum < to_revnum:
761 761 from_revnum, to_revnum = to_revnum, from_revnum
762 762
763 763 self.child_cset = None
764 764
765 765 def parselogentry(orig_paths, revnum, author, date, message):
766 766 """Return the parsed commit object or None, and True if
767 767 the revision is a branch root.
768 768 """
769 769 self.ui.debug(_("parsing revision %d (%d changes)\n") %
770 770 (revnum, len(orig_paths)))
771 771
772 772 branched = False
773 773 rev = self.revid(revnum)
774 774 # branch log might return entries for a parent we already have
775 775
776 776 if rev in self.commits or revnum < to_revnum:
777 777 return None, branched
778 778
779 779 parents = []
780 780 # check whether this revision is the start of a branch or part
781 781 # of a branch renaming
782 orig_paths = util.sort(orig_paths.items())
782 orig_paths = sorted(orig_paths.iteritems())
783 783 root_paths = [(p,e) for p,e in orig_paths if self.module.startswith(p)]
784 784 if root_paths:
785 785 path, ent = root_paths[-1]
786 786 if ent.copyfrom_path:
787 787 branched = True
788 788 newpath = ent.copyfrom_path + self.module[len(path):]
789 789 # ent.copyfrom_rev may not be the actual last revision
790 790 previd = self.latest(newpath, ent.copyfrom_rev)
791 791 if previd is not None:
792 792 prevmodule, prevnum = self.revsplit(previd)[1:]
793 793 if prevnum >= self.startrev:
794 794 parents = [previd]
795 795 self.ui.note(_('found parent of branch %s at %d: %s\n') %
796 796 (self.module, prevnum, prevmodule))
797 797 else:
798 798 self.ui.debug(_("no copyfrom path, don't know what to do.\n"))
799 799
800 800 paths = []
801 801 # filter out unrelated paths
802 802 for path, ent in orig_paths:
803 803 if self.getrelpath(path) is None:
804 804 continue
805 805 paths.append((path, ent))
806 806
807 807 # Example SVN datetime. Includes microseconds.
808 808 # ISO-8601 conformant
809 809 # '2007-01-04T17:35:00.902377Z'
810 810 date = util.parsedate(date[:19] + " UTC", ["%Y-%m-%dT%H:%M:%S"])
811 811
812 812 log = message and self.recode(message) or ''
813 813 author = author and self.recode(author) or ''
814 814 try:
815 815 branch = self.module.split("/")[-1]
816 816 if branch == 'trunk':
817 817 branch = ''
818 818 except IndexError:
819 819 branch = None
820 820
821 821 cset = commit(author=author,
822 822 date=util.datestr(date),
823 823 desc=log,
824 824 parents=parents,
825 825 branch=branch,
826 826 rev=rev.encode('utf-8'))
827 827
828 828 self.commits[rev] = cset
829 829 # The parents list is *shared* among self.paths and the
830 830 # commit object. Both will be updated below.
831 831 self.paths[rev] = (paths, cset.parents)
832 832 if self.child_cset and not self.child_cset.parents:
833 833 self.child_cset.parents[:] = [rev]
834 834 self.child_cset = cset
835 835 return cset, branched
836 836
837 837 self.ui.note(_('fetching revision log for "%s" from %d to %d\n') %
838 838 (self.module, from_revnum, to_revnum))
839 839
840 840 try:
841 841 firstcset = None
842 842 lastonbranch = False
843 843 stream = self._getlog([self.module], from_revnum, to_revnum)
844 844 try:
845 845 for entry in stream:
846 846 paths, revnum, author, date, message = entry
847 847 if revnum < self.startrev:
848 848 lastonbranch = True
849 849 break
850 850 if self.is_blacklisted(revnum):
851 851 self.ui.note(_('skipping blacklisted revision %d\n')
852 852 % revnum)
853 853 continue
854 854 if not paths:
855 855 self.ui.debug(_('revision %d has no entries\n') % revnum)
856 856 continue
857 857 cset, lastonbranch = parselogentry(paths, revnum, author,
858 858 date, message)
859 859 if cset:
860 860 firstcset = cset
861 861 if lastonbranch:
862 862 break
863 863 finally:
864 864 stream.close()
865 865
866 866 if not lastonbranch and firstcset and not firstcset.parents:
867 867 # The first revision of the sequence (the last fetched one)
868 868 # has invalid parents if not a branch root. Find the parent
869 869 # revision now, if any.
870 870 try:
871 871 firstrevnum = self.revnum(firstcset.rev)
872 872 if firstrevnum > 1:
873 873 latest = self.latest(self.module, firstrevnum - 1)
874 874 if latest:
875 875 firstcset.parents.append(latest)
876 876 except SvnPathNotFound:
877 877 pass
878 878 except SubversionException, (inst, num):
879 879 if num == svn.core.SVN_ERR_FS_NO_SUCH_REVISION:
880 880 raise util.Abort(_('svn: branch has no revision %s') % to_revnum)
881 881 raise
882 882
883 883 def _getfile(self, file, rev):
884 884 # TODO: ra.get_file transmits the whole file instead of diffs.
885 885 mode = ''
886 886 try:
887 887 new_module, revnum = self.revsplit(rev)[1:]
888 888 if self.module != new_module:
889 889 self.module = new_module
890 890 self.reparent(self.module)
891 891 io = StringIO()
892 892 info = svn.ra.get_file(self.ra, file, revnum, io)
893 893 data = io.getvalue()
894 894 # ra.get_file() seems to keep a reference on the input buffer
895 895 # preventing collection. Release it explicitly.
896 896 io.close()
897 897 if isinstance(info, list):
898 898 info = info[-1]
899 899 mode = ("svn:executable" in info) and 'x' or ''
900 900 mode = ("svn:special" in info) and 'l' or mode
901 901 except SubversionException, e:
902 902 notfound = (svn.core.SVN_ERR_FS_NOT_FOUND,
903 903 svn.core.SVN_ERR_RA_DAV_PATH_NOT_FOUND)
904 904 if e.apr_err in notfound: # File not found
905 905 raise IOError()
906 906 raise
907 907 if mode == 'l':
908 908 link_prefix = "link "
909 909 if data.startswith(link_prefix):
910 910 data = data[len(link_prefix):]
911 911 return data, mode
912 912
913 913 def _find_children(self, path, revnum):
914 914 path = path.strip('/')
915 915 pool = Pool()
916 916 rpath = '/'.join([self.baseurl, urllib.quote(path)]).strip('/')
917 917 return ['%s/%s' % (path, x) for x in
918 918 svn.client.ls(rpath, optrev(revnum), True, self.ctx, pool).keys()]
919 919
920 920 def getrelpath(self, path, module=None):
921 921 if module is None:
922 922 module = self.module
923 923 # Given the repository url of this wc, say
924 924 # "http://server/plone/CMFPlone/branches/Plone-2_0-branch"
925 925 # extract the "entry" portion (a relative path) from what
926 926 # svn log --xml says, ie
927 927 # "/CMFPlone/branches/Plone-2_0-branch/tests/PloneTestCase.py"
928 928 # that is to say "tests/PloneTestCase.py"
929 929 if path.startswith(module):
930 930 relative = path.rstrip('/')[len(module):]
931 931 if relative.startswith('/'):
932 932 return relative[1:]
933 933 elif relative == '':
934 934 return relative
935 935
936 936 # The path is outside our tracked tree...
937 937 self.ui.debug(_('%r is not under %r, ignoring\n') % (path, module))
938 938 return None
939 939
940 940 def _checkpath(self, path, revnum):
941 941 # ra.check_path does not like leading slashes very much, it leads
942 942 # to PROPFIND subversion errors
943 943 return svn.ra.check_path(self.ra, path.strip('/'), revnum)
944 944
945 945 def _getlog(self, paths, start, end, limit=0, discover_changed_paths=True,
946 946 strict_node_history=False):
947 947 # Normalize path names, svn >= 1.5 only wants paths relative to
948 948 # supplied URL
949 949 relpaths = []
950 950 for p in paths:
951 951 if not p.startswith('/'):
952 952 p = self.module + '/' + p
953 953 relpaths.append(p.strip('/'))
954 954 args = [self.baseurl, relpaths, start, end, limit, discover_changed_paths,
955 955 strict_node_history]
956 956 arg = encodeargs(args)
957 957 hgexe = util.hgexecutable()
958 958 cmd = '%s debugsvnlog' % util.shellquote(hgexe)
959 959 stdin, stdout = util.popen2(cmd, 'b')
960 960 stdin.write(arg)
961 961 stdin.close()
962 962 return logstream(stdout)
963 963
964 964 pre_revprop_change = '''#!/bin/sh
965 965
966 966 REPOS="$1"
967 967 REV="$2"
968 968 USER="$3"
969 969 PROPNAME="$4"
970 970 ACTION="$5"
971 971
972 972 if [ "$ACTION" = "M" -a "$PROPNAME" = "svn:log" ]; then exit 0; fi
973 973 if [ "$ACTION" = "A" -a "$PROPNAME" = "hg:convert-branch" ]; then exit 0; fi
974 974 if [ "$ACTION" = "A" -a "$PROPNAME" = "hg:convert-rev" ]; then exit 0; fi
975 975
976 976 echo "Changing prohibited revision property" >&2
977 977 exit 1
978 978 '''
979 979
980 980 class svn_sink(converter_sink, commandline):
981 981 commit_re = re.compile(r'Committed revision (\d+).', re.M)
982 982
983 983 def prerun(self):
984 984 if self.wc:
985 985 os.chdir(self.wc)
986 986
987 987 def postrun(self):
988 988 if self.wc:
989 989 os.chdir(self.cwd)
990 990
991 991 def join(self, name):
992 992 return os.path.join(self.wc, '.svn', name)
993 993
994 994 def revmapfile(self):
995 995 return self.join('hg-shamap')
996 996
997 997 def authorfile(self):
998 998 return self.join('hg-authormap')
999 999
1000 1000 def __init__(self, ui, path):
1001 1001 converter_sink.__init__(self, ui, path)
1002 1002 commandline.__init__(self, ui, 'svn')
1003 1003 self.delete = []
1004 1004 self.setexec = []
1005 1005 self.delexec = []
1006 1006 self.copies = []
1007 1007 self.wc = None
1008 1008 self.cwd = os.getcwd()
1009 1009
1010 1010 path = os.path.realpath(path)
1011 1011
1012 1012 created = False
1013 1013 if os.path.isfile(os.path.join(path, '.svn', 'entries')):
1014 1014 self.wc = path
1015 1015 self.run0('update')
1016 1016 else:
1017 1017 wcpath = os.path.join(os.getcwd(), os.path.basename(path) + '-wc')
1018 1018
1019 1019 if os.path.isdir(os.path.dirname(path)):
1020 1020 if not os.path.exists(os.path.join(path, 'db', 'fs-type')):
1021 1021 ui.status(_('initializing svn repo %r\n') %
1022 1022 os.path.basename(path))
1023 1023 commandline(ui, 'svnadmin').run0('create', path)
1024 1024 created = path
1025 1025 path = util.normpath(path)
1026 1026 if not path.startswith('/'):
1027 1027 path = '/' + path
1028 1028 path = 'file://' + path
1029 1029
1030 1030 ui.status(_('initializing svn wc %r\n') % os.path.basename(wcpath))
1031 1031 self.run0('checkout', path, wcpath)
1032 1032
1033 1033 self.wc = wcpath
1034 1034 self.opener = util.opener(self.wc)
1035 1035 self.wopener = util.opener(self.wc)
1036 1036 self.childmap = mapfile(ui, self.join('hg-childmap'))
1037 1037 self.is_exec = util.checkexec(self.wc) and util.is_exec or None
1038 1038
1039 1039 if created:
1040 1040 hook = os.path.join(created, 'hooks', 'pre-revprop-change')
1041 1041 fp = open(hook, 'w')
1042 1042 fp.write(pre_revprop_change)
1043 1043 fp.close()
1044 1044 util.set_flags(hook, False, True)
1045 1045
1046 1046 xport = transport.SvnRaTransport(url=geturl(path))
1047 1047 self.uuid = svn.ra.get_uuid(xport.ra)
1048 1048
1049 1049 def wjoin(self, *names):
1050 1050 return os.path.join(self.wc, *names)
1051 1051
1052 1052 def putfile(self, filename, flags, data):
1053 1053 if 'l' in flags:
1054 1054 self.wopener.symlink(data, filename)
1055 1055 else:
1056 1056 try:
1057 1057 if os.path.islink(self.wjoin(filename)):
1058 1058 os.unlink(filename)
1059 1059 except OSError:
1060 1060 pass
1061 1061 self.wopener(filename, 'w').write(data)
1062 1062
1063 1063 if self.is_exec:
1064 1064 was_exec = self.is_exec(self.wjoin(filename))
1065 1065 else:
1066 1066 # On filesystems not supporting execute-bit, there is no way
1067 1067 # to know if it is set except by asking subversion. Setting it
1068 1068 # systematically is just as expensive and much simpler.
1069 1069 was_exec = 'x' not in flags
1070 1070
1071 1071 util.set_flags(self.wjoin(filename), False, 'x' in flags)
1072 1072 if was_exec:
1073 1073 if 'x' not in flags:
1074 1074 self.delexec.append(filename)
1075 1075 else:
1076 1076 if 'x' in flags:
1077 1077 self.setexec.append(filename)
1078 1078
1079 1079 def _copyfile(self, source, dest):
1080 1080 # SVN's copy command pukes if the destination file exists, but
1081 1081 # our copyfile method expects to record a copy that has
1082 1082 # already occurred. Cross the semantic gap.
1083 1083 wdest = self.wjoin(dest)
1084 1084 exists = os.path.exists(wdest)
1085 1085 if exists:
1086 1086 fd, tempname = tempfile.mkstemp(
1087 1087 prefix='hg-copy-', dir=os.path.dirname(wdest))
1088 1088 os.close(fd)
1089 1089 os.unlink(tempname)
1090 1090 os.rename(wdest, tempname)
1091 1091 try:
1092 1092 self.run0('copy', source, dest)
1093 1093 finally:
1094 1094 if exists:
1095 1095 try:
1096 1096 os.unlink(wdest)
1097 1097 except OSError:
1098 1098 pass
1099 1099 os.rename(tempname, wdest)
1100 1100
1101 1101 def dirs_of(self, files):
1102 1102 dirs = set()
1103 1103 for f in files:
1104 1104 if os.path.isdir(self.wjoin(f)):
1105 1105 dirs.add(f)
1106 1106 for i in strutil.rfindall(f, '/'):
1107 1107 dirs.add(f[:i])
1108 1108 return dirs
1109 1109
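dirs_of collects each file's containing directory plus every ancestor directory, using strutil.rfindall to locate each '/' from the right. The ancestor computation on its own can be sketched without strutil (parent_dirs is a hypothetical name, not part of the source):

```python
def parent_dirs(path):
    """Yield each ancestor directory of a '/'-separated path, deepest first."""
    i = path.rfind('/')
    while i != -1:
        yield path[:i]
        i = path.rfind('/', 0, i)

assert list(parent_dirs('a/b/c.txt')) == ['a/b', 'a']
assert list(parent_dirs('toplevel')) == []
```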
1110 1110 def add_dirs(self, files):
1111 add_dirs = [d for d in util.sort(self.dirs_of(files))
1111 add_dirs = [d for d in sorted(self.dirs_of(files))
1112 1112 if not os.path.exists(self.wjoin(d, '.svn', 'entries'))]
1113 1113 if add_dirs:
1114 1114 self.xargs(add_dirs, 'add', non_recursive=True, quiet=True)
1115 1115 return add_dirs
1116 1116
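The hunk above is the point of this changeset: Mercurial's old util.sort helper, which returned a sorted copy of its argument, is dropped in favor of the sorted built-in, which does exactly that. A minimal sketch of the equivalence (util_sort is a stand-in for the removed helper):

```python
def util_sort(seq):
    # behaviour of the removed helper: return a new sorted list,
    # leaving the input untouched
    return sorted(seq)

dirs = {'a/b/c', 'a', 'a/b'}  # a set, as dirs_of() returns
assert util_sort(dirs) == sorted(dirs) == ['a', 'a/b', 'a/b/c']
```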
1117 1117 def add_files(self, files):
1118 1118 if files:
1119 1119 self.xargs(files, 'add', quiet=True)
1120 1120 return files
1121 1121
1122 1122 def tidy_dirs(self, names):
1123 dirs = util.sort(self.dirs_of(names))
1124 dirs.reverse()
1125 1123 deleted = []
1126 for d in dirs:
1124 for d in sorted(self.dirs_of(names), reverse=True):
1127 1125 wd = self.wjoin(d)
1128 1126             if os.listdir(wd) == ['.svn']:
1129 1127 self.run0('delete', d)
1130 1128 deleted.append(d)
1131 1129 return deleted
1132 1130
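tidy_dirs previously built a sorted list and reversed it in place; the rewrite folds both steps into a single sorted(..., reverse=True) call. The reverse order matters here: child directories must be visited before their parents so empty directories can be deleted bottom-up. A sketch of the equivalence:

```python
names = ['a', 'a/b', 'a/b/c', 'z']

# old two-step pattern
old = sorted(names)
old.reverse()

# new one-step pattern from the diff
new = sorted(names, reverse=True)

# deepest paths come first, so children are handled before parents
assert old == new == ['z', 'a/b/c', 'a/b', 'a']
```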
1133 1131 def addchild(self, parent, child):
1134 1132 self.childmap[parent] = child
1135 1133
1136 1134 def revid(self, rev):
1137 1135 return u"svn:%s@%s" % (self.uuid, rev)
1138 1136
1139 1137 def putcommit(self, files, copies, parents, commit, source):
1140 1138 # Apply changes to working copy
1141 1139 for f, v in files:
1142 1140 try:
1143 1141 data = source.getfile(f, v)
1144 1142 except IOError:
1145 1143 self.delete.append(f)
1146 1144 else:
1147 1145 e = source.getmode(f, v)
1148 1146 self.putfile(f, e, data)
1149 1147 if f in copies:
1150 1148 self.copies.append([copies[f], f])
1151 1149 files = [f[0] for f in files]
1152 1150
1153 1151 for parent in parents:
1154 1152 try:
1155 1153 return self.revid(self.childmap[parent])
1156 1154 except KeyError:
1157 1155 pass
1158 1156 entries = set(self.delete)
1159 1157 files = frozenset(files)
1160 1158 entries.update(self.add_dirs(files.difference(entries)))
1161 1159 if self.copies:
1162 1160 for s, d in self.copies:
1163 1161 self._copyfile(s, d)
1164 1162 self.copies = []
1165 1163 if self.delete:
1166 1164 self.xargs(self.delete, 'delete')
1167 1165 self.delete = []
1168 1166 entries.update(self.add_files(files.difference(entries)))
1169 1167 entries.update(self.tidy_dirs(entries))
1170 1168 if self.delexec:
1171 1169 self.xargs(self.delexec, 'propdel', 'svn:executable')
1172 1170 self.delexec = []
1173 1171 if self.setexec:
1174 1172 self.xargs(self.setexec, 'propset', 'svn:executable', '*')
1175 1173 self.setexec = []
1176 1174
1177 1175 fd, messagefile = tempfile.mkstemp(prefix='hg-convert-')
1178 1176 fp = os.fdopen(fd, 'w')
1179 1177 fp.write(commit.desc)
1180 1178 fp.close()
1181 1179 try:
1182 1180 output = self.run0('commit',
1183 1181 username=util.shortuser(commit.author),
1184 1182 file=messagefile,
1185 1183 encoding='utf-8')
1186 1184 try:
1187 1185 rev = self.commit_re.search(output).group(1)
1188 1186 except AttributeError:
1189 1187 self.ui.warn(_('unexpected svn output:\n'))
1190 1188 self.ui.warn(output)
1191 1189 raise util.Abort(_('unable to cope with svn output'))
1192 1190 if commit.rev:
1193 1191 self.run('propset', 'hg:convert-rev', commit.rev,
1194 1192 revprop=True, revision=rev)
1195 1193 if commit.branch and commit.branch != 'default':
1196 1194 self.run('propset', 'hg:convert-branch', commit.branch,
1197 1195 revprop=True, revision=rev)
1198 1196 for parent in parents:
1199 1197 self.addchild(parent, rev)
1200 1198 return self.revid(rev)
1201 1199 finally:
1202 1200 os.unlink(messagefile)
1203 1201
1204 1202 def puttags(self, tags):
1205 1203 self.ui.warn(_('XXX TAGS NOT IMPLEMENTED YET\n'))
@@ -1,762 +1,762 b''
1 1 # server.py - inotify status server
2 2 #
3 3 # Copyright 2006, 2007, 2008 Bryan O'Sullivan <bos@serpentine.com>
4 4 # Copyright 2007, 2008 Brendan Cully <brendan@kublai.com>
5 5 #
6 6 # This software may be used and distributed according to the terms
7 7 # of the GNU General Public License, incorporated herein by reference.
8 8
9 9 from mercurial.i18n import _
10 10 from mercurial import osutil, util
11 11 import common
12 12 import errno, os, select, socket, stat, struct, sys, tempfile, time
13 13
14 14 try:
15 15 import linux as inotify
16 16 from linux import watcher
17 17 except ImportError:
18 18 raise
19 19
20 20 class AlreadyStartedException(Exception): pass
21 21
22 22 def join(a, b):
23 23 if a:
24 24 if a[-1] == '/':
25 25 return a + b
26 26 return a + '/' + b
27 27 return b
28 28
29 29 walk_ignored_errors = (errno.ENOENT, errno.ENAMETOOLONG)
30 30
31 31 def walkrepodirs(repo):
32 32 '''Iterate over all subdirectories of this repo.
33 33 Exclude the .hg directory, any nested repos, and ignored dirs.'''
34 34 rootslash = repo.root + os.sep
35 35 def walkit(dirname, top):
36 36 hginside = False
37 37 try:
38 38 for name, kind in osutil.listdir(rootslash + dirname):
39 39 if kind == stat.S_IFDIR:
40 40 if name == '.hg':
41 41 hginside = True
42 42 if not top: break
43 43 else:
44 44 d = join(dirname, name)
45 45 if repo.dirstate._ignore(d):
46 46 continue
47 47 for subdir, hginsub in walkit(d, False):
48 48 if not hginsub:
49 49 yield subdir, False
50 50 except OSError, err:
51 51 if err.errno not in walk_ignored_errors:
52 52 raise
53 53 yield rootslash + dirname, hginside
54 54 for dirname, hginside in walkit('', True):
55 55 yield dirname
56 56
57 57 def walk(repo, root):
58 58 '''Like os.walk, but only yields regular files.'''
59 59
60 60 # This function is critical to performance during startup.
61 61
62 62 reporoot = root == ''
63 63 rootslash = repo.root + os.sep
64 64
65 65 def walkit(root, reporoot):
66 66 files, dirs = [], []
67 67 hginside = False
68 68
69 69 try:
70 70 fullpath = rootslash + root
71 71 for name, kind in osutil.listdir(fullpath):
72 72 if kind == stat.S_IFDIR:
73 73 if name == '.hg':
74 74 hginside = True
75 75 if reporoot:
76 76 continue
77 77 else:
78 78 break
79 79 dirs.append(name)
80 80 elif kind in (stat.S_IFREG, stat.S_IFLNK):
81 81 path = join(root, name)
82 82 files.append((name, kind))
83 83
84 84 yield hginside, fullpath, dirs, files
85 85
86 86 for subdir in dirs:
87 87 path = join(root, subdir)
88 88 if repo.dirstate._ignore(path):
89 89 continue
90 90 for result in walkit(path, False):
91 91 if not result[0]:
92 92 yield result
93 93 except OSError, err:
94 94 if err.errno not in walk_ignored_errors:
95 95 raise
96 96 for result in walkit(root, reporoot):
97 97 yield result[1:]
98 98
99 99 def _explain_watch_limit(ui, repo, count):
100 100 path = '/proc/sys/fs/inotify/max_user_watches'
101 101 try:
102 102 limit = int(file(path).read())
103 103 except IOError, err:
104 104 if err.errno != errno.ENOENT:
105 105 raise
106 106 raise util.Abort(_('this system does not seem to '
107 107 'support inotify'))
108 108 ui.warn(_('*** the current per-user limit on the number '
109 109 'of inotify watches is %s\n') % limit)
110 110 ui.warn(_('*** this limit is too low to watch every '
111 111 'directory in this repository\n'))
112 112 ui.warn(_('*** counting directories: '))
113 113 ndirs = len(list(walkrepodirs(repo)))
114 114 ui.warn(_('found %d\n') % ndirs)
115 115 newlimit = min(limit, 1024)
116 116 while newlimit < ((limit + ndirs) * 1.1):
117 117 newlimit *= 2
118 118 ui.warn(_('*** to raise the limit from %d to %d (run as root):\n') %
119 119 (limit, newlimit))
120 120 ui.warn(_('*** echo %d > %s\n') % (newlimit, path))
121 121 raise util.Abort(_('cannot watch %s until inotify watch limit is raised')
122 122 % repo.root)
123 123
124 124 class Watcher(object):
125 125 poll_events = select.POLLIN
126 126 statuskeys = 'almr!?'
127 127
128 128 def __init__(self, ui, repo, master):
129 129 self.ui = ui
130 130 self.repo = repo
131 131 self.wprefix = self.repo.wjoin('')
132 132 self.timeout = None
133 133 self.master = master
134 134 self.mask = (
135 135 inotify.IN_ATTRIB |
136 136 inotify.IN_CREATE |
137 137 inotify.IN_DELETE |
138 138 inotify.IN_DELETE_SELF |
139 139 inotify.IN_MODIFY |
140 140 inotify.IN_MOVED_FROM |
141 141 inotify.IN_MOVED_TO |
142 142 inotify.IN_MOVE_SELF |
143 143 inotify.IN_ONLYDIR |
144 144 inotify.IN_UNMOUNT |
145 145 0)
146 146 try:
147 147 self.watcher = watcher.Watcher()
148 148 except OSError, err:
149 149 raise util.Abort(_('inotify service not available: %s') %
150 150 err.strerror)
151 151 self.threshold = watcher.Threshold(self.watcher)
152 152 self.registered = True
153 153 self.fileno = self.watcher.fileno
154 154
155 155 self.repo.dirstate.__class__.inotifyserver = True
156 156
157 157 self.tree = {}
158 158 self.statcache = {}
159 159 self.statustrees = dict([(s, {}) for s in self.statuskeys])
160 160
161 161 self.watches = 0
162 162 self.last_event = None
163 163
164 164 self.eventq = {}
165 165 self.deferred = 0
166 166
167 167 self.ds_info = self.dirstate_info()
168 168 self.scan()
169 169
170 170 def event_time(self):
171 171 last = self.last_event
172 172 now = time.time()
173 173 self.last_event = now
174 174
175 175 if last is None:
176 176 return 'start'
177 177 delta = now - last
178 178 if delta < 5:
179 179 return '+%.3f' % delta
180 180 if delta < 50:
181 181 return '+%.2f' % delta
182 182 return '+%.1f' % delta
183 183
184 184 def dirstate_info(self):
185 185 try:
186 186 st = os.lstat(self.repo.join('dirstate'))
187 187 return st.st_mtime, st.st_ino
188 188 except OSError, err:
189 189 if err.errno != errno.ENOENT:
190 190 raise
191 191 return 0, 0
192 192
193 193 def add_watch(self, path, mask):
194 194 if not path:
195 195 return
196 196 if self.watcher.path(path) is None:
197 197 if self.ui.debugflag:
198 198 self.ui.note(_('watching %r\n') % path[len(self.wprefix):])
199 199 try:
200 200 self.watcher.add(path, mask)
201 201 self.watches += 1
202 202 except OSError, err:
203 203 if err.errno in (errno.ENOENT, errno.ENOTDIR):
204 204 return
205 205 if err.errno != errno.ENOSPC:
206 206 raise
207 207 _explain_watch_limit(self.ui, self.repo, self.watches)
208 208
209 209 def setup(self):
210 210 self.ui.note(_('watching directories under %r\n') % self.repo.root)
211 211 self.add_watch(self.repo.path, inotify.IN_DELETE)
212 212 self.check_dirstate()
213 213
214 214 def wpath(self, evt):
215 215 path = evt.fullpath
216 216 if path == self.repo.root:
217 217 return ''
218 218 if path.startswith(self.wprefix):
219 219 return path[len(self.wprefix):]
220 220         raise util.Abort(_('unexpected path: %s') % path)
221 221
222 222 def dir(self, tree, path):
223 223 if path:
224 224 for name in path.split('/'):
225 225 tree.setdefault(name, {})
226 226 tree = tree[name]
227 227 return tree
228 228
229 229 def lookup(self, path, tree):
230 230 if path:
231 231 try:
232 232 for name in path.split('/'):
233 233 tree = tree[name]
234 234 except KeyError:
235 235 return 'x'
236 236 except TypeError:
237 237 return 'd'
238 238 return tree
239 239
240 240 def split(self, path):
241 241 c = path.rfind('/')
242 242 if c == -1:
243 243 return '', path
244 244 return path[:c], path[c+1:]
245 245
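The split helper above is a hand-written rsplit on the last '/', returning an empty root for top-level names (str.rsplit alone would return a single-element list in that case). Its behavior, verbatim from the method body:

```python
def split(path):
    c = path.rfind('/')
    if c == -1:
        return '', path
    return path[:c], path[c+1:]

assert split('a/b/c.txt') == ('a/b', 'c.txt')
assert split('toplevel') == ('', 'toplevel')
```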
246 246 def filestatus(self, fn, st):
247 247 try:
248 248 type_, mode, size, time = self.repo.dirstate._map[fn][:4]
249 249 except KeyError:
250 250 type_ = '?'
251 251 if type_ == 'n':
252 252 if not st:
253 253 return '!'
254 254 st_mode, st_size, st_mtime = st
255 255 if size == -1:
256 256 return 'l'
257 257 if size and (size != st_size or (mode ^ st_mode) & 0100):
258 258 return 'm'
259 259 if time != int(st_mtime):
260 260 return 'l'
261 261 return 'n'
262 262 if type_ in 'ma' and not st:
263 263 return '!'
264 264 if type_ == '?' and self.repo.dirstate._ignore(fn):
265 265 return 'i'
266 266 return type_
267 267
268 268 def updatestatus(self, wfn, st=None, status=None):
269 269 if st:
270 270 status = self.filestatus(wfn, st)
271 271 else:
272 272 self.statcache.pop(wfn, None)
273 273 root, fn = self.split(wfn)
274 274 d = self.dir(self.tree, root)
275 275 oldstatus = d.get(fn)
276 276 isdir = False
277 277 if oldstatus:
278 278 try:
279 279 if not status:
280 280 if oldstatus in 'almn':
281 281 status = '!'
282 282 elif oldstatus == 'r':
283 283 status = 'r'
284 284 except TypeError:
285 285 # oldstatus may be a dict left behind by a deleted
286 286 # directory
287 287 isdir = True
288 288 else:
289 289 if oldstatus in self.statuskeys and oldstatus != status:
290 290 del self.dir(self.statustrees[oldstatus], root)[fn]
291 291 if self.ui.debugflag and oldstatus != status:
292 292 if isdir:
293 293 self.ui.note(_('status: %r dir(%d) -> %s\n') %
294 294 (wfn, len(oldstatus), status))
295 295 else:
296 296 self.ui.note(_('status: %r %s -> %s\n') %
297 297 (wfn, oldstatus, status))
298 298 if not isdir:
299 299 if status and status != 'i':
300 300 d[fn] = status
301 301 if status in self.statuskeys:
302 302 dd = self.dir(self.statustrees[status], root)
303 303 if oldstatus != status or fn not in dd:
304 304 dd[fn] = status
305 305 else:
306 306 d.pop(fn, None)
307 307 elif not status:
308 308 # a directory is being removed, check its contents
309 309 for subfile, b in oldstatus.copy().iteritems():
310 310 self.updatestatus(wfn + '/' + subfile, None)
311 311
312 312
313 313 def check_deleted(self, key):
314 314 # Files that had been deleted but were present in the dirstate
315 315 # may have vanished from the dirstate; we must clean them up.
316 316 nuke = []
317 317 for wfn, ignore in self.walk(key, self.statustrees[key]):
318 318 if wfn not in self.repo.dirstate:
319 319 nuke.append(wfn)
320 320 for wfn in nuke:
321 321 root, fn = self.split(wfn)
322 322 del self.dir(self.statustrees[key], root)[fn]
323 323 del self.dir(self.tree, root)[fn]
324 324
325 325 def scan(self, topdir=''):
326 326 self.handle_timeout()
327 327 ds = self.repo.dirstate._map.copy()
328 328 self.add_watch(join(self.repo.root, topdir), self.mask)
329 329 for root, dirs, entries in walk(self.repo, topdir):
330 330 for d in dirs:
331 331 self.add_watch(join(root, d), self.mask)
332 332 wroot = root[len(self.wprefix):]
333 333 d = self.dir(self.tree, wroot)
334 334 for fn, kind in entries:
335 335 wfn = join(wroot, fn)
336 336 self.updatestatus(wfn, self.getstat(wfn))
337 337 ds.pop(wfn, None)
338 338 wtopdir = topdir
339 339 if wtopdir and wtopdir[-1] != '/':
340 340 wtopdir += '/'
341 341 for wfn, state in ds.iteritems():
342 342 if not wfn.startswith(wtopdir):
343 343 continue
344 344 try:
345 345 st = self.stat(wfn)
346 346 except OSError:
347 347 status = state[0]
348 348 self.updatestatus(wfn, None, status=status)
349 349 else:
350 350 self.updatestatus(wfn, st)
351 351 self.check_deleted('!')
352 352 self.check_deleted('r')
353 353
354 354 def check_dirstate(self):
355 355 ds_info = self.dirstate_info()
356 356 if ds_info == self.ds_info:
357 357 return
358 358 self.ds_info = ds_info
359 359 if not self.ui.debugflag:
360 360 self.last_event = None
361 361 self.ui.note(_('%s dirstate reload\n') % self.event_time())
362 362 self.repo.dirstate.invalidate()
363 363 self.scan()
364 364 self.ui.note(_('%s end dirstate reload\n') % self.event_time())
365 365
366 366 def walk(self, states, tree, prefix=''):
367 367 # This is the "inner loop" when talking to the client.
368 368
369 369 for name, val in tree.iteritems():
370 370 path = join(prefix, name)
371 371 try:
372 372 if val in states:
373 373 yield path, val
374 374 except TypeError:
375 375 for p in self.walk(states, val, path):
376 376 yield p
377 377
378 378 def update_hgignore(self):
379 379 # An update of the ignore file can potentially change the
380 380 # states of all unknown and ignored files.
381 381
382 382 # XXX If the user has other ignore files outside the repo, or
383 383 # changes their list of ignore files at run time, we'll
384 384 # potentially never see changes to them. We could get the
385 385 # client to report to us what ignore data they're using.
386 386 # But it's easier to do nothing than to open that can of
387 387 # worms.
388 388
389 389 if '_ignore' in self.repo.dirstate.__dict__:
390 390 delattr(self.repo.dirstate, '_ignore')
391 391 self.ui.note(_('rescanning due to .hgignore change\n'))
392 392 self.scan()
393 393
394 394 def getstat(self, wpath):
395 395 try:
396 396 return self.statcache[wpath]
397 397 except KeyError:
398 398 try:
399 399 return self.stat(wpath)
400 400 except OSError, err:
401 401 if err.errno != errno.ENOENT:
402 402 raise
403 403
404 404 def stat(self, wpath):
405 405 try:
406 406 st = os.lstat(join(self.wprefix, wpath))
407 407 ret = st.st_mode, st.st_size, st.st_mtime
408 408 self.statcache[wpath] = ret
409 409 return ret
410 410 except OSError:
411 411 self.statcache.pop(wpath, None)
412 412 raise
413 413
414 414 def created(self, wpath):
415 415 if wpath == '.hgignore':
416 416 self.update_hgignore()
417 417 try:
418 418 st = self.stat(wpath)
419 419 if stat.S_ISREG(st[0]):
420 420 self.updatestatus(wpath, st)
421 421 except OSError:
422 422 pass
423 423
424 424 def modified(self, wpath):
425 425 if wpath == '.hgignore':
426 426 self.update_hgignore()
427 427 try:
428 428 st = self.stat(wpath)
429 429 if stat.S_ISREG(st[0]):
430 430 if self.repo.dirstate[wpath] in 'lmn':
431 431 self.updatestatus(wpath, st)
432 432 except OSError:
433 433 pass
434 434
435 435 def deleted(self, wpath):
436 436 if wpath == '.hgignore':
437 437 self.update_hgignore()
438 438 elif wpath.startswith('.hg/'):
439 439 if wpath == '.hg/wlock':
440 440 self.check_dirstate()
441 441 return
442 442
443 443 self.updatestatus(wpath, None)
444 444
445 445 def schedule_work(self, wpath, evt):
446 446 self.eventq.setdefault(wpath, [])
447 447 prev = self.eventq[wpath]
448 448 try:
449 449 if prev and evt == 'm' and prev[-1] in 'cm':
450 450 return
451 451 self.eventq[wpath].append(evt)
452 452 finally:
453 453 self.deferred += 1
454 454 self.timeout = 250
455 455
456 456 def deferred_event(self, wpath, evt):
457 457 if evt == 'c':
458 458 self.created(wpath)
459 459 elif evt == 'm':
460 460 self.modified(wpath)
461 461 elif evt == 'd':
462 462 self.deleted(wpath)
463 463
464 464 def process_create(self, wpath, evt):
465 465 if self.ui.debugflag:
466 466 self.ui.note(_('%s event: created %s\n') %
467 467 (self.event_time(), wpath))
468 468
469 469 if evt.mask & inotify.IN_ISDIR:
470 470 self.scan(wpath)
471 471 else:
472 472 self.schedule_work(wpath, 'c')
473 473
474 474 def process_delete(self, wpath, evt):
475 475 if self.ui.debugflag:
476 476 self.ui.note(_('%s event: deleted %s\n') %
477 477 (self.event_time(), wpath))
478 478
479 479 if evt.mask & inotify.IN_ISDIR:
480 480 self.scan(wpath)
481 481 self.schedule_work(wpath, 'd')
482 482
483 483 def process_modify(self, wpath, evt):
484 484 if self.ui.debugflag:
485 485 self.ui.note(_('%s event: modified %s\n') %
486 486 (self.event_time(), wpath))
487 487
488 488 if not (evt.mask & inotify.IN_ISDIR):
489 489 self.schedule_work(wpath, 'm')
490 490
491 491 def process_unmount(self, evt):
492 492 self.ui.warn(_('filesystem containing %s was unmounted\n') %
493 493 evt.fullpath)
494 494 sys.exit(0)
495 495
496 496 def handle_event(self, fd, event):
497 497 if self.ui.debugflag:
498 498 self.ui.note(_('%s readable: %d bytes\n') %
499 499 (self.event_time(), self.threshold.readable()))
500 500 if not self.threshold():
501 501 if self.registered:
502 502 if self.ui.debugflag:
503 503 self.ui.note(_('%s below threshold - unhooking\n') %
504 504 (self.event_time()))
505 505 self.master.poll.unregister(fd)
506 506 self.registered = False
507 507 self.timeout = 250
508 508 else:
509 509 self.read_events()
510 510
511 511 def read_events(self, bufsize=None):
512 512 events = self.watcher.read(bufsize)
513 513 if self.ui.debugflag:
514 514 self.ui.note(_('%s reading %d events\n') %
515 515 (self.event_time(), len(events)))
516 516 for evt in events:
517 517 wpath = self.wpath(evt)
518 518 if evt.mask & inotify.IN_UNMOUNT:
519 519                 self.process_unmount(evt)
520 520 elif evt.mask & (inotify.IN_MODIFY | inotify.IN_ATTRIB):
521 521 self.process_modify(wpath, evt)
522 522 elif evt.mask & (inotify.IN_DELETE | inotify.IN_DELETE_SELF |
523 523 inotify.IN_MOVED_FROM):
524 524 self.process_delete(wpath, evt)
525 525 elif evt.mask & (inotify.IN_CREATE | inotify.IN_MOVED_TO):
526 526 self.process_create(wpath, evt)
527 527
528 528 def handle_timeout(self):
529 529 if not self.registered:
530 530 if self.ui.debugflag:
531 531 self.ui.note(_('%s hooking back up with %d bytes readable\n') %
532 532 (self.event_time(), self.threshold.readable()))
533 533 self.read_events(0)
534 534 self.master.poll.register(self, select.POLLIN)
535 535 self.registered = True
536 536
537 537 if self.eventq:
538 538 if self.ui.debugflag:
539 539 self.ui.note(_('%s processing %d deferred events as %d\n') %
540 540 (self.event_time(), self.deferred,
541 541 len(self.eventq)))
542 for wpath, evts in util.sort(self.eventq.items()):
542 for wpath, evts in sorted(self.eventq.iteritems()):
543 543 for evt in evts:
544 544 self.deferred_event(wpath, evt)
545 545 self.eventq.clear()
546 546 self.deferred = 0
547 547 self.timeout = None
548 548
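The deferred-event loop gets the same rewrite: sorted() over the queue's items yields the pending paths in deterministic key order, regardless of the order events arrived in (dict iteration order is arbitrary on the Python 2.x this code targets). A sketch with an illustrative queue:

```python
eventq = {'b.txt': ['c', 'm'], 'a.txt': ['d']}

# sorting the (path, events) pairs pins down the processing order by path
order = [wpath for wpath, evts in sorted(eventq.items())]
assert order == ['a.txt', 'b.txt']
```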
549 549 def shutdown(self):
550 550 self.watcher.close()
551 551
552 552 class Server(object):
553 553 poll_events = select.POLLIN
554 554
555 555 def __init__(self, ui, repo, watcher, timeout):
556 556 self.ui = ui
557 557 self.repo = repo
558 558 self.watcher = watcher
559 559 self.timeout = timeout
560 560 self.sock = socket.socket(socket.AF_UNIX)
561 561 self.sockpath = self.repo.join('inotify.sock')
562 562 self.realsockpath = None
563 563 try:
564 564 self.sock.bind(self.sockpath)
565 565 except socket.error, err:
566 566 if err[0] == errno.EADDRINUSE:
567 567 raise AlreadyStartedException(_('could not start server: %s')
568 568 % err[1])
569 569 if err[0] == "AF_UNIX path too long":
570 570 tempdir = tempfile.mkdtemp(prefix="hg-inotify-")
571 571 self.realsockpath = os.path.join(tempdir, "inotify.sock")
572 572 try:
573 573 self.sock.bind(self.realsockpath)
574 574 os.symlink(self.realsockpath, self.sockpath)
575 575 except (OSError, socket.error), inst:
576 576 try:
577 577 os.unlink(self.realsockpath)
578 578 except:
579 579 pass
580 580 os.rmdir(tempdir)
581 581 if inst.errno == errno.EEXIST:
582 582 raise AlreadyStartedException(_('could not start server: %s')
583 583 % inst.strerror)
584 584 raise
585 585 else:
586 586 raise
587 587 self.sock.listen(5)
588 588 self.fileno = self.sock.fileno
589 589
590 590 def handle_timeout(self):
591 591 pass
592 592
593 593 def handle_event(self, fd, event):
594 594 sock, addr = self.sock.accept()
595 595
596 596 cs = common.recvcs(sock)
597 597 version = ord(cs.read(1))
598 598
599 599 sock.sendall(chr(common.version))
600 600
601 601 if version != common.version:
602 602 self.ui.warn(_('received query from incompatible client '
603 603 'version %d\n') % version)
604 604 return
605 605
606 606 names = cs.read().split('\0')
607 607
608 608 states = names.pop()
609 609
610 610 self.ui.note(_('answering query for %r\n') % states)
611 611
612 612 if self.watcher.timeout:
613 613 # We got a query while a rescan is pending. Make sure we
614 614 # rescan before responding, or we could give back a wrong
615 615 # answer.
616 616 self.watcher.handle_timeout()
617 617
618 618 if not names:
619 619 def genresult(states, tree):
620 620 for fn, state in self.watcher.walk(states, tree):
621 621 yield fn
622 622 else:
623 623 def genresult(states, tree):
624 624 for fn in names:
625 625 l = self.watcher.lookup(fn, tree)
626 626 try:
627 627 if l in states:
628 628 yield fn
629 629 except TypeError:
630 630 for f, s in self.watcher.walk(states, l, fn):
631 631 yield f
632 632
633 633 results = ['\0'.join(r) for r in [
634 634 genresult('l', self.watcher.statustrees['l']),
635 635 genresult('m', self.watcher.statustrees['m']),
636 636 genresult('a', self.watcher.statustrees['a']),
637 637 genresult('r', self.watcher.statustrees['r']),
638 638 genresult('!', self.watcher.statustrees['!']),
639 639 '?' in states and genresult('?', self.watcher.statustrees['?']) or [],
640 640 [],
641 641 'c' in states and genresult('n', self.watcher.tree) or [],
642 642 ]]
643 643
644 644 try:
645 645 try:
646 646 sock.sendall(struct.pack(common.resphdrfmt,
647 647 *map(len, results)))
648 648 sock.sendall(''.join(results))
649 649 finally:
650 650 sock.shutdown(socket.SHUT_WR)
651 651 except socket.error, err:
652 652 if err[0] != errno.EPIPE:
653 653 raise
654 654
655 655 def shutdown(self):
656 656 self.sock.close()
657 657 try:
658 658 os.unlink(self.sockpath)
659 659 if self.realsockpath:
660 660 os.unlink(self.realsockpath)
661 661 os.rmdir(os.path.dirname(self.realsockpath))
662 662 except OSError, err:
663 663 if err.errno != errno.ENOENT:
664 664 raise
665 665
666 666 class Master(object):
667 667 def __init__(self, ui, repo, timeout=None):
668 668 self.ui = ui
669 669 self.repo = repo
670 670 self.poll = select.poll()
671 671 self.watcher = Watcher(ui, repo, self)
672 672 self.server = Server(ui, repo, self.watcher, timeout)
673 673 self.table = {}
674 674 for obj in (self.watcher, self.server):
675 675 fd = obj.fileno()
676 676 self.table[fd] = obj
677 677 self.poll.register(fd, obj.poll_events)
678 678
679 679 def register(self, fd, mask):
680 680 self.poll.register(fd, mask)
681 681
682 682 def shutdown(self):
683 683 for obj in self.table.itervalues():
684 684 obj.shutdown()
685 685
686 686 def run(self):
687 687 self.watcher.setup()
688 688 self.ui.note(_('finished setup\n'))
689 689 if os.getenv('TIME_STARTUP'):
690 690 sys.exit(0)
691 691 while True:
692 692 timeout = None
693 693 timeobj = None
694 694 for obj in self.table.itervalues():
695 695 if obj.timeout is not None and (timeout is None or obj.timeout < timeout):
696 696 timeout, timeobj = obj.timeout, obj
697 697 try:
698 698 if self.ui.debugflag:
699 699 if timeout is None:
700 700 self.ui.note(_('polling: no timeout\n'))
701 701 else:
702 702 self.ui.note(_('polling: %sms timeout\n') % timeout)
703 703 events = self.poll.poll(timeout)
704 704 except select.error, err:
705 705 if err[0] == errno.EINTR:
706 706 continue
707 707 raise
708 708 if events:
709 709 for fd, event in events:
710 710 self.table[fd].handle_event(fd, event)
711 711 elif timeobj:
712 712 timeobj.handle_timeout()
713 713
714 714 def start(ui, repo):
715 715 def closefds(ignore):
716 716 # (from python bug #1177468)
717 717 # close all inherited file descriptors
718 718 # Python 2.4.1 and later use /dev/urandom to seed the random module's RNG
719 719 # a file descriptor is kept internally as os._urandomfd (created on demand
720 720 # the first time os.urandom() is called), and should not be closed
721 721 try:
722 722 os.urandom(4)
723 723 urandom_fd = getattr(os, '_urandomfd', None)
724 724 except AttributeError:
725 725 urandom_fd = None
726 726 ignore.append(urandom_fd)
727 727 for fd in range(3, 256):
728 728 if fd in ignore:
729 729 continue
730 730 try:
731 731 os.close(fd)
732 732 except OSError:
733 733 pass
734 734
735 735 m = Master(ui, repo)
736 736 sys.stdout.flush()
737 737 sys.stderr.flush()
738 738
739 739 pid = os.fork()
740 740 if pid:
741 741 return pid
742 742
743 743 closefds([m.server.fileno(), m.watcher.fileno()])
744 744 os.setsid()
745 745
746 746 fd = os.open('/dev/null', os.O_RDONLY)
747 747 os.dup2(fd, 0)
748 748 if fd > 0:
749 749 os.close(fd)
750 750
751 751 fd = os.open(ui.config('inotify', 'log', '/dev/null'),
752 752 os.O_RDWR | os.O_CREAT | os.O_TRUNC)
753 753 os.dup2(fd, 1)
754 754 os.dup2(fd, 2)
755 755 if fd > 2:
756 756 os.close(fd)
757 757
758 758 try:
759 759 m.run()
760 760 finally:
761 761 m.shutdown()
762 762 os._exit(0)
@@ -1,539 +1,539 b''
1 1 # keyword.py - $Keyword$ expansion for Mercurial
2 2 #
3 3 # Copyright 2007, 2008 Christian Ebert <blacktrash@gmx.net>
4 4 #
5 5 # This software may be used and distributed according to the terms
6 6 # of the GNU General Public License, incorporated herein by reference.
7 7 #
8 8 # $Id$
9 9 #
10 10 # Keyword expansion hack against the grain of a DSCM
11 11 #
12 12 # There are many good reasons why this is not needed in a distributed
13 13 # SCM, still it may be useful in very small projects based on single
14 14 # files (like LaTeX packages), that are mostly addressed to an
15 15 # audience not running a version control system.
16 16 #
17 17 # For in-depth discussion refer to
18 18 # <http://www.selenic.com/mercurial/wiki/index.cgi/KeywordPlan>.
19 19 #
20 20 # Keyword expansion is based on Mercurial's changeset template mappings.
21 21 #
22 22 # Binary files are not touched.
23 23 #
24 24 # Setup in hgrc:
25 25 #
26 26 # [extensions]
27 27 # # enable extension
28 28 # hgext.keyword =
29 29 #
30 30 # Files to act upon/ignore are specified in the [keyword] section.
31 31 # Customized keyword template mappings in the [keywordmaps] section.
32 32 #
33 33 # Run "hg help keyword" and "hg kwdemo" to get info on configuration.
34 34
35 35 '''keyword expansion in local repositories
36 36
37 37 This extension expands RCS/CVS-like or self-customized $Keywords$ in
38 38 tracked text files selected by your configuration.
39 39
40 40 Keywords are only expanded in local repositories and not stored in the
41 41 change history. The mechanism can be regarded as a convenience for the
42 42 current user or for archive distribution.
43 43
44 44 Configuration is done in the [keyword] and [keywordmaps] sections of
45 45 hgrc files.
46 46
47 47 Example:
48 48
49 49 [keyword]
50 50 # expand keywords in every python file except those matching "x*"
51 51 **.py =
52 52 x* = ignore
53 53
54 54 Note: the more specific you are in your filename patterns,
55 55 the less speed you lose in huge repositories.
56 56
57 57 For [keywordmaps] template mapping and expansion demonstration and
58 58 control run "hg kwdemo".
59 59
60 60 An additional date template filter {date|utcdate} is provided.
61 61
62 62 The default template mappings (view with "hg kwdemo -d") can be
63 63 replaced with customized keywords and templates. Again, run "hg
64 64 kwdemo" to control the results of your config changes.
65 65
66 66 Before changing/disabling active keywords, run "hg kwshrink" to avoid
67 67 the risk of inadvertently storing expanded keywords in the change
68 68 history.
69 69
70 70 To force expansion after enabling it, or a configuration change, run
71 71 "hg kwexpand".
72 72
73 73 Also, when committing with the record extension or using mq's qrecord,
74 74 be aware that keywords cannot be updated. Again, run "hg kwexpand" on
75 75 the files in question to update keyword expansions after all changes
76 76 have been checked in.
77 77
78 78 Expansions spanning more than one line and incremental expansions,
79 79 like CVS' $Log$, are not supported. A keyword template map
80 80 "Log = {desc}" expands to the first line of the changeset description.
81 81 '''
82 82
83 83 from mercurial import commands, cmdutil, dispatch, filelog, revlog, extensions
84 84 from mercurial import patch, localrepo, templater, templatefilters, util
85 85 from mercurial.hgweb import webcommands
86 86 from mercurial.lock import release
87 87 from mercurial.node import nullid, hex
88 88 from mercurial.i18n import _
89 89 import re, shutil, tempfile, time
90 90
91 91 commands.optionalrepo += ' kwdemo'
92 92
93 93 # hg commands that do not act on keywords
94 94 nokwcommands = ('add addremove annotate bundle copy export grep incoming init'
95 95 ' log outgoing push rename rollback tip verify'
96 96 ' convert email glog')
97 97
98 98 # hg commands that trigger expansion only when writing to working dir,
99 99 # not when reading filelog, and unexpand when reading from working dir
100 100 restricted = 'merge record resolve qfold qimport qnew qpush qrefresh qrecord'
101 101
102 102 def utcdate(date):
103 103 '''Returns hgdate in cvs-like UTC format.'''
104 104 return time.strftime('%Y/%m/%d %H:%M:%S', time.gmtime(date[0]))
105 105
106 106 # make keyword tools accessible
107 107 kwtools = {'templater': None, 'hgcmd': '', 'inc': [], 'exc': ['.hg*']}
108 108
109 109
110 110 class kwtemplater(object):
111 111 '''
112 112 Sets up keyword templates, corresponding keyword regex, and
113 113 provides keyword substitution functions.
114 114 '''
115 115 templates = {
116 116 'Revision': '{node|short}',
117 117 'Author': '{author|user}',
118 118 'Date': '{date|utcdate}',
119 119 'RCSFile': '{file|basename},v',
120 120 'Source': '{root}/{file},v',
121 121 'Id': '{file|basename},v {node|short} {date|utcdate} {author|user}',
122 122 'Header': '{root}/{file},v {node|short} {date|utcdate} {author|user}',
123 123 }
124 124
125 125 def __init__(self, ui, repo):
126 126 self.ui = ui
127 127 self.repo = repo
128 128 self.matcher = util.matcher(repo.root,
129 129 inc=kwtools['inc'], exc=kwtools['exc'])[1]
130 130 self.restrict = kwtools['hgcmd'] in restricted.split()
131 131
132 132 kwmaps = self.ui.configitems('keywordmaps')
133 133 if kwmaps: # override default templates
134 134 kwmaps = [(k, templater.parsestring(v, False))
135 135 for (k, v) in kwmaps]
136 136 self.templates = dict(kwmaps)
137 137 escaped = map(re.escape, self.templates.keys())
138 138 kwpat = r'\$(%s)(: [^$\n\r]*? )??\$' % '|'.join(escaped)
139 139 self.re_kw = re.compile(kwpat)
140 140
141 141 templatefilters.filters['utcdate'] = utcdate
142 142 self.ct = cmdutil.changeset_templater(self.ui, self.repo,
143 143 False, None, '', False)
144 144
145 145 def substitute(self, data, path, ctx, subfunc):
146 146 '''Replaces keywords in data with expanded template.'''
147 147 def kwsub(mobj):
148 148 kw = mobj.group(1)
149 149 self.ct.use_template(self.templates[kw])
150 150 self.ui.pushbuffer()
151 151 self.ct.show(ctx, root=self.repo.root, file=path)
152 152 ekw = templatefilters.firstline(self.ui.popbuffer())
153 153 return '$%s: %s $' % (kw, ekw)
154 154 return subfunc(kwsub, data)
155 155
156 156 def expand(self, path, node, data):
157 157 '''Returns data with keywords expanded.'''
158 158 if not self.restrict and self.matcher(path) and not util.binary(data):
159 159 ctx = self.repo.filectx(path, fileid=node).changectx()
160 160 return self.substitute(data, path, ctx, self.re_kw.sub)
161 161 return data
162 162
163 163 def iskwfile(self, path, flagfunc):
164 164 '''Returns true if path matches [keyword] pattern
165 165 and is not a symbolic link.
166 166 Caveat: localrepository._link fails on Windows.'''
167 167 return self.matcher(path) and not 'l' in flagfunc(path)
168 168
169 169 def overwrite(self, node, expand, files):
170 170 '''Overwrites selected files expanding/shrinking keywords.'''
171 171 ctx = self.repo[node]
172 172 mf = ctx.manifest()
173 173 if node is not None: # commit
174 174 files = [f for f in ctx.files() if f in mf]
175 175 notify = self.ui.debug
176 176 else: # kwexpand/kwshrink
177 177 notify = self.ui.note
178 178 candidates = [f for f in files if self.iskwfile(f, ctx.flags)]
179 179 if candidates:
180 180 self.restrict = True # do not expand when reading
181 181 msg = (expand and _('overwriting %s expanding keywords\n')
182 182 or _('overwriting %s shrinking keywords\n'))
183 183 for f in candidates:
184 184 fp = self.repo.file(f)
185 185 data = fp.read(mf[f])
186 186 if util.binary(data):
187 187 continue
188 188 if expand:
189 189 if node is None:
190 190 ctx = self.repo.filectx(f, fileid=mf[f]).changectx()
191 191 data, found = self.substitute(data, f, ctx,
192 192 self.re_kw.subn)
193 193 else:
194 194 found = self.re_kw.search(data)
195 195 if found:
196 196 notify(msg % f)
197 197 self.repo.wwrite(f, data, mf.flags(f))
198 198 self.repo.dirstate.normal(f)
199 199 self.restrict = False
200 200
201 201 def shrinktext(self, text):
202 202 '''Unconditionally removes all keyword substitutions from text.'''
203 203 return self.re_kw.sub(r'$\1$', text)
204 204
205 205 def shrink(self, fname, text):
206 206 '''Returns text with all keyword substitutions removed.'''
207 207 if self.matcher(fname) and not util.binary(text):
208 208 return self.shrinktext(text)
209 209 return text
210 210
211 211 def shrinklines(self, fname, lines):
212 212 '''Returns lines with keyword substitutions removed.'''
213 213 if self.matcher(fname):
214 214 text = ''.join(lines)
215 215 if not util.binary(text):
216 216 return self.shrinktext(text).splitlines(True)
217 217 return lines
218 218
219 219 def wread(self, fname, data):
220 220 '''If in restricted mode returns data read from wdir with
221 221 keyword substitutions removed.'''
222 222 return self.restrict and self.shrink(fname, data) or data
223 223
224 224 class kwfilelog(filelog.filelog):
225 225 '''
226 226 Subclass of filelog to hook into its read, add, cmp methods.
227 227 Keywords are "stored" unexpanded, and processed on reading.
228 228 '''
229 229 def __init__(self, opener, kwt, path):
230 230 super(kwfilelog, self).__init__(opener, path)
231 231 self.kwt = kwt
232 232 self.path = path
233 233
234 234 def read(self, node):
235 235 '''Expands keywords when reading filelog.'''
236 236 data = super(kwfilelog, self).read(node)
237 237 return self.kwt.expand(self.path, node, data)
238 238
239 239 def add(self, text, meta, tr, link, p1=None, p2=None):
240 240 '''Removes keyword substitutions when adding to filelog.'''
241 241 text = self.kwt.shrink(self.path, text)
242 242 return super(kwfilelog, self).add(text, meta, tr, link, p1, p2)
243 243
244 244 def cmp(self, node, text):
245 245 '''Removes keyword substitutions for comparison.'''
246 246 text = self.kwt.shrink(self.path, text)
247 247 if self.renamed(node):
248 248 t2 = super(kwfilelog, self).read(node)
249 249 return t2 != text
250 250 return revlog.revlog.cmp(self, node, text)
251 251
252 252 def _status(ui, repo, kwt, unknown, *pats, **opts):
253 253 '''Bails out if [keyword] configuration is not active.
254 254 Returns status of working directory.'''
255 255 if kwt:
256 256 matcher = cmdutil.match(repo, pats, opts)
257 257 return repo.status(match=matcher, unknown=unknown, clean=True)
258 258 if ui.configitems('keyword'):
259 259 raise util.Abort(_('[keyword] patterns cannot match'))
260 260 raise util.Abort(_('no [keyword] patterns configured'))
261 261
262 262 def _kwfwrite(ui, repo, expand, *pats, **opts):
263 263 '''Selects files and passes them to kwtemplater.overwrite.'''
264 264 if repo.dirstate.parents()[1] != nullid:
265 265 raise util.Abort(_('outstanding uncommitted merge'))
266 266 kwt = kwtools['templater']
267 267 status = _status(ui, repo, kwt, False, *pats, **opts)
268 268 modified, added, removed, deleted = status[:4]
269 269 if modified or added or removed or deleted:
270 270 raise util.Abort(_('outstanding uncommitted changes'))
271 271 wlock = lock = None
272 272 try:
273 273 wlock = repo.wlock()
274 274 lock = repo.lock()
275 275 kwt.overwrite(None, expand, status[6])
276 276 finally:
277 277 release(lock, wlock)
278 278
279 279 def demo(ui, repo, *args, **opts):
280 280 '''print [keywordmaps] configuration and an expansion example
281 281
282 282 Show current, custom, or default keyword template maps and their
283 283 expansion.
284 284
285 285 Extend current configuration by specifying maps as arguments and
286 286 optionally by reading from an additional hgrc file.
287 287
288 288 Override current keyword template maps with "default" option.
289 289 '''
290 290 def demostatus(stat):
291 291 ui.status(_('\n\t%s\n') % stat)
292 292
293 293 def demoitems(section, items):
294 294 ui.write('[%s]\n' % section)
295 295 for k, v in items:
296 296 ui.write('%s = %s\n' % (k, v))
297 297
298 298 msg = 'hg keyword config and expansion example'
299 299 kwstatus = 'current'
300 300 fn = 'demo.txt'
301 301 branchname = 'demobranch'
302 302 tmpdir = tempfile.mkdtemp('', 'kwdemo.')
303 303 ui.note(_('creating temporary repository at %s\n') % tmpdir)
304 304 repo = localrepo.localrepository(ui, tmpdir, True)
305 305 ui.setconfig('keyword', fn, '')
306 306 if args or opts.get('rcfile'):
307 307 kwstatus = 'custom'
308 308 if opts.get('rcfile'):
309 309 ui.readconfig(opts.get('rcfile'))
310 310 if opts.get('default'):
311 311 kwstatus = 'default'
312 312 kwmaps = kwtemplater.templates
313 313 if ui.configitems('keywordmaps'):
314 314 # override maps from optional rcfile
315 315 for k, v in kwmaps.iteritems():
316 316 ui.setconfig('keywordmaps', k, v)
317 317 elif args:
318 318 # simulate hgrc parsing
319 319 rcmaps = ['[keywordmaps]\n'] + [a + '\n' for a in args]
320 320 fp = repo.opener('hgrc', 'w')
321 321 fp.writelines(rcmaps)
322 322 fp.close()
323 323 ui.readconfig(repo.join('hgrc'))
324 324 if not opts.get('default'):
325 325 kwmaps = dict(ui.configitems('keywordmaps')) or kwtemplater.templates
326 326 uisetup(ui)
327 327 reposetup(ui, repo)
328 328 for k, v in ui.configitems('extensions'):
329 329 if k.endswith('keyword'):
330 330 extension = '%s = %s' % (k, v)
331 331 break
332 332 demostatus('config using %s keyword template maps' % kwstatus)
333 333 ui.write('[extensions]\n%s\n' % extension)
334 334 demoitems('keyword', ui.configitems('keyword'))
335 335 demoitems('keywordmaps', kwmaps.iteritems())
336 336 keywords = '$' + '$\n$'.join(kwmaps.keys()) + '$\n'
337 337 repo.wopener(fn, 'w').write(keywords)
338 338 repo.add([fn])
339 339 path = repo.wjoin(fn)
340 340 ui.note(_('\n%s keywords written to %s:\n') % (kwstatus, path))
341 341 ui.note(keywords)
342 342 ui.note('\nhg -R "%s" branch "%s"\n' % (tmpdir, branchname))
343 343 # silence branch command if not verbose
344 344 quiet = ui.quiet
345 345 ui.quiet = not ui.verbose
346 346 commands.branch(ui, repo, branchname)
347 347 ui.quiet = quiet
348 348 for name, cmd in ui.configitems('hooks'):
349 349 if name.split('.', 1)[0].find('commit') > -1:
350 350 repo.ui.setconfig('hooks', name, '')
351 351 ui.note(_('unhooked all commit hooks\n'))
352 352 ui.note('hg -R "%s" ci -m "%s"\n' % (tmpdir, msg))
353 353 repo.commit(text=msg)
354 354 fmt = ui.verbose and ' in %s' % path or ''
355 355 demostatus('%s keywords expanded%s' % (kwstatus, fmt))
356 356 ui.write(repo.wread(fn))
357 357 ui.debug(_('\nremoving temporary repository %s\n') % tmpdir)
358 358 shutil.rmtree(tmpdir, ignore_errors=True)
359 359
360 360 def expand(ui, repo, *pats, **opts):
361 361 '''expand keywords in working directory
362 362
363 363 Run after (re)enabling keyword expansion.
364 364
365 365 kwexpand refuses to run if given files contain local changes.
366 366 '''
367 367 # 3rd argument sets expansion to True
368 368 _kwfwrite(ui, repo, True, *pats, **opts)
369 369
370 370 def files(ui, repo, *pats, **opts):
371 371 '''print files currently configured for keyword expansion
372 372
373 373 Crosscheck which files in working directory are potential targets
374 374 for keyword expansion. That is, files matched by [keyword] config
375 375 patterns but not symlinks.
376 376 '''
377 377 kwt = kwtools['templater']
378 378 status = _status(ui, repo, kwt, opts.get('untracked'), *pats, **opts)
379 379 modified, added, removed, deleted, unknown, ignored, clean = status
380 files = util.sort(modified + added + clean + unknown)
380 files = sorted(modified + added + clean + unknown)
381 381 wctx = repo[None]
382 382 kwfiles = [f for f in files if kwt.iskwfile(f, wctx.flags)]
383 383 cwd = pats and repo.getcwd() or ''
384 384 kwfstats = not opts.get('ignore') and (('K', kwfiles),) or ()
385 385 if opts.get('all') or opts.get('ignore'):
386 386 kwfstats += (('I', [f for f in files if f not in kwfiles]),)
387 387 for char, filenames in kwfstats:
388 388 fmt = (opts.get('all') or ui.verbose) and '%s %%s\n' % char or '%s\n'
389 389 for f in filenames:
390 390 ui.write(fmt % repo.pathto(f, cwd))
391 391
392 392 def shrink(ui, repo, *pats, **opts):
393 393 '''revert expanded keywords in working directory
394 394
395 395 Run before changing/disabling active keywords or if you experience
396 396 problems with "hg import" or "hg merge".
397 397
398 398 kwshrink refuses to run if given files contain local changes.
399 399 '''
400 400 # 3rd argument sets expansion to False
401 401 _kwfwrite(ui, repo, False, *pats, **opts)
402 402
403 403
404 404 def uisetup(ui):
405 405 '''Collects [keyword] config in kwtools.
406 406 Monkeypatches dispatch._parse if needed.'''
407 407
408 408 for pat, opt in ui.configitems('keyword'):
409 409 if opt != 'ignore':
410 410 kwtools['inc'].append(pat)
411 411 else:
412 412 kwtools['exc'].append(pat)
413 413
414 414 if kwtools['inc']:
415 415 def kwdispatch_parse(orig, ui, args):
416 416 '''Monkeypatch dispatch._parse to obtain running hg command.'''
417 417 cmd, func, args, options, cmdoptions = orig(ui, args)
418 418 kwtools['hgcmd'] = cmd
419 419 return cmd, func, args, options, cmdoptions
420 420
421 421 extensions.wrapfunction(dispatch, '_parse', kwdispatch_parse)
422 422
423 423 def reposetup(ui, repo):
424 424 '''Sets up repo as kwrepo for keyword substitution.
425 425 Overrides file method to return kwfilelog instead of filelog
426 426 if file matches user configuration.
427 427 Wraps commit to overwrite configured files with updated
428 428 keyword substitutions.
429 429 Monkeypatches patch and webcommands.'''
430 430
431 431 try:
432 432 if (not repo.local() or not kwtools['inc']
433 433 or kwtools['hgcmd'] in nokwcommands.split()
434 434 or '.hg' in util.splitpath(repo.root)
435 435 or repo._url.startswith('bundle:')):
436 436 return
437 437 except AttributeError:
438 438 pass
439 439
440 440 kwtools['templater'] = kwt = kwtemplater(ui, repo)
441 441
442 442 class kwrepo(repo.__class__):
443 443 def file(self, f):
444 444 if f[0] == '/':
445 445 f = f[1:]
446 446 return kwfilelog(self.sopener, kwt, f)
447 447
448 448 def wread(self, filename):
449 449 data = super(kwrepo, self).wread(filename)
450 450 return kwt.wread(filename, data)
451 451
452 452 def commit(self, files=None, text='', user=None, date=None,
453 453 match=None, force=False, force_editor=False,
454 454 p1=None, p2=None, extra={}, empty_ok=False):
455 455 wlock = lock = None
456 456 _p1 = _p2 = None
457 457 try:
458 458 wlock = self.wlock()
459 459 lock = self.lock()
460 460 # store and postpone commit hooks
461 461 commithooks = {}
462 462 for name, cmd in ui.configitems('hooks'):
463 463 if name.split('.', 1)[0] == 'commit':
464 464 commithooks[name] = cmd
465 465 ui.setconfig('hooks', name, None)
466 466 if commithooks:
467 467 # store parents for commit hook environment
468 468 if p1 is None:
469 469 _p1, _p2 = repo.dirstate.parents()
470 470 else:
471 471 _p1, _p2 = p1, p2 or nullid
472 472 _p1 = hex(_p1)
473 473 if _p2 == nullid:
474 474 _p2 = ''
475 475 else:
476 476 _p2 = hex(_p2)
477 477
478 478 n = super(kwrepo, self).commit(files, text, user, date, match,
479 479 force, force_editor, p1, p2,
480 480 extra, empty_ok)
481 481
482 482 # restore commit hooks
483 483 for name, cmd in commithooks.iteritems():
484 484 ui.setconfig('hooks', name, cmd)
485 485 if n is not None:
486 486 kwt.overwrite(n, True, None)
487 487 repo.hook('commit', node=n, parent1=_p1, parent2=_p2)
488 488 return n
489 489 finally:
490 490 release(lock, wlock)
491 491
492 492 # monkeypatches
493 493 def kwpatchfile_init(orig, self, ui, fname, opener, missing=False):
494 494 '''Monkeypatch/wrap patch.patchfile.__init__ to avoid
495 495 rejects or conflicts due to expanded keywords in working dir.'''
496 496 orig(self, ui, fname, opener, missing)
497 497 # shrink keywords read from working dir
498 498 self.lines = kwt.shrinklines(self.fname, self.lines)
499 499
500 500 def kw_diff(orig, repo, node1=None, node2=None, match=None, changes=None,
501 501 opts=None):
502 502 '''Monkeypatch patch.diff to avoid expansion except when
503 503 comparing against working dir.'''
504 504 if node2 is not None:
505 505 kwt.matcher = util.never
506 506 elif node1 is not None and node1 != repo['.'].node():
507 507 kwt.restrict = True
508 508 return orig(repo, node1, node2, match, changes, opts)
509 509
510 510 def kwweb_skip(orig, web, req, tmpl):
511 511 '''Wraps webcommands.x turning off keyword expansion.'''
512 512 kwt.matcher = util.never
513 513 return orig(web, req, tmpl)
514 514
515 515 repo.__class__ = kwrepo
516 516
517 517 extensions.wrapfunction(patch.patchfile, '__init__', kwpatchfile_init)
518 518 extensions.wrapfunction(patch, 'diff', kw_diff)
519 519 for c in 'annotate changeset rev filediff diff'.split():
520 520 extensions.wrapfunction(webcommands, c, kwweb_skip)
521 521
522 522 cmdtable = {
523 523 'kwdemo':
524 524 (demo,
525 525 [('d', 'default', None, _('show default keyword template maps')),
526 526 ('f', 'rcfile', [], _('read maps from rcfile'))],
527 527 _('hg kwdemo [-d] [-f RCFILE] [TEMPLATEMAP]...')),
528 528 'kwexpand': (expand, commands.walkopts,
529 529 _('hg kwexpand [OPTION]... [FILE]...')),
530 530 'kwfiles':
531 531 (files,
532 532 [('a', 'all', None, _('show keyword status flags of all files')),
533 533 ('i', 'ignore', None, _('show files excluded from expansion')),
534 534 ('u', 'untracked', None, _('additionally show untracked files')),
535 535 ] + commands.walkopts,
536 536 _('hg kwfiles [OPTION]... [FILE]...')),
537 537 'kwshrink': (shrink, commands.walkopts,
538 538 _('hg kwshrink [OPTION]... [FILE]...')),
539 539 }
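The two changed lines in this hunk (`files = util.sort(...)` → `files = sorted(...)`, and the matching change in mq.py below) are the whole point of this commit: Mercurial's local `util.sort` helper is dropped in favor of Python's built-in `sorted()`, which accepts any iterable and returns a new sorted list. The sketch below is illustrative only; `util_sort` is a hypothetical stand-in for the removed helper, written to match the behavior the call sites rely on.

```python
def util_sort(iterable):
    # hypothetical stand-in for the removed mercurial.util.sort helper:
    # copy the input into a list, sort it, and return it
    l = list(iterable)
    l.sort()
    return l

# kwfiles builds its worklist from several status lists concatenated together
files = ['b.txt', 'a.txt', 'c.txt']
assert util_sort(files) == sorted(files) == ['a.txt', 'b.txt', 'c.txt']

# sorted() also takes non-list iterables directly, as in queue.set_active
# below, which deduplicates guards through a set before sorting
guards = sorted(set(['+x', '-y', '+x']))
assert guards == ['+x', '-y']
```

Since `sorted()` never mutates its argument and works on sets, generators, and tuples alike, the call sites need no other adjustment.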
@@ -1,2610 +1,2608 b''
1 1 # mq.py - patch queues for mercurial
2 2 #
3 3 # Copyright 2005, 2006 Chris Mason <mason@suse.com>
4 4 #
5 5 # This software may be used and distributed according to the terms
6 6 # of the GNU General Public License, incorporated herein by reference.
7 7
8 8 '''patch management and development
9 9
10 10 This extension lets you work with a stack of patches in a Mercurial
11 11 repository. It manages two stacks of patches - all known patches, and
12 12 applied patches (subset of known patches).
13 13
14 14 Known patches are represented as patch files in the .hg/patches
15 15 directory. Applied patches are both patch files and changesets.
16 16
17 17 Common tasks (use "hg help command" for more details):
18 18
19 19 prepare repository to work with patches qinit
20 20 create new patch qnew
21 21 import existing patch qimport
22 22
23 23 print patch series qseries
24 24 print applied patches qapplied
25 25 print name of top applied patch qtop
26 26
27 27 add known patch to applied stack qpush
28 28 remove patch from applied stack qpop
29 29 refresh contents of top applied patch qrefresh
30 30 '''
31 31
32 32 from mercurial.i18n import _
33 33 from mercurial.node import bin, hex, short, nullid, nullrev
34 34 from mercurial.lock import release
35 35 from mercurial import commands, cmdutil, hg, patch, util
36 36 from mercurial import repair, extensions, url, error
37 37 import os, sys, re, errno
38 38
39 39 commands.norepo += " qclone"
40 40
41 41 # Patch names look like unix-file names.
42 42 # They must be joinable with queue directory and result in the patch path.
43 43 normname = util.normpath
44 44
45 45 class statusentry:
46 46 def __init__(self, rev, name=None):
47 47 if not name:
48 48 fields = rev.split(':', 1)
49 49 if len(fields) == 2:
50 50 self.rev, self.name = fields
51 51 else:
52 52 self.rev, self.name = None, None
53 53 else:
54 54 self.rev, self.name = rev, name
55 55
56 56 def __str__(self):
57 57 return self.rev + ':' + self.name
58 58
59 59 class patchheader(object):
60 60 def __init__(self, message, comments, user, date, haspatch):
61 61 self.message = message
62 62 self.comments = comments
63 63 self.user = user
64 64 self.date = date
65 65 self.haspatch = haspatch
66 66
67 67 def setuser(self, user):
68 68 if not self.setheader(['From: ', '# User '], user):
69 69 try:
70 70 patchheaderat = self.comments.index('# HG changeset patch')
71 71 self.comments.insert(patchheaderat + 1,'# User ' + user)
72 72 except ValueError:
73 73 self.comments = ['From: ' + user, ''] + self.comments
74 74 self.user = user
75 75
76 76 def setdate(self, date):
77 77 if self.setheader(['# Date '], date):
78 78 self.date = date
79 79
80 80 def setmessage(self, message):
81 81 if self.comments:
82 82 self._delmsg()
83 83 self.message = [message]
84 84 self.comments += self.message
85 85
86 86 def setheader(self, prefixes, new):
87 87 '''Update all references to a field in the patch header.
88 88 If none found, add it email style.'''
89 89 res = False
90 90 for prefix in prefixes:
91 91 for i in xrange(len(self.comments)):
92 92 if self.comments[i].startswith(prefix):
93 93 self.comments[i] = prefix + new
94 94 res = True
95 95 break
96 96 return res
97 97
98 98 def __str__(self):
99 99 if not self.comments:
100 100 return ''
101 101 return '\n'.join(self.comments) + '\n\n'
102 102
103 103 def _delmsg(self):
104 104 '''Remove existing message, keeping the rest of the comments fields.
105 105 If comments contains 'subject: ', message will prepend
106 106 the field and a blank line.'''
107 107 if self.message:
108 108 subj = 'subject: ' + self.message[0].lower()
109 109 for i in xrange(len(self.comments)):
110 110 if subj == self.comments[i].lower():
111 111 del self.comments[i]
112 112 self.message = self.message[2:]
113 113 break
114 114 ci = 0
115 115 for mi in xrange(len(self.message)):
116 116 while self.message[mi] != self.comments[ci]:
117 117 ci += 1
118 118 del self.comments[ci]
119 119
120 120 class queue:
121 121 def __init__(self, ui, path, patchdir=None):
122 122 self.basepath = path
123 123 self.path = patchdir or os.path.join(path, "patches")
124 124 self.opener = util.opener(self.path)
125 125 self.ui = ui
126 126 self.applied = []
127 127 self.full_series = []
128 128 self.applied_dirty = 0
129 129 self.series_dirty = 0
130 130 self.series_path = "series"
131 131 self.status_path = "status"
132 132 self.guards_path = "guards"
133 133 self.active_guards = None
134 134 self.guards_dirty = False
135 135 self._diffopts = None
136 136
137 137 if os.path.exists(self.join(self.series_path)):
138 138 self.full_series = self.opener(self.series_path).read().splitlines()
139 139 self.parse_series()
140 140
141 141 if os.path.exists(self.join(self.status_path)):
142 142 lines = self.opener(self.status_path).read().splitlines()
143 143 self.applied = [statusentry(l) for l in lines]
144 144
145 145 def diffopts(self):
146 146 if self._diffopts is None:
147 147 self._diffopts = patch.diffopts(self.ui)
148 148 return self._diffopts
149 149
150 150 def join(self, *p):
151 151 return os.path.join(self.path, *p)
152 152
153 153 def find_series(self, patch):
154 154 pre = re.compile("(\s*)([^#]+)")
155 155 index = 0
156 156 for l in self.full_series:
157 157 m = pre.match(l)
158 158 if m:
159 159 s = m.group(2)
160 160 s = s.rstrip()
161 161 if s == patch:
162 162 return index
163 163 index += 1
164 164 return None
165 165
166 166 guard_re = re.compile(r'\s?#([-+][^-+# \t\r\n\f][^# \t\r\n\f]*)')
167 167
168 168 def parse_series(self):
169 169 self.series = []
170 170 self.series_guards = []
171 171 for l in self.full_series:
172 172 h = l.find('#')
173 173 if h == -1:
174 174 patch = l
175 175 comment = ''
176 176 elif h == 0:
177 177 continue
178 178 else:
179 179 patch = l[:h]
180 180 comment = l[h:]
181 181 patch = patch.strip()
182 182 if patch:
183 183 if patch in self.series:
184 184 raise util.Abort(_('%s appears more than once in %s') %
185 185 (patch, self.join(self.series_path)))
186 186 self.series.append(patch)
187 187 self.series_guards.append(self.guard_re.findall(comment))
188 188
189 189 def check_guard(self, guard):
190 190 if not guard:
191 191 return _('guard cannot be an empty string')
192 192 bad_chars = '# \t\r\n\f'
193 193 first = guard[0]
194 194 for c in '-+':
195 195 if first == c:
196 196 return (_('guard %r starts with invalid character: %r') %
197 197 (guard, c))
198 198 for c in bad_chars:
199 199 if c in guard:
200 200 return _('invalid character in guard %r: %r') % (guard, c)
201 201
202 202 def set_active(self, guards):
203 203 for guard in guards:
204 204 bad = self.check_guard(guard)
205 205 if bad:
206 206 raise util.Abort(bad)
207 guards = util.sort(set(guards))
207 guards = sorted(set(guards))
208 208 self.ui.debug(_('active guards: %s\n') % ' '.join(guards))
209 209 self.active_guards = guards
210 210 self.guards_dirty = True
211 211
212 212 def active(self):
213 213 if self.active_guards is None:
214 214 self.active_guards = []
215 215 try:
216 216 guards = self.opener(self.guards_path).read().split()
217 217 except IOError, err:
218 218 if err.errno != errno.ENOENT: raise
219 219 guards = []
220 220 for i, guard in enumerate(guards):
221 221 bad = self.check_guard(guard)
222 222 if bad:
223 223 self.ui.warn('%s:%d: %s\n' %
224 224 (self.join(self.guards_path), i + 1, bad))
225 225 else:
226 226 self.active_guards.append(guard)
227 227 return self.active_guards
228 228
229 229 def set_guards(self, idx, guards):
230 230 for g in guards:
231 231 if len(g) < 2:
232 232 raise util.Abort(_('guard %r too short') % g)
233 233 if g[0] not in '-+':
234 234 raise util.Abort(_('guard %r starts with invalid char') % g)
235 235 bad = self.check_guard(g[1:])
236 236 if bad:
237 237 raise util.Abort(bad)
238 238 drop = self.guard_re.sub('', self.full_series[idx])
239 239 self.full_series[idx] = drop + ''.join([' #' + g for g in guards])
240 240 self.parse_series()
241 241 self.series_dirty = True
242 242
243 243 def pushable(self, idx):
244 244 if isinstance(idx, str):
245 245 idx = self.series.index(idx)
246 246 patchguards = self.series_guards[idx]
247 247 if not patchguards:
248 248 return True, None
249 249 guards = self.active()
250 250 exactneg = [g for g in patchguards if g[0] == '-' and g[1:] in guards]
251 251 if exactneg:
252 252 return False, exactneg[0]
253 253 pos = [g for g in patchguards if g[0] == '+']
254 254 exactpos = [g for g in pos if g[1:] in guards]
255 255 if pos:
256 256 if exactpos:
257 257 return True, exactpos[0]
258 258 return False, pos
259 259 return True, ''
260 260
261 261 def explain_pushable(self, idx, all_patches=False):
262 262 write = all_patches and self.ui.write or self.ui.warn
263 263 if all_patches or self.ui.verbose:
264 264 if isinstance(idx, str):
265 265 idx = self.series.index(idx)
266 266 pushable, why = self.pushable(idx)
267 267 if all_patches and pushable:
268 268 if why is None:
269 269 write(_('allowing %s - no guards in effect\n') %
270 270 self.series[idx])
271 271 else:
272 272 if not why:
273 273 write(_('allowing %s - no matching negative guards\n') %
274 274 self.series[idx])
275 275 else:
276 276 write(_('allowing %s - guarded by %r\n') %
277 277 (self.series[idx], why))
278 278 if not pushable:
279 279 if why:
280 280 write(_('skipping %s - guarded by %r\n') %
281 281 (self.series[idx], why))
282 282 else:
283 283 write(_('skipping %s - no matching guards\n') %
284 284 self.series[idx])
285 285
286 286 def save_dirty(self):
287 287 def write_list(items, path):
288 288 fp = self.opener(path, 'w')
289 289 for i in items:
290 290 fp.write("%s\n" % i)
291 291 fp.close()
292 292 if self.applied_dirty: write_list(map(str, self.applied), self.status_path)
293 293 if self.series_dirty: write_list(self.full_series, self.series_path)
294 294 if self.guards_dirty: write_list(self.active_guards, self.guards_path)
295 295
296 296 def readheaders(self, patch):
297 297 def eatdiff(lines):
298 298 while lines:
299 299 l = lines[-1]
300 300 if (l.startswith("diff -") or
301 301 l.startswith("Index:") or
302 302 l.startswith("===========")):
303 303 del lines[-1]
304 304 else:
305 305 break
306 306 def eatempty(lines):
307 307 while lines:
308 308 l = lines[-1]
309 309 if re.match('\s*$', l):
310 310 del lines[-1]
311 311 else:
312 312 break
313 313
314 314 pf = self.join(patch)
315 315 message = []
316 316 comments = []
317 317 user = None
318 318 date = None
319 319 format = None
320 320 subject = None
321 321 diffstart = 0
322 322
323 323 for line in file(pf):
324 324 line = line.rstrip()
325 325 if line.startswith('diff --git'):
326 326 diffstart = 2
327 327 break
328 328 if diffstart:
329 329 if line.startswith('+++ '):
330 330 diffstart = 2
331 331 break
332 332 if line.startswith("--- "):
333 333 diffstart = 1
334 334 continue
335 335 elif format == "hgpatch":
336 336 # parse values when importing the result of an hg export
337 337 if line.startswith("# User "):
338 338 user = line[7:]
339 339 elif line.startswith("# Date "):
340 340 date = line[7:]
341 341 elif not line.startswith("# ") and line:
342 342 message.append(line)
343 343 format = None
344 344 elif line == '# HG changeset patch':
345 345 format = "hgpatch"
346 346 elif (format != "tagdone" and (line.startswith("Subject: ") or
347 347 line.startswith("subject: "))):
348 348 subject = line[9:]
349 349 format = "tag"
350 350 elif (format != "tagdone" and (line.startswith("From: ") or
351 351 line.startswith("from: "))):
352 352 user = line[6:]
353 353 format = "tag"
354 354 elif format == "tag" and line == "":
355 355 # when looking for tags (subject: from: etc) they
356 356 # end once you find a blank line in the source
357 357 format = "tagdone"
358 358 elif message or line:
359 359 message.append(line)
360 360 comments.append(line)
361 361
362 362 eatdiff(message)
363 363 eatdiff(comments)
364 364 eatempty(message)
365 365 eatempty(comments)
366 366
367 367 # make sure message isn't empty
368 368 if format and format.startswith("tag") and subject:
369 369 message.insert(0, "")
370 370 message.insert(0, subject)
371 371 return patchheader(message, comments, user, date, diffstart > 1)
372 372
373 373 def removeundo(self, repo):
374 374 undo = repo.sjoin('undo')
375 375 if not os.path.exists(undo):
376 376 return
377 377 try:
378 378 os.unlink(undo)
379 379 except OSError, inst:
380 380 self.ui.warn(_('error removing undo: %s\n') % str(inst))
381 381
382 382 def printdiff(self, repo, node1, node2=None, files=None,
383 383 fp=None, changes=None, opts={}):
384 384 m = cmdutil.match(repo, files, opts)
385 385 chunks = patch.diff(repo, node1, node2, m, changes, self.diffopts())
386 386 write = fp is None and repo.ui.write or fp.write
387 387 for chunk in chunks:
388 388 write(chunk)
389 389
390 390 def mergeone(self, repo, mergeq, head, patch, rev):
391 391 # first try just applying the patch
392 392 (err, n) = self.apply(repo, [ patch ], update_status=False,
393 393 strict=True, merge=rev)
394 394
395 395 if err == 0:
396 396 return (err, n)
397 397
398 398 if n is None:
399 399 raise util.Abort(_("apply failed for patch %s") % patch)
400 400
401 401 self.ui.warn(_("patch didn't work out, merging %s\n") % patch)
402 402
403 403 # apply failed, strip away that rev and merge.
404 404 hg.clean(repo, head)
405 405 self.strip(repo, n, update=False, backup='strip')
406 406
407 407 ctx = repo[rev]
408 408 ret = hg.merge(repo, rev)
409 409 if ret:
410 410 raise util.Abort(_("update returned %d") % ret)
411 411 n = repo.commit(None, ctx.description(), ctx.user(), force=1)
412 412 if n is None:
413 413 raise util.Abort(_("repo commit failed"))
414 414 try:
415 415 ph = mergeq.readheaders(patch)
416 416 except:
417 417 raise util.Abort(_("unable to read %s") % patch)
418 418
419 419 patchf = self.opener(patch, "w")
420 420 comments = str(ph)
421 421 if comments:
422 422 patchf.write(comments)
423 423 self.printdiff(repo, head, n, fp=patchf)
424 424 patchf.close()
425 425 self.removeundo(repo)
426 426 return (0, n)
427 427
428 428 def qparents(self, repo, rev=None):
429 429 if rev is None:
430 430 (p1, p2) = repo.dirstate.parents()
431 431 if p2 == nullid:
432 432 return p1
433 433 if len(self.applied) == 0:
434 434 return None
435 435 return bin(self.applied[-1].rev)
436 436 pp = repo.changelog.parents(rev)
437 437 if pp[1] != nullid:
438 438 arevs = [ x.rev for x in self.applied ]
439 439 p0 = hex(pp[0])
440 440 p1 = hex(pp[1])
441 441 if p0 in arevs:
442 442 return pp[0]
443 443 if p1 in arevs:
444 444 return pp[1]
445 445 return pp[0]
446 446
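qparents above returns, for a merge changeset, whichever parent is managed by the patch queue, falling back to the first parent. A standalone sketch of that selection rule (plain strings stand in for node hashes; this is illustrative, not mq's actual API):

```python
# For a two-parent (merge) revision, prefer the parent that the patch
# queue manages; otherwise fall back to the first parent.
def pick_parent(parents, applied_revs):
    p0, p1 = parents
    if p1 is not None:
        if p0 in applied_revs:
            return p0
        if p1 in applied_revs:
            return p1
    return p0

pick_parent(('aaa', 'bbb'), {'bbb'})
```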
447 447 def mergepatch(self, repo, mergeq, series):
448 448 if len(self.applied) == 0:
449 449 # each of the patches merged in will have two parents. This
450 450 # can confuse the qrefresh, qdiff, and strip code because it
451 451 # needs to know which parent is actually in the patch queue.
452 452 # So, we insert a merge marker with only one parent. This way
453 453 # the first patch in the queue is never a merge patch
454 454 #
455 455 pname = ".hg.patches.merge.marker"
456 456 n = repo.commit(None, '[mq]: merge marker', user=None, force=1)
457 457 self.removeundo(repo)
458 458 self.applied.append(statusentry(hex(n), pname))
459 459 self.applied_dirty = 1
460 460
461 461 head = self.qparents(repo)
462 462
463 463 for patch in series:
464 464 patch = mergeq.lookup(patch, strict=True)
465 465 if not patch:
466 466 self.ui.warn(_("patch %s does not exist\n") % patch)
467 467 return (1, None)
468 468 pushable, reason = self.pushable(patch)
469 469 if not pushable:
470 470 self.explain_pushable(patch, all_patches=True)
471 471 continue
472 472 info = mergeq.isapplied(patch)
473 473 if not info:
474 474 self.ui.warn(_("patch %s is not applied\n") % patch)
475 475 return (1, None)
476 476 rev = bin(info[1])
477 477 (err, head) = self.mergeone(repo, mergeq, head, patch, rev)
478 478 if head:
479 479 self.applied.append(statusentry(hex(head), patch))
480 480 self.applied_dirty = 1
481 481 if err:
482 482 return (err, head)
483 483 self.save_dirty()
484 484 return (0, head)
485 485
486 486 def patch(self, repo, patchfile):
487 487 '''Apply patchfile to the working directory.
488 488 patchfile: file name of patch'''
489 489 files = {}
490 490 try:
491 491 fuzz = patch.patch(patchfile, self.ui, strip=1, cwd=repo.root,
492 492 files=files)
493 493 except Exception, inst:
494 494 self.ui.note(str(inst) + '\n')
495 495 if not self.ui.verbose:
496 496 self.ui.warn(_("patch failed, unable to continue (try -v)\n"))
497 497 return (False, files, False)
498 498
499 499 return (True, files, fuzz)
500 500
501 501 def apply(self, repo, series, list=False, update_status=True,
502 502 strict=False, patchdir=None, merge=None, all_files={}):
503 503 wlock = lock = tr = None
504 504 try:
505 505 wlock = repo.wlock()
506 506 lock = repo.lock()
507 507 tr = repo.transaction()
508 508 try:
509 509 ret = self._apply(repo, series, list, update_status,
510 510 strict, patchdir, merge, all_files=all_files)
511 511 tr.close()
512 512 self.save_dirty()
513 513 return ret
514 514 except:
515 515 try:
516 516 tr.abort()
517 517 finally:
518 518 repo.invalidate()
519 519 repo.dirstate.invalidate()
520 520 raise
521 521 finally:
522 522 del tr
523 523 release(lock, wlock)
524 524 self.removeundo(repo)
525 525
526 526 def _apply(self, repo, series, list=False, update_status=True,
527 527 strict=False, patchdir=None, merge=None, all_files={}):
528 528 # TODO unify with commands.py
529 529 if not patchdir:
530 530 patchdir = self.path
531 531 err = 0
532 532 n = None
533 533 for patchname in series:
534 534 pushable, reason = self.pushable(patchname)
535 535 if not pushable:
536 536 self.explain_pushable(patchname, all_patches=True)
537 537 continue
538 538 self.ui.warn(_("applying %s\n") % patchname)
539 539 pf = os.path.join(patchdir, patchname)
540 540
541 541 try:
542 542 ph = self.readheaders(patchname)
543 543 except:
544 544 self.ui.warn(_("Unable to read %s\n") % patchname)
545 545 err = 1
546 546 break
547 547
548 548 message = ph.message
549 549 if not message:
550 550 message = _("imported patch %s\n") % patchname
551 551 else:
552 552 if list:
553 553 message.append(_("\nimported patch %s") % patchname)
554 554 message = '\n'.join(message)
555 555
556 556 if ph.haspatch:
557 557 (patcherr, files, fuzz) = self.patch(repo, pf)
558 558 all_files.update(files)
559 559 patcherr = not patcherr
560 560 else:
561 561 self.ui.warn(_("patch %s is empty\n") % patchname)
562 562 patcherr, files, fuzz = 0, [], 0
563 563
564 564 if merge and files:
565 565 # Mark as removed/merged and update dirstate parent info
566 566 removed = []
567 567 merged = []
568 568 for f in files:
569 569 if os.path.exists(repo.wjoin(f)):
570 570 merged.append(f)
571 571 else:
572 572 removed.append(f)
573 573 for f in removed:
574 574 repo.dirstate.remove(f)
575 575 for f in merged:
576 576 repo.dirstate.merge(f)
577 577 p1, p2 = repo.dirstate.parents()
578 578 repo.dirstate.setparents(p1, merge)
579 579
580 580 files = patch.updatedir(self.ui, repo, files)
581 581 match = cmdutil.matchfiles(repo, files or [])
582 582 n = repo.commit(files, message, ph.user, ph.date, match=match,
583 583 force=True)
584 584
585 585 if n is None:
586 586 raise util.Abort(_("repo commit failed"))
587 587
588 588 if update_status:
589 589 self.applied.append(statusentry(hex(n), patchname))
590 590
591 591 if patcherr:
592 592 self.ui.warn(_("patch failed, rejects left in working dir\n"))
593 593 err = 1
594 594 break
595 595
596 596 if fuzz and strict:
597 597 self.ui.warn(_("fuzz found when applying patch, stopping\n"))
598 598 err = 1
599 599 break
600 600 return (err, n)
601 601
602 602 def _clean_series(self, patches):
603 indices = util.sort([self.find_series(p) for p in patches])
604 for i in indices[-1::-1]:
603 for i in sorted([self.find_series(p) for p in patches], reverse=True):
605 604 del self.full_series[i]
606 605 self.parse_series()
607 606 self.series_dirty = 1
608 607
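_clean_series shows the pattern this changeset introduces: the sorted built-in with reverse=True yields indices in descending order, so deleting by index never invalidates the positions still to be visited. A minimal standalone illustration (names here are illustrative, not from mq):

```python
# Delete several indices from a list: iterate them in descending
# order so earlier deletions do not shift later positions.
def delete_indices(items, indices):
    for i in sorted(indices, reverse=True):
        del items[i]
    return items

series = ['a.patch', 'b.patch', 'c.patch', 'd.patch']
delete_indices(series, [0, 2])
```

Unlike the removed util.sort helper, sorted accepts any iterable and returns a new list, leaving the input untouched.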
609 608 def finish(self, repo, revs):
610 revs.sort()
611 609 firstrev = repo[self.applied[0].rev].rev()
612 610 appliedbase = 0
613 611 patches = []
614 for rev in util.sort(revs):
612 for rev in sorted(revs):
615 613 if rev < firstrev:
616 614 raise util.Abort(_('revision %d is not managed') % rev)
617 615 base = bin(self.applied[appliedbase].rev)
618 616 node = repo.changelog.node(rev)
619 617 if node != base:
620 618 raise util.Abort(_('cannot delete revision %d above '
621 619 'applied patches') % rev)
622 620 patches.append(self.applied[appliedbase].name)
623 621 appliedbase += 1
624 622
625 623 r = self.qrepo()
626 624 if r:
627 625 r.remove(patches, True)
628 626 else:
629 627 for p in patches:
630 628 os.unlink(self.join(p))
631 629
632 630 del self.applied[:appliedbase]
633 631 self.applied_dirty = 1
634 632 self._clean_series(patches)
635 633
636 634 def delete(self, repo, patches, opts):
637 635 if not patches and not opts.get('rev'):
638 636 raise util.Abort(_('qdelete requires at least one revision or '
639 637 'patch name'))
640 638
641 639 realpatches = []
642 640 for patch in patches:
643 641 patch = self.lookup(patch, strict=True)
644 642 info = self.isapplied(patch)
645 643 if info:
646 644 raise util.Abort(_("cannot delete applied patch %s") % patch)
647 645 if patch not in self.series:
648 646 raise util.Abort(_("patch %s not in series file") % patch)
649 647 realpatches.append(patch)
650 648
651 649 appliedbase = 0
652 650 if opts.get('rev'):
653 651 if not self.applied:
654 652 raise util.Abort(_('no patches applied'))
655 653 revs = cmdutil.revrange(repo, opts['rev'])
656 654 if len(revs) > 1 and revs[0] > revs[1]:
657 655 revs.reverse()
658 656 for rev in revs:
659 657 if appliedbase >= len(self.applied):
660 658 raise util.Abort(_("revision %d is not managed") % rev)
661 659
662 660 base = bin(self.applied[appliedbase].rev)
663 661 node = repo.changelog.node(rev)
664 662 if node != base:
665 663 raise util.Abort(_("cannot delete revision %d above "
666 664 "applied patches") % rev)
667 665 realpatches.append(self.applied[appliedbase].name)
668 666 appliedbase += 1
669 667
670 668 if not opts.get('keep'):
671 669 r = self.qrepo()
672 670 if r:
673 671 r.remove(realpatches, True)
674 672 else:
675 673 for p in realpatches:
676 674 os.unlink(self.join(p))
677 675
678 676 if appliedbase:
679 677 del self.applied[:appliedbase]
680 678 self.applied_dirty = 1
681 679 self._clean_series(realpatches)
682 680
683 681 def check_toppatch(self, repo):
684 682 if len(self.applied) > 0:
685 683 top = bin(self.applied[-1].rev)
686 684 pp = repo.dirstate.parents()
687 685 if top not in pp:
688 686 raise util.Abort(_("working directory revision is not qtip"))
689 687 return top
690 688 return None
691 689 def check_localchanges(self, repo, force=False, refresh=True):
692 690 m, a, r, d = repo.status()[:4]
693 691 if m or a or r or d:
694 692 if not force:
695 693 if refresh:
696 694 raise util.Abort(_("local changes found, refresh first"))
697 695 else:
698 696 raise util.Abort(_("local changes found"))
699 697 return m, a, r, d
700 698
701 699 _reserved = ('series', 'status', 'guards')
702 700 def check_reserved_name(self, name):
703 701 if (name in self._reserved or name.startswith('.hg')
704 702 or name.startswith('.mq')):
705 703 raise util.Abort(_('"%s" cannot be used as the name of a patch')
706 704 % name)
707 705
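check_reserved_name above rejects anything that would collide with mq's control files or hide under the .hg/.mq prefixes. The same rule as a standalone predicate (the reserved tuple is copied from the code above; the function name is hypothetical):

```python
RESERVED = ('series', 'status', 'guards')

def is_valid_patch_name(name):
    # Reject mq control files and names starting with .hg or .mq.
    return not (name in RESERVED
                or name.startswith('.hg')
                or name.startswith('.mq'))

is_valid_patch_name('fix-login.patch')
```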
708 706 def new(self, repo, patchfn, *pats, **opts):
709 707 """options:
710 708 msg: a string or a no-argument function returning a string
711 709 """
712 710 msg = opts.get('msg')
713 711 force = opts.get('force')
714 712 user = opts.get('user')
715 713 date = opts.get('date')
716 714 if date:
717 715 date = util.parsedate(date)
718 716 self.check_reserved_name(patchfn)
719 717 if os.path.exists(self.join(patchfn)):
720 718 raise util.Abort(_('patch "%s" already exists') % patchfn)
721 719 if opts.get('include') or opts.get('exclude') or pats:
722 720 match = cmdutil.match(repo, pats, opts)
723 721 # detect missing files in pats
724 722 def badfn(f, msg):
725 723 raise util.Abort('%s: %s' % (f, msg))
726 724 match.bad = badfn
727 725 m, a, r, d = repo.status(match=match)[:4]
728 726 else:
729 727 m, a, r, d = self.check_localchanges(repo, force)
730 728 match = cmdutil.matchfiles(repo, m + a + r)
731 729 commitfiles = m + a + r
732 730 self.check_toppatch(repo)
733 731 insert = self.full_series_end()
734 732 wlock = repo.wlock()
735 733 try:
736 734 # if patch file write fails, abort early
737 735 p = self.opener(patchfn, "w")
738 736 try:
739 737 if date:
740 738 p.write("# HG changeset patch\n")
741 739 if user:
742 740 p.write("# User " + user + "\n")
743 741 p.write("# Date %d %d\n\n" % date)
744 742 elif user:
745 743 p.write("From: " + user + "\n\n")
746 744
747 745 if callable(msg):
748 746 msg = msg()
749 747 commitmsg = msg or ("[mq]: %s" % patchfn)
750 748 n = repo.commit(commitfiles, commitmsg, user, date, match=match, force=True)
751 749 if n is None:
752 750 raise util.Abort(_("repo commit failed"))
753 751 try:
754 752 self.full_series[insert:insert] = [patchfn]
755 753 self.applied.append(statusentry(hex(n), patchfn))
756 754 self.parse_series()
757 755 self.series_dirty = 1
758 756 self.applied_dirty = 1
759 757 if msg:
760 758 msg = msg + "\n\n"
761 759 p.write(msg)
762 760 if commitfiles:
763 761 diffopts = self.diffopts()
764 762 if opts.get('git'): diffopts.git = True
765 763 parent = self.qparents(repo, n)
766 764 chunks = patch.diff(repo, node1=parent, node2=n,
767 765 match=match, opts=diffopts)
768 766 for chunk in chunks:
769 767 p.write(chunk)
770 768 p.close()
771 769 wlock.release()
772 770 wlock = None
773 771 r = self.qrepo()
774 772 if r: r.add([patchfn])
775 773 except:
776 774 repo.rollback()
777 775 raise
778 776 except Exception:
779 777 patchpath = self.join(patchfn)
780 778 try:
781 779 os.unlink(patchpath)
782 780 except:
783 781 self.ui.warn(_('error unlinking %s\n') % patchpath)
784 782 raise
785 783 self.removeundo(repo)
786 784 finally:
787 785 release(wlock)
788 786
789 787 def strip(self, repo, rev, update=True, backup="all", force=None):
790 788 wlock = lock = None
791 789 try:
792 790 wlock = repo.wlock()
793 791 lock = repo.lock()
794 792
795 793 if update:
796 794 self.check_localchanges(repo, force=force, refresh=False)
797 795 urev = self.qparents(repo, rev)
798 796 hg.clean(repo, urev)
799 797 repo.dirstate.write()
800 798
801 799 self.removeundo(repo)
802 800 repair.strip(self.ui, repo, rev, backup)
803 801 # strip may have unbundled a set of backed up revisions after
804 802 # the actual strip
805 803 self.removeundo(repo)
806 804 finally:
807 805 release(lock, wlock)
808 806
809 807 def isapplied(self, patch):
810 808 """returns (index, rev, patch)"""
811 809 for i in xrange(len(self.applied)):
812 810 a = self.applied[i]
813 811 if a.name == patch:
814 812 return (i, a.rev, a.name)
815 813 return None
816 814
817 815 # if the exact patch name does not exist, we try a few
818 816 # variations. If strict is passed, we try only #1
819 817 #
820 818 # 1) a number to indicate an offset in the series file
821 819 # 2) a unique substring of the patch name was given
822 820 # 3) patchname[-+]num to indicate an offset in the series file
823 821 def lookup(self, patch, strict=False):
824 822 patch = patch and str(patch)
825 823
826 824 def partial_name(s):
827 825 if s in self.series:
828 826 return s
829 827 matches = [x for x in self.series if s in x]
830 828 if len(matches) > 1:
831 829 self.ui.warn(_('patch name "%s" is ambiguous:\n') % s)
832 830 for m in matches:
833 831 self.ui.warn(' %s\n' % m)
834 832 return None
835 833 if matches:
836 834 return matches[0]
837 835 if len(self.series) > 0 and len(self.applied) > 0:
838 836 if s == 'qtip':
839 837 return self.series[self.series_end(True)-1]
840 838 if s == 'qbase':
841 839 return self.series[0]
842 840 return None
843 841
844 842 if patch is None:
845 843 return None
846 844 if patch in self.series:
847 845 return patch
848 846
849 847 if not os.path.isfile(self.join(patch)):
850 848 try:
851 849 sno = int(patch)
852 850 except (ValueError, OverflowError):
853 851 pass
854 852 else:
855 853 if -len(self.series) <= sno < len(self.series):
856 854 return self.series[sno]
857 855
858 856 if not strict:
859 857 res = partial_name(patch)
860 858 if res:
861 859 return res
862 860 minus = patch.rfind('-')
863 861 if minus >= 0:
864 862 res = partial_name(patch[:minus])
865 863 if res:
866 864 i = self.series.index(res)
867 865 try:
868 866 off = int(patch[minus+1:] or 1)
869 867 except (ValueError, OverflowError):
870 868 pass
871 869 else:
872 870 if i - off >= 0:
873 871 return self.series[i - off]
874 872 plus = patch.rfind('+')
875 873 if plus >= 0:
876 874 res = partial_name(patch[:plus])
877 875 if res:
878 876 i = self.series.index(res)
879 877 try:
880 878 off = int(patch[plus+1:] or 1)
881 879 except (ValueError, OverflowError):
882 880 pass
883 881 else:
884 882 if i + off < len(self.series):
885 883 return self.series[i + off]
886 884 raise util.Abort(_("patch %s not in series") % patch)
887 885
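The comment above lookup describes three fallbacks: a numeric series offset, a unique substring, and patchname[-+]num. A simplified standalone sketch of the offset resolution only (exact names, no substring matching; this is not mq's real lookup):

```python
# Resolve "name-N" / "name+N" offsets against a series list, mirroring
# mq's rule that a missing number after the sign defaults to 1.
def resolve(series, spec):
    if spec in series:
        return spec
    for sep, sign in (('-', -1), ('+', 1)):
        idx = spec.rfind(sep)
        if idx >= 0 and spec[:idx] in series:
            base = series.index(spec[:idx])
            off = int(spec[idx + 1:] or 1)
            target = base + sign * off
            if 0 <= target < len(series):
                return series[target]
    return None

resolve(['one.patch', 'two.patch', 'three.patch'], 'two.patch-1')
```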
888 886 def push(self, repo, patch=None, force=False, list=False,
889 887 mergeq=None, all=False):
890 888 wlock = repo.wlock()
891 889 if repo.dirstate.parents()[0] != repo.changelog.tip():
892 890 self.ui.status(_("(working directory not at tip)\n"))
893 891
894 892 if not self.series:
895 893 self.ui.warn(_('no patches in series\n'))
896 894 return 0
897 895
898 896 try:
899 897 patch = self.lookup(patch)
900 898 # Suppose our series file is: A B C and the current 'top'
901 899 # patch is B. qpush C should be performed (moving forward),
902 900 # qpush B is a NOP (no change), and qpush A is an error
903 901 # (can't go backwards with qpush).
904 902 if patch:
905 903 info = self.isapplied(patch)
906 904 if info:
907 905 if info[0] < len(self.applied) - 1:
908 906 raise util.Abort(
909 907 _("cannot push to a previous patch: %s") % patch)
910 908 self.ui.warn(
911 909 _('qpush: %s is already at the top\n') % patch)
912 910 return
913 911 pushable, reason = self.pushable(patch)
914 912 if not pushable:
915 913 if reason:
916 914 reason = _('guarded by %r') % reason
917 915 else:
918 916 reason = _('no matching guards')
919 917 self.ui.warn(_("cannot push '%s' - %s\n") % (patch, reason))
920 918 return 1
921 919 elif all:
922 920 patch = self.series[-1]
923 921 if self.isapplied(patch):
924 922 self.ui.warn(_('all patches are currently applied\n'))
925 923 return 0
926 924
927 925 # Following the above example, starting at 'top' of B:
928 926 # qpush should be performed (pushes C), but a subsequent
929 927 # qpush without an argument is an error (nothing to
930 928 # apply). This allows a loop of "...while hg qpush..." to
931 929 # work as it detects an error when done
932 930 start = self.series_end()
933 931 if start == len(self.series):
934 932 self.ui.warn(_('patch series already fully applied\n'))
935 933 return 1
936 934 if not force:
937 935 self.check_localchanges(repo)
938 936
939 937 self.applied_dirty = 1
940 938 if start > 0:
941 939 self.check_toppatch(repo)
942 940 if not patch:
943 941 patch = self.series[start]
944 942 end = start + 1
945 943 else:
946 944 end = self.series.index(patch, start) + 1
947 945 s = self.series[start:end]
948 946 all_files = {}
949 947 try:
950 948 if mergeq:
951 949 ret = self.mergepatch(repo, mergeq, s)
952 950 else:
953 951 ret = self.apply(repo, s, list, all_files=all_files)
954 952 except:
955 953 self.ui.warn(_('cleaning up working directory...'))
956 954 node = repo.dirstate.parents()[0]
957 955 hg.revert(repo, node, None)
958 956 unknown = repo.status(unknown=True)[4]
959 957 # only remove unknown files that we know we touched or
960 958 # created while patching
961 959 for f in unknown:
962 960 if f in all_files:
963 961 util.unlink(repo.wjoin(f))
964 962 self.ui.warn(_('done\n'))
965 963 raise
966 964 top = self.applied[-1].name
967 965 if ret[0]:
968 966 self.ui.write(_("errors during apply, please fix and "
969 967 "refresh %s\n") % top)
970 968 else:
971 969 self.ui.write(_("now at: %s\n") % top)
972 970 return ret[0]
973 971 finally:
974 972 wlock.release()
975 973
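push returns 1 once the series is fully applied, which, per the comment in the code above, is what lets a "...while hg qpush..." loop terminate. A toy model of that return-code contract (the state dict is illustrative, not mq's data structure):

```python
# Toy model of qpush's return convention: 0 while a patch was applied,
# 1 once the series is exhausted, so a caller's loop stops cleanly.
def qpush(state):
    if state['next'] >= len(state['series']):
        return 1  # series fully applied
    state['next'] += 1
    return 0

state = {'series': ['a', 'b', 'c'], 'next': 0}
pushes = 0
while qpush(state) == 0:
    pushes += 1
```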
976 974 def pop(self, repo, patch=None, force=False, update=True, all=False):
977 975 def getfile(f, rev, flags):
978 976 t = repo.file(f).read(rev)
979 977 repo.wwrite(f, t, flags)
980 978
981 979 wlock = repo.wlock()
982 980 try:
983 981 if patch:
984 982 # index, rev, patch
985 983 info = self.isapplied(patch)
986 984 if not info:
987 985 patch = self.lookup(patch)
988 986 info = self.isapplied(patch)
989 987 if not info:
990 988 raise util.Abort(_("patch %s is not applied") % patch)
991 989
992 990 if len(self.applied) == 0:
993 991 # Allow qpop -a to work repeatedly,
994 992 # but not qpop without an argument
995 993 self.ui.warn(_("no patches applied\n"))
996 994 return not all
997 995
998 996 if all:
999 997 start = 0
1000 998 elif patch:
1001 999 start = info[0] + 1
1002 1000 else:
1003 1001 start = len(self.applied) - 1
1004 1002
1005 1003 if start >= len(self.applied):
1006 1004 self.ui.warn(_("qpop: %s is already at the top\n") % patch)
1007 1005 return
1008 1006
1009 1007 if not update:
1010 1008 parents = repo.dirstate.parents()
1011 1009 rr = [ bin(x.rev) for x in self.applied ]
1012 1010 for p in parents:
1013 1011 if p in rr:
1014 1012 self.ui.warn(_("qpop: forcing dirstate update\n"))
1015 1013 update = True
1016 1014 else:
1017 1015 parents = [p.hex() for p in repo[None].parents()]
1018 1016 needupdate = False
1019 1017 for entry in self.applied[start:]:
1020 1018 if entry.rev in parents:
1021 1019 needupdate = True
1022 1020 break
1023 1021 update = needupdate
1024 1022
1025 1023 if not force and update:
1026 1024 self.check_localchanges(repo)
1027 1025
1028 1026 self.applied_dirty = 1
1029 1027 end = len(self.applied)
1030 1028 rev = bin(self.applied[start].rev)
1031 1029 if update:
1032 1030 top = self.check_toppatch(repo)
1033 1031
1034 1032 try:
1035 1033 heads = repo.changelog.heads(rev)
1036 1034 except error.LookupError:
1037 1035 node = short(rev)
1038 1036 raise util.Abort(_('trying to pop unknown node %s') % node)
1039 1037
1040 1038 if heads != [bin(self.applied[-1].rev)]:
1041 1039 raise util.Abort(_("popping would remove a revision not "
1042 1040 "managed by this patch queue"))
1043 1041
1044 1042 # we know there are no local changes, so we can make a simplified
1045 1043 # form of hg.update.
1046 1044 if update:
1047 1045 qp = self.qparents(repo, rev)
1048 1046 changes = repo.changelog.read(qp)
1049 1047 mmap = repo.manifest.read(changes[0])
1050 1048 m, a, r, d = repo.status(qp, top)[:4]
1051 1049 if d:
1052 1050 raise util.Abort(_("deletions found between repo revs"))
1053 1051 for f in m:
1054 1052 getfile(f, mmap[f], mmap.flags(f))
1055 1053 for f in r:
1056 1054 getfile(f, mmap[f], mmap.flags(f))
1057 1055 for f in m + r:
1058 1056 repo.dirstate.normal(f)
1059 1057 for f in a:
1060 1058 try:
1061 1059 os.unlink(repo.wjoin(f))
1062 1060 except OSError, e:
1063 1061 if e.errno != errno.ENOENT:
1064 1062 raise
1065 1063 try: os.removedirs(os.path.dirname(repo.wjoin(f)))
1066 1064 except: pass
1067 1065 repo.dirstate.forget(f)
1068 1066 repo.dirstate.setparents(qp, nullid)
1069 1067 del self.applied[start:end]
1070 1068 self.strip(repo, rev, update=False, backup='strip')
1071 1069 if len(self.applied):
1072 1070 self.ui.write(_("now at: %s\n") % self.applied[-1].name)
1073 1071 else:
1074 1072 self.ui.write(_("patch queue now empty\n"))
1075 1073 finally:
1076 1074 wlock.release()
1077 1075
1078 1076 def diff(self, repo, pats, opts):
1079 1077 top = self.check_toppatch(repo)
1080 1078 if not top:
1081 1079 self.ui.write(_("no patches applied\n"))
1082 1080 return
1083 1081 qp = self.qparents(repo, top)
1084 1082 self._diffopts = patch.diffopts(self.ui, opts)
1085 1083 self.printdiff(repo, qp, files=pats, opts=opts)
1086 1084
1087 1085 def refresh(self, repo, pats=None, **opts):
1088 1086 if len(self.applied) == 0:
1089 1087 self.ui.write(_("no patches applied\n"))
1090 1088 return 1
1091 1089 msg = opts.get('msg', '').rstrip()
1092 1090 newuser = opts.get('user')
1093 1091 newdate = opts.get('date')
1094 1092 if newdate:
1095 1093 newdate = '%d %d' % util.parsedate(newdate)
1096 1094 wlock = repo.wlock()
1097 1095 try:
1098 1096 self.check_toppatch(repo)
1099 1097 (top, patchfn) = (self.applied[-1].rev, self.applied[-1].name)
1100 1098 top = bin(top)
1101 1099 if repo.changelog.heads(top) != [top]:
1102 1100 raise util.Abort(_("cannot refresh a revision with children"))
1103 1101 cparents = repo.changelog.parents(top)
1104 1102 patchparent = self.qparents(repo, top)
1105 1103 ph = self.readheaders(patchfn)
1106 1104
1107 1105 patchf = self.opener(patchfn, 'r')
1108 1106
1109 1107 # if the patch was a git patch, refresh it as a git patch
1110 1108 for line in patchf:
1111 1109 if line.startswith('diff --git'):
1112 1110 self.diffopts().git = True
1113 1111 break
1114 1112
1115 1113 if msg:
1116 1114 ph.setmessage(msg)
1117 1115 if newuser:
1118 1116 ph.setuser(newuser)
1119 1117 if newdate:
1120 1118 ph.setdate(newdate)
1121 1119
1122 1120 # only commit new patch when write is complete
1123 1121 patchf = self.opener(patchfn, 'w', atomictemp=True)
1124 1122
1125 1123 patchf.seek(0)
1126 1124 patchf.truncate()
1127 1125
1128 1126 comments = str(ph)
1129 1127 if comments:
1130 1128 patchf.write(comments)
1131 1129
1132 1130 if opts.get('git'):
1133 1131 self.diffopts().git = True
1134 1132 tip = repo.changelog.tip()
1135 1133 if top == tip:
1136 1134 # if the top of our patch queue is also the tip, there is an
1137 1135 # optimization here. We update the dirstate in place and strip
1138 1136 # off the tip commit. Then just commit the current directory
1139 1137 # tree. We can also send repo.commit the list of files
1140 1138 # changed to speed up the diff
1141 1139 #
1142 1140 # in short mode, we only diff the files included in the
1143 1141 # patch already plus specified files
1144 1142 #
1145 1143 # this should really read:
1146 1144 # mm, dd, aa, aa2 = repo.status(tip, patchparent)[:4]
1147 1145 # but we do it backwards to take advantage of manifest/chlog
1148 1146 # caching against the next repo.status call
1149 1147 #
1150 1148 mm, aa, dd, aa2 = repo.status(patchparent, tip)[:4]
1151 1149 changes = repo.changelog.read(tip)
1152 1150 man = repo.manifest.read(changes[0])
1153 1151 aaa = aa[:]
1154 1152 matchfn = cmdutil.match(repo, pats, opts)
1155 1153 if opts.get('short'):
1156 1154 # if amending a patch, we start with existing
1157 1155 # files plus specified files - unfiltered
1158 1156 match = cmdutil.matchfiles(repo, mm + aa + dd + matchfn.files())
1159 1157 # filter with inc/exl options
1160 1158 matchfn = cmdutil.match(repo, opts=opts)
1161 1159 else:
1162 1160 match = cmdutil.matchall(repo)
1163 1161 m, a, r, d = repo.status(match=match)[:4]
1164 1162
1165 1163 # we might end up with files that were added between
1166 1164 # tip and the dirstate parent, but then changed in the
1167 1165 # local dirstate. in this case, we want them to only
1168 1166 # show up in the added section
1169 1167 for x in m:
1170 1168 if x not in aa:
1171 1169 mm.append(x)
1172 1170 # we might end up with files added by the local dirstate that
1173 1171 # were deleted by the patch. In this case, they should only
1174 1172 # show up in the changed section.
1175 1173 for x in a:
1176 1174 if x in dd:
1177 1175 del dd[dd.index(x)]
1178 1176 mm.append(x)
1179 1177 else:
1180 1178 aa.append(x)
1181 1179 # make sure any files deleted in the local dirstate
1182 1180 # are not in the add or change column of the patch
1183 1181 forget = []
1184 1182 for x in d + r:
1185 1183 if x in aa:
1186 1184 del aa[aa.index(x)]
1187 1185 forget.append(x)
1188 1186 continue
1189 1187 elif x in mm:
1190 1188 del mm[mm.index(x)]
1191 1189 dd.append(x)
1192 1190
1193 1191 m = list(set(mm))
1194 1192 r = list(set(dd))
1195 1193 a = list(set(aa))
1196 1194 c = [filter(matchfn, l) for l in (m, a, r)]
1197 1195 match = cmdutil.matchfiles(repo, set(c[0] + c[1] + c[2]))
1198 1196 chunks = patch.diff(repo, patchparent, match=match,
1199 1197 changes=c, opts=self.diffopts())
1200 1198 for chunk in chunks:
1201 1199 patchf.write(chunk)
1202 1200
1203 1201 try:
1204 1202 if self.diffopts().git:
1205 1203 copies = {}
1206 1204 for dst in a:
1207 1205 src = repo.dirstate.copied(dst)
1208 1206 # during qfold, the source file for copies may
1209 1207 # be removed. Treat this as a simple add.
1210 1208 if src is not None and src in repo.dirstate:
1211 1209 copies.setdefault(src, []).append(dst)
1212 1210 repo.dirstate.add(dst)
1213 1211 # remember the copies between patchparent and tip
1214 1212 for dst in aaa:
1215 1213 f = repo.file(dst)
1216 1214 src = f.renamed(man[dst])
1217 1215 if src:
1218 1216 copies.setdefault(src[0], []).extend(copies.get(dst, []))
1219 1217 if dst in a:
1220 1218 copies[src[0]].append(dst)
1221 1219 # we can't copy a file created by the patch itself
1222 1220 if dst in copies:
1223 1221 del copies[dst]
1224 1222 for src, dsts in copies.iteritems():
1225 1223 for dst in dsts:
1226 1224 repo.dirstate.copy(src, dst)
1227 1225 else:
1228 1226 for dst in a:
1229 1227 repo.dirstate.add(dst)
1230 1228 # Drop useless copy information
1231 1229 for f in list(repo.dirstate.copies()):
1232 1230 repo.dirstate.copy(None, f)
1233 1231 for f in r:
1234 1232 repo.dirstate.remove(f)
1235 1233 # if the patch excludes a modified file, mark that
1236 1234 # file with mtime=0 so status can see it.
1237 1235 mm = []
1238 1236 for i in xrange(len(m)-1, -1, -1):
1239 1237 if not matchfn(m[i]):
1240 1238 mm.append(m[i])
1241 1239 del m[i]
1242 1240 for f in m:
1243 1241 repo.dirstate.normal(f)
1244 1242 for f in mm:
1245 1243 repo.dirstate.normallookup(f)
1246 1244 for f in forget:
1247 1245 repo.dirstate.forget(f)
1248 1246
1249 1247 if not msg:
1250 1248 if not ph.message:
1251 1249 message = "[mq]: %s\n" % patchfn
1252 1250 else:
1253 1251 message = "\n".join(ph.message)
1254 1252 else:
1255 1253 message = msg
1256 1254
1257 1255 user = ph.user or changes[1]
1258 1256
1259 1257 # assumes strip can roll itself back if interrupted
1260 1258 repo.dirstate.setparents(*cparents)
1261 1259 self.applied.pop()
1262 1260 self.applied_dirty = 1
1263 1261 self.strip(repo, top, update=False,
1264 1262 backup='strip')
1265 1263 except:
1266 1264 repo.dirstate.invalidate()
1267 1265 raise
1268 1266
1269 1267 try:
1270 1268 # might be nice to attempt to roll back strip after this
1271 1269 patchf.rename()
1272 1270 n = repo.commit(match.files(), message, user, ph.date,
1273 1271 match=match, force=1)
1274 1272 self.applied.append(statusentry(hex(n), patchfn))
1275 1273 except:
1276 1274 ctx = repo[cparents[0]]
1277 1275 repo.dirstate.rebuild(ctx.node(), ctx.manifest())
1278 1276 self.save_dirty()
1279 1277 self.ui.warn(_('refresh interrupted while patch was popped! '
1280 1278 '(revert --all, qpush to recover)\n'))
1281 1279 raise
1282 1280 else:
1283 1281 self.printdiff(repo, patchparent, fp=patchf)
1284 1282 patchf.rename()
1285 1283 added = repo.status()[1]
1286 1284 for a in added:
1287 1285 f = repo.wjoin(a)
1288 1286 try:
1289 1287 os.unlink(f)
1290 1288 except OSError, e:
1291 1289 if e.errno != errno.ENOENT:
1292 1290 raise
1293 1291 try: os.removedirs(os.path.dirname(f))
1294 1292 except: pass
1295 1293 # forget the file copies in the dirstate
1296 1294 # push should re-add the files later on
1297 1295 repo.dirstate.forget(a)
1298 1296 self.pop(repo, force=True)
1299 1297 self.push(repo, force=True)
1300 1298 finally:
1301 1299 wlock.release()
1302 1300 self.removeundo(repo)
1303 1301
1304 1302 def init(self, repo, create=False):
1305 1303 if not create and os.path.isdir(self.path):
1306 1304 raise util.Abort(_("patch queue directory already exists"))
1307 1305 try:
1308 1306 os.mkdir(self.path)
1309 1307 except OSError, inst:
1310 1308 if inst.errno != errno.EEXIST or not create:
1311 1309 raise
1312 1310 if create:
1313 1311 return self.qrepo(create=True)
1314 1312
1315 1313 def unapplied(self, repo, patch=None):
1316 1314 if patch and patch not in self.series:
1317 1315 raise util.Abort(_("patch %s is not in series file") % patch)
1318 1316 if not patch:
1319 1317 start = self.series_end()
1320 1318 else:
1321 1319 start = self.series.index(patch) + 1
1322 1320 unapplied = []
1323 1321 for i in xrange(start, len(self.series)):
1324 1322 pushable, reason = self.pushable(i)
1325 1323 if pushable:
1326 1324 unapplied.append((i, self.series[i]))
1327 1325 self.explain_pushable(i)
1328 1326 return unapplied
1329 1327
1330 1328 def qseries(self, repo, missing=None, start=0, length=None, status=None,
1331 1329 summary=False):
1332 1330 def displayname(patchname):
1333 1331 if summary:
1334 1332 ph = self.readheaders(patchname)
1335 1333 msg = ph.message
1336 1334 msg = msg and ': ' + msg[0] or ': '
1337 1335 else:
1338 1336 msg = ''
1339 1337 return '%s%s' % (patchname, msg)
1340 1338
1341 1339 applied = set([p.name for p in self.applied])
1342 1340 if length is None:
1343 1341 length = len(self.series) - start
1344 1342 if not missing:
1345 1343 for i in xrange(start, start+length):
1346 1344 patch = self.series[i]
1347 1345 if patch in applied:
1348 1346 stat = 'A'
1349 1347 elif self.pushable(i)[0]:
1350 1348 stat = 'U'
1351 1349 else:
1352 1350 stat = 'G'
1353 1351 pfx = ''
1354 1352 if self.ui.verbose:
1355 1353 pfx = '%d %s ' % (i, stat)
1356 1354 elif status and status != stat:
1357 1355 continue
1358 1356 self.ui.write('%s%s\n' % (pfx, displayname(patch)))
1359 1357 else:
1360 1358 msng_list = []
1361 1359 for root, dirs, files in os.walk(self.path):
1362 1360 d = root[len(self.path) + 1:]
1363 1361 for f in files:
1364 1362 fl = os.path.join(d, f)
1365 1363 if (fl not in self.series and
1366 1364 fl not in (self.status_path, self.series_path,
1367 1365 self.guards_path)
1368 1366 and not fl.startswith('.')):
1369 1367 msng_list.append(fl)
1370 for x in util.sort(msng_list):
1368 for x in sorted(msng_list):
1371 1369 pfx = self.ui.verbose and ('D ') or ''
1372 1370 self.ui.write("%s%s\n" % (pfx, displayname(x)))
1373 1371
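qseries prints one status letter per patch: 'A' for applied, 'U' for unapplied but pushable, 'G' for guarded. That classification as a standalone helper (the signature is hypothetical; pushable stands in for mq's guard check):

```python
# 'A' = applied, 'U' = unapplied but pushable, 'G' = guarded.
def status_letter(name, index, applied, pushable):
    if name in applied:
        return 'A'
    return 'U' if pushable(index) else 'G'

status_letter('a.patch', 0, {'a.patch'}, lambda i: True)
```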
1374 1372 def issaveline(self, l):
1375 1373 if l.name == '.hg.patches.save.line':
1376 1374 return True
1377 1375
1378 1376 def qrepo(self, create=False):
1379 1377 if create or os.path.isdir(self.join(".hg")):
1380 1378 return hg.repository(self.ui, path=self.path, create=create)
1381 1379
1382 1380 def restore(self, repo, rev, delete=None, qupdate=None):
1383 1381 c = repo.changelog.read(rev)
1384 1382 desc = c[4].strip()
1385 1383 lines = desc.splitlines()
1386 1384 i = 0
1387 1385 datastart = None
1388 1386 series = []
1389 1387 applied = []
1390 1388 qpp = None
1391 1389 for i in xrange(0, len(lines)):
1392 1390 if lines[i] == 'Patch Data:':
1393 1391 datastart = i + 1
1394 1392 elif lines[i].startswith('Dirstate:'):
1395 1393 l = lines[i].rstrip()
1396 1394 l = l[10:].split(' ')
1397 1395 qpp = [ bin(x) for x in l ]
1398 1396 elif datastart is not None:
1399 1397 l = lines[i].rstrip()
1400 1398 se = statusentry(l)
1401 1399 file_ = se.name
1402 1400 if se.rev:
1403 1401 applied.append(se)
1404 1402 else:
1405 1403 series.append(file_)
1406 1404 if datastart is None:
1407 1405 self.ui.warn(_("No saved patch data found\n"))
1408 1406 return 1
1409 1407 self.ui.warn(_("restoring status: %s\n") % lines[0])
1410 1408 self.full_series = series
1411 1409 self.applied = applied
1412 1410 self.parse_series()
1413 1411 self.series_dirty = 1
1414 1412 self.applied_dirty = 1
1415 1413 heads = repo.changelog.heads()
1416 1414 if delete:
1417 1415 if rev not in heads:
1418 1416 self.ui.warn(_("save entry has children, leaving it alone\n"))
1419 1417 else:
1420 1418 self.ui.warn(_("removing save entry %s\n") % short(rev))
1421 1419 pp = repo.dirstate.parents()
1422 1420 if rev in pp:
1423 1421 update = True
1424 1422 else:
1425 1423 update = False
1426 1424 self.strip(repo, rev, update=update, backup='strip')
1427 1425 if qpp:
1428 1426 self.ui.warn(_("saved queue repository parents: %s %s\n") %
1429 1427 (short(qpp[0]), short(qpp[1])))
1430 1428 if qupdate:
1431 1429 self.ui.status(_("queue directory updating\n"))
1432 1430 r = self.qrepo()
1433 1431 if not r:
1434 1432 self.ui.warn(_("Unable to load queue repository\n"))
1435 1433 return 1
1436 1434 hg.clean(r, qpp[0])
1437 1435
1438 1436 def save(self, repo, msg=None):
1439 1437 if len(self.applied) == 0:
1440 1438 self.ui.warn(_("save: no patches applied, exiting\n"))
1441 1439 return 1
1442 1440 if self.issaveline(self.applied[-1]):
1443 1441 self.ui.warn(_("status is already saved\n"))
1444 1442 return 1
1445 1443
1446 1444 ar = [ ':' + x for x in self.full_series ]
1447 1445 if not msg:
1448 1446 msg = _("hg patches saved state")
1449 1447 else:
1450 1448 msg = "hg patches: " + msg.rstrip('\r\n')
1451 1449 r = self.qrepo()
1452 1450 if r:
1453 1451 pp = r.dirstate.parents()
1454 1452 msg += "\nDirstate: %s %s" % (hex(pp[0]), hex(pp[1]))
1455 1453 msg += "\n\nPatch Data:\n"
1456 1454 text = msg + "\n".join([str(x) for x in self.applied]) + '\n' + (ar and
1457 1455 "\n".join(ar) + '\n' or "")
1458 1456 n = repo.commit(None, text, user=None, force=1)
1459 1457 if not n:
1460 1458 self.ui.warn(_("repo commit failed\n"))
1461 1459 return 1
1462 1460 self.applied.append(statusentry(hex(n),'.hg.patches.save.line'))
1463 1461 self.applied_dirty = 1
1464 1462 self.removeundo(repo)
1465 1463
1466 1464 def full_series_end(self):
1467 1465 if len(self.applied) > 0:
1468 1466 p = self.applied[-1].name
1469 1467 end = self.find_series(p)
1470 1468 if end is None:
1471 1469 return len(self.full_series)
1472 1470 return end + 1
1473 1471 return 0
1474 1472
1475 1473 def series_end(self, all_patches=False):
1476 1474 """If all_patches is False, return the index of the next pushable patch
1477 1475 in the series, or the series length. If all_patches is True, return the
1478 1476 index of the first patch past the last applied one.
1479 1477 """
1480 1478 end = 0
1481 1479 def next(start):
1482 1480 if all_patches:
1483 1481 return start
1484 1482 i = start
1485 1483 while i < len(self.series):
1486 1484 p, reason = self.pushable(i)
1487 1485 if p:
1488 1486 break
1489 1487 self.explain_pushable(i)
1490 1488 i += 1
1491 1489 return i
1492 1490 if len(self.applied) > 0:
1493 1491 p = self.applied[-1].name
1494 1492 try:
1495 1493 end = self.series.index(p)
1496 1494 except ValueError:
1497 1495 return 0
1498 1496 return next(end + 1)
1499 1497 return next(end)
1500 1498
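The index arithmetic in series_end above can be illustrated with a standalone sketch, assuming simplified stand-ins for the class state: `series` is the list of patch names, `applied` the names of applied patches in order, and `pushable` a guard-check predicate (all hypothetical substitutes for the real attributes and methods).

```python
def series_end(series, applied, pushable, all_patches=False):
    """Simplified sketch of queue.series_end: with all_patches=False,
    return the index of the next pushable patch (or len(series));
    with all_patches=True, the index just past the last applied one."""
    def nextidx(start):
        if all_patches:
            return start
        i = start
        # skip over unpushable (guarded) patches, as the real method does
        while i < len(series) and not pushable(series[i]):
            i += 1
        return i
    if applied:
        try:
            end = series.index(applied[-1])
        except ValueError:
            return 0
        return nextidx(end + 1)
    return nextidx(0)
```

For example, with `series = ['a', 'b', 'c', 'd']`, one applied patch `'a'`, and `'b'` guarded off, the next pushable index is 2.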
1501 1499 def appliedname(self, index):
1502 1500 pname = self.applied[index].name
1503 1501 if not self.ui.verbose:
1504 1502 p = pname
1505 1503 else:
1506 1504 p = str(self.series.index(pname)) + " " + pname
1507 1505 return p
1508 1506
1509 1507 def qimport(self, repo, files, patchname=None, rev=None, existing=None,
1510 1508 force=None, git=False):
1511 1509 def checkseries(patchname):
1512 1510 if patchname in self.series:
1513 1511 raise util.Abort(_('patch %s is already in the series file')
1514 1512 % patchname)
1515 1513 def checkfile(patchname):
1516 1514 if not force and os.path.exists(self.join(patchname)):
1517 1515 raise util.Abort(_('patch "%s" already exists')
1518 1516 % patchname)
1519 1517
1520 1518 if rev:
1521 1519 if files:
1522 1520 raise util.Abort(_('option "-r" not valid when importing '
1523 1521 'files'))
1524 1522 rev = cmdutil.revrange(repo, rev)
1525 1523 rev.sort(reverse=True)
1526 1524 if (len(files) > 1 or len(rev) > 1) and patchname:
1527 1525 raise util.Abort(_('option "-n" not valid when importing multiple '
1528 1526 'patches'))
1529 1527 i = 0
1530 1528 added = []
1531 1529 if rev:
1532 1530 # If mq patches are applied, we can only import revisions
1533 1531 # that form a linear path to qbase.
1534 1532 # Otherwise, they should form a linear path to a head.
1535 1533 heads = repo.changelog.heads(repo.changelog.node(rev[-1]))
1536 1534 if len(heads) > 1:
1537 1535 raise util.Abort(_('revision %d is the root of more than one '
1538 1536 'branch') % rev[-1])
1539 1537 if self.applied:
1540 1538 base = hex(repo.changelog.node(rev[0]))
1541 1539 if base in [n.rev for n in self.applied]:
1542 1540 raise util.Abort(_('revision %d is already managed')
1543 1541 % rev[0])
1544 1542 if heads != [bin(self.applied[-1].rev)]:
1545 1543 raise util.Abort(_('revision %d is not the parent of '
1546 1544 'the queue') % rev[0])
1547 1545 base = repo.changelog.rev(bin(self.applied[0].rev))
1548 1546 lastparent = repo.changelog.parentrevs(base)[0]
1549 1547 else:
1550 1548 if heads != [repo.changelog.node(rev[0])]:
1551 1549 raise util.Abort(_('revision %d has unmanaged children')
1552 1550 % rev[0])
1553 1551 lastparent = None
1554 1552
1555 1553 if git:
1556 1554 self.diffopts().git = True
1557 1555
1558 1556 for r in rev:
1559 1557 p1, p2 = repo.changelog.parentrevs(r)
1560 1558 n = repo.changelog.node(r)
1561 1559 if p2 != nullrev:
1562 1560 raise util.Abort(_('cannot import merge revision %d') % r)
1563 1561 if lastparent and lastparent != r:
1564 1562 raise util.Abort(_('revision %d is not the parent of %d')
1565 1563 % (r, lastparent))
1566 1564 lastparent = p1
1567 1565
1568 1566 if not patchname:
1569 1567 patchname = normname('%d.diff' % r)
1570 1568 self.check_reserved_name(patchname)
1571 1569 checkseries(patchname)
1572 1570 checkfile(patchname)
1573 1571 self.full_series.insert(0, patchname)
1574 1572
1575 1573 patchf = self.opener(patchname, "w")
1576 1574 patch.export(repo, [n], fp=patchf, opts=self.diffopts())
1577 1575 patchf.close()
1578 1576
1579 1577 se = statusentry(hex(n), patchname)
1580 1578 self.applied.insert(0, se)
1581 1579
1582 1580 added.append(patchname)
1583 1581 patchname = None
1584 1582 self.parse_series()
1585 1583 self.applied_dirty = 1
1586 1584
1587 1585 for filename in files:
1588 1586 if existing:
1589 1587 if filename == '-':
1590 1588 raise util.Abort(_('-e is incompatible with import from -'))
1591 1589 if not patchname:
1592 1590 patchname = normname(filename)
1593 1591 self.check_reserved_name(patchname)
1594 1592 if not os.path.isfile(self.join(patchname)):
1595 1593 raise util.Abort(_("patch %s does not exist") % patchname)
1596 1594 else:
1597 1595 try:
1598 1596 if filename == '-':
1599 1597 if not patchname:
1600 1598 raise util.Abort(_('need --name to import a patch from -'))
1601 1599 text = sys.stdin.read()
1602 1600 else:
1603 1601 text = url.open(self.ui, filename).read()
1604 1602 except (OSError, IOError):
1605 1603 raise util.Abort(_("unable to read %s") % filename)
1606 1604 if not patchname:
1607 1605 patchname = normname(os.path.basename(filename))
1608 1606 self.check_reserved_name(patchname)
1609 1607 checkfile(patchname)
1610 1608 patchf = self.opener(patchname, "w")
1611 1609 patchf.write(text)
1612 1610 if not force:
1613 1611 checkseries(patchname)
1614 1612 if patchname not in self.series:
1615 1613 index = self.full_series_end() + i
1616 1614 self.full_series[index:index] = [patchname]
1617 1615 self.parse_series()
1618 1616 self.ui.warn(_("adding %s to series file\n") % patchname)
1619 1617 i += 1
1620 1618 added.append(patchname)
1621 1619 patchname = None
1622 1620 self.series_dirty = 1
1623 1621 qrepo = self.qrepo()
1624 1622 if qrepo:
1625 1623 qrepo.add(added)
1626 1624
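The linearity check that qimport performs on --rev imports (revisions are walked newest-first, must contain no merges, and each must be the first parent of the previously imported one) can be sketched independently. Here `parentrevs` stands in for `repo.changelog.parentrevs` and the helper name is hypothetical.

```python
def check_linear(parentrevs, revs):
    """Sketch of qimport -r's linearity check over revs given
    newest-first; returns the first parent of the oldest revision."""
    lastparent = None
    for r in revs:
        p1, p2 = parentrevs(r)
        if p2 != -1:
            # merge revisions cannot be placed under mq control
            raise ValueError('cannot import merge revision %d' % r)
        if lastparent is not None and lastparent != r:
            raise ValueError('revision %d is not the parent of %d'
                             % (r, lastparent))
        lastparent = p1
    return lastparent
```

In a purely linear toy history where revision r's parents are (r-1, -1), importing [3, 2, 1] passes, while importing [3, 1] fails because revision 2 is skipped.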
1627 1625 def delete(ui, repo, *patches, **opts):
1628 1626 """remove patches from queue
1629 1627
1630 1628 The patches must not be applied, unless they are arguments to the
1631 1629 -r/--rev parameter. At least one patch or revision is required.
1632 1630
1633 1631 With --rev, mq will stop managing the named revisions (converting
1634 1632 them to regular mercurial changesets). The qfinish command should
1635 1633 be used as an alternative for qdelete -r, as the latter option is
1636 1634 deprecated.
1637 1635
1638 1636 With -k/--keep, the patch files are preserved in the patch
1639 1637 directory."""
1640 1638 q = repo.mq
1641 1639 q.delete(repo, patches, opts)
1642 1640 q.save_dirty()
1643 1641 return 0
1644 1642
1645 1643 def applied(ui, repo, patch=None, **opts):
1646 1644 """print the patches already applied"""
1647 1645 q = repo.mq
1648 1646 if patch:
1649 1647 if patch not in q.series:
1650 1648 raise util.Abort(_("patch %s is not in series file") % patch)
1651 1649 end = q.series.index(patch) + 1
1652 1650 else:
1653 1651 end = q.series_end(True)
1654 1652 return q.qseries(repo, length=end, status='A', summary=opts.get('summary'))
1655 1653
1656 1654 def unapplied(ui, repo, patch=None, **opts):
1657 1655 """print the patches not yet applied"""
1658 1656 q = repo.mq
1659 1657 if patch:
1660 1658 if patch not in q.series:
1661 1659 raise util.Abort(_("patch %s is not in series file") % patch)
1662 1660 start = q.series.index(patch) + 1
1663 1661 else:
1664 1662 start = q.series_end(True)
1665 1663 q.qseries(repo, start=start, status='U', summary=opts.get('summary'))
1666 1664
1667 1665 def qimport(ui, repo, *filename, **opts):
1668 1666 """import a patch
1669 1667
1670 1668 The patch is inserted into the series after the last applied
1671 1669 patch. If no patches have been applied, qimport prepends the patch
1672 1670 to the series.
1673 1671
1674 1672 The patch will have the same name as its source file unless you
1675 1673 give it a new one with -n/--name.
1676 1674
1677 1675 You can register an existing patch inside the patch directory with
1678 1676 the -e/--existing flag.
1679 1677
1680 1678 With -f/--force, an existing patch of the same name will be
1681 1679 overwritten.
1682 1680
1683 1681 An existing changeset may be placed under mq control with -r/--rev
1684 1682 (e.g. qimport --rev tip -n patch will place tip under mq control).
1685 1683 With -g/--git, patches imported with --rev will use the git diff
1686 1684 format. See the diffs help topic for information on why this is
1687 1685 important for preserving rename/copy information and permission
1688 1686 changes.
1689 1687
1690 1688 To import a patch from standard input, pass - as the patch file.
1691 1689 When importing from standard input, a patch name must be specified
1692 1690 using the --name flag.
1693 1691 """
1694 1692 q = repo.mq
1695 1693 q.qimport(repo, filename, patchname=opts['name'],
1696 1694 existing=opts['existing'], force=opts['force'], rev=opts['rev'],
1697 1695 git=opts['git'])
1698 1696 q.save_dirty()
1699 1697 return 0
1700 1698
1701 1699 def init(ui, repo, **opts):
1702 1700 """init a new queue repository
1703 1701
1704 1702 The queue repository is unversioned by default. If
1705 1703 -c/--create-repo is specified, qinit will create a separate nested
1706 1704 repository for patches (qinit -c may also be run later to convert
1707 1705 an unversioned patch repository into a versioned one). You can use
1708 1706 qcommit to commit changes to this queue repository."""
1709 1707 q = repo.mq
1710 1708 r = q.init(repo, create=opts['create_repo'])
1711 1709 q.save_dirty()
1712 1710 if r:
1713 1711 if not os.path.exists(r.wjoin('.hgignore')):
1714 1712 fp = r.wopener('.hgignore', 'w')
1715 1713 fp.write('^\\.hg\n')
1716 1714 fp.write('^\\.mq\n')
1717 1715 fp.write('syntax: glob\n')
1718 1716 fp.write('status\n')
1719 1717 fp.write('guards\n')
1720 1718 fp.close()
1721 1719 if not os.path.exists(r.wjoin('series')):
1722 1720 r.wopener('series', 'w').close()
1723 1721 r.add(['.hgignore', 'series'])
1724 1722 commands.add(ui, r)
1725 1723 return 0
1726 1724
1727 1725 def clone(ui, source, dest=None, **opts):
1728 1726 '''clone main and patch repository at same time
1729 1727
1730 1728 If source is local, destination will have no patches applied. If
1731 1729 source is remote, this command cannot check whether patches are
1732 1730 applied in source, and so cannot guarantee that patches are not
1733 1731 applied in destination. If you clone a remote repository, make
1734 1732 sure it has no patches applied first.
1735 1733
1736 1734 The source patch repository is looked for in <src>/.hg/patches
1737 1735 by default. Use -p <url> to change it.
1738 1736
1739 1737 The patch directory must be a nested mercurial repository, as
1740 1738 would be created by qinit -c.
1741 1739 '''
1742 1740 def patchdir(repo):
1743 1741 url = repo.url()
1744 1742 if url.endswith('/'):
1745 1743 url = url[:-1]
1746 1744 return url + '/.hg/patches'
1747 1745 if dest is None:
1748 1746 dest = hg.defaultdest(source)
1749 1747 sr = hg.repository(cmdutil.remoteui(ui, opts), ui.expandpath(source))
1750 1748 if opts['patches']:
1751 1749 patchespath = ui.expandpath(opts['patches'])
1752 1750 else:
1753 1751 patchespath = patchdir(sr)
1754 1752 try:
1755 1753 hg.repository(ui, patchespath)
1756 1754 except error.RepoError:
1757 1755 raise util.Abort(_('versioned patch repository not found'
1758 1756 ' (see qinit -c)'))
1759 1757 qbase, destrev = None, None
1760 1758 if sr.local():
1761 1759 if sr.mq.applied:
1762 1760 qbase = bin(sr.mq.applied[0].rev)
1763 1761 if not hg.islocal(dest):
1764 1762 heads = set(sr.heads())
1765 1763 destrev = list(heads.difference(sr.heads(qbase)))
1766 1764 destrev.append(sr.changelog.parents(qbase)[0])
1767 1765 elif sr.capable('lookup'):
1768 1766 try:
1769 1767 qbase = sr.lookup('qbase')
1770 1768 except error.RepoError:
1771 1769 pass
1772 1770 ui.note(_('cloning main repository\n'))
1773 1771 sr, dr = hg.clone(ui, sr.url(), dest,
1774 1772 pull=opts['pull'],
1775 1773 rev=destrev,
1776 1774 update=False,
1777 1775 stream=opts['uncompressed'])
1778 1776 ui.note(_('cloning patch repository\n'))
1779 1777 hg.clone(ui, opts['patches'] or patchdir(sr), patchdir(dr),
1780 1778 pull=opts['pull'], update=not opts['noupdate'],
1781 1779 stream=opts['uncompressed'])
1782 1780 if dr.local():
1783 1781 if qbase:
1784 1782 ui.note(_('stripping applied patches from destination '
1785 1783 'repository\n'))
1786 1784 dr.mq.strip(dr, qbase, update=False, backup=None)
1787 1785 if not opts['noupdate']:
1788 1786 ui.note(_('updating destination repository\n'))
1789 1787 hg.update(dr, dr.changelog.tip())
1790 1788
1791 1789 def commit(ui, repo, *pats, **opts):
1792 1790 """commit changes in the queue repository"""
1793 1791 q = repo.mq
1794 1792 r = q.qrepo()
1795 1793 if not r: raise util.Abort(_('no queue repository'))
1796 1794 commands.commit(r.ui, r, *pats, **opts)
1797 1795
1798 1796 def series(ui, repo, **opts):
1799 1797 """print the entire series file"""
1800 1798 repo.mq.qseries(repo, missing=opts['missing'], summary=opts['summary'])
1801 1799 return 0
1802 1800
1803 1801 def top(ui, repo, **opts):
1804 1802 """print the name of the current patch"""
1805 1803 q = repo.mq
1806 1804 t = q.applied and q.series_end(True) or 0
1807 1805 if t:
1808 1806 return q.qseries(repo, start=t-1, length=1, status='A',
1809 1807 summary=opts.get('summary'))
1810 1808 else:
1811 1809 ui.write(_("no patches applied\n"))
1812 1810 return 1
1813 1811
1814 1812 def next(ui, repo, **opts):
1815 1813 """print the name of the next patch"""
1816 1814 q = repo.mq
1817 1815 end = q.series_end()
1818 1816 if end == len(q.series):
1819 1817 ui.write(_("all patches applied\n"))
1820 1818 return 1
1821 1819 return q.qseries(repo, start=end, length=1, summary=opts.get('summary'))
1822 1820
1823 1821 def prev(ui, repo, **opts):
1824 1822 """print the name of the previous patch"""
1825 1823 q = repo.mq
1826 1824 l = len(q.applied)
1827 1825 if l == 1:
1828 1826 ui.write(_("only one patch applied\n"))
1829 1827 return 1
1830 1828 if not l:
1831 1829 ui.write(_("no patches applied\n"))
1832 1830 return 1
1833 1831 return q.qseries(repo, start=l-2, length=1, status='A',
1834 1832 summary=opts.get('summary'))
1835 1833
1836 1834 def setupheaderopts(ui, opts):
1837 1835 def do(opt,val):
1838 1836 if not opts[opt] and opts['current' + opt]:
1839 1837 opts[opt] = val
1840 1838 do('user', ui.username())
1841 1839 do('date', "%d %d" % util.makedate())
1842 1840
1843 1841 def new(ui, repo, patch, *args, **opts):
1844 1842 """create a new patch
1845 1843
1846 1844 qnew creates a new patch on top of the currently-applied patch (if
1847 1845 any). It will refuse to run if there are any outstanding changes
1848 1846 unless -f/--force is specified, in which case the patch will be
1849 1847 initialized with them. You may also use -I/--include,
1850 1848 -X/--exclude, and/or a list of files after the patch name to add
1851 1849 only changes to matching files to the new patch, leaving the rest
1852 1850 as uncommitted modifications.
1853 1851
1854 1852 -u/--user and -d/--date can be used to set the (given) user and
1855 1853 date, respectively. -U/--currentuser and -D/--currentdate set user
1856 1854 to current user and date to current date.
1857 1855
1858 1856 -e/--edit, -m/--message or -l/--logfile set the patch header as
1859 1857 well as the commit message. If none is specified, the header is
1860 1858 empty and the commit message is '[mq]: PATCH'.
1861 1859
1862 1860 Use the -g/--git option to keep the patch in the git extended diff
1863 1861 format. Read the diffs help topic for more information on why this
1864 1862 is important for preserving permission changes and copy/rename
1865 1863 information.
1866 1864 """
1867 1865 msg = cmdutil.logmessage(opts)
1868 1866 def getmsg(): return ui.edit(msg, ui.username())
1869 1867 q = repo.mq
1870 1868 opts['msg'] = msg
1871 1869 if opts.get('edit'):
1872 1870 opts['msg'] = getmsg
1873 1871 else:
1874 1872 opts['msg'] = msg
1875 1873 setupheaderopts(ui, opts)
1876 1874 q.new(repo, patch, *args, **opts)
1877 1875 q.save_dirty()
1878 1876 return 0
1879 1877
1880 1878 def refresh(ui, repo, *pats, **opts):
1881 1879 """update the current patch
1882 1880
1883 1881 If any file patterns are provided, the refreshed patch will
1884 1882 contain only the modifications that match those patterns; the
1885 1883 remaining modifications will remain in the working directory.
1886 1884
1887 1885 If -s/--short is specified, files currently included in the patch
1888 1886 will be refreshed just like matched files and remain in the patch.
1889 1887
1890 1888 hg add/remove/copy/rename work as usual, though you might want to
1891 1889 use git-style patches (-g/--git or [diff] git=1) to track copies
1892 1890 and renames. See the diffs help topic for more information on the
1893 1891 git diff format.
1894 1892 """
1895 1893 q = repo.mq
1896 1894 message = cmdutil.logmessage(opts)
1897 1895 if opts['edit']:
1898 1896 if not q.applied:
1899 1897 ui.write(_("no patches applied\n"))
1900 1898 return 1
1901 1899 if message:
1902 1900 raise util.Abort(_('option "-e" incompatible with "-m" or "-l"'))
1903 1901 patch = q.applied[-1].name
1904 1902 ph = q.readheaders(patch)
1905 1903 message = ui.edit('\n'.join(ph.message), ph.user or ui.username())
1906 1904 setupheaderopts(ui, opts)
1907 1905 ret = q.refresh(repo, pats, msg=message, **opts)
1908 1906 q.save_dirty()
1909 1907 return ret
1910 1908
1911 1909 def diff(ui, repo, *pats, **opts):
1912 1910 """diff of the current patch and subsequent modifications
1913 1911
1914 1912 Shows a diff which includes the current patch as well as any
1915 1913 changes which have been made in the working directory since the
1916 1914 last refresh (thus showing what the current patch would become
1917 1915 after a qrefresh).
1918 1916
1919 1917 Use 'hg diff' if you only want to see the changes made since the
1920 1918 last qrefresh, or 'hg export qtip' if you want to see changes made
1921 1919 by the current patch without including changes made since the
1922 1920 qrefresh.
1923 1921 """
1924 1922 repo.mq.diff(repo, pats, opts)
1925 1923 return 0
1926 1924
1927 1925 def fold(ui, repo, *files, **opts):
1928 1926 """fold the named patches into the current patch
1929 1927
1930 1928 Patches must not yet be applied. Each patch will be successively
1931 1929 applied to the current patch in the order given. If all the
1932 1930 patches apply successfully, the current patch will be refreshed
1933 1931 with the new cumulative patch, and the folded patches will be
1934 1932 deleted. With -k/--keep, the folded patch files will not be
1935 1933 removed afterwards.
1936 1934
1937 1935 The header for each folded patch will be concatenated with the
1938 1936 current patch header, separated by a line of '* * *'."""
1939 1937
1940 1938 q = repo.mq
1941 1939
1942 1940 if not files:
1943 1941 raise util.Abort(_('qfold requires at least one patch name'))
1944 1942 if not q.check_toppatch(repo):
1945 1943 raise util.Abort(_('No patches applied'))
1946 1944
1947 1945 message = cmdutil.logmessage(opts)
1948 1946 if opts['edit']:
1949 1947 if message:
1950 1948 raise util.Abort(_('option "-e" incompatible with "-m" or "-l"'))
1951 1949
1952 1950 parent = q.lookup('qtip')
1953 1951 patches = []
1954 1952 messages = []
1955 1953 for f in files:
1956 1954 p = q.lookup(f)
1957 1955 if p in patches or p == parent:
1958 1956 ui.warn(_('Skipping already folded patch %s\n') % p)
1959 1957 if q.isapplied(p):
1960 1958 raise util.Abort(_('qfold cannot fold already applied patch %s') % p)
1961 1959 patches.append(p)
1962 1960
1963 1961 for p in patches:
1964 1962 if not message:
1965 1963 ph = q.readheaders(p)
1966 1964 if ph.message:
1967 1965 messages.append(ph.message)
1968 1966 pf = q.join(p)
1969 1967 (patchsuccess, files, fuzz) = q.patch(repo, pf)
1970 1968 if not patchsuccess:
1971 1969 raise util.Abort(_('Error folding patch %s') % p)
1972 1970 patch.updatedir(ui, repo, files)
1973 1971
1974 1972 if not message:
1975 1973 ph = q.readheaders(parent)
1976 1974 message, user = ph.message, ph.user
1977 1975 for msg in messages:
1978 1976 message.append('* * *')
1979 1977 message.extend(msg)
1980 1978 message = '\n'.join(message)
1981 1979
1982 1980 if opts['edit']:
1983 1981 message = ui.edit(message, user or ui.username())
1984 1982
1985 1983 q.refresh(repo, msg=message)
1986 1984 q.delete(repo, patches, opts)
1987 1985 q.save_dirty()
1988 1986
1989 1987 def goto(ui, repo, patch, **opts):
1990 1988 '''push or pop patches until named patch is at top of stack'''
1991 1989 q = repo.mq
1992 1990 patch = q.lookup(patch)
1993 1991 if q.isapplied(patch):
1994 1992 ret = q.pop(repo, patch, force=opts['force'])
1995 1993 else:
1996 1994 ret = q.push(repo, patch, force=opts['force'])
1997 1995 q.save_dirty()
1998 1996 return ret
1999 1997
2000 1998 def guard(ui, repo, *args, **opts):
2001 1999 '''set or print guards for a patch
2002 2000
2003 2001 Guards control whether a patch can be pushed. A patch with no
2004 2002 guards is always pushed. A patch with a positive guard ("+foo") is
2005 2003 pushed only if the qselect command has activated it. A patch with
2006 2004 a negative guard ("-foo") is never pushed if the qselect command
2007 2005 has activated it.
2008 2006
2009 2007 With no arguments, print the currently active guards.
2010 2008 With arguments, set guards for the named patch.
2011 2009 NOTE: Specifying negative guards now requires '--'.
2012 2010
2013 2011 To set guards on another patch:
2014 2012 hg qguard -- other.patch +2.6.17 -stable
2015 2013 '''
2016 2014 def status(idx):
2017 2015 guards = q.series_guards[idx] or ['unguarded']
2018 2016 ui.write('%s: %s\n' % (q.series[idx], ' '.join(guards)))
2019 2017 q = repo.mq
2020 2018 patch = None
2021 2019 args = list(args)
2022 2020 if opts['list']:
2023 2021 if args or opts['none']:
2024 2022 raise util.Abort(_('cannot mix -l/--list with options or arguments'))
2025 2023 for i in xrange(len(q.series)):
2026 2024 status(i)
2027 2025 return
2028 2026 if not args or args[0][0:1] in '-+':
2029 2027 if not q.applied:
2030 2028 raise util.Abort(_('no patches applied'))
2031 2029 patch = q.applied[-1].name
2032 2030 if patch is None and args[0][0:1] not in '-+':
2033 2031 patch = args.pop(0)
2034 2032 if patch is None:
2035 2033 raise util.Abort(_('no patch to work with'))
2036 2034 if args or opts['none']:
2037 2035 idx = q.find_series(patch)
2038 2036 if idx is None:
2039 2037 raise util.Abort(_('no patch named %s') % patch)
2040 2038 q.set_guards(idx, args)
2041 2039 q.save_dirty()
2042 2040 else:
2043 2041 status(q.series.index(q.lookup(patch)))
2044 2042
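The push rule described in the qguard docstring can be sketched as a pure function, a hypothetical stand-in for the real `queue.pushable` method: a matching negative guard always blocks the patch, a patch with positive guards needs at least one active match, and an unguarded patch is always pushed.

```python
def pushable(patch_guards, active):
    """Sketch of mq's guard rule. patch_guards is a list like
    ['+stable', '-old']; active is the set of selected guard names."""
    pos = [g[1:] for g in patch_guards if g.startswith('+')]
    neg = [g[1:] for g in patch_guards if g.startswith('-')]
    if any(g in active for g in neg):
        return False                 # a matching negative guard blocks
    if pos:
        return any(g in active for g in pos)  # need one positive match
    return True                      # unguarded, or negatives don't match
```

This also captures the no-active-guards case: with an empty selection, positively guarded patches are skipped and negatively guarded ones are pushed.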
2045 2043 def header(ui, repo, patch=None):
2046 2044 """print the header of the topmost or specified patch"""
2047 2045 q = repo.mq
2048 2046
2049 2047 if patch:
2050 2048 patch = q.lookup(patch)
2051 2049 else:
2052 2050 if not q.applied:
2053 2051 ui.write('no patches applied\n')
2054 2052 return 1
2055 2053 patch = q.lookup('qtip')
2056 2054 ph = repo.mq.readheaders(patch)
2057 2055
2058 2056 ui.write('\n'.join(ph.message) + '\n')
2059 2057
2060 2058 def lastsavename(path):
2061 2059 (directory, base) = os.path.split(path)
2062 2060 names = os.listdir(directory)
2063 2061 namere = re.compile("%s.([0-9]+)" % base)
2064 2062 maxindex = None
2065 2063 maxname = None
2066 2064 for f in names:
2067 2065 m = namere.match(f)
2068 2066 if m:
2069 2067 index = int(m.group(1))
2070 2068 if maxindex is None or index > maxindex:
2071 2069 maxindex = index
2072 2070 maxname = f
2073 2071 if maxname:
2074 2072 return (os.path.join(directory, maxname), maxindex)
2075 2073 return (None, None)
2076 2074
2077 2075 def savename(path):
2078 2076 (last, index) = lastsavename(path)
2079 2077 if last is None:
2080 2078 index = 0
2081 2079 newpath = path + ".%d" % (index + 1)
2082 2080 return newpath
2083 2081
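The save-name scheme used by lastsavename/savename above (find the highest existing `<base>.<n>` suffix and return `<base>.<n+1>`) can be sketched over a plain list of names instead of a directory listing. Unlike the regex above, this sketch escapes the base name and anchors the match; the helper name is hypothetical.

```python
import re

def next_savename(base, existing):
    """Return base.<n+1>, where n is the highest numeric suffix among
    names matching base.<digits> (0 if there is none)."""
    namere = re.compile(r"%s\.([0-9]+)$" % re.escape(base))
    indexes = [int(m.group(1)) for m in map(namere.match, existing) if m]
    return "%s.%d" % (base, (max(indexes) if indexes else 0) + 1)
```

So with `['patches.1', 'patches.3', 'other']` already present, the next name for base `patches` is `patches.4`.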
2084 2082 def push(ui, repo, patch=None, **opts):
2085 2083 """push the next patch onto the stack
2086 2084
2087 2085 When -f/--force is applied, all local changes in patched files
2088 2086 will be lost.
2089 2087 """
2090 2088 q = repo.mq
2091 2089 mergeq = None
2092 2090
2093 2091 if opts['merge']:
2094 2092 if opts['name']:
2095 2093 newpath = repo.join(opts['name'])
2096 2094 else:
2097 2095 newpath, i = lastsavename(q.path)
2098 2096 if not newpath:
2099 2097 ui.warn(_("no saved queues found, please use -n\n"))
2100 2098 return 1
2101 2099 mergeq = queue(ui, repo.join(""), newpath)
2102 2100 ui.warn(_("merging with queue at: %s\n") % mergeq.path)
2103 2101 ret = q.push(repo, patch, force=opts['force'], list=opts['list'],
2104 2102 mergeq=mergeq, all=opts.get('all'))
2105 2103 return ret
2106 2104
2107 2105 def pop(ui, repo, patch=None, **opts):
2108 2106 """pop the current patch off the stack
2109 2107
2110 2108 By default, pops off the top of the patch stack. If given a patch
2111 2109 name, keeps popping off patches until the named patch is at the
2112 2110 top of the stack.
2113 2111 """
2114 2112 localupdate = True
2115 2113 if opts['name']:
2116 2114 q = queue(ui, repo.join(""), repo.join(opts['name']))
2117 2115 ui.warn(_('using patch queue: %s\n') % q.path)
2118 2116 localupdate = False
2119 2117 else:
2120 2118 q = repo.mq
2121 2119 ret = q.pop(repo, patch, force=opts['force'], update=localupdate,
2122 2120 all=opts['all'])
2123 2121 q.save_dirty()
2124 2122 return ret
2125 2123
2126 2124 def rename(ui, repo, patch, name=None, **opts):
2127 2125 """rename a patch
2128 2126
2129 2127 With one argument, renames the current patch to PATCH1.
2130 2128 With two arguments, renames PATCH1 to PATCH2."""
2131 2129
2132 2130 q = repo.mq
2133 2131
2134 2132 if not name:
2135 2133 name = patch
2136 2134 patch = None
2137 2135
2138 2136 if patch:
2139 2137 patch = q.lookup(patch)
2140 2138 else:
2141 2139 if not q.applied:
2142 2140 ui.write(_('no patches applied\n'))
2143 2141 return
2144 2142 patch = q.lookup('qtip')
2145 2143 absdest = q.join(name)
2146 2144 if os.path.isdir(absdest):
2147 2145 name = normname(os.path.join(name, os.path.basename(patch)))
2148 2146 absdest = q.join(name)
2149 2147 if os.path.exists(absdest):
2150 2148 raise util.Abort(_('%s already exists') % absdest)
2151 2149
2152 2150 if name in q.series:
2153 2151 raise util.Abort(_('A patch named %s already exists in the series file') % name)
2154 2152
2155 2153 if ui.verbose:
2156 2154 ui.write('renaming %s to %s\n' % (patch, name))
2157 2155 i = q.find_series(patch)
2158 2156 guards = q.guard_re.findall(q.full_series[i])
2159 2157 q.full_series[i] = name + ''.join([' #' + g for g in guards])
2160 2158 q.parse_series()
2161 2159 q.series_dirty = 1
2162 2160
2163 2161 info = q.isapplied(patch)
2164 2162 if info:
2165 2163 q.applied[info[0]] = statusentry(info[1], name)
2166 2164 q.applied_dirty = 1
2167 2165
2168 2166 util.rename(q.join(patch), absdest)
2169 2167 r = q.qrepo()
2170 2168 if r:
2171 2169 wlock = r.wlock()
2172 2170 try:
2173 2171 if r.dirstate[patch] == 'a':
2174 2172 r.dirstate.forget(patch)
2175 2173 r.dirstate.add(name)
2176 2174 else:
2177 2175 if r.dirstate[name] == 'r':
2178 2176 r.undelete([name])
2179 2177 r.copy(patch, name)
2180 2178 r.remove([patch], False)
2181 2179 finally:
2182 2180 wlock.release()
2183 2181
2184 2182 q.save_dirty()
2185 2183
2186 2184 def restore(ui, repo, rev, **opts):
2187 2185 """restore the queue state saved by a revision"""
2188 2186 rev = repo.lookup(rev)
2189 2187 q = repo.mq
2190 2188 q.restore(repo, rev, delete=opts['delete'],
2191 2189 qupdate=opts['update'])
2192 2190 q.save_dirty()
2193 2191 return 0
2194 2192
2195 2193 def save(ui, repo, **opts):
2196 2194 """save current queue state"""
2197 2195 q = repo.mq
2198 2196 message = cmdutil.logmessage(opts)
2199 2197 ret = q.save(repo, msg=message)
2200 2198 if ret:
2201 2199 return ret
2202 2200 q.save_dirty()
2203 2201 if opts['copy']:
2204 2202 path = q.path
2205 2203 if opts['name']:
2206 2204 newpath = os.path.join(q.basepath, opts['name'])
2207 2205 if os.path.exists(newpath):
2208 2206 if not os.path.isdir(newpath):
2209 2207 raise util.Abort(_('destination %s exists and is not '
2210 2208 'a directory') % newpath)
2211 2209 if not opts['force']:
2212 2210 raise util.Abort(_('destination %s exists, '
2213 2211 'use -f to force') % newpath)
2214 2212 else:
2215 2213 newpath = savename(path)
2216 2214 ui.warn(_("copy %s to %s\n") % (path, newpath))
2217 2215 util.copyfiles(path, newpath)
2218 2216 if opts['empty']:
2219 2217 try:
2220 2218 os.unlink(q.join(q.status_path))
2221 2219 except OSError:
2222 2220 pass
2223 2221 return 0
2224 2222
2225 2223 def strip(ui, repo, rev, **opts):
2226 2224 """strip a revision and all its descendants from the repository
2227 2225
2228 2226 If one of the working directory's parent revisions is stripped, the
2229 2227 working directory will be updated to the parent of the stripped
2230 2228 revision.
2231 2229 """
2232 2230 backup = 'all'
2233 2231 if opts['backup']:
2234 2232 backup = 'strip'
2235 2233 elif opts['nobackup']:
2236 2234 backup = 'none'
2237 2235
2238 2236 rev = repo.lookup(rev)
2239 2237 p = repo.dirstate.parents()
2240 2238 cl = repo.changelog
2241 2239 update = True
2242 2240 if p[0] == nullid:
2243 2241 update = False
2244 2242 elif p[1] == nullid and rev != cl.ancestor(p[0], rev):
2245 2243 update = False
2246 2244 elif rev not in (cl.ancestor(p[0], rev), cl.ancestor(p[1], rev)):
2247 2245 update = False
2248 2246
2249 2247 repo.mq.strip(repo, rev, backup=backup, update=update, force=opts['force'])
2250 2248 return 0
2251 2249
2252 2250 def select(ui, repo, *args, **opts):
2253 2251 '''set or print guarded patches to push
2254 2252
2255 2253 Use the qguard command to set or print guards on a patch, then use
2256 2254 qselect to tell mq which guards to use. A patch will be pushed if
2257 2255 it has no guards or any positive guards match the currently
2258 2256 selected guard, but will not be pushed if any negative guards
2259 2257 match the current guard. For example:
2260 2258
2261 2259 qguard foo.patch -stable (negative guard)
2262 2260 qguard bar.patch +stable (positive guard)
2263 2261 qselect stable
2264 2262
2265 2263 This activates the "stable" guard. mq will skip foo.patch (because
2266 2264 it has a negative match) but push bar.patch (because it has a
2267 2265 positive match).
2268 2266
2269 2267 With no arguments, prints the currently active guards.
2270 2268 With one argument, sets the active guard.
2271 2269
2272 2270 Use -n/--none to deactivate guards (no other arguments needed).
2273 2271 When no guards are active, patches with positive guards are
2274 2272 skipped and patches with negative guards are pushed.
2275 2273
2276 2274 qselect can change the guards on applied patches. It does not pop
2277 2275 guarded patches by default. Use --pop to pop back to the last
2278 2276 applied patch that is not guarded. Use --reapply (which implies
2279 2277 --pop) to push back to the current patch afterwards, but skip
2280 2278 guarded patches.
2281 2279
2282 2280 Use -s/--series to print a list of all guards in the series file
2283 2281 (no other arguments needed). Use -v for more information.'''
2284 2282
2285 2283 q = repo.mq
2286 2284 guards = q.active()
2287 2285 if args or opts['none']:
2288 2286 old_unapplied = q.unapplied(repo)
2289 2287 old_guarded = [i for i in xrange(len(q.applied)) if
2290 2288 not q.pushable(i)[0]]
2291 2289 q.set_active(args)
2292 2290 q.save_dirty()
2293 2291 if not args:
2294 2292 ui.status(_('guards deactivated\n'))
2295 2293 if not opts['pop'] and not opts['reapply']:
2296 2294 unapplied = q.unapplied(repo)
2297 2295 guarded = [i for i in xrange(len(q.applied))
2298 2296 if not q.pushable(i)[0]]
2299 2297 if len(unapplied) != len(old_unapplied):
2300 2298 ui.status(_('number of unguarded, unapplied patches has '
2301 2299 'changed from %d to %d\n') %
2302 2300 (len(old_unapplied), len(unapplied)))
2303 2301 if len(guarded) != len(old_guarded):
2304 2302 ui.status(_('number of guarded, applied patches has changed '
2305 2303 'from %d to %d\n') %
2306 2304 (len(old_guarded), len(guarded)))
2307 2305 elif opts['series']:
2308 2306 guards = {}
2309 2307 noguards = 0
2310 2308 for gs in q.series_guards:
2311 2309 if not gs:
2312 2310 noguards += 1
2313 2311 for g in gs:
2314 2312 guards.setdefault(g, 0)
2315 2313 guards[g] += 1
2316 2314 if ui.verbose:
2317 2315 guards['NONE'] = noguards
2318 2316 guards = guards.items()
2319 2317 guards.sort(lambda a, b: cmp(a[0][1:], b[0][1:]))
2320 2318 if guards:
2321 2319 ui.note(_('guards in series file:\n'))
2322 2320 for guard, count in guards:
2323 2321 ui.note('%2d ' % count)
2324 2322 ui.write(guard, '\n')
2325 2323 else:
2326 2324 ui.note(_('no guards in series file\n'))
2327 2325 else:
2328 2326 if guards:
2329 2327 ui.note(_('active guards:\n'))
2330 2328 for g in guards:
2331 2329 ui.write(g, '\n')
2332 2330 else:
2333 2331 ui.write(_('no active guards\n'))
2334 2332 reapply = opts['reapply'] and q.applied and q.appliedname(-1)
2335 2333 popped = False
2336 2334 if opts['pop'] or opts['reapply']:
2337 2335 for i in xrange(len(q.applied)):
2338 2336 pushable, reason = q.pushable(i)
2339 2337 if not pushable:
2340 2338 ui.status(_('popping guarded patches\n'))
2341 2339 popped = True
2342 2340 if i == 0:
2343 2341 q.pop(repo, all=True)
2344 2342 else:
2345 2343 q.pop(repo, i-1)
2346 2344 break
2347 2345 if popped:
2348 2346 try:
2349 2347 if reapply:
2350 2348 ui.status(_('reapplying unguarded patches\n'))
2351 2349 q.push(repo, reapply)
2352 2350 finally:
2353 2351 q.save_dirty()
2354 2352
2355 2353 def finish(ui, repo, *revrange, **opts):
2356 2354 """move applied patches into repository history
2357 2355
2358 2356 Finishes the specified revisions (corresponding to applied
2359 2357 patches) by moving them out of mq control into regular repository
2360 2358 history.
2361 2359
2362 2360 Accepts a revision range or the -a/--applied option. If --applied
2363 2361 is specified, all applied mq revisions are removed from mq
2364 2362 control. Otherwise, the given revisions must be at the base of the
2365 2363 stack of applied patches.
2366 2364
2367 2365 This can be especially useful if your changes have been applied to
2368 2366 an upstream repository, or if you are about to push your changes
2369 2367 to upstream.
2370 2368 """
2371 2369 if not opts['applied'] and not revrange:
2372 2370 raise util.Abort(_('no revisions specified'))
2373 2371 elif opts['applied']:
2374 2372 revrange = ('qbase:qtip',) + revrange
2375 2373
2376 2374 q = repo.mq
2377 2375 if not q.applied:
2378 2376 ui.status(_('no patches applied\n'))
2379 2377 return 0
2380 2378
2381 2379 revs = cmdutil.revrange(repo, revrange)
2382 2380 q.finish(repo, revs)
2383 2381 q.save_dirty()
2384 2382 return 0
2385 2383
2386 2384 def reposetup(ui, repo):
2387 2385 class mqrepo(repo.__class__):
2388 2386 def abort_if_wdir_patched(self, errmsg, force=False):
2389 2387 if self.mq.applied and not force:
2390 2388 parent = hex(self.dirstate.parents()[0])
2391 2389 if parent in [s.rev for s in self.mq.applied]:
2392 2390 raise util.Abort(errmsg)
2393 2391
2394 2392 def commit(self, *args, **opts):
2395 2393 if len(args) >= 6:
2396 2394 force = args[5]
2397 2395 else:
2398 2396 force = opts.get('force')
2399 2397 self.abort_if_wdir_patched(
2400 2398 _('cannot commit over an applied mq patch'),
2401 2399 force)
2402 2400
2403 2401 return super(mqrepo, self).commit(*args, **opts)
2404 2402
2405 2403 def push(self, remote, force=False, revs=None):
2406 2404 if self.mq.applied and not force and not revs:
2407 2405 raise util.Abort(_('source has mq patches applied'))
2408 2406 return super(mqrepo, self).push(remote, force, revs)
2409 2407
2410 2408 def tags(self):
2411 2409 if self.tagscache:
2412 2410 return self.tagscache
2413 2411
2414 2412 tagscache = super(mqrepo, self).tags()
2415 2413
2416 2414 q = self.mq
2417 2415 if not q.applied:
2418 2416 return tagscache
2419 2417
2420 2418 mqtags = [(bin(patch.rev), patch.name) for patch in q.applied]
2421 2419
2422 2420 if mqtags[-1][0] not in self.changelog.nodemap:
2423 2421 self.ui.warn(_('mq status file refers to unknown node %s\n')
2424 2422 % short(mqtags[-1][0]))
2425 2423 return tagscache
2426 2424
2427 2425 mqtags.append((mqtags[-1][0], 'qtip'))
2428 2426 mqtags.append((mqtags[0][0], 'qbase'))
2429 2427 mqtags.append((self.changelog.parents(mqtags[0][0])[0], 'qparent'))
2430 2428 for patch in mqtags:
2431 2429 if patch[1] in tagscache:
2432 2430 self.ui.warn(_('Tag %s overrides mq patch of the same name\n')
2433 2431 % patch[1])
2434 2432 else:
2435 2433 tagscache[patch[1]] = patch[0]
2436 2434
2437 2435 return tagscache
2438 2436
2439 2437 def _branchtags(self, partial, lrev):
2440 2438 q = self.mq
2441 2439 if not q.applied:
2442 2440 return super(mqrepo, self)._branchtags(partial, lrev)
2443 2441
2444 2442 cl = self.changelog
2445 2443 qbasenode = bin(q.applied[0].rev)
2446 2444 if qbasenode not in cl.nodemap:
2447 2445 self.ui.warn(_('mq status file refers to unknown node %s\n')
2448 2446 % short(qbasenode))
2449 2447 return super(mqrepo, self)._branchtags(partial, lrev)
2450 2448
2451 2449 qbase = cl.rev(qbasenode)
2452 2450 start = lrev + 1
2453 2451 if start < qbase:
2454 2452 # update the cache (excluding the patches) and save it
2455 2453 self._updatebranchcache(partial, lrev+1, qbase)
2456 2454 self._writebranchcache(partial, cl.node(qbase-1), qbase-1)
2457 2455 start = qbase
2458 2456 # if start = qbase, the cache is as updated as it should be.
2459 2457 # if start > qbase, the cache includes (part of) the patches.
2460 2458 # we might as well use it, but we won't save it.
2461 2459
2462 2460 # update the cache up to the tip
2463 2461 self._updatebranchcache(partial, start, len(cl))
2464 2462
2465 2463 return partial
2466 2464
2467 2465 if repo.local():
2468 2466 repo.__class__ = mqrepo
2469 2467 repo.mq = queue(ui, repo.join(""))
2470 2468
2471 2469 def mqimport(orig, ui, repo, *args, **kwargs):
2472 2470 if hasattr(repo, 'abort_if_wdir_patched'):
2473 2471 repo.abort_if_wdir_patched(_('cannot import over an applied patch'),
2474 2472 kwargs.get('force'))
2475 2473 return orig(ui, repo, *args, **kwargs)
2476 2474
2477 2475 def uisetup(ui):
2478 2476 extensions.wrapcommand(commands.table, 'import', mqimport)
2479 2477
2480 2478 seriesopts = [('s', 'summary', None, _('print first line of patch header'))]
2481 2479
2482 2480 cmdtable = {
2483 2481 "qapplied": (applied, [] + seriesopts, _('hg qapplied [-s] [PATCH]')),
2484 2482 "qclone":
2485 2483 (clone,
2486 2484 [('', 'pull', None, _('use pull protocol to copy metadata')),
2487 2485 ('U', 'noupdate', None, _('do not update the new working directories')),
2488 2486 ('', 'uncompressed', None,
2489 2487 _('use uncompressed transfer (fast over LAN)')),
2490 2488 ('p', 'patches', '', _('location of source patch repository')),
2491 2489 ] + commands.remoteopts,
2492 2490 _('hg qclone [OPTION]... SOURCE [DEST]')),
2493 2491 "qcommit|qci":
2494 2492 (commit,
2495 2493 commands.table["^commit|ci"][1],
2496 2494 _('hg qcommit [OPTION]... [FILE]...')),
2497 2495 "^qdiff":
2498 2496 (diff,
2499 2497 commands.diffopts + commands.diffopts2 + commands.walkopts,
2500 2498 _('hg qdiff [OPTION]... [FILE]...')),
2501 2499 "qdelete|qremove|qrm":
2502 2500 (delete,
2503 2501 [('k', 'keep', None, _('keep patch file')),
2504 2502 ('r', 'rev', [], _('stop managing a revision'))],
2505 2503 _('hg qdelete [-k] [-r REV]... [PATCH]...')),
2506 2504 'qfold':
2507 2505 (fold,
2508 2506 [('e', 'edit', None, _('edit patch header')),
2509 2507 ('k', 'keep', None, _('keep folded patch files')),
2510 2508 ] + commands.commitopts,
2511 2509 _('hg qfold [-e] [-k] [-m TEXT] [-l FILE] PATCH...')),
2512 2510 'qgoto':
2513 2511 (goto,
2514 2512 [('f', 'force', None, _('overwrite any local changes'))],
2515 2513 _('hg qgoto [OPTION]... PATCH')),
2516 2514 'qguard':
2517 2515 (guard,
2518 2516 [('l', 'list', None, _('list all patches and guards')),
2519 2517 ('n', 'none', None, _('drop all guards'))],
2520 2518 _('hg qguard [-l] [-n] -- [PATCH] [+GUARD]... [-GUARD]...')),
2521 2519 'qheader': (header, [], _('hg qheader [PATCH]')),
2522 2520 "^qimport":
2523 2521 (qimport,
2524 2522 [('e', 'existing', None, _('import file in patch directory')),
2525 2523 ('n', 'name', '', _('patch file name')),
2526 2524 ('f', 'force', None, _('overwrite existing files')),
2527 2525 ('r', 'rev', [], _('place existing revisions under mq control')),
2528 2526 ('g', 'git', None, _('use git extended diff format'))],
2529 2527 _('hg qimport [-e] [-n NAME] [-f] [-g] [-r REV]... FILE...')),
2530 2528 "^qinit":
2531 2529 (init,
2532 2530 [('c', 'create-repo', None, _('create queue repository'))],
2533 2531 _('hg qinit [-c]')),
2534 2532 "qnew":
2535 2533 (new,
2536 2534 [('e', 'edit', None, _('edit commit message')),
2537 2535 ('f', 'force', None, _('import uncommitted changes into patch')),
2538 2536 ('g', 'git', None, _('use git extended diff format')),
2539 2537 ('U', 'currentuser', None, _('add "From: <current user>" to patch')),
2540 2538 ('u', 'user', '', _('add "From: <given user>" to patch')),
2541 2539 ('D', 'currentdate', None, _('add "Date: <current date>" to patch')),
2542 2540 ('d', 'date', '', _('add "Date: <given date>" to patch'))
2543 2541 ] + commands.walkopts + commands.commitopts,
2544 2542 _('hg qnew [-e] [-m TEXT] [-l FILE] [-f] PATCH [FILE]...')),
2545 2543 "qnext": (next, [] + seriesopts, _('hg qnext [-s]')),
2546 2544 "qprev": (prev, [] + seriesopts, _('hg qprev [-s]')),
2547 2545 "^qpop":
2548 2546 (pop,
2549 2547 [('a', 'all', None, _('pop all patches')),
2550 2548 ('n', 'name', '', _('queue name to pop')),
2551 2549 ('f', 'force', None, _('forget any local changes'))],
2552 2550 _('hg qpop [-a] [-n NAME] [-f] [PATCH | INDEX]')),
2553 2551 "^qpush":
2554 2552 (push,
2555 2553 [('f', 'force', None, _('apply if the patch has rejects')),
2556 2554 ('l', 'list', None, _('list patch name in commit text')),
2557 2555 ('a', 'all', None, _('apply all patches')),
2558 2556 ('m', 'merge', None, _('merge from another queue')),
2559 2557 ('n', 'name', '', _('merge queue name'))],
2560 2558 _('hg qpush [-f] [-l] [-a] [-m] [-n NAME] [PATCH | INDEX]')),
2561 2559 "^qrefresh":
2562 2560 (refresh,
2563 2561 [('e', 'edit', None, _('edit commit message')),
2564 2562 ('g', 'git', None, _('use git extended diff format')),
2565 2563 ('s', 'short', None, _('refresh only files already in the patch and specified files')),
2566 2564 ('U', 'currentuser', None, _('add/update "From: <current user>" in patch')),
2567 2565 ('u', 'user', '', _('add/update "From: <given user>" in patch')),
2568 2566 ('D', 'currentdate', None, _('update "Date: <current date>" in patch (if present)')),
2569 2567 ('d', 'date', '', _('update "Date: <given date>" in patch (if present)'))
2570 2568 ] + commands.walkopts + commands.commitopts,
2571 2569 _('hg qrefresh [-I] [-X] [-e] [-m TEXT] [-l FILE] [-s] [FILE]...')),
2572 2570 'qrename|qmv':
2573 2571 (rename, [], _('hg qrename PATCH1 [PATCH2]')),
2574 2572 "qrestore":
2575 2573 (restore,
2576 2574 [('d', 'delete', None, _('delete save entry')),
2577 2575 ('u', 'update', None, _('update queue working directory'))],
2578 2576 _('hg qrestore [-d] [-u] REV')),
2579 2577 "qsave":
2580 2578 (save,
2581 2579 [('c', 'copy', None, _('copy patch directory')),
2582 2580 ('n', 'name', '', _('copy directory name')),
2583 2581 ('e', 'empty', None, _('clear queue status file')),
2584 2582 ('f', 'force', None, _('force copy'))] + commands.commitopts,
2585 2583 _('hg qsave [-m TEXT] [-l FILE] [-c] [-n NAME] [-e] [-f]')),
2586 2584 "qselect":
2587 2585 (select,
2588 2586 [('n', 'none', None, _('disable all guards')),
2589 2587 ('s', 'series', None, _('list all guards in series file')),
2590 2588 ('', 'pop', None, _('pop to before first guarded applied patch')),
2591 2589 ('', 'reapply', None, _('pop, then reapply patches'))],
2592 2590 _('hg qselect [OPTION]... [GUARD]...')),
2593 2591 "qseries":
2594 2592 (series,
2595 2593 [('m', 'missing', None, _('print patches not in series')),
2596 2594 ] + seriesopts,
2597 2595 _('hg qseries [-ms]')),
2598 2596 "^strip":
2599 2597 (strip,
2600 2598 [('f', 'force', None, _('force removal with local changes')),
2601 2599 ('b', 'backup', None, _('bundle unrelated changesets')),
2602 2600 ('n', 'nobackup', None, _('no backups'))],
2603 2601 _('hg strip [-f] [-b] [-n] REV')),
2604 2602 "qtop": (top, [] + seriesopts, _('hg qtop [-s]')),
2605 2603 "qunapplied": (unapplied, [] + seriesopts, _('hg qunapplied [-s] [PATCH]')),
2606 2604 "qfinish":
2607 2605 (finish,
2608 2606 [('a', 'applied', None, _('finish all applied changesets'))],
2609 2607 _('hg qfinish [-a] [REV...]')),
2610 2608 }
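The qselect --series branch above orders guard counts with a Python 2 cmp-style comparator (`guards.sort(lambda a, b: cmp(a[0][1:], b[0][1:]))`), comparing guard names with the leading `+`/`-` sign stripped. In the spirit of this changeset's move to the `sorted` built-in, the same ordering can be spelled with a key function; a minimal sketch with made-up guard data:

```python
# Hypothetical guard-count map as built by the --series branch:
# keys keep their '+'/'-' sign, values are occurrence counts.
guards = {'+stable': 2, '-stable': 1, '+beta': 3}

# key=... strips the sign before comparing, mirroring
# cmp(a[0][1:], b[0][1:]); sorted() is stable, so '+stable' and
# '-stable' (equal keys) keep their original relative order.
ordered = sorted(guards.items(), key=lambda item: item[0][1:])
print(ordered)
```

The key-based form avoids the per-comparison lambda calls of a cmp sort and survives the cmp argument's removal in Python 3.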
@@ -1,110 +1,110 @@
1 1 # Copyright (C) 2006 - Marco Barisione <marco@barisione.org>
2 2 #
3 3 # This is a small extension for Mercurial (http://www.selenic.com/mercurial)
4 4 # that removes files not known to mercurial
5 5 #
6 6 # This program was inspired by the "cvspurge" script contained in CVS utilities
7 7 # (http://www.red-bean.com/cvsutils/).
8 8 #
9 9 # To enable the "purge" extension put these lines in your ~/.hgrc:
10 10 # [extensions]
11 11 # hgext.purge =
12 12 #
13 13 # For help on the usage of "hg purge" use:
14 14 # hg help purge
15 15 #
16 16 # This program is free software; you can redistribute it and/or modify
17 17 # it under the terms of the GNU General Public License as published by
18 18 # the Free Software Foundation; either version 2 of the License, or
19 19 # (at your option) any later version.
20 20 #
21 21 # This program is distributed in the hope that it will be useful,
22 22 # but WITHOUT ANY WARRANTY; without even the implied warranty of
23 23 # MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
24 24 # GNU General Public License for more details.
25 25 #
26 26 # You should have received a copy of the GNU General Public License
27 27 # along with this program; if not, write to the Free Software
28 28 # Foundation, Inc., 59 Temple Place - Suite 330, Boston, MA 02111-1307, USA.
29 29
30 30 from mercurial import util, commands, cmdutil
31 31 from mercurial.i18n import _
32 32 import os, stat
33 33
34 34 def purge(ui, repo, *dirs, **opts):
35 35 '''removes files not tracked by Mercurial
36 36
37 37 Delete files not known to Mercurial. This is useful to test local
38 38 and uncommitted changes in an otherwise-clean source tree.
39 39
40 40 This means that purge will delete:
41 41 - Unknown files: files marked with "?" by "hg status"
42 42 - Empty directories: in fact Mercurial ignores directories unless
43 43 they contain files under source control management
44 44 But it will leave untouched:
45 45 - Modified and unmodified tracked files
46 46 - Ignored files (unless --all is specified)
47 47 - New files added to the repository (with "hg add")
48 48
49 49 If directories are given on the command line, only files in these
50 50 directories are considered.
51 51
52 52 Be careful with purge, as you could irreversibly delete some files
53 53 you forgot to add to the repository. If you only want to print the
54 54 list of files that this program would delete, use the --print
55 55 option.
56 56 '''
57 57 act = not opts['print']
58 58 eol = '\n'
59 59 if opts['print0']:
60 60 eol = '\0'
61 61 act = False # --print0 implies --print
62 62
63 63 def remove(remove_func, name):
64 64 if act:
65 65 try:
66 66 remove_func(repo.wjoin(name))
67 67 except OSError:
68 68 m = _('%s cannot be removed') % name
69 69 if opts['abort_on_err']:
70 70 raise util.Abort(m)
71 71 ui.warn(_('warning: %s\n') % m)
72 72 else:
73 73 ui.write('%s%s' % (name, eol))
74 74
75 75 def removefile(path):
76 76 try:
77 77 os.remove(path)
78 78 except OSError:
79 79 # read-only files cannot be unlinked under Windows
80 80 s = os.stat(path)
81 81 if (s.st_mode & stat.S_IWRITE) != 0:
82 82 raise
83 83 os.chmod(path, stat.S_IMODE(s.st_mode) | stat.S_IWRITE)
84 84 os.remove(path)
85 85
86 86 directories = []
87 87 match = cmdutil.match(repo, dirs, opts)
88 88 match.dir = directories.append
89 89 status = repo.status(match=match, ignored=opts['all'], unknown=True)
90 90
91 for f in util.sort(status[4] + status[5]):
91 for f in sorted(status[4] + status[5]):
92 92 ui.note(_('Removing file %s\n') % f)
93 93 remove(removefile, f)
94 94
95 for f in util.sort(directories)[::-1]:
95 for f in sorted(directories, reverse=True):
96 96 if match(f) and not os.listdir(repo.wjoin(f)):
97 97 ui.note(_('Removing directory %s\n') % f)
98 98 remove(os.rmdir, f)
99 99
100 100 cmdtable = {
101 101 'purge|clean':
102 102 (purge,
103 103 [('a', 'abort-on-err', None, _('abort if an error occurs')),
104 104 ('', 'all', None, _('purge ignored files too')),
105 105 ('p', 'print', None, _('print the file names instead of deleting them')),
106 106 ('0', 'print0', None, _('end filenames with NUL, for use with xargs'
107 107 ' (implies -p/--print)')),
108 108 ] + commands.walkopts,
109 109 _('hg purge [OPTION]... [DIR]...'))
110 110 }
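The purge hunks above swap `util.sort(directories)[::-1]` for `sorted(directories, reverse=True)`. The two spellings agree here because directory paths are unique; with duplicate keys a stable `reverse=True` sort would keep equal elements in original order while `[::-1]` would flip them. A minimal check under that uniqueness assumption (the sample paths are invented):

```python
# Distinct directory paths, as produced by the match.dir callback.
dirs = ['a/b', 'a', 'a/b/c']

old_style = sorted(dirs)[::-1]          # ascending sort, then reverse
new_style = sorted(dirs, reverse=True)  # single descending sort

# Both orderings visit the deepest paths first, which is what the
# rmdir loop needs: children must be removed before their parents.
print(old_style == new_style)  # -> True
```

Descending order matters for the loop that follows: `os.rmdir` only succeeds on empty directories, so nested directories have to be removed before the directories that contain them.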
@@ -1,472 +1,472 @@
1 1 # rebase.py - rebasing feature for mercurial
2 2 #
3 3 # Copyright 2008 Stefano Tortarolo <stefano.tortarolo at gmail dot com>
4 4 #
5 5 # This software may be used and distributed according to the terms
6 6 # of the GNU General Public License, incorporated herein by reference.
7 7
8 8 '''move sets of revisions to a different ancestor
9 9
10 10 This extension lets you rebase changesets in an existing Mercurial
11 11 repository.
12 12
13 13 For more information:
14 14 http://www.selenic.com/mercurial/wiki/index.cgi/RebaseProject
15 15 '''
16 16
17 17 from mercurial import util, repair, merge, cmdutil, commands, error
18 18 from mercurial import extensions, ancestor, copies, patch
19 19 from mercurial.commands import templateopts
20 20 from mercurial.node import nullrev
21 21 from mercurial.lock import release
22 22 from mercurial.i18n import _
23 23 import os, errno
24 24
25 25 def rebasemerge(repo, rev, first=False):
26 26 'return the correct ancestor'
27 27 oldancestor = ancestor.ancestor
28 28
29 29 def newancestor(a, b, pfunc):
30 30 ancestor.ancestor = oldancestor
31 31 if b == rev:
32 32 return repo[rev].parents()[0].rev()
33 33 return ancestor.ancestor(a, b, pfunc)
34 34
35 35 if not first:
36 36 ancestor.ancestor = newancestor
37 37 else:
38 38 repo.ui.debug(_("first revision, do not change ancestor\n"))
39 39 stats = merge.update(repo, rev, True, True, False)
40 40 return stats
41 41
42 42 def rebase(ui, repo, **opts):
43 43 """move changeset (and descendants) to a different branch
44 44
45 45 Rebase uses repeated merging to graft changesets from one part of
46 46 history onto another. This can be useful for linearizing local
47 47 changes relative to a master development tree.
48 48
49 49 If a rebase is interrupted to manually resolve a merge, it can be
50 50 continued with --continue/-c or aborted with --abort/-a.
51 51 """
52 52 originalwd = target = None
53 53 external = nullrev
54 54 state = skipped = {}
55 55
56 56 lock = wlock = None
57 57 try:
58 58 lock = repo.lock()
59 59 wlock = repo.wlock()
60 60
61 61 # Validate input and define rebasing points
62 62 destf = opts.get('dest', None)
63 63 srcf = opts.get('source', None)
64 64 basef = opts.get('base', None)
65 65 contf = opts.get('continue')
66 66 abortf = opts.get('abort')
67 67 collapsef = opts.get('collapse', False)
68 68 extrafn = opts.get('extrafn')
69 69 keepf = opts.get('keep', False)
70 70 keepbranchesf = opts.get('keepbranches', False)
71 71
72 72 if contf or abortf:
73 73 if contf and abortf:
74 74 raise error.ParseError('rebase',
75 75 _('cannot use both abort and continue'))
76 76 if collapsef:
77 77 raise error.ParseError(
78 78 'rebase', _('cannot use collapse with continue or abort'))
79 79
80 80 if srcf or basef or destf:
81 81 raise error.ParseError('rebase',
82 82 _('abort and continue do not allow specifying revisions'))
83 83
84 84 (originalwd, target, state, collapsef, keepf,
85 85 keepbranchesf, external) = restorestatus(repo)
86 86 if abortf:
87 87 abort(repo, originalwd, target, state)
88 88 return
89 89 else:
90 90 if srcf and basef:
91 91 raise error.ParseError('rebase', _('cannot specify both a '
92 92 'revision and a base'))
93 93 cmdutil.bail_if_changed(repo)
94 94 result = buildstate(repo, destf, srcf, basef, collapsef)
95 95 if result:
96 96 originalwd, target, state, external = result
97 97 else: # Empty state built, nothing to rebase
98 98 repo.ui.status(_('nothing to rebase\n'))
99 99 return
100 100
101 101 if keepbranchesf:
102 102 if extrafn:
103 103 raise error.ParseError(
104 104 'rebase', _('cannot use both keepbranches and extrafn'))
105 105 def extrafn(ctx, extra):
106 106 extra['branch'] = ctx.branch()
107 107
108 108 # Rebase
109 109 targetancestors = list(repo.changelog.ancestors(target))
110 110 targetancestors.append(target)
111 111
112 for rev in util.sort(state):
112 for rev in sorted(state):
113 113 if state[rev] == -1:
114 114 storestatus(repo, originalwd, target, state, collapsef, keepf,
115 115 keepbranchesf, external)
116 116 rebasenode(repo, rev, target, state, skipped, targetancestors,
117 117 collapsef, extrafn)
118 118 ui.note(_('rebase merging completed\n'))
119 119
120 120 if collapsef:
121 121 p1, p2 = defineparents(repo, min(state), target,
122 122 state, targetancestors)
123 123 concludenode(repo, rev, p1, external, state, collapsef,
124 124 last=True, skipped=skipped, extrafn=extrafn)
125 125
126 126 if 'qtip' in repo.tags():
127 127 updatemq(repo, state, skipped, **opts)
128 128
129 129 if not keepf:
130 130 # Remove no more useful revisions
131 131 if set(repo.changelog.descendants(min(state))) - set(state):
132 132 ui.warn(_("warning: new changesets detected on source branch, "
133 133 "not stripping\n"))
134 134 else:
135 135 repair.strip(repo.ui, repo, repo[min(state)].node(), "strip")
136 136
137 137 clearstatus(repo)
138 138 ui.status(_("rebase completed\n"))
139 139 if os.path.exists(repo.sjoin('undo')):
140 140 util.unlink(repo.sjoin('undo'))
141 141 if skipped:
142 142 ui.note(_("%d revisions have been skipped\n") % len(skipped))
143 143 finally:
144 144 release(lock, wlock)
145 145
146 146 def concludenode(repo, rev, p1, p2, state, collapse, last=False, skipped={},
147 147 extrafn=None):
148 148 """Skip commit if collapsing has been required and rev is not the last
149 149 revision, commit otherwise
150 150 """
151 151 repo.ui.debug(_(" set parents\n"))
152 152 if collapse and not last:
153 153 repo.dirstate.setparents(repo[p1].node())
154 154 return None
155 155
156 156 repo.dirstate.setparents(repo[p1].node(), repo[p2].node())
157 157
158 158 # Commit, record the old nodeid
159 159 m, a, r = repo.status()[:3]
160 160 newrev = nullrev
161 161 try:
162 162 if last:
163 163 commitmsg = 'Collapsed revision'
164 164 for rebased in state:
165 165 if rebased not in skipped:
166 166 commitmsg += '\n* %s' % repo[rebased].description()
167 167 commitmsg = repo.ui.edit(commitmsg, repo.ui.username())
168 168 else:
169 169 commitmsg = repo[rev].description()
170 170 # Commit might fail if unresolved files exist
171 171 extra = {'rebase_source': repo[rev].hex()}
172 172 if extrafn:
173 173 extrafn(repo[rev], extra)
174 174 newrev = repo.commit(m+a+r,
175 175 text=commitmsg,
176 176 user=repo[rev].user(),
177 177 date=repo[rev].date(),
178 178 extra=extra)
179 179 return newrev
180 180 except util.Abort:
181 181 # Invalidate the previous setparents
182 182 repo.dirstate.invalidate()
183 183 raise
184 184
185 185 def rebasenode(repo, rev, target, state, skipped, targetancestors, collapse,
186 186 extrafn):
187 187 'Rebase a single revision'
188 188 repo.ui.debug(_("rebasing %d:%s\n") % (rev, repo[rev]))
189 189
190 190 p1, p2 = defineparents(repo, rev, target, state, targetancestors)
191 191
192 192 repo.ui.debug(_(" future parents are %d and %d\n") % (repo[p1].rev(),
193 193 repo[p2].rev()))
194 194
195 195 # Merge phase
196 196 if len(repo.parents()) != 2:
197 197 # Update to target and merge it with local
198 198 if repo['.'].rev() != repo[p1].rev():
199 199 repo.ui.debug(_(" update to %d:%s\n") % (repo[p1].rev(), repo[p1]))
200 200 merge.update(repo, p1, False, True, False)
201 201 else:
202 202 repo.ui.debug(_(" already in target\n"))
203 203 repo.dirstate.write()
204 204 repo.ui.debug(_(" merge against %d:%s\n") % (repo[rev].rev(), repo[rev]))
205 205 first = repo[rev].rev() == repo[min(state)].rev()
206 206 stats = rebasemerge(repo, rev, first)
207 207
208 208 if stats[3] > 0:
209 209 raise util.Abort(_('fix unresolved conflicts with hg resolve then '
210 210 'run hg rebase --continue'))
211 211 else: # we have an interrupted rebase
212 212 repo.ui.debug(_('resuming interrupted rebase\n'))
213 213
214 214 # Keep track of renamed files in the revision that is going to be rebased
215 215 # Here we simulate the copies and renames in the source changeset
216 216 cop, diver = copies.copies(repo, repo[rev], repo[target], repo[p2], True)
217 217 m1 = repo[rev].manifest()
218 218 m2 = repo[target].manifest()
219 219 for k, v in cop.iteritems():
220 220 if k in m1:
221 221 if v in m1 or v in m2:
222 222 repo.dirstate.copy(v, k)
223 223 if v in m2 and v not in m1:
224 224 repo.dirstate.remove(v)
225 225
226 226 newrev = concludenode(repo, rev, p1, p2, state, collapse,
227 227 extrafn=extrafn)
228 228
229 229 # Update the state
230 230 if newrev is not None:
231 231 state[rev] = repo[newrev].rev()
232 232 else:
233 233 if not collapse:
234 234 repo.ui.note(_('no changes, revision %d skipped\n') % rev)
235 235 repo.ui.debug(_('next revision set to %s\n') % p1)
236 236 skipped[rev] = True
237 237 state[rev] = p1
238 238
239 239 def defineparents(repo, rev, target, state, targetancestors):
240 240 'Return the new parent relationship of the revision that will be rebased'
241 241 parents = repo[rev].parents()
242 242 p1 = p2 = nullrev
243 243
244 244 P1n = parents[0].rev()
245 245 if P1n in targetancestors:
246 246 p1 = target
247 247 elif P1n in state:
248 248 p1 = state[P1n]
249 249 else: # P1n external
250 250 p1 = target
251 251 p2 = P1n
252 252
253 253 if len(parents) == 2 and parents[1].rev() not in targetancestors:
254 254 P2n = parents[1].rev()
255 255 # interesting second parent
256 256 if P2n in state:
257 257 if p1 == target: # P1n in targetancestors or external
258 258 p1 = state[P2n]
259 259 else:
260 260 p2 = state[P2n]
261 261 else: # P2n external
262 262 if p2 != nullrev: # P1n external too => rev is a merged revision
263 263 raise util.Abort(_('cannot use revision %d as base, result '
264 264 'would have 3 parents') % rev)
265 265 p2 = P2n
266 266 return p1, p2
267 267
268 268 def isagitpatch(repo, patchname):
269 269 'Return true if the given patch is in git format'
270 270 mqpatch = os.path.join(repo.mq.path, patchname)
271 271 for line in patch.linereader(file(mqpatch, 'rb')):
272 272 if line.startswith('diff --git'):
273 273 return True
274 274 return False
275 275
276 276 def updatemq(repo, state, skipped, **opts):
277 277 'Update rebased mq patches - finalize and then import them'
278 278 mqrebase = {}
279 279 for p in repo.mq.applied:
280 280 if repo[p.rev].rev() in state:
281 281 repo.ui.debug(_('revision %d is an mq patch (%s), finalize it.\n') %
282 282 (repo[p.rev].rev(), p.name))
283 283 mqrebase[repo[p.rev].rev()] = (p.name, isagitpatch(repo, p.name))
284 284
285 285 if mqrebase:
286 286 repo.mq.finish(repo, mqrebase.keys())
287 287
288 288 # We must start import from the newest revision
289 289 mq = mqrebase.keys()
290 290 mq.sort()
291 291 mq.reverse()
292 292 for rev in mq:
293 293 if rev not in skipped:
294 294 repo.ui.debug(_('import mq patch %d (%s)\n')
295 295 % (state[rev], mqrebase[rev][0]))
296 296 repo.mq.qimport(repo, (), patchname=mqrebase[rev][0],
297 297 git=mqrebase[rev][1],rev=[str(state[rev])])
298 298 repo.mq.save_dirty()
299 299
300 300 def storestatus(repo, originalwd, target, state, collapse, keep, keepbranches,
301 301 external):
302 302 'Store the current status to allow recovery'
303 303 f = repo.opener("rebasestate", "w")
304 304 f.write(repo[originalwd].hex() + '\n')
305 305 f.write(repo[target].hex() + '\n')
306 306 f.write(repo[external].hex() + '\n')
307 307 f.write('%d\n' % int(collapse))
308 308 f.write('%d\n' % int(keep))
309 309 f.write('%d\n' % int(keepbranches))
310 310 for d, v in state.iteritems():
311 311 oldrev = repo[d].hex()
312 312 newrev = repo[v].hex()
313 313 f.write("%s:%s\n" % (oldrev, newrev))
314 314 f.close()
315 315 repo.ui.debug(_('rebase status stored\n'))
316 316
317 317 def clearstatus(repo):
318 318 'Remove the status files'
319 319 if os.path.exists(repo.join("rebasestate")):
320 320 util.unlink(repo.join("rebasestate"))
321 321
322 322 def restorestatus(repo):
323 323 'Restore a previously stored status'
324 324 try:
325 325 target = None
326 326 collapse = False
327 327 external = nullrev
328 328 state = {}
329 329 f = repo.opener("rebasestate")
330 330 for i, l in enumerate(f.read().splitlines()):
331 331 if i == 0:
332 332 originalwd = repo[l].rev()
333 333 elif i == 1:
334 334 target = repo[l].rev()
335 335 elif i == 2:
336 336 external = repo[l].rev()
337 337 elif i == 3:
338 338 collapse = bool(int(l))
339 339 elif i == 4:
340 340 keep = bool(int(l))
341 341 elif i == 5:
342 342 keepbranches = bool(int(l))
343 343 else:
344 344 oldrev, newrev = l.split(':')
345 345 state[repo[oldrev].rev()] = repo[newrev].rev()
346 346 repo.ui.debug(_('rebase status resumed\n'))
347 347 return originalwd, target, state, collapse, keep, keepbranches, external
348 348 except IOError, err:
349 349 if err.errno != errno.ENOENT:
350 350 raise
351 351 raise util.Abort(_('no rebase in progress'))
352 352
353 353 def abort(repo, originalwd, target, state):
354 354 'Restore the repository to its original state'
355 355 if set(repo.changelog.descendants(target)) - set(state.values()):
356 356 repo.ui.warn(_("warning: new changesets detected on target branch, "
357 357 "not stripping\n"))
358 358 else:
359 359 # Strip from the first rebased revision
360 360 merge.update(repo, repo[originalwd].rev(), False, True, False)
361 361 rebased = filter(lambda x: x > -1, state.values())
362 362 if rebased:
363 363 strippoint = min(rebased)
364 364 repair.strip(repo.ui, repo, repo[strippoint].node(), "strip")
365 365 clearstatus(repo)
366 366 repo.ui.status(_('rebase aborted\n'))
367 367
368 368 def buildstate(repo, dest, src, base, collapse):
369 369 'Define which revisions are going to be rebased and where'
370 370 targetancestors = set()
371 371
372 372 if not dest:
373 373 # Destination defaults to the latest revision in the current branch
374 374 branch = repo[None].branch()
375 375 dest = repo[branch].rev()
376 376 else:
377 377 if 'qtip' in repo.tags() and (repo[dest].hex() in
378 378 [s.rev for s in repo.mq.applied]):
379 379 raise util.Abort(_('cannot rebase onto an applied mq patch'))
380 380 dest = repo[dest].rev()
381 381
382 382 if src:
383 383 commonbase = repo[src].ancestor(repo[dest])
384 384 if commonbase == repo[src]:
385 385 raise util.Abort(_('cannot rebase an ancestor'))
386 386 if commonbase == repo[dest]:
387 387 raise util.Abort(_('cannot rebase a descendant'))
388 388 source = repo[src].rev()
389 389 else:
390 390 if base:
391 391 cwd = repo[base].rev()
392 392 else:
393 393 cwd = repo['.'].rev()
394 394
395 395 if cwd == dest:
396 396 repo.ui.debug(_('already working on current\n'))
397 397 return None
398 398
399 399 targetancestors = set(repo.changelog.ancestors(dest))
400 400 if cwd in targetancestors:
401 401 repo.ui.debug(_('already working on the current branch\n'))
402 402 return None
403 403
404 404 cwdancestors = set(repo.changelog.ancestors(cwd))
405 405 cwdancestors.add(cwd)
406 406 rebasingbranch = cwdancestors - targetancestors
407 407 source = min(rebasingbranch)
408 408
409 409 repo.ui.debug(_('rebase onto %d starting from %d\n') % (dest, source))
410 410 state = dict.fromkeys(repo.changelog.descendants(source), nullrev)
411 411 external = nullrev
412 412 if collapse:
413 413 if not targetancestors:
414 414 targetancestors = set(repo.changelog.ancestors(dest))
415 415 for rev in state:
416 416 # Check externals and fail if there are more than one
417 417 for p in repo[rev].parents():
418 418 if (p.rev() not in state and p.rev() != source
419 419 and p.rev() not in targetancestors):
420 420 if external != nullrev:
421 421 raise util.Abort(_('unable to collapse, there is more '
422 422 'than one external parent'))
423 423 external = p.rev()
424 424
425 425 state[source] = nullrev
426 426 return repo['.'].rev(), repo[dest].rev(), state, external
427 427
428 428 def pullrebase(orig, ui, repo, *args, **opts):
429 429 'Call rebase after pull if the latter has been invoked with --rebase'
430 430 if opts.get('rebase'):
431 431 if opts.get('update'):
432 432 del opts['update']
433 433 ui.debug(_('--update and --rebase are not compatible, ignoring '
434 434 'the update flag\n'))
435 435
436 436 cmdutil.bail_if_changed(repo)
437 437 revsprepull = len(repo)
438 438 orig(ui, repo, *args, **opts)
439 439 revspostpull = len(repo)
440 440 if revspostpull > revsprepull:
441 441 rebase(ui, repo, **opts)
442 442 branch = repo[None].branch()
443 443 dest = repo[branch].rev()
444 444 if dest != repo['.'].rev():
445 445 # there was nothing to rebase, so force an update
446 446 merge.update(repo, dest, False, False, False)
447 447 else:
448 448 orig(ui, repo, *args, **opts)
449 449
450 450 def uisetup(ui):
451 451 'Replace pull with a decorator to provide --rebase option'
452 452 entry = extensions.wrapcommand(commands.table, 'pull', pullrebase)
453 453 entry[1].append(('', 'rebase', None,
454 454 _("rebase working directory to branch head"))
455 455 )
456 456
457 457 cmdtable = {
458 458 "rebase":
459 459 (rebase,
460 460 [
461 461 ('s', 'source', '', _('rebase from a given revision')),
462 462 ('b', 'base', '', _('rebase from the base of a given revision')),
463 463 ('d', 'dest', '', _('rebase onto a given revision')),
464 464 ('', 'collapse', False, _('collapse the rebased revisions')),
465 465 ('', 'keep', False, _('keep original revisions')),
466 466 ('', 'keepbranches', False, _('keep original branches')),
467 467 ('c', 'continue', False, _('continue an interrupted rebase')),
468 468 ('a', 'abort', False, _('abort an interrupted rebase')),] +
469 469 templateopts,
470 470 _('hg rebase [-s REV | -b REV] [-d REV] [--collapse] [--keep] '
471 471 '[--keepbranches] | [-c] | [-a]')),
472 472 }
@@ -1,602 +1,602 b''
1 1 # Patch transplanting extension for Mercurial
2 2 #
3 3 # Copyright 2006, 2007 Brendan Cully <brendan@kublai.com>
4 4 #
5 5 # This software may be used and distributed according to the terms
6 6 # of the GNU General Public License, incorporated herein by reference.
7 7
8 8 '''patch transplanting tool
9 9
10 10 This extension allows you to transplant patches from another branch.
11 11
12 12 Transplanted patches are recorded in .hg/transplant/transplants, as a
13 13 map from a changeset hash to its hash in the source repository.
14 14 '''
15 15
16 16 from mercurial.i18n import _
17 17 import os, tempfile
18 18 from mercurial import bundlerepo, changegroup, cmdutil, hg, merge
19 19 from mercurial import patch, revlog, util, error
20 20
21 21 class transplantentry:
22 22 def __init__(self, lnode, rnode):
23 23 self.lnode = lnode
24 24 self.rnode = rnode
25 25
26 26 class transplants:
27 27 def __init__(self, path=None, transplantfile=None, opener=None):
28 28 self.path = path
29 29 self.transplantfile = transplantfile
30 30 self.opener = opener
31 31
32 32 if not opener:
33 33 self.opener = util.opener(self.path)
34 34 self.transplants = []
35 35 self.dirty = False
36 36 self.read()
37 37
38 38 def read(self):
39 39 abspath = os.path.join(self.path, self.transplantfile)
40 40 if self.transplantfile and os.path.exists(abspath):
41 41 for line in self.opener(self.transplantfile).read().splitlines():
42 42 lnode, rnode = map(revlog.bin, line.split(':'))
43 43 self.transplants.append(transplantentry(lnode, rnode))
44 44
45 45 def write(self):
46 46 if self.dirty and self.transplantfile:
47 47 if not os.path.isdir(self.path):
48 48 os.mkdir(self.path)
49 49 fp = self.opener(self.transplantfile, 'w')
50 50 for c in self.transplants:
51 51 l, r = map(revlog.hex, (c.lnode, c.rnode))
52 52 fp.write(l + ':' + r + '\n')
53 53 fp.close()
54 54 self.dirty = False
55 55
56 56 def get(self, rnode):
57 57 return [t for t in self.transplants if t.rnode == rnode]
58 58
59 59 def set(self, lnode, rnode):
60 60 self.transplants.append(transplantentry(lnode, rnode))
61 61 self.dirty = True
62 62
63 63 def remove(self, transplant):
64 64 del self.transplants[self.transplants.index(transplant)]
65 65 self.dirty = True
66 66
67 67 class transplanter:
68 68 def __init__(self, ui, repo):
69 69 self.ui = ui
70 70 self.path = repo.join('transplant')
71 71 self.opener = util.opener(self.path)
72 72 self.transplants = transplants(self.path, 'transplants',
73 73 opener=self.opener)
74 74
75 75 def applied(self, repo, node, parent):
76 76 '''returns True if a node is already an ancestor of parent
77 77 or has already been transplanted'''
78 78 if hasnode(repo, node):
79 79 if node in repo.changelog.reachable(parent, stop=node):
80 80 return True
81 81 for t in self.transplants.get(node):
82 82 # it might have been stripped
83 83 if not hasnode(repo, t.lnode):
84 84 self.transplants.remove(t)
85 85 return False
86 86 if t.lnode in repo.changelog.reachable(parent, stop=t.lnode):
87 87 return True
88 88 return False
89 89
90 90 def apply(self, repo, source, revmap, merges, opts={}):
91 91 '''apply the revisions in revmap one by one in revision order'''
92 revs = util.sort(revmap)
92 revs = sorted(revmap)
93 93 p1, p2 = repo.dirstate.parents()
94 94 pulls = []
95 95 diffopts = patch.diffopts(self.ui, opts)
96 96 diffopts.git = True
97 97
98 98 lock = wlock = None
99 99 try:
100 100 wlock = repo.wlock()
101 101 lock = repo.lock()
102 102 for rev in revs:
103 103 node = revmap[rev]
104 104 revstr = '%s:%s' % (rev, revlog.short(node))
105 105
106 106 if self.applied(repo, node, p1):
107 107 self.ui.warn(_('skipping already applied revision %s\n') %
108 108 revstr)
109 109 continue
110 110
111 111 parents = source.changelog.parents(node)
112 112 if not opts.get('filter'):
113 113 # If the changeset parent is the same as the
114 114 # wdir's parent, just pull it.
115 115 if parents[0] == p1:
116 116 pulls.append(node)
117 117 p1 = node
118 118 continue
119 119 if pulls:
120 120 if source != repo:
121 121 repo.pull(source, heads=pulls)
122 122 merge.update(repo, pulls[-1], False, False, None)
123 123 p1, p2 = repo.dirstate.parents()
124 124 pulls = []
125 125
126 126 domerge = False
127 127 if node in merges:
128 128 # pulling all the merge revs at once would mean we
129 129 # couldn't transplant after the latest even if
130 130 # transplants before them fail.
131 131 domerge = True
132 132 if not hasnode(repo, node):
133 133 repo.pull(source, heads=[node])
134 134
135 135 if parents[1] != revlog.nullid:
136 136 self.ui.note(_('skipping merge changeset %s:%s\n')
137 137 % (rev, revlog.short(node)))
138 138 patchfile = None
139 139 else:
140 140 fd, patchfile = tempfile.mkstemp(prefix='hg-transplant-')
141 141 fp = os.fdopen(fd, 'w')
142 142 gen = patch.diff(source, parents[0], node, opts=diffopts)
143 143 for chunk in gen:
144 144 fp.write(chunk)
145 145 fp.close()
146 146
147 147 del revmap[rev]
148 148 if patchfile or domerge:
149 149 try:
150 150 n = self.applyone(repo, node,
151 151 source.changelog.read(node),
152 152 patchfile, merge=domerge,
153 153 log=opts.get('log'),
154 154 filter=opts.get('filter'))
155 155 if n and domerge:
156 156 self.ui.status(_('%s merged at %s\n') % (revstr,
157 157 revlog.short(n)))
158 158 elif n:
159 159 self.ui.status(_('%s transplanted to %s\n')
160 160 % (revlog.short(node),
161 161 revlog.short(n)))
162 162 finally:
163 163 if patchfile:
164 164 os.unlink(patchfile)
165 165 if pulls:
166 166 repo.pull(source, heads=pulls)
167 167 merge.update(repo, pulls[-1], False, False, None)
168 168 finally:
169 169 self.saveseries(revmap, merges)
170 170 self.transplants.write()
171 171 lock.release()
172 172 wlock.release()
173 173
174 174 def filter(self, filter, changelog, patchfile):
175 175 '''arbitrarily rewrite changeset before applying it'''
176 176
177 177 self.ui.status(_('filtering %s\n') % patchfile)
178 178 user, date, msg = (changelog[1], changelog[2], changelog[4])
179 179
180 180 fd, headerfile = tempfile.mkstemp(prefix='hg-transplant-')
181 181 fp = os.fdopen(fd, 'w')
182 182 fp.write("# HG changeset patch\n")
183 183 fp.write("# User %s\n" % user)
184 184 fp.write("# Date %d %d\n" % date)
185 185 fp.write(changelog[4])
186 186 fp.close()
187 187
188 188 try:
189 189 util.system('%s %s %s' % (filter, util.shellquote(headerfile),
190 190 util.shellquote(patchfile)),
191 191 environ={'HGUSER': changelog[1]},
192 192 onerr=util.Abort, errprefix=_('filter failed'))
193 193 user, date, msg = self.parselog(file(headerfile))[1:4]
194 194 finally:
195 195 os.unlink(headerfile)
196 196
197 197 return (user, date, msg)
198 198
199 199 def applyone(self, repo, node, cl, patchfile, merge=False, log=False,
200 200 filter=None):
201 201 '''apply the patch in patchfile to the repository as a transplant'''
202 202 (manifest, user, (time, timezone), files, message) = cl[:5]
203 203 date = "%d %d" % (time, timezone)
204 204 extra = {'transplant_source': node}
205 205 if filter:
206 206 (user, date, message) = self.filter(filter, cl, patchfile)
207 207
208 208 if log:
209 209 message += '\n(transplanted from %s)' % revlog.hex(node)
210 210
211 211 self.ui.status(_('applying %s\n') % revlog.short(node))
212 212 self.ui.note('%s %s\n%s\n' % (user, date, message))
213 213
214 214 if not patchfile and not merge:
215 215 raise util.Abort(_('can only omit patchfile if merging'))
216 216 if patchfile:
217 217 try:
218 218 files = {}
219 219 try:
220 220 patch.patch(patchfile, self.ui, cwd=repo.root,
221 221 files=files)
222 222 if not files:
223 223 self.ui.warn(_('%s: empty changeset')
224 224 % revlog.hex(node))
225 225 return None
226 226 finally:
227 227 files = patch.updatedir(self.ui, repo, files)
228 228 except Exception, inst:
229 229 if filter:
230 230 os.unlink(patchfile)
231 231 seriespath = os.path.join(self.path, 'series')
232 232 if os.path.exists(seriespath):
233 233 os.unlink(seriespath)
234 234 p1 = repo.dirstate.parents()[0]
235 235 p2 = node
236 236 self.log(user, date, message, p1, p2, merge=merge)
237 237 self.ui.write(str(inst) + '\n')
238 238 raise util.Abort(_('Fix up the merge and run '
239 239 'hg transplant --continue'))
240 240 else:
241 241 files = None
242 242 if merge:
243 243 p1, p2 = repo.dirstate.parents()
244 244 repo.dirstate.setparents(p1, node)
245 245
246 246 n = repo.commit(files, message, user, date, extra=extra)
247 247 if not merge:
248 248 self.transplants.set(n, node)
249 249
250 250 return n
251 251
252 252 def resume(self, repo, source, opts=None):
253 253 '''recover last transaction and apply remaining changesets'''
254 254 if os.path.exists(os.path.join(self.path, 'journal')):
255 255 n, node = self.recover(repo)
256 256 self.ui.status(_('%s transplanted as %s\n') % (revlog.short(node),
257 257 revlog.short(n)))
258 258 seriespath = os.path.join(self.path, 'series')
259 259 if not os.path.exists(seriespath):
260 260 self.transplants.write()
261 261 return
262 262 nodes, merges = self.readseries()
263 263 revmap = {}
264 264 for n in nodes:
265 265 revmap[source.changelog.rev(n)] = n
266 266 os.unlink(seriespath)
267 267
268 268 self.apply(repo, source, revmap, merges, opts)
269 269
270 270 def recover(self, repo):
271 271 '''commit working directory using journal metadata'''
272 272 node, user, date, message, parents = self.readlog()
273 273 merge = len(parents) == 2
274 274
275 275 if not user or not date or not message or not parents[0]:
276 276 raise util.Abort(_('transplant log file is corrupt'))
277 277
278 278 extra = {'transplant_source': node}
279 279 wlock = repo.wlock()
280 280 try:
281 281 p1, p2 = repo.dirstate.parents()
282 282 if p1 != parents[0]:
283 283 raise util.Abort(
284 284 _('working dir not at transplant parent %s') %
285 285 revlog.hex(parents[0]))
286 286 if merge:
287 287 repo.dirstate.setparents(p1, parents[1])
288 288 n = repo.commit(None, message, user, date, extra=extra)
289 289 if not n:
290 290 raise util.Abort(_('commit failed'))
291 291 if not merge:
292 292 self.transplants.set(n, node)
293 293 self.unlog()
294 294
295 295 return n, node
296 296 finally:
297 297 wlock.release()
298 298
299 299 def readseries(self):
300 300 nodes = []
301 301 merges = []
302 302 cur = nodes
303 303 for line in self.opener('series').read().splitlines():
304 304 if line.startswith('# Merges'):
305 305 cur = merges
306 306 continue
307 307 cur.append(revlog.bin(line))
308 308
309 309 return (nodes, merges)
310 310
311 311 def saveseries(self, revmap, merges):
312 312 if not revmap:
313 313 return
314 314
315 315 if not os.path.isdir(self.path):
316 316 os.mkdir(self.path)
317 317 series = self.opener('series', 'w')
318 for rev in util.sort(revmap):
318 for rev in sorted(revmap):
319 319 series.write(revlog.hex(revmap[rev]) + '\n')
320 320 if merges:
321 321 series.write('# Merges\n')
322 322 for m in merges:
323 323 series.write(revlog.hex(m) + '\n')
324 324 series.close()
325 325
326 326 def parselog(self, fp):
327 327 parents = []
328 328 message = []
329 329 node = revlog.nullid
330 330 inmsg = False
331 331 for line in fp.read().splitlines():
332 332 if inmsg:
333 333 message.append(line)
334 334 elif line.startswith('# User '):
335 335 user = line[7:]
336 336 elif line.startswith('# Date '):
337 337 date = line[7:]
338 338 elif line.startswith('# Node ID '):
339 339 node = revlog.bin(line[10:])
340 340 elif line.startswith('# Parent '):
341 341 parents.append(revlog.bin(line[9:]))
342 342 elif not line.startswith('#'):
343 343 inmsg = True
344 344 message.append(line)
345 345 return (node, user, date, '\n'.join(message), parents)
346 346
347 347 def log(self, user, date, message, p1, p2, merge=False):
348 348 '''journal changelog metadata for later recover'''
349 349
350 350 if not os.path.isdir(self.path):
351 351 os.mkdir(self.path)
352 352 fp = self.opener('journal', 'w')
353 353 fp.write('# User %s\n' % user)
354 354 fp.write('# Date %s\n' % date)
355 355 fp.write('# Node ID %s\n' % revlog.hex(p2))
356 356 fp.write('# Parent ' + revlog.hex(p1) + '\n')
357 357 if merge:
358 358 fp.write('# Parent ' + revlog.hex(p2) + '\n')
359 359 fp.write(message.rstrip() + '\n')
360 360 fp.close()
361 361
362 362 def readlog(self):
363 363 return self.parselog(self.opener('journal'))
364 364
365 365 def unlog(self):
366 366 '''remove changelog journal'''
367 367 absdst = os.path.join(self.path, 'journal')
368 368 if os.path.exists(absdst):
369 369 os.unlink(absdst)
370 370
371 371 def transplantfilter(self, repo, source, root):
372 372 def matchfn(node):
373 373 if self.applied(repo, node, root):
374 374 return False
375 375 if source.changelog.parents(node)[1] != revlog.nullid:
376 376 return False
377 377 extra = source.changelog.read(node)[5]
378 378 cnode = extra.get('transplant_source')
379 379 if cnode and self.applied(repo, cnode, root):
380 380 return False
381 381 return True
382 382
383 383 return matchfn
384 384
385 385 def hasnode(repo, node):
386 386 try:
387 387 return repo.changelog.rev(node) is not None
388 388 except error.RevlogError:
389 389 return False
390 390
391 391 def browserevs(ui, repo, nodes, opts):
392 392 '''interactively transplant changesets'''
393 393 def browsehelp(ui):
394 394 ui.write('y: transplant this changeset\n'
395 395 'n: skip this changeset\n'
396 396 'm: merge at this changeset\n'
397 397 'p: show patch\n'
398 398 'c: commit selected changesets\n'
399 399 'q: cancel transplant\n'
400 400 '?: show this help\n')
401 401
402 402 displayer = cmdutil.show_changeset(ui, repo, opts)
403 403 transplants = []
404 404 merges = []
405 405 for node in nodes:
406 406 displayer.show(repo[node])
407 407 action = None
408 408 while not action:
409 409 action = ui.prompt(_('apply changeset? [ynmpcq?]:'))
410 410 if action == '?':
411 411 browsehelp(ui)
412 412 action = None
413 413 elif action == 'p':
414 414 parent = repo.changelog.parents(node)[0]
415 415 for chunk in patch.diff(repo, parent, node):
416 416 repo.ui.write(chunk)
417 417 action = None
418 418 elif action not in ('y', 'n', 'm', 'c', 'q'):
419 419 ui.write('no such option\n')
420 420 action = None
421 421 if action == 'y':
422 422 transplants.append(node)
423 423 elif action == 'm':
424 424 merges.append(node)
425 425 elif action == 'c':
426 426 break
427 427 elif action == 'q':
428 428 transplants = ()
429 429 merges = ()
430 430 break
431 431 return (transplants, merges)
432 432
433 433 def transplant(ui, repo, *revs, **opts):
434 434 '''transplant changesets from another branch
435 435
436 436 Selected changesets will be applied on top of the current working
437 437 directory with the log of the original changeset. If --log is
438 438 specified, log messages will have a comment appended of the form:
439 439
440 440 (transplanted from CHANGESETHASH)
441 441
442 442 You can rewrite the changelog message with the --filter option.
443 443 Its argument will be invoked with the current changelog message as
444 444 $1 and the patch as $2.
445 445
446 446 If --source/-s is specified, selects changesets from the named
447 447 repository. If --branch/-b is specified, selects changesets from
448 448 the branch holding the named revision, up to that revision. If
449 449 --all/-a is specified, all changesets on the branch will be
450 450 transplanted, otherwise you will be prompted to select the
451 451 changesets you want.
452 452
453 453 hg transplant --branch REVISION --all will rebase the selected
454 454 branch (up to the named revision) onto your current working
455 455 directory.
456 456
457 457 You can optionally mark selected transplanted changesets as merge
458 458 changesets. You will not be prompted to transplant any ancestors
459 459 of a merged transplant, and you can merge descendants of them
460 460 normally instead of transplanting them.
461 461
462 462 If no merges or revisions are provided, hg transplant will start
463 463 an interactive changeset browser.
464 464
465 465 If a changeset application fails, you can fix the merge by hand
466 466 and then resume where you left off by calling hg transplant
467 467 --continue/-c.
468 468 '''
469 469 def getremotechanges(repo, url):
470 470 sourcerepo = ui.expandpath(url)
471 471 source = hg.repository(ui, sourcerepo)
472 472 common, incoming, rheads = repo.findcommonincoming(source, force=True)
473 473 if not incoming:
474 474 return (source, None, None)
475 475
476 476 bundle = None
477 477 if not source.local():
478 478 if source.capable('changegroupsubset'):
479 479 cg = source.changegroupsubset(incoming, rheads, 'incoming')
480 480 else:
481 481 cg = source.changegroup(incoming, 'incoming')
482 482 bundle = changegroup.writebundle(cg, None, 'HG10UN')
483 483 source = bundlerepo.bundlerepository(ui, repo.root, bundle)
484 484
485 485 return (source, incoming, bundle)
486 486
487 487 def incwalk(repo, incoming, branches, match=util.always):
488 488 if not branches:
489 489 branches = None
490 490 for node in repo.changelog.nodesbetween(incoming, branches)[0]:
491 491 if match(node):
492 492 yield node
493 493
494 494 def transplantwalk(repo, root, branches, match=util.always):
495 495 if not branches:
496 496 branches = repo.heads()
497 497 ancestors = []
498 498 for branch in branches:
499 499 ancestors.append(repo.changelog.ancestor(root, branch))
500 500 for node in repo.changelog.nodesbetween(ancestors, branches)[0]:
501 501 if match(node):
502 502 yield node
503 503
504 504 def checkopts(opts, revs):
505 505 if opts.get('continue'):
506 506 if filter(lambda opt: opts.get(opt), ('branch', 'all', 'merge')):
507 507 raise util.Abort(_('--continue is incompatible with '
508 508 'branch, all or merge'))
509 509 return
510 510 if not (opts.get('source') or revs or
511 511 opts.get('merge') or opts.get('branch')):
512 512 raise util.Abort(_('no source URL, branch tag or revision '
513 513 'list provided'))
514 514 if opts.get('all'):
515 515 if not opts.get('branch'):
516 516 raise util.Abort(_('--all requires a branch revision'))
517 517 if revs:
518 518 raise util.Abort(_('--all is incompatible with a '
519 519 'revision list'))
520 520
521 521 checkopts(opts, revs)
522 522
523 523 if not opts.get('log'):
524 524 opts['log'] = ui.config('transplant', 'log')
525 525 if not opts.get('filter'):
526 526 opts['filter'] = ui.config('transplant', 'filter')
527 527
528 528 tp = transplanter(ui, repo)
529 529
530 530 p1, p2 = repo.dirstate.parents()
531 531 if len(repo) > 0 and p1 == revlog.nullid:
532 532 raise util.Abort(_('no revision checked out'))
533 533 if not opts.get('continue'):
534 534 if p2 != revlog.nullid:
535 535 raise util.Abort(_('outstanding uncommitted merges'))
536 536 m, a, r, d = repo.status()[:4]
537 537 if m or a or r or d:
538 538 raise util.Abort(_('outstanding local changes'))
539 539
540 540 bundle = None
541 541 source = opts.get('source')
542 542 if source:
543 543 (source, incoming, bundle) = getremotechanges(repo, source)
544 544 else:
545 545 source = repo
546 546
547 547 try:
548 548 if opts.get('continue'):
549 549 tp.resume(repo, source, opts)
550 550 return
551 551
552 552 tf = tp.transplantfilter(repo, source, p1)
553 553 if opts.get('prune'):
554 554 prune = [source.lookup(r)
555 555 for r in cmdutil.revrange(source, opts.get('prune'))]
556 556 matchfn = lambda x: tf(x) and x not in prune
557 557 else:
558 558 matchfn = tf
559 559 branches = map(source.lookup, opts.get('branch', ()))
560 560 merges = map(source.lookup, opts.get('merge', ()))
561 561 revmap = {}
562 562 if revs:
563 563 for r in cmdutil.revrange(source, revs):
564 564 revmap[int(r)] = source.lookup(r)
565 565 elif opts.get('all') or not merges:
566 566 if source != repo:
567 567 alltransplants = incwalk(source, incoming, branches,
568 568 match=matchfn)
569 569 else:
570 570 alltransplants = transplantwalk(source, p1, branches,
571 571 match=matchfn)
572 572 if opts.get('all'):
573 573 revs = alltransplants
574 574 else:
575 575 revs, newmerges = browserevs(ui, source, alltransplants, opts)
576 576 merges.extend(newmerges)
577 577 for r in revs:
578 578 revmap[source.changelog.rev(r)] = r
579 579 for r in merges:
580 580 revmap[source.changelog.rev(r)] = r
581 581
582 582 tp.apply(repo, source, revmap, merges, opts)
583 583 finally:
584 584 if bundle:
585 585 source.close()
586 586 os.unlink(bundle)
587 587
588 588 cmdtable = {
589 589 "transplant":
590 590 (transplant,
591 591 [('s', 'source', '', _('pull patches from REPOSITORY')),
592 592 ('b', 'branch', [], _('pull patches from branch BRANCH')),
593 593 ('a', 'all', None, _('pull all changesets up to BRANCH')),
594 594 ('p', 'prune', [], _('skip over REV')),
595 595 ('m', 'merge', [], _('merge at REV')),
596 596 ('', 'log', None, _('append transplant info to log message')),
597 597 ('c', 'continue', None, _('continue last transplant session '
598 598 'after repair')),
599 599 ('', 'filter', '', _('filter changesets through FILTER'))],
600 600 _('hg transplant [-s REPOSITORY] [-b BRANCH [-a]] [-p REV] '
601 601 '[-m REV] [REV]...'))
602 602 }
@@ -1,221 +1,221 b''
1 1 # changelog.py - changelog class for mercurial
2 2 #
3 3 # Copyright 2005-2007 Matt Mackall <mpm@selenic.com>
4 4 #
5 5 # This software may be used and distributed according to the terms
6 6 # of the GNU General Public License, incorporated herein by reference.
7 7
8 8 from node import bin, hex, nullid
9 9 from i18n import _
10 10 import util, error, revlog, encoding
11 11
12 12 def _string_escape(text):
13 13 """
14 14 >>> d = {'nl': chr(10), 'bs': chr(92), 'cr': chr(13), 'nul': chr(0)}
15 15 >>> s = "ab%(nl)scd%(bs)s%(bs)sn%(nul)sab%(cr)scd%(bs)s%(nl)s" % d
16 16 >>> s
17 17 'ab\\ncd\\\\\\\\n\\x00ab\\rcd\\\\\\n'
18 18 >>> res = _string_escape(s)
19 19 >>> s == res.decode('string_escape')
20 20 True
21 21 """
22 22 # subset of the string_escape codec
23 23 text = text.replace('\\', '\\\\').replace('\n', '\\n').replace('\r', '\\r')
24 24 return text.replace('\0', '\\0')
25 25
26 26 class appender:
27 27 '''the changelog index must be updated last on disk, so we use this class
28 28 to delay writes to it'''
29 29 def __init__(self, fp, buf):
30 30 self.data = buf
31 31 self.fp = fp
32 32 self.offset = fp.tell()
33 33 self.size = util.fstat(fp).st_size
34 34
35 35 def end(self):
36 36 return self.size + len("".join(self.data))
37 37 def tell(self):
38 38 return self.offset
39 39 def flush(self):
40 40 pass
41 41 def close(self):
42 42 self.fp.close()
43 43
44 44 def seek(self, offset, whence=0):
45 45 '''virtual file offset spans real file and data'''
46 46 if whence == 0:
47 47 self.offset = offset
48 48 elif whence == 1:
49 49 self.offset += offset
50 50 elif whence == 2:
51 51 self.offset = self.end() + offset
52 52 if self.offset < self.size:
53 53 self.fp.seek(self.offset)
54 54
55 55 def read(self, count=-1):
56 56 '''only trick here is reads that span real file and data'''
57 57 ret = ""
58 58 if self.offset < self.size:
59 59 s = self.fp.read(count)
60 60 ret = s
61 61 self.offset += len(s)
62 62 if count > 0:
63 63 count -= len(s)
64 64 if count != 0:
65 65 doff = self.offset - self.size
66 66 self.data.insert(0, "".join(self.data))
67 67 del self.data[1:]
68 68 s = self.data[0][doff:doff+count]
69 69 self.offset += len(s)
70 70 ret += s
71 71 return ret
72 72
73 73 def write(self, s):
74 74 self.data.append(str(s))
75 75 self.offset += len(s)
76 76
77 77 class changelog(revlog.revlog):
78 78 def __init__(self, opener):
79 79 revlog.revlog.__init__(self, opener, "00changelog.i")
80 80
81 81 def delayupdate(self):
82 82 "delay visibility of index updates to other readers"
83 83 self._realopener = self.opener
84 84 self.opener = self._delayopener
85 85 self._delaycount = len(self)
86 86 self._delaybuf = []
87 87 self._delayname = None
88 88
89 89 def finalize(self, tr):
90 90 "finalize index updates"
91 91 self.opener = self._realopener
92 92 # move redirected index data back into place
93 93 if self._delayname:
94 94 util.rename(self._delayname + ".a", self._delayname)
95 95 elif self._delaybuf:
96 96 fp = self.opener(self.indexfile, 'a')
97 97 fp.write("".join(self._delaybuf))
98 98 fp.close()
99 99 self._delaybuf = []
100 100 # split when we're done
101 101 self.checkinlinesize(tr)
102 102
103 103 def _delayopener(self, name, mode='r'):
104 104 fp = self._realopener(name, mode)
105 105 # only divert the index
106 106 if not name == self.indexfile:
107 107 return fp
108 108 # if we're doing an initial clone, divert to another file
109 109 if self._delaycount == 0:
110 110 self._delayname = fp.name
111 111 if not len(self):
112 112 # make sure to truncate the file
113 113 mode = mode.replace('a', 'w')
114 114 return self._realopener(name + ".a", mode)
115 115 # otherwise, divert to memory
116 116 return appender(fp, self._delaybuf)
117 117
118 118 def readpending(self, file):
119 119 r = revlog.revlog(self.opener, file)
120 120 self.index = r.index
121 121 self.nodemap = r.nodemap
122 122 self._chunkcache = r._chunkcache
123 123
124 124 def writepending(self):
125 125 "create a file containing the unfinalized state for pretxnchangegroup"
126 126 if self._delaybuf:
127 127 # make a temporary copy of the index
128 128 fp1 = self._realopener(self.indexfile)
129 129 fp2 = self._realopener(self.indexfile + ".a", "w")
130 130 fp2.write(fp1.read())
131 131 # add pending data
132 132 fp2.write("".join(self._delaybuf))
133 133 fp2.close()
134 134 # switch modes so finalize can simply rename
135 135 self._delaybuf = []
136 136 self._delayname = fp1.name
137 137
138 138 if self._delayname:
139 139 return True
140 140
141 141 return False
142 142
143 143 def checkinlinesize(self, tr, fp=None):
144 144 if self.opener == self._delayopener:
145 145 return
146 146 return revlog.revlog.checkinlinesize(self, tr, fp)
147 147
148 148 def decode_extra(self, text):
149 149 extra = {}
150 150 for l in text.split('\0'):
151 151 if l:
152 152 k, v = l.decode('string_escape').split(':', 1)
153 153 extra[k] = v
154 154 return extra
155 155
156 156 def encode_extra(self, d):
157 157 # keys must be sorted to produce a deterministic changelog entry
158 items = [_string_escape('%s:%s' % (k, d[k])) for k in util.sort(d)]
158 items = [_string_escape('%s:%s' % (k, d[k])) for k in sorted(d)]
159 159 return "\0".join(items)
160 160
161 161 def read(self, node):
162 162 """
163 163 format used:
164 164 nodeid\n : manifest node in ascii
165 165 user\n : user, no \n or \r allowed
166 166 time tz extra\n : date (time is int or float, timezone is int)
167 167 : extra is metadata, encoded and separated by '\0'
168 168 : older versions ignore it
169 169 files\n\n : files modified by the cset, no \n or \r allowed
170 170 (.*) : comment (free text, ideally utf-8)
171 171
172 172 changelog v0 doesn't use extra
173 173 """
174 174 text = self.revision(node)
175 175 if not text:
176 176 return (nullid, "", (0, 0), [], "", {'branch': 'default'})
177 177 last = text.index("\n\n")
178 178 desc = encoding.tolocal(text[last + 2:])
179 179 l = text[:last].split('\n')
180 180 manifest = bin(l[0])
181 181 user = encoding.tolocal(l[1])
182 182
183 183 extra_data = l[2].split(' ', 2)
184 184 if len(extra_data) != 3:
185 185 time = float(extra_data.pop(0))
186 186 try:
187 187 # various tools did silly things with the time zone field.
188 188 timezone = int(extra_data[0])
189 189 except:
190 190 timezone = 0
191 191 extra = {}
192 192 else:
193 193 time, timezone, extra = extra_data
194 194 time, timezone = float(time), int(timezone)
195 195 extra = self.decode_extra(extra)
196 196 if not extra.get('branch'):
197 197 extra['branch'] = 'default'
198 198 files = l[3:]
199 199 return (manifest, user, (time, timezone), files, desc, extra)
200 200
201 201 def add(self, manifest, files, desc, transaction, p1=None, p2=None,
202 202 user=None, date=None, extra={}):
203 203
204 204 user = user.strip()
205 205 if "\n" in user:
206 206 raise error.RevlogError(_("username %s contains a newline")
207 207 % repr(user))
208 208 user, desc = encoding.fromlocal(user), encoding.fromlocal(desc)
209 209
210 210 if date:
211 211 parseddate = "%d %d" % util.parsedate(date)
212 212 else:
213 213 parseddate = "%d %d" % util.makedate()
214 214 if extra and extra.get("branch") in ("default", ""):
215 215 del extra["branch"]
216 216 if extra:
217 217 extra = self.encode_extra(extra)
218 218 parseddate = "%s %s" % (parseddate, extra)
219 l = [hex(manifest), user, parseddate] + util.sort(files) + ["", desc]
219 l = [hex(manifest), user, parseddate] + sorted(files) + ["", desc]
220 220 text = "\n".join(l)
221 221 return self.addrevision(text, transaction, len(self), p1, p2)
@@ -1,1226 +1,1226 b''
1 1 # cmdutil.py - help for command processing in mercurial
2 2 #
3 3 # Copyright 2005-2007 Matt Mackall <mpm@selenic.com>
4 4 #
5 5 # This software may be used and distributed according to the terms
6 6 # of the GNU General Public License, incorporated herein by reference.
7 7
8 8 from node import hex, nullid, nullrev, short
9 9 from i18n import _
10 10 import os, sys, bisect, stat, encoding
11 11 import mdiff, bdiff, util, templater, templatefilters, patch, errno, error
12 12 import match as _match
13 13
14 14 revrangesep = ':'
15 15
16 16 def findpossible(cmd, table, strict=False):
17 17 """
18 18 Return cmd -> (aliases, command table entry)
19 19 for each matching command.
20 20 Return debug commands (or their aliases) only if no normal command matches.
21 21 """
22 22 choice = {}
23 23 debugchoice = {}
24 24 for e in table.keys():
25 25 aliases = e.lstrip("^").split("|")
26 26 found = None
27 27 if cmd in aliases:
28 28 found = cmd
29 29 elif not strict:
30 30 for a in aliases:
31 31 if a.startswith(cmd):
32 32 found = a
33 33 break
34 34 if found is not None:
35 35 if aliases[0].startswith("debug") or found.startswith("debug"):
36 36 debugchoice[found] = (aliases, table[e])
37 37 else:
38 38 choice[found] = (aliases, table[e])
39 39
40 40 if not choice and debugchoice:
41 41 choice = debugchoice
42 42
43 43 return choice
44 44
45 45 def findcmd(cmd, table, strict=True):
46 46 """Return (aliases, command table entry) for command string."""
47 47 choice = findpossible(cmd, table, strict)
48 48
49 49 if cmd in choice:
50 50 return choice[cmd]
51 51
52 52 if len(choice) > 1:
53 53 clist = choice.keys()
54 54 clist.sort()
55 55 raise error.AmbiguousCommand(cmd, clist)
56 56
57 57 if choice:
58 58 return choice.values()[0]
59 59
60 60 raise error.UnknownCommand(cmd)
61 61
62 62 def bail_if_changed(repo):
63 63 if repo.dirstate.parents()[1] != nullid:
64 64 raise util.Abort(_('outstanding uncommitted merge'))
65 65 modified, added, removed, deleted = repo.status()[:4]
66 66 if modified or added or removed or deleted:
67 67 raise util.Abort(_("outstanding uncommitted changes"))
68 68
69 69 def logmessage(opts):
70 70 """ get the log message according to -m and -l option """
71 71 message = opts.get('message')
72 72 logfile = opts.get('logfile')
73 73
74 74 if message and logfile:
75 75 raise util.Abort(_('options --message and --logfile are mutually '
76 76 'exclusive'))
77 77 if not message and logfile:
78 78 try:
79 79 if logfile == '-':
80 80 message = sys.stdin.read()
81 81 else:
82 82 message = open(logfile).read()
83 83 except IOError, inst:
84 84 raise util.Abort(_("can't read commit message '%s': %s") %
85 85 (logfile, inst.strerror))
86 86 return message
87 87
88 88 def loglimit(opts):
89 89 """get the log limit according to option -l/--limit"""
90 90 limit = opts.get('limit')
91 91 if limit:
92 92 try:
93 93 limit = int(limit)
94 94 except ValueError:
95 95 raise util.Abort(_('limit must be a positive integer'))
96 96 if limit <= 0: raise util.Abort(_('limit must be positive'))
97 97 else:
98 98 limit = sys.maxint
99 99 return limit
100 100
101 101 def remoteui(src, opts):
102 102 'build a remote ui from ui or repo and opts'
103 103 if hasattr(src, 'baseui'): # looks like a repository
104 104 dst = src.baseui # drop repo-specific config
105 105 src = src.ui # copy target options from repo
106 106 else: # assume it's a global ui object
107 107 dst = src # keep all global options
108 108
109 109 # copy ssh-specific options
110 110 for o in 'ssh', 'remotecmd':
111 111 v = opts.get(o) or src.config('ui', o)
112 112 if v:
113 113 dst.setconfig("ui", o, v)
114 114 # copy bundle-specific options
115 115 r = src.config('bundle', 'mainreporoot')
116 116 if r:
117 117 dst.setconfig('bundle', 'mainreporoot', r)
118 118
119 119 return dst
120 120
121 121 def revpair(repo, revs):
122 122 '''return pair of nodes, given list of revisions. second item can
123 123 be None, meaning use working dir.'''
124 124
125 125 def revfix(repo, val, defval):
126 126 if not val and val != 0 and defval is not None:
127 127 val = defval
128 128 return repo.lookup(val)
129 129
130 130 if not revs:
131 131 return repo.dirstate.parents()[0], None
132 132 end = None
133 133 if len(revs) == 1:
134 134 if revrangesep in revs[0]:
135 135 start, end = revs[0].split(revrangesep, 1)
136 136 start = revfix(repo, start, 0)
137 137 end = revfix(repo, end, len(repo) - 1)
138 138 else:
139 139 start = revfix(repo, revs[0], None)
140 140 elif len(revs) == 2:
141 141 if revrangesep in revs[0] or revrangesep in revs[1]:
142 142 raise util.Abort(_('too many revisions specified'))
143 143 start = revfix(repo, revs[0], None)
144 144 end = revfix(repo, revs[1], None)
145 145 else:
146 146 raise util.Abort(_('too many revisions specified'))
147 147 return start, end
148 148
149 149 def revrange(repo, revs):
150 150 """Yield revision as strings from a list of revision specifications."""
151 151
152 152 def revfix(repo, val, defval):
153 153 if not val and val != 0 and defval is not None:
154 154 return defval
155 155 return repo.changelog.rev(repo.lookup(val))
156 156
157 157 seen, l = {}, []
158 158 for spec in revs:
159 159 if revrangesep in spec:
160 160 start, end = spec.split(revrangesep, 1)
161 161 start = revfix(repo, start, 0)
162 162 end = revfix(repo, end, len(repo) - 1)
163 163 step = start > end and -1 or 1
164 164 for rev in xrange(start, end+step, step):
165 165 if rev in seen:
166 166 continue
167 167 seen[rev] = 1
168 168 l.append(rev)
169 169 else:
170 170 rev = revfix(repo, spec, None)
171 171 if rev in seen:
172 172 continue
173 173 seen[rev] = 1
174 174 l.append(rev)
175 175
176 176 return l
177 177
178 178 def make_filename(repo, pat, node,
179 179 total=None, seqno=None, revwidth=None, pathname=None):
180 180 node_expander = {
181 181 'H': lambda: hex(node),
182 182 'R': lambda: str(repo.changelog.rev(node)),
183 183 'h': lambda: short(node),
184 184 }
185 185 expander = {
186 186 '%': lambda: '%',
187 187 'b': lambda: os.path.basename(repo.root),
188 188 }
189 189
190 190 try:
191 191 if node:
192 192 expander.update(node_expander)
193 193 if node:
194 194 expander['r'] = (lambda:
195 195 str(repo.changelog.rev(node)).zfill(revwidth or 0))
196 196 if total is not None:
197 197 expander['N'] = lambda: str(total)
198 198 if seqno is not None:
199 199 expander['n'] = lambda: str(seqno)
200 200 if total is not None and seqno is not None:
201 201 expander['n'] = lambda: str(seqno).zfill(len(str(total)))
202 202 if pathname is not None:
203 203 expander['s'] = lambda: os.path.basename(pathname)
204 204 expander['d'] = lambda: os.path.dirname(pathname) or '.'
205 205 expander['p'] = lambda: pathname
206 206
207 207 newname = []
208 208 patlen = len(pat)
209 209 i = 0
210 210 while i < patlen:
211 211 c = pat[i]
212 212 if c == '%':
213 213 i += 1
214 214 c = pat[i]
215 215 c = expander[c]()
216 216 newname.append(c)
217 217 i += 1
218 218 return ''.join(newname)
219 219 except KeyError, inst:
220 220 raise util.Abort(_("invalid format spec '%%%s' in output file name") %
221 221 inst.args[0])
222 222
223 223 def make_file(repo, pat, node=None,
224 224 total=None, seqno=None, revwidth=None, mode='wb', pathname=None):
225 225
226 226 writable = 'w' in mode or 'a' in mode
227 227
228 228 if not pat or pat == '-':
229 229 return writable and sys.stdout or sys.stdin
230 230 if hasattr(pat, 'write') and writable:
231 231 return pat
232 232 if hasattr(pat, 'read') and 'r' in mode:
233 233 return pat
234 234 return open(make_filename(repo, pat, node, total, seqno, revwidth,
235 235 pathname),
236 236 mode)
237 237
238 238 def match(repo, pats=[], opts={}, globbed=False, default='relpath'):
239 239 if not globbed and default == 'relpath':
240 240 pats = util.expand_glob(pats or [])
241 241 m = _match.match(repo.root, repo.getcwd(), pats,
242 242 opts.get('include'), opts.get('exclude'), default)
243 243 def badfn(f, msg):
244 244 repo.ui.warn("%s: %s\n" % (m.rel(f), msg))
245 245 return False
246 246 m.bad = badfn
247 247 return m
248 248
249 249 def matchall(repo):
250 250 return _match.always(repo.root, repo.getcwd())
251 251
252 252 def matchfiles(repo, files):
253 253 return _match.exact(repo.root, repo.getcwd(), files)
254 254
255 255 def findrenames(repo, added=None, removed=None, threshold=0.5):
256 256 '''find renamed files -- yields (before, after, score) tuples'''
257 257 if added is None or removed is None:
258 258 added, removed = repo.status()[1:3]
259 259 ctx = repo['.']
260 260 for a in added:
261 261 aa = repo.wread(a)
262 262 bestname, bestscore = None, threshold
263 263 for r in removed:
264 264 rr = ctx.filectx(r).data()
265 265
266 266 # bdiff.blocks() returns blocks of matching lines
267 267 # count the number of bytes in each
268 268 equal = 0
269 269 alines = mdiff.splitnewlines(aa)
270 270 matches = bdiff.blocks(aa, rr)
271 271 for x1,x2,y1,y2 in matches:
272 272 for line in alines[x1:x2]:
273 273 equal += len(line)
274 274
275 275 lengths = len(aa) + len(rr)
276 276 if lengths:
277 277 myscore = equal*2.0 / lengths
278 278 if myscore >= bestscore:
279 279 bestname, bestscore = r, myscore
280 280 if bestname:
281 281 yield bestname, a, bestscore
282 282
283 283 def addremove(repo, pats=[], opts={}, dry_run=None, similarity=None):
284 284 if dry_run is None:
285 285 dry_run = opts.get('dry_run')
286 286 if similarity is None:
287 287 similarity = float(opts.get('similarity') or 0)
288 288 add, remove = [], []
289 289 mapping = {}
290 290 audit_path = util.path_auditor(repo.root)
291 291 m = match(repo, pats, opts)
292 292 for abs in repo.walk(m):
293 293 target = repo.wjoin(abs)
294 294 good = True
295 295 try:
296 296 audit_path(abs)
297 297 except:
298 298 good = False
299 299 rel = m.rel(abs)
300 300 exact = m.exact(abs)
301 301 if good and abs not in repo.dirstate:
302 302 add.append(abs)
303 303 mapping[abs] = rel, m.exact(abs)
304 304 if repo.ui.verbose or not exact:
305 305 repo.ui.status(_('adding %s\n') % ((pats and rel) or abs))
306 306 if repo.dirstate[abs] != 'r' and (not good or not util.lexists(target)
307 307 or (os.path.isdir(target) and not os.path.islink(target))):
308 308 remove.append(abs)
309 309 mapping[abs] = rel, exact
310 310 if repo.ui.verbose or not exact:
311 311 repo.ui.status(_('removing %s\n') % ((pats and rel) or abs))
312 312 if not dry_run:
313 313 repo.remove(remove)
314 314 repo.add(add)
315 315 if similarity > 0:
316 316 for old, new, score in findrenames(repo, add, remove, similarity):
317 317 oldrel, oldexact = mapping[old]
318 318 newrel, newexact = mapping[new]
319 319 if repo.ui.verbose or not oldexact or not newexact:
320 320 repo.ui.status(_('recording removal of %s as rename to %s '
321 321 '(%d%% similar)\n') %
322 322 (oldrel, newrel, score * 100))
323 323 if not dry_run:
324 324 repo.copy(old, new)
325 325
326 326 def copy(ui, repo, pats, opts, rename=False):
327 327 # called with the repo lock held
328 328 #
329 329 # hgsep => pathname that uses "/" to separate directories
330 330 # ossep => pathname that uses os.sep to separate directories
331 331 cwd = repo.getcwd()
332 332 targets = {}
333 333 after = opts.get("after")
334 334 dryrun = opts.get("dry_run")
335 335
336 336 def walkpat(pat):
337 337 srcs = []
338 338 m = match(repo, [pat], opts, globbed=True)
339 339 for abs in repo.walk(m):
340 340 state = repo.dirstate[abs]
341 341 rel = m.rel(abs)
342 342 exact = m.exact(abs)
343 343 if state in '?r':
344 344 if exact and state == '?':
345 345 ui.warn(_('%s: not copying - file is not managed\n') % rel)
346 346 if exact and state == 'r':
347 347 ui.warn(_('%s: not copying - file has been marked for'
348 348 ' remove\n') % rel)
349 349 continue
350 350 # abs: hgsep
351 351 # rel: ossep
352 352 srcs.append((abs, rel, exact))
353 353 return srcs
354 354
355 355 # abssrc: hgsep
356 356 # relsrc: ossep
357 357 # otarget: ossep
358 358 def copyfile(abssrc, relsrc, otarget, exact):
359 359 abstarget = util.canonpath(repo.root, cwd, otarget)
360 360 reltarget = repo.pathto(abstarget, cwd)
361 361 target = repo.wjoin(abstarget)
362 362 src = repo.wjoin(abssrc)
363 363 state = repo.dirstate[abstarget]
364 364
365 365 # check for collisions
366 366 prevsrc = targets.get(abstarget)
367 367 if prevsrc is not None:
368 368 ui.warn(_('%s: not overwriting - %s collides with %s\n') %
369 369 (reltarget, repo.pathto(abssrc, cwd),
370 370 repo.pathto(prevsrc, cwd)))
371 371 return
372 372
373 373 # check for overwrites
374 374 exists = os.path.exists(target)
375 375 if not after and exists or after and state in 'mn':
376 376 if not opts['force']:
377 377 ui.warn(_('%s: not overwriting - file exists\n') %
378 378 reltarget)
379 379 return
380 380
381 381 if after:
382 382 if not exists:
383 383 return
384 384 elif not dryrun:
385 385 try:
386 386 if exists:
387 387 os.unlink(target)
388 388 targetdir = os.path.dirname(target) or '.'
389 389 if not os.path.isdir(targetdir):
390 390 os.makedirs(targetdir)
391 391 util.copyfile(src, target)
392 392 except IOError, inst:
393 393 if inst.errno == errno.ENOENT:
394 394 ui.warn(_('%s: deleted in working copy\n') % relsrc)
395 395 else:
396 396 ui.warn(_('%s: cannot copy - %s\n') %
397 397 (relsrc, inst.strerror))
398 398 return True # report a failure
399 399
400 400 if ui.verbose or not exact:
401 401 if rename:
402 402 ui.status(_('moving %s to %s\n') % (relsrc, reltarget))
403 403 else:
404 404 ui.status(_('copying %s to %s\n') % (relsrc, reltarget))
405 405
406 406 targets[abstarget] = abssrc
407 407
408 408 # fix up dirstate
409 409 origsrc = repo.dirstate.copied(abssrc) or abssrc
410 410 if abstarget == origsrc: # copying back a copy?
411 411 if state not in 'mn' and not dryrun:
412 412 repo.dirstate.normallookup(abstarget)
413 413 else:
414 414 if repo.dirstate[origsrc] == 'a' and origsrc == abssrc:
415 415 if not ui.quiet:
416 416 ui.warn(_("%s has not been committed yet, so no copy "
417 417 "data will be stored for %s.\n")
418 418 % (repo.pathto(origsrc, cwd), reltarget))
419 419 if repo.dirstate[abstarget] in '?r' and not dryrun:
420 420 repo.add([abstarget])
421 421 elif not dryrun:
422 422 repo.copy(origsrc, abstarget)
423 423
424 424 if rename and not dryrun:
425 425 repo.remove([abssrc], not after)
426 426
427 427 # pat: ossep
428 428 # dest ossep
429 429 # srcs: list of (hgsep, hgsep, ossep, bool)
430 430 # return: function that takes hgsep and returns ossep
431 431 def targetpathfn(pat, dest, srcs):
432 432 if os.path.isdir(pat):
433 433 abspfx = util.canonpath(repo.root, cwd, pat)
434 434 abspfx = util.localpath(abspfx)
435 435 if destdirexists:
436 436 striplen = len(os.path.split(abspfx)[0])
437 437 else:
438 438 striplen = len(abspfx)
439 439 if striplen:
440 440 striplen += len(os.sep)
441 441 res = lambda p: os.path.join(dest, util.localpath(p)[striplen:])
442 442 elif destdirexists:
443 443 res = lambda p: os.path.join(dest,
444 444 os.path.basename(util.localpath(p)))
445 445 else:
446 446 res = lambda p: dest
447 447 return res
448 448
449 449 # pat: ossep
450 450 # dest ossep
451 451 # srcs: list of (hgsep, hgsep, ossep, bool)
452 452 # return: function that takes hgsep and returns ossep
453 453 def targetpathafterfn(pat, dest, srcs):
454 454 if util.patkind(pat, None)[0]:
455 455 # a mercurial pattern
456 456 res = lambda p: os.path.join(dest,
457 457 os.path.basename(util.localpath(p)))
458 458 else:
459 459 abspfx = util.canonpath(repo.root, cwd, pat)
460 460 if len(abspfx) < len(srcs[0][0]):
461 461 # A directory. Either the target path contains the last
462 462 # component of the source path or it does not.
463 463 def evalpath(striplen):
464 464 score = 0
465 465 for s in srcs:
466 466 t = os.path.join(dest, util.localpath(s[0])[striplen:])
467 467 if os.path.exists(t):
468 468 score += 1
469 469 return score
470 470
471 471 abspfx = util.localpath(abspfx)
472 472 striplen = len(abspfx)
473 473 if striplen:
474 474 striplen += len(os.sep)
475 475 if os.path.isdir(os.path.join(dest, os.path.split(abspfx)[1])):
476 476 score = evalpath(striplen)
477 477 striplen1 = len(os.path.split(abspfx)[0])
478 478 if striplen1:
479 479 striplen1 += len(os.sep)
480 480 if evalpath(striplen1) > score:
481 481 striplen = striplen1
482 482 res = lambda p: os.path.join(dest,
483 483 util.localpath(p)[striplen:])
484 484 else:
485 485 # a file
486 486 if destdirexists:
487 487 res = lambda p: os.path.join(dest,
488 488 os.path.basename(util.localpath(p)))
489 489 else:
490 490 res = lambda p: dest
491 491 return res
492 492
493 493
494 494 pats = util.expand_glob(pats)
495 495 if not pats:
496 496 raise util.Abort(_('no source or destination specified'))
497 497 if len(pats) == 1:
498 498 raise util.Abort(_('no destination specified'))
499 499 dest = pats.pop()
500 500 destdirexists = os.path.isdir(dest) and not os.path.islink(dest)
501 501 if not destdirexists:
502 502 if len(pats) > 1 or util.patkind(pats[0], None)[0]:
503 503 raise util.Abort(_('with multiple sources, destination must be an '
504 504 'existing directory'))
505 505 if util.endswithsep(dest):
506 506 raise util.Abort(_('destination %s is not a directory') % dest)
507 507
508 508 tfn = targetpathfn
509 509 if after:
510 510 tfn = targetpathafterfn
511 511 copylist = []
512 512 for pat in pats:
513 513 srcs = walkpat(pat)
514 514 if not srcs:
515 515 continue
516 516 copylist.append((tfn(pat, dest, srcs), srcs))
517 517 if not copylist:
518 518 raise util.Abort(_('no files to copy'))
519 519
520 520 errors = 0
521 521 for targetpath, srcs in copylist:
522 522 for abssrc, relsrc, exact in srcs:
523 523 if copyfile(abssrc, relsrc, targetpath(abssrc), exact):
524 524 errors += 1
525 525
526 526 if errors:
527 527 ui.warn(_('(consider using --after)\n'))
528 528
529 529 return errors
530 530
531 531 def service(opts, parentfn=None, initfn=None, runfn=None):
532 532 '''Run a command as a service.'''
533 533
534 534 if opts['daemon'] and not opts['daemon_pipefds']:
535 535 rfd, wfd = os.pipe()
536 536 args = sys.argv[:]
537 537 args.append('--daemon-pipefds=%d,%d' % (rfd, wfd))
538 538 # Don't pass --cwd to the child process, because we've already
539 539 # changed directory.
540 540 for i in xrange(1,len(args)):
541 541 if args[i].startswith('--cwd='):
542 542 del args[i]
543 543 break
544 544 elif args[i].startswith('--cwd'):
545 545 del args[i:i+2]
546 546 break
547 547 pid = os.spawnvp(os.P_NOWAIT | getattr(os, 'P_DETACH', 0),
548 548 args[0], args)
549 549 os.close(wfd)
550 550 os.read(rfd, 1)
551 551 if parentfn:
552 552 return parentfn(pid)
553 553 else:
554 554 os._exit(0)
555 555
556 556 if initfn:
557 557 initfn()
558 558
559 559 if opts['pid_file']:
560 560 fp = open(opts['pid_file'], 'w')
561 561 fp.write(str(os.getpid()) + '\n')
562 562 fp.close()
563 563
564 564 if opts['daemon_pipefds']:
565 565 rfd, wfd = [int(x) for x in opts['daemon_pipefds'].split(',')]
566 566 os.close(rfd)
567 567 try:
568 568 os.setsid()
569 569 except AttributeError:
570 570 pass
571 571 os.write(wfd, 'y')
572 572 os.close(wfd)
573 573 sys.stdout.flush()
574 574 sys.stderr.flush()
575 575 fd = os.open(util.nulldev, os.O_RDWR)
576 576 if fd != 0: os.dup2(fd, 0)
577 577 if fd != 1: os.dup2(fd, 1)
578 578 if fd != 2: os.dup2(fd, 2)
579 579 if fd not in (0, 1, 2): os.close(fd)
580 580
581 581 if runfn:
582 582 return runfn()
583 583
584 584 class changeset_printer(object):
585 585 '''show changeset information when templating not requested.'''
586 586
587 587 def __init__(self, ui, repo, patch, diffopts, buffered):
588 588 self.ui = ui
589 589 self.repo = repo
590 590 self.buffered = buffered
591 591 self.patch = patch
592 592 self.diffopts = diffopts
593 593 self.header = {}
594 594 self.hunk = {}
595 595 self.lastheader = None
596 596
597 597 def flush(self, rev):
598 598 if rev in self.header:
599 599 h = self.header[rev]
600 600 if h != self.lastheader:
601 601 self.lastheader = h
602 602 self.ui.write(h)
603 603 del self.header[rev]
604 604 if rev in self.hunk:
605 605 self.ui.write(self.hunk[rev])
606 606 del self.hunk[rev]
607 607 return 1
608 608 return 0
609 609
610 610 def show(self, ctx, copies=(), **props):
611 611 if self.buffered:
612 612 self.ui.pushbuffer()
613 613 self._show(ctx, copies, props)
614 614 self.hunk[ctx.rev()] = self.ui.popbuffer()
615 615 else:
616 616 self._show(ctx, copies, props)
617 617
618 618 def _show(self, ctx, copies, props):
619 619 '''show a single changeset or file revision'''
620 620 changenode = ctx.node()
621 621 rev = ctx.rev()
622 622
623 623 if self.ui.quiet:
624 624 self.ui.write("%d:%s\n" % (rev, short(changenode)))
625 625 return
626 626
627 627 log = self.repo.changelog
628 628 changes = log.read(changenode)
629 629 date = util.datestr(changes[2])
630 630 extra = changes[5]
631 631 branch = extra.get("branch")
632 632
633 633 hexfunc = self.ui.debugflag and hex or short
634 634
635 635 parents = [(p, hexfunc(log.node(p)))
636 636 for p in self._meaningful_parentrevs(log, rev)]
637 637
638 638 self.ui.write(_("changeset: %d:%s\n") % (rev, hexfunc(changenode)))
639 639
640 640 # don't show the default branch name
641 641 if branch != 'default':
642 642 branch = encoding.tolocal(branch)
643 643 self.ui.write(_("branch: %s\n") % branch)
644 644 for tag in self.repo.nodetags(changenode):
645 645 self.ui.write(_("tag: %s\n") % tag)
646 646 for parent in parents:
647 647 self.ui.write(_("parent: %d:%s\n") % parent)
648 648
649 649 if self.ui.debugflag:
650 650 self.ui.write(_("manifest: %d:%s\n") %
651 651 (self.repo.manifest.rev(changes[0]), hex(changes[0])))
652 652 self.ui.write(_("user: %s\n") % changes[1])
653 653 self.ui.write(_("date: %s\n") % date)
654 654
655 655 if self.ui.debugflag:
656 656 files = self.repo.status(log.parents(changenode)[0], changenode)[:3]
657 657 for key, value in zip([_("files:"), _("files+:"), _("files-:")],
658 658 files):
659 659 if value:
660 660 self.ui.write("%-12s %s\n" % (key, " ".join(value)))
661 661 elif changes[3] and self.ui.verbose:
662 662 self.ui.write(_("files: %s\n") % " ".join(changes[3]))
663 663 if copies and self.ui.verbose:
664 664 copies = ['%s (%s)' % c for c in copies]
665 665 self.ui.write(_("copies: %s\n") % ' '.join(copies))
666 666
667 667 if extra and self.ui.debugflag:
668 for key, value in util.sort(extra.items()):
668 for key, value in sorted(extra.items()):
669 669 self.ui.write(_("extra: %s=%s\n")
670 670 % (key, value.encode('string_escape')))
671 671
672 672 description = changes[4].strip()
673 673 if description:
674 674 if self.ui.verbose:
675 675 self.ui.write(_("description:\n"))
676 676 self.ui.write(description)
677 677 self.ui.write("\n\n")
678 678 else:
679 679 self.ui.write(_("summary: %s\n") %
680 680 description.splitlines()[0])
681 681 self.ui.write("\n")
682 682
683 683 self.showpatch(changenode)
684 684
685 685 def showpatch(self, node):
686 686 if self.patch:
687 687 prev = self.repo.changelog.parents(node)[0]
688 688 chunks = patch.diff(self.repo, prev, node, match=self.patch,
689 689 opts=patch.diffopts(self.ui, self.diffopts))
690 690 for chunk in chunks:
691 691 self.ui.write(chunk)
692 692 self.ui.write("\n")
693 693
694 694 def _meaningful_parentrevs(self, log, rev):
695 695 """Return list of meaningful (or all if debug) parentrevs for rev.
696 696
697 697 For merges (two non-nullrev revisions) both parents are meaningful.
698 698 Otherwise the first parent revision is considered meaningful if it
699 699 is not the preceding revision.
700 700 """
701 701 parents = log.parentrevs(rev)
702 702 if not self.ui.debugflag and parents[1] == nullrev:
703 703 if parents[0] >= rev - 1:
704 704 parents = []
705 705 else:
706 706 parents = [parents[0]]
707 707 return parents
708 708
709 709
710 710 class changeset_templater(changeset_printer):
711 711 '''format changeset information.'''
712 712
713 713 def __init__(self, ui, repo, patch, diffopts, mapfile, buffered):
714 714 changeset_printer.__init__(self, ui, repo, patch, diffopts, buffered)
715 715 filters = templatefilters.filters.copy()
716 716 filters['formatnode'] = (ui.debugflag and (lambda x: x)
717 717 or (lambda x: x[:12]))
718 718 self.t = templater.templater(mapfile, filters,
719 719 cache={
720 720 'parent': '{rev}:{node|formatnode} ',
721 721 'manifest': '{rev}:{node|formatnode}',
722 722 'filecopy': '{name} ({source})'})
723 723
724 724 def use_template(self, t):
725 725 '''set template string to use'''
726 726 self.t.cache['changeset'] = t
727 727
728 728 def _meaningful_parentrevs(self, ctx):
729 729 """Return list of meaningful (or all if debug) parentrevs for rev.
730 730 """
731 731 parents = ctx.parents()
732 732 if len(parents) > 1:
733 733 return parents
734 734 if self.ui.debugflag:
735 735 return [parents[0], self.repo['null']]
736 736 if parents[0].rev() >= ctx.rev() - 1:
737 737 return []
738 738 return parents
739 739
740 740 def _show(self, ctx, copies, props):
741 741 '''show a single changeset or file revision'''
742 742
743 743 def showlist(name, values, plural=None, **args):
744 744 '''expand set of values.
745 745 name is name of key in template map.
746 746 values is list of strings or dicts.
747 747 plural is plural of name, if not simply name + 's'.
748 748
749 749 expansion works like this, given name 'foo'.
750 750
751 751 if values is empty, expand 'no_foos'.
752 752
753 753 if 'foo' not in template map, return values as a string,
754 754 joined by space.
755 755
756 756 expand 'start_foos'.
757 757
758 758 for each value, expand 'foo'. if 'last_foo' in template
759 759 map, expand it instead of 'foo' for last key.
760 760
761 761 expand 'end_foos'.
762 762 '''
763 763 if plural: names = plural
764 764 else: names = name + 's'
765 765 if not values:
766 766 noname = 'no_' + names
767 767 if noname in self.t:
768 768 yield self.t(noname, **args)
769 769 return
770 770 if name not in self.t:
771 771 if isinstance(values[0], str):
772 772 yield ' '.join(values)
773 773 else:
774 774 for v in values:
775 775 yield dict(v, **args)
776 776 return
777 777 startname = 'start_' + names
778 778 if startname in self.t:
779 779 yield self.t(startname, **args)
780 780 vargs = args.copy()
781 781 def one(v, tag=name):
782 782 try:
783 783 vargs.update(v)
784 784 except (AttributeError, ValueError):
785 785 try:
786 786 for a, b in v:
787 787 vargs[a] = b
788 788 except ValueError:
789 789 vargs[name] = v
790 790 return self.t(tag, **vargs)
791 791 lastname = 'last_' + name
792 792 if lastname in self.t:
793 793 last = values.pop()
794 794 else:
795 795 last = None
796 796 for v in values:
797 797 yield one(v)
798 798 if last is not None:
799 799 yield one(last, tag=lastname)
800 800 endname = 'end_' + names
801 801 if endname in self.t:
802 802 yield self.t(endname, **args)
803 803
804 804 def showbranches(**args):
805 805 branch = ctx.branch()
806 806 if branch != 'default':
807 807 branch = encoding.tolocal(branch)
808 808 return showlist('branch', [branch], plural='branches', **args)
809 809
810 810 def showparents(**args):
811 811 parents = [[('rev', p.rev()), ('node', p.hex())]
812 812 for p in self._meaningful_parentrevs(ctx)]
813 813 return showlist('parent', parents, **args)
814 814
815 815 def showtags(**args):
816 816 return showlist('tag', ctx.tags(), **args)
817 817
818 818 def showextras(**args):
819 for key, value in util.sort(ctx.extra().items()):
819 for key, value in sorted(ctx.extra().items()):
820 820 args = args.copy()
821 821 args.update(dict(key=key, value=value))
822 822 yield self.t('extra', **args)
823 823
824 824 def showcopies(**args):
825 825 c = [{'name': x[0], 'source': x[1]} for x in copies]
826 826 return showlist('file_copy', c, plural='file_copies', **args)
827 827
828 828 files = []
829 829 def getfiles():
830 830 if not files:
831 831 files[:] = self.repo.status(ctx.parents()[0].node(),
832 832 ctx.node())[:3]
833 833 return files
834 834 def showfiles(**args):
835 835 return showlist('file', ctx.files(), **args)
836 836 def showmods(**args):
837 837 return showlist('file_mod', getfiles()[0], **args)
838 838 def showadds(**args):
839 839 return showlist('file_add', getfiles()[1], **args)
840 840 def showdels(**args):
841 841 return showlist('file_del', getfiles()[2], **args)
842 842 def showmanifest(**args):
843 843 args = args.copy()
844 844 args.update(dict(rev=self.repo.manifest.rev(ctx.changeset()[0]),
845 845 node=hex(ctx.changeset()[0])))
846 846 return self.t('manifest', **args)
847 847
848 848 def showdiffstat(**args):
849 849 diff = patch.diff(self.repo, ctx.parents()[0].node(), ctx.node())
850 850 files, adds, removes = 0, 0, 0
851 851 for i in patch.diffstatdata(util.iterlines(diff)):
852 852 files += 1
853 853 adds += i[1]
854 854 removes += i[2]
855 855 return '%s: +%s/-%s' % (files, adds, removes)
856 856
857 857 defprops = {
858 858 'author': ctx.user(),
859 859 'branches': showbranches,
860 860 'date': ctx.date(),
861 861 'desc': ctx.description().strip(),
862 862 'file_adds': showadds,
863 863 'file_dels': showdels,
864 864 'file_mods': showmods,
865 865 'files': showfiles,
866 866 'file_copies': showcopies,
867 867 'manifest': showmanifest,
868 868 'node': ctx.hex(),
869 869 'parents': showparents,
870 870 'rev': ctx.rev(),
871 871 'tags': showtags,
872 872 'extras': showextras,
873 873 'diffstat': showdiffstat,
874 874 }
875 875 props = props.copy()
876 876 props.update(defprops)
877 877
878 878 # find correct templates for current mode
879 879
880 880 tmplmodes = [
881 881 (True, None),
882 882 (self.ui.verbose, 'verbose'),
883 883 (self.ui.quiet, 'quiet'),
884 884 (self.ui.debugflag, 'debug'),
885 885 ]
886 886
887 887 types = {'header': '', 'changeset': 'changeset'}
888 888 for mode, postfix in tmplmodes:
889 889 for type in types:
890 890 cur = postfix and ('%s_%s' % (type, postfix)) or type
891 891 if mode and cur in self.t:
892 892 types[type] = cur
893 893
894 894 try:
895 895
896 896 # write header
897 897 if types['header']:
898 898 h = templater.stringify(self.t(types['header'], **props))
899 899 if self.buffered:
900 900 self.header[ctx.rev()] = h
901 901 else:
902 902 self.ui.write(h)
903 903
904 904 # write changeset metadata, then patch if requested
905 905 key = types['changeset']
906 906 self.ui.write(templater.stringify(self.t(key, **props)))
907 907 self.showpatch(ctx.node())
908 908
909 909 except KeyError, inst:
910 910 msg = _("%s: no key named '%s'")
911 911 raise util.Abort(msg % (self.t.mapfile, inst.args[0]))
912 912 except SyntaxError, inst:
913 913 raise util.Abort(_('%s: %s') % (self.t.mapfile, inst.args[0]))
914 914
915 915 def show_changeset(ui, repo, opts, buffered=False, matchfn=False):
916 916 """show one changeset using template or regular display.
917 917
918 918 Display format will be the first non-empty hit of:
919 919 1. option 'template'
920 920 2. option 'style'
921 921 3. [ui] setting 'logtemplate'
922 922 4. [ui] setting 'style'
923 923 If all of these values are either unset or the empty string,
924 924 regular display via changeset_printer() is done.
925 925 """
926 926 # options
927 927 patch = False
928 928 if opts.get('patch'):
929 929 patch = matchfn or matchall(repo)
930 930
931 931 tmpl = opts.get('template')
932 932 style = None
933 933 if tmpl:
934 934 tmpl = templater.parsestring(tmpl, quoted=False)
935 935 else:
936 936 style = opts.get('style')
937 937
938 938 # ui settings
939 939 if not (tmpl or style):
940 940 tmpl = ui.config('ui', 'logtemplate')
941 941 if tmpl:
942 942 tmpl = templater.parsestring(tmpl)
943 943 else:
944 944 style = ui.config('ui', 'style')
945 945
946 946 if not (tmpl or style):
947 947 return changeset_printer(ui, repo, patch, opts, buffered)
948 948
949 949 mapfile = None
950 950 if style and not tmpl:
951 951 mapfile = style
952 952 if not os.path.split(mapfile)[0]:
953 953 mapname = (templater.templatepath('map-cmdline.' + mapfile)
954 954 or templater.templatepath(mapfile))
955 955 if mapname: mapfile = mapname
956 956
957 957 try:
958 958 t = changeset_templater(ui, repo, patch, opts, mapfile, buffered)
959 959 except SyntaxError, inst:
960 960 raise util.Abort(inst.args[0])
961 961 if tmpl: t.use_template(tmpl)
962 962 return t
963 963
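The four-level fallback documented in the show_changeset() docstring can be sketched as follows. The helper name and return shape are hypothetical; the real function consults `opts` and `ui.config` and returns a printer object rather than a label.

```python
# Hypothetical sketch of show_changeset()'s format resolution order:
# --template, then --style, then [ui] logtemplate, then [ui] style,
# falling back to the plain changeset printer.
def pick_format(opt_template, opt_style, ui_logtemplate, ui_style):
    for source, value in [('--template', opt_template),
                          ('--style', opt_style),
                          ('ui.logtemplate', ui_logtemplate),
                          ('ui.style', ui_style)]:
        if value:
            return source, value
    return 'default', 'changeset_printer'

print(pick_format('', '', '{rev}: {desc|firstline}', 'compact'))
# -> ('ui.logtemplate', '{rev}: {desc|firstline}')
```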
964 964 def finddate(ui, repo, date):
965 965 """Find the tipmost changeset that matches the given date spec"""
966 966 df = util.matchdate(date)
967 967 get = util.cachefunc(lambda r: repo[r].changeset())
968 968 changeiter, matchfn = walkchangerevs(ui, repo, [], get, {'rev':None})
969 969 results = {}
970 970 for st, rev, fns in changeiter:
971 971 if st == 'add':
972 972 d = get(rev)[2]
973 973 if df(d[0]):
974 974 results[rev] = d
975 975 elif st == 'iter':
976 976 if rev in results:
977 977 ui.status(_("Found revision %s from %s\n") %
978 978 (rev, util.datestr(results[rev])))
979 979 return str(rev)
980 980
981 981 raise util.Abort(_("revision matching date not found"))
982 982
983 983 def walkchangerevs(ui, repo, pats, change, opts):
984 984 '''Iterate over files and the revs in which they changed.
985 985
986 986 Callers most commonly need to iterate backwards over the history
987 987 in which they are interested. Doing so has awful (quadratic-looking)
988 988 performance, so we use iterators in a "windowed" way.
989 989
990 990 We walk a window of revisions in the desired order. Within the
991 991 window, we first walk forwards to gather data, then in the desired
992 992 order (usually backwards) to display it.
993 993
994 994 This function returns an (iterator, matchfn) tuple. The iterator
995 995 yields 3-tuples. They will be of one of the following forms:
996 996
997 997 "window", incrementing, lastrev: stepping through a window,
998 998 positive if walking forwards through revs, last rev in the
999 999 sequence iterated over - use to reset state for the current window
1000 1000
1001 1001 "add", rev, fns: out-of-order traversal of the given file names
1002 1002 fns, which changed during revision rev - use to gather data for
1003 1003 possible display
1004 1004
1005 1005 "iter", rev, None: in-order traversal of the revs earlier iterated
1006 1006 over with "add" - use to display data'''
1007 1007
1008 1008 def increasing_windows(start, end, windowsize=8, sizelimit=512):
1009 1009 if start < end:
1010 1010 while start < end:
1011 1011 yield start, min(windowsize, end-start)
1012 1012 start += windowsize
1013 1013 if windowsize < sizelimit:
1014 1014 windowsize *= 2
1015 1015 else:
1016 1016 while start > end:
1017 1017 yield start, min(windowsize, start-end-1)
1018 1018 start -= windowsize
1019 1019 if windowsize < sizelimit:
1020 1020 windowsize *= 2
1021 1021
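The increasing_windows() generator above drives the windowed walk described in the docstring: window sizes double per step (up to sizelimit) so large histories are covered in few iterations. A standalone copy with hypothetical bounds shows the (start, size) pairs it yields in both directions:

```python
# Copy of the increasing_windows() generator above, runnable on its own.
# Yields (start, size) pairs; the window size doubles each step until
# it reaches sizelimit.
def increasing_windows(start, end, windowsize=8, sizelimit=512):
    if start < end:
        while start < end:
            yield start, min(windowsize, end - start)
            start += windowsize
            if windowsize < sizelimit:
                windowsize *= 2
    else:
        while start > end:
            yield start, min(windowsize, start - end - 1)
            start -= windowsize
            if windowsize < sizelimit:
                windowsize *= 2

# Forward walk over 20 revs, then a backward walk toward rev -1 (nullrev).
print(list(increasing_windows(0, 20)))   # [(0, 8), (8, 12)]
print(list(increasing_windows(20, -1)))  # [(20, 8), (12, 12)]
```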
1022 1022 m = match(repo, pats, opts)
1023 1023 follow = opts.get('follow') or opts.get('follow_first')
1024 1024
1025 1025 if not len(repo):
1026 1026 return [], m
1027 1027
1028 1028 if follow:
1029 1029 defrange = '%s:0' % repo['.'].rev()
1030 1030 else:
1031 1031 defrange = '-1:0'
1032 1032 revs = revrange(repo, opts['rev'] or [defrange])
1033 1033 wanted = set()
1034 1034 slowpath = m.anypats() or (m.files() and opts.get('removed'))
1035 1035 fncache = {}
1036 1036
1037 1037 if not slowpath and not m.files():
1038 1038 # No files, no patterns. Display all revs.
1039 1039 wanted = set(revs)
1040 1040 copies = []
1041 1041 if not slowpath:
1042 1042 # Only files, no patterns. Check the history of each file.
1043 1043 def filerevgen(filelog, node):
1044 1044 cl_count = len(repo)
1045 1045 if node is None:
1046 1046 last = len(filelog) - 1
1047 1047 else:
1048 1048 last = filelog.rev(node)
1049 1049 for i, window in increasing_windows(last, nullrev):
1050 1050 revs = []
1051 1051 for j in xrange(i - window, i + 1):
1052 1052 n = filelog.node(j)
1053 1053 revs.append((filelog.linkrev(j),
1054 1054 follow and filelog.renamed(n)))
1055 1055 revs.reverse()
1056 1056 for rev in revs:
1057 1057 # only yield rev for which we have the changelog, it can
1058 1058 # happen while doing "hg log" during a pull or commit
1059 1059 if rev[0] < cl_count:
1060 1060 yield rev
1061 1061 def iterfiles():
1062 1062 for filename in m.files():
1063 1063 yield filename, None
1064 1064 for filename_node in copies:
1065 1065 yield filename_node
1066 1066 minrev, maxrev = min(revs), max(revs)
1067 1067 for file_, node in iterfiles():
1068 1068 filelog = repo.file(file_)
1069 1069 if not len(filelog):
1070 1070 if node is None:
1071 1071 # A zero count may be a directory or deleted file, so
1072 1072 # try to find matching entries on the slow path.
1073 1073 if follow:
1074 1074 raise util.Abort(_('cannot follow nonexistent file: "%s"') % file_)
1075 1075 slowpath = True
1076 1076 break
1077 1077 else:
1078 1078 ui.warn(_('%s:%s copy source revision cannot be found!\n')
1079 1079 % (file_, short(node)))
1080 1080 continue
1081 1081 for rev, copied in filerevgen(filelog, node):
1082 1082 if rev <= maxrev:
1083 1083 if rev < minrev:
1084 1084 break
1085 1085 fncache.setdefault(rev, [])
1086 1086 fncache[rev].append(file_)
1087 1087 wanted.add(rev)
1088 1088 if follow and copied:
1089 1089 copies.append(copied)
1090 1090 if slowpath:
1091 1091 if follow:
1092 1092 raise util.Abort(_('can only follow copies/renames for explicit '
1093 1093 'file names'))
1094 1094
1095 1095 # The slow path checks files modified in every changeset.
1096 1096 def changerevgen():
1097 1097 for i, window in increasing_windows(len(repo) - 1, nullrev):
1098 1098 for j in xrange(i - window, i + 1):
1099 1099 yield j, change(j)[3]
1100 1100
1101 1101 for rev, changefiles in changerevgen():
1102 1102 matches = filter(m, changefiles)
1103 1103 if matches:
1104 1104 fncache[rev] = matches
1105 1105 wanted.add(rev)
1106 1106
1107 1107 class followfilter:
1108 1108 def __init__(self, onlyfirst=False):
1109 1109 self.startrev = nullrev
1110 1110 self.roots = []
1111 1111 self.onlyfirst = onlyfirst
1112 1112
1113 1113 def match(self, rev):
1114 1114 def realparents(rev):
1115 1115 if self.onlyfirst:
1116 1116 return repo.changelog.parentrevs(rev)[0:1]
1117 1117 else:
1118 1118 return filter(lambda x: x != nullrev,
1119 1119 repo.changelog.parentrevs(rev))
1120 1120
1121 1121 if self.startrev == nullrev:
1122 1122 self.startrev = rev
1123 1123 return True
1124 1124
1125 1125 if rev > self.startrev:
1126 1126 # forward: all descendants
1127 1127 if not self.roots:
1128 1128 self.roots.append(self.startrev)
1129 1129 for parent in realparents(rev):
1130 1130 if parent in self.roots:
1131 1131 self.roots.append(rev)
1132 1132 return True
1133 1133 else:
1134 1134 # backwards: all parents
1135 1135 if not self.roots:
1136 1136 self.roots.extend(realparents(self.startrev))
1137 1137 if rev in self.roots:
1138 1138 self.roots.remove(rev)
1139 1139 self.roots.extend(realparents(rev))
1140 1140 return True
1141 1141
1142 1142 return False
1143 1143
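The followfilter class above accepts descendants when scanning forward and ancestors when scanning backward. A minimal sketch of that forward logic, with a stubbed parent table standing in for the real changelog (the revision graph here is invented for illustration):

```python
# Minimal stand-in for the forward half of followfilter, using a
# hypothetical rev -> parents table instead of repo.changelog.
nullrev = -1
parents = {0: [nullrev], 1: [0], 2: [1], 3: [1]}  # 2 and 3 branch off 1

class followfilter:
    def __init__(self):
        self.startrev = nullrev
        self.roots = []

    def match(self, rev):
        def realparents(rev):
            return [p for p in parents[rev] if p != nullrev]
        if self.startrev == nullrev:
            self.startrev = rev
            return True
        if rev > self.startrev:
            # forward: accept revs descending from an accepted root
            if not self.roots:
                self.roots.append(self.startrev)
            for parent in realparents(rev):
                if parent in self.roots:
                    self.roots.append(rev)
                    return True
        else:
            # backwards: accept ancestors of the start rev
            if not self.roots:
                self.roots.extend(realparents(self.startrev))
            if rev in self.roots:
                self.roots.remove(rev)
                self.roots.extend(realparents(rev))
                return True
        return False

ff = followfilter()
print([r for r in range(4) if ff.match(r)])  # [0, 1, 2, 3]
```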
1144 1144 # it might be worthwhile to do this in the iterator if the rev range
1145 1145 # is descending and the prune args are all within that range
1146 1146 for rev in opts.get('prune', ()):
1147 1147 rev = repo.changelog.rev(repo.lookup(rev))
1148 1148 ff = followfilter()
1149 1149 stop = min(revs[0], revs[-1])
1150 1150 for x in xrange(rev, stop-1, -1):
1151 1151 if ff.match(x):
1152 1152 wanted.discard(x)
1153 1153
1154 1154 def iterate():
1155 1155 if follow and not m.files():
1156 1156 ff = followfilter(onlyfirst=opts.get('follow_first'))
1157 1157 def want(rev):
1158 1158 return ff.match(rev) and rev in wanted
1159 1159 else:
1160 1160 def want(rev):
1161 1161 return rev in wanted
1162 1162
1163 1163 for i, window in increasing_windows(0, len(revs)):
1164 1164 yield 'window', revs[0] < revs[-1], revs[-1]
1165 1165 nrevs = [rev for rev in revs[i:i+window] if want(rev)]
1166 for rev in util.sort(list(nrevs)):
1166 for rev in sorted(nrevs):
1167 1167 fns = fncache.get(rev)
1168 1168 if not fns:
1169 1169 def fns_generator():
1170 1170 for f in change(rev)[3]:
1171 1171 if m(f):
1172 1172 yield f
1173 1173 fns = fns_generator()
1174 1174 yield 'add', rev, fns
1175 1175 for rev in nrevs:
1176 1176 yield 'iter', rev, None
1177 1177 return iterate(), m
1178 1178
1179 1179 def commit(ui, repo, commitfunc, pats, opts):
1180 1180 '''commit the specified files or all outstanding changes'''
1181 1181 date = opts.get('date')
1182 1182 if date:
1183 1183 opts['date'] = util.parsedate(date)
1184 1184 message = logmessage(opts)
1185 1185
1186 1186 # extract addremove carefully -- this function can be called from a command
1187 1187 # that doesn't support addremove
1188 1188 if opts.get('addremove'):
1189 1189 addremove(repo, pats, opts)
1190 1190
1191 1191 m = match(repo, pats, opts)
1192 1192 if pats:
1193 1193 modified, added, removed = repo.status(match=m)[:3]
1194 files = util.sort(modified + added + removed)
1194 files = sorted(modified + added + removed)
1195 1195
1196 1196 def is_dir(f):
1197 1197 name = f + '/'
1198 1198 i = bisect.bisect(files, name)
1199 1199 return i < len(files) and files[i].startswith(name)
1200 1200
1201 1201 for f in m.files():
1202 1202 if f == '.':
1203 1203 continue
1204 1204 if f not in files:
1205 1205 rf = repo.wjoin(f)
1206 1206 rel = repo.pathto(f)
1207 1207 try:
1208 1208 mode = os.lstat(rf)[stat.ST_MODE]
1209 1209 except OSError:
1210 1210 if is_dir(f): # deleted directory ?
1211 1211 continue
1212 1212 raise util.Abort(_("file %s not found!") % rel)
1213 1213 if stat.S_ISDIR(mode):
1214 1214 if not is_dir(f):
1215 1215 raise util.Abort(_("no match under directory %s!")
1216 1216 % rel)
1217 1217 elif not (stat.S_ISREG(mode) or stat.S_ISLNK(mode)):
1218 1218 raise util.Abort(_("can't commit %s: "
1219 1219 "unsupported file type!") % rel)
1220 1220 elif f not in repo.dirstate:
1221 1221 raise util.Abort(_("file %s not tracked!") % rel)
1222 1222 m = matchfiles(repo, files)
1223 1223 try:
1224 1224 return commitfunc(ui, repo, message, m, opts)
1225 1225 except ValueError, inst:
1226 1226 raise util.Abort(str(inst))
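The point of this changeset is swapping Mercurial's util.sort helper for Python's sorted() built-in (available since Python 2.4). A quick sketch of the behaviour the changed lines rely on, with made-up file names and revision numbers:

```python
# sorted() returns a new sorted list from any iterable, which is the
# behaviour the replaced util.sort calls provided.
modified, added, removed = ['b.txt'], ['a.txt'], ['c.txt']
files = sorted(modified + added + removed)
print(files)  # ['a.txt', 'b.txt', 'c.txt']

# sorted() also consumes generators directly, so the old
# util.sort(list(nrevs)) wrapper is no longer needed:
nrevs = (r for r in [5, 3, 9])
print(sorted(nrevs))  # [3, 5, 9]
```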
1 NO CONTENT: modified file
The requested commit or file is too big and content was truncated. Show full diff