replace util.sort with sorted built-in...
Matt Mackall
r8209:a1a5a57e default

The requested changes are too big and content was truncated.
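The change in this commit is small: Mercurial's `util.sort` helper is dropped in favor of the `sorted()` built-in (available since Python 2.4), which accepts any iterable. A minimal sketch of the equivalence — `util_sort` here is a simplified stand-in for the removed helper, not its exact body:

```python
def util_sort(l):
    # simplified stand-in for the old mercurial.util.sort helper:
    # copy the iterable into a list and sort it in place
    l = list(l)
    l.sort()
    return l

# sorted() does the same for any iterable, including sets,
# which is exactly how filter_unknown_bug_ids below uses it
ids = set([5678, 42, 1234])
print(util_sort(ids) == sorted(ids) == [42, 1234, 5678])  # True
```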

@@ -1,417 +1,417 b''
# bugzilla.py - bugzilla integration for mercurial
#
# Copyright 2006 Vadim Gelfer <vadim.gelfer@gmail.com>
#
# This software may be used and distributed according to the terms
# of the GNU General Public License, incorporated herein by reference.

'''Bugzilla integration

This hook extension adds comments on bugs in Bugzilla when changesets
that refer to bugs by Bugzilla ID are seen. The hook does not change
bug status.

The hook updates the Bugzilla database directly. Only Bugzilla
installations using MySQL are supported.

The hook relies on a Bugzilla script to send bug change notification
emails. That script changes between Bugzilla versions; the
'processmail' script used prior to 2.18 is replaced in 2.18 and
subsequent versions by 'config/sendbugmail.pl'. Note that these will
be run by Mercurial as the user pushing the change; you will need to
ensure the Bugzilla install file permissions are set appropriately.

Configuring the extension:

    [bugzilla]

    host       Hostname of the MySQL server holding the Bugzilla
               database.
    db         Name of the Bugzilla database in MySQL. Default 'bugs'.
    user       Username to use to access MySQL server. Default 'bugs'.
    password   Password to use to access MySQL server.
    timeout    Database connection timeout (seconds). Default 5.
    version    Bugzilla version. Specify '3.0' for Bugzilla versions
               3.0 and later, '2.18' for Bugzilla versions from 2.18
               and '2.16' for versions prior to 2.18.
    bzuser     Fallback Bugzilla user name to record comments with, if
               changeset committer cannot be found as a Bugzilla user.
    bzdir      Bugzilla install directory. Used by default notify.
               Default '/var/www/html/bugzilla'.
    notify     The command to run to get Bugzilla to send bug change
               notification emails. Substitutes from a map with 3
               keys, 'bzdir', 'id' (bug id) and 'user' (committer
               bugzilla email). Default depends on version; from 2.18
               it is "cd %(bzdir)s && perl -T contrib/sendbugmail.pl
               %(id)s %(user)s".
    regexp     Regular expression to match bug IDs in changeset commit
               message. Must contain one "()" group. The default
               expression matches 'Bug 1234', 'Bug no. 1234', 'Bug
               number 1234', 'Bugs 1234,5678', 'Bug 1234 and 5678' and
               variations thereof. Matching is case insensitive.
    style      The style file to use when formatting comments.
    template   Template to use when formatting comments. Overrides
               style if specified. In addition to the usual Mercurial
               keywords, the extension specifies:
                   {bug}       The Bugzilla bug ID.
                   {root}      The full pathname of the Mercurial
                               repository.
                   {webroot}   Stripped pathname of the Mercurial
                               repository.
                   {hgweb}     Base URL for browsing Mercurial
                               repositories.
               Default 'changeset {node|short} in repo {root} refers '
                       'to bug {bug}.\\ndetails:\\n\\t{desc|tabindent}'
    strip      The number of slashes to strip from the front of {root}
               to produce {webroot}. Default 0.
    usermap    Path of file containing Mercurial committer ID to
               Bugzilla user ID mappings. If specified, the file
               should contain one mapping per line,
               "committer"="Bugzilla user". See also the [usermap]
               section.

    [usermap]
    Any entries in this section specify mappings of Mercurial
    committer ID to Bugzilla user ID. See also [bugzilla].usermap.
    "committer"="Bugzilla user"

    [web]
    baseurl    Base URL for browsing Mercurial repositories. Reference
               from templates as {hgweb}.

Activating the extension:

    [extensions]
    hgext.bugzilla =

    [hooks]
    # run bugzilla hook on every change pulled or pushed in here
    incoming.bugzilla = python:hgext.bugzilla.hook

Example configuration:

This example configuration is for a collection of Mercurial
repositories in /var/local/hg/repos/ used with a local Bugzilla 3.2
installation in /opt/bugzilla-3.2.

    [bugzilla]
    host=localhost
    password=XYZZY
    version=3.0
    bzuser=unknown@domain.com
    bzdir=/opt/bugzilla-3.2
    template=Changeset {node|short} in {root|basename}.\\n{hgweb}/{webroot}/rev/{node|short}\\n\\n{desc}\\n
    strip=5

    [web]
    baseurl=http://dev.domain.com/hg

    [usermap]
    user@emaildomain.com=user.name@bugzilladomain.com

Commits add a comment to the Bugzilla bug record of the form:

    Changeset 3b16791d6642 in repository-name.
    http://dev.domain.com/hg/repository-name/rev/3b16791d6642

    Changeset commit comment. Bug 1234.
'''

from mercurial.i18n import _
from mercurial.node import short
from mercurial import cmdutil, templater, util
import re, time

MySQLdb = None

def buglist(ids):
    return '(' + ','.join(map(str, ids)) + ')'

class bugzilla_2_16(object):
    '''support for bugzilla version 2.16.'''

    def __init__(self, ui):
        self.ui = ui
        host = self.ui.config('bugzilla', 'host', 'localhost')
        user = self.ui.config('bugzilla', 'user', 'bugs')
        passwd = self.ui.config('bugzilla', 'password')
        db = self.ui.config('bugzilla', 'db', 'bugs')
        timeout = int(self.ui.config('bugzilla', 'timeout', 5))
        usermap = self.ui.config('bugzilla', 'usermap')
        if usermap:
            self.ui.readconfig(usermap, sections=['usermap'])
        self.ui.note(_('connecting to %s:%s as %s, password %s\n') %
                     (host, db, user, '*' * len(passwd)))
        self.conn = MySQLdb.connect(host=host, user=user, passwd=passwd,
                                    db=db, connect_timeout=timeout)
        self.cursor = self.conn.cursor()
        self.longdesc_id = self.get_longdesc_id()
        self.user_ids = {}
        self.default_notify = "cd %(bzdir)s && ./processmail %(id)s %(user)s"

    def run(self, *args, **kwargs):
        '''run a query.'''
        self.ui.note(_('query: %s %s\n') % (args, kwargs))
        try:
            self.cursor.execute(*args, **kwargs)
        except MySQLdb.MySQLError:
            self.ui.note(_('failed query: %s %s\n') % (args, kwargs))
            raise

    def get_longdesc_id(self):
        '''get identity of longdesc field'''
        self.run('select fieldid from fielddefs where name = "longdesc"')
        ids = self.cursor.fetchall()
        if len(ids) != 1:
            raise util.Abort(_('unknown database schema'))
        return ids[0][0]

    def filter_real_bug_ids(self, ids):
        '''filter not-existing bug ids from list.'''
        self.run('select bug_id from bugs where bug_id in %s' % buglist(ids))
-        return util.sort([c[0] for c in self.cursor.fetchall()])
+        return sorted([c[0] for c in self.cursor.fetchall()])

    def filter_unknown_bug_ids(self, node, ids):
        '''filter bug ids from list that already refer to this changeset.'''

        self.run('''select bug_id from longdescs where
                    bug_id in %s and thetext like "%%%s%%"''' %
                 (buglist(ids), short(node)))
        unknown = set(ids)
        for (id,) in self.cursor.fetchall():
            self.ui.status(_('bug %d already knows about changeset %s\n') %
                           (id, short(node)))
            unknown.discard(id)
-        return util.sort(unknown)
+        return sorted(unknown)

    def notify(self, ids, committer):
        '''tell bugzilla to send mail.'''

        self.ui.status(_('telling bugzilla to send mail:\n'))
        (user, userid) = self.get_bugzilla_user(committer)
        for id in ids:
            self.ui.status(_('  bug %s\n') % id)
            cmdfmt = self.ui.config('bugzilla', 'notify', self.default_notify)
            bzdir = self.ui.config('bugzilla', 'bzdir', '/var/www/html/bugzilla')
            try:
                # Backwards-compatible with old notify string, which
                # took one string. This will throw with a new format
                # string.
                cmd = cmdfmt % id
            except TypeError:
                cmd = cmdfmt % {'bzdir': bzdir, 'id': id, 'user': user}
            self.ui.note(_('running notify command %s\n') % cmd)
            fp = util.popen('(%s) 2>&1' % cmd)
            out = fp.read()
            ret = fp.close()
            if ret:
                self.ui.warn(out)
                raise util.Abort(_('bugzilla notify command %s') %
                                 util.explain_exit(ret)[0])
        self.ui.status(_('done\n'))

    def get_user_id(self, user):
        '''look up numeric bugzilla user id.'''
        try:
            return self.user_ids[user]
        except KeyError:
            try:
                userid = int(user)
            except ValueError:
                self.ui.note(_('looking up user %s\n') % user)
                self.run('''select userid from profiles
                            where login_name like %s''', user)
                all = self.cursor.fetchall()
                if len(all) != 1:
                    raise KeyError(user)
                userid = int(all[0][0])
            self.user_ids[user] = userid
            return userid

    def map_committer(self, user):
        '''map name of committer to bugzilla user name.'''
        for committer, bzuser in self.ui.configitems('usermap'):
            if committer.lower() == user.lower():
                return bzuser
        return user

    def get_bugzilla_user(self, committer):
        '''see if committer is a registered bugzilla user. Return
        bugzilla username and userid if so. If not, return default
        bugzilla username and userid.'''
        user = self.map_committer(committer)
        try:
            userid = self.get_user_id(user)
        except KeyError:
            try:
                defaultuser = self.ui.config('bugzilla', 'bzuser')
                if not defaultuser:
                    raise util.Abort(_('cannot find bugzilla user id for %s') %
                                     user)
                userid = self.get_user_id(defaultuser)
                user = defaultuser
            except KeyError:
                raise util.Abort(_('cannot find bugzilla user id for %s or %s') %
                                 (user, defaultuser))
        return (user, userid)

    def add_comment(self, bugid, text, committer):
        '''add comment to bug. try adding comment as committer of
        changeset, otherwise as default bugzilla user.'''
        (user, userid) = self.get_bugzilla_user(committer)
        now = time.strftime('%Y-%m-%d %H:%M:%S')
        self.run('''insert into longdescs
                    (bug_id, who, bug_when, thetext)
                    values (%s, %s, %s, %s)''',
                 (bugid, userid, now, text))
        self.run('''insert into bugs_activity (bug_id, who, bug_when, fieldid)
                    values (%s, %s, %s, %s)''',
                 (bugid, userid, now, self.longdesc_id))
        self.conn.commit()

class bugzilla_2_18(bugzilla_2_16):
    '''support for bugzilla 2.18 series.'''

    def __init__(self, ui):
        bugzilla_2_16.__init__(self, ui)
        self.default_notify = "cd %(bzdir)s && perl -T contrib/sendbugmail.pl %(id)s %(user)s"

class bugzilla_3_0(bugzilla_2_18):
    '''support for bugzilla 3.0 series.'''

    def __init__(self, ui):
        bugzilla_2_18.__init__(self, ui)

    def get_longdesc_id(self):
        '''get identity of longdesc field'''
        self.run('select id from fielddefs where name = "longdesc"')
        ids = self.cursor.fetchall()
        if len(ids) != 1:
            raise util.Abort(_('unknown database schema'))
        return ids[0][0]

class bugzilla(object):
    # supported versions of bugzilla. different versions have
    # different schemas.
    _versions = {
        '2.16': bugzilla_2_16,
        '2.18': bugzilla_2_18,
        '3.0': bugzilla_3_0
        }

    _default_bug_re = (r'bugs?\s*,?\s*(?:#|nos?\.?|num(?:ber)?s?)?\s*'
                       r'((?:\d+\s*(?:,?\s*(?:and)?)?\s*)+)')

    _bz = None

    def __init__(self, ui, repo):
        self.ui = ui
        self.repo = repo

    def bz(self):
        '''return object that knows how to talk to bugzilla version in
        use.'''

        if bugzilla._bz is None:
            bzversion = self.ui.config('bugzilla', 'version')
            try:
                bzclass = bugzilla._versions[bzversion]
            except KeyError:
                raise util.Abort(_('bugzilla version %s not supported') %
                                 bzversion)
            bugzilla._bz = bzclass(self.ui)
        return bugzilla._bz

    def __getattr__(self, key):
        return getattr(self.bz(), key)

    _bug_re = None
    _split_re = None

    def find_bug_ids(self, ctx):
        '''find valid bug ids that are referred to in changeset
        comments and that do not already have references to this
        changeset.'''

        if bugzilla._bug_re is None:
            bugzilla._bug_re = re.compile(
                self.ui.config('bugzilla', 'regexp', bugzilla._default_bug_re),
                re.IGNORECASE)
            bugzilla._split_re = re.compile(r'\D+')
        start = 0
        ids = {}
        while True:
            m = bugzilla._bug_re.search(ctx.description(), start)
            if not m:
                break
            start = m.end()
            for id in bugzilla._split_re.split(m.group(1)):
                if not id: continue
                ids[int(id)] = 1
        ids = ids.keys()
        if ids:
            ids = self.filter_real_bug_ids(ids)
        if ids:
            ids = self.filter_unknown_bug_ids(ctx.node(), ids)
        return ids

    def update(self, bugid, ctx):
        '''update bugzilla bug with reference to changeset.'''

        def webroot(root):
            '''strip leading prefix of repo root and turn into
            url-safe path.'''
            count = int(self.ui.config('bugzilla', 'strip', 0))
            root = util.pconvert(root)
            while count > 0:
                c = root.find('/')
                if c == -1:
                    break
                root = root[c+1:]
                count -= 1
            return root

        mapfile = self.ui.config('bugzilla', 'style')
        tmpl = self.ui.config('bugzilla', 'template')
        t = cmdutil.changeset_templater(self.ui, self.repo,
                                        False, None, mapfile, False)
        if not mapfile and not tmpl:
            tmpl = _('changeset {node|short} in repo {root} refers '
                     'to bug {bug}.\ndetails:\n\t{desc|tabindent}')
        if tmpl:
            tmpl = templater.parsestring(tmpl, quoted=False)
            t.use_template(tmpl)
        self.ui.pushbuffer()
        t.show(ctx, changes=ctx.changeset(),
               bug=str(bugid),
               hgweb=self.ui.config('web', 'baseurl'),
               root=self.repo.root,
               webroot=webroot(self.repo.root))
        data = self.ui.popbuffer()
        self.add_comment(bugid, data, util.email(ctx.user()))

def hook(ui, repo, hooktype, node=None, **kwargs):
    '''add comment to bugzilla for each changeset that refers to a
    bugzilla bug id. only add a comment once per bug, so same change
    seen multiple times does not fill bug with duplicate data.'''
    try:
        import MySQLdb as mysql
        global MySQLdb
        MySQLdb = mysql
    except ImportError, err:
        raise util.Abort(_('python mysql support not available: %s') % err)

    if node is None:
        raise util.Abort(_('hook type %s does not pass a changeset id') %
                         hooktype)
    try:
        bz = bugzilla(ui, repo)
        ctx = repo[node]
        ids = bz.find_bug_ids(ctx)
        if ids:
            for id in ids:
                bz.update(id, ctx)
            bz.notify(ids, util.email(ctx.user()))
    except MySQLdb.MySQLError, err:
        raise util.Abort(_('database error: %s') % err[1])

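The bug-ID scanning that `find_bug_ids` performs can be tried in isolation. This sketch reuses the extension's default regexp and `\D+` split verbatim; the `find_ids` wrapper is a simplified stand-in for the method (no database filtering), and the sample commit messages are made up:

```python
import re

# Default bug-ID pattern from the extension: one capture group holding
# the run of IDs, matched case-insensitively.
bug_re = re.compile(r'bugs?\s*,?\s*(?:#|nos?\.?|num(?:ber)?s?)?\s*'
                    r'((?:\d+\s*(?:,?\s*(?:and)?)?\s*)+)', re.IGNORECASE)
split_re = re.compile(r'\D+')  # split the captured run into numbers

def find_ids(text):
    # simplified stand-in for find_bug_ids: collect unique IDs, sorted
    ids = set()
    for m in bug_re.finditer(text):
        for tok in split_re.split(m.group(1)):
            if tok:
                ids.add(int(tok))
    return sorted(ids)

print(find_ids('Fix crash. Bug 1234 and 5678, bug no. 9'))  # [9, 1234, 5678]
```

The real method additionally drops IDs that do not exist in the `bugs` table and IDs whose bug already mentions the changeset, via `filter_real_bug_ids` and `filter_unknown_bug_ids`.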
@@ -1,362 +1,362 b''
# CVS conversion code inspired by hg-cvs-import and git-cvsimport

import os, locale, re, socket, errno
from cStringIO import StringIO
from mercurial import util
from mercurial.i18n import _

from common import NoRepo, commit, converter_source, checktool
import cvsps

class convert_cvs(converter_source):
    def __init__(self, ui, path, rev=None):
        super(convert_cvs, self).__init__(ui, path, rev=rev)

        cvs = os.path.join(path, "CVS")
        if not os.path.exists(cvs):
            raise NoRepo("%s does not look like a CVS checkout" % path)

        checktool('cvs')
        self.cmd = ui.config('convert', 'cvsps', 'builtin')
        cvspsexe = self.cmd.split(None, 1)[0]
        self.builtin = cvspsexe == 'builtin'

        if not self.builtin:
            checktool(cvspsexe)

        self.changeset = None
        self.files = {}
        self.tags = {}
        self.lastbranch = {}
        self.parent = {}
        self.socket = None
        self.cvsroot = file(os.path.join(cvs, "Root")).read()[:-1]
        self.cvsrepo = file(os.path.join(cvs, "Repository")).read()[:-1]
        self.encoding = locale.getpreferredencoding()

        self._connect()

    def _parse(self):
        if self.changeset is not None:
            return
        self.changeset = {}

        maxrev = 0
        cmd = self.cmd
        if self.rev:
            # TODO: handle tags
            try:
                # patchset number?
                maxrev = int(self.rev)
            except ValueError:
                try:
                    # date
                    util.parsedate(self.rev, ['%Y/%m/%d %H:%M:%S'])
                    cmd = '%s -d "1970/01/01 00:00:01" -d "%s"' % (cmd, self.rev)
                except util.Abort:
                    raise util.Abort(_('revision %s is not a patchset number or date') % self.rev)

        d = os.getcwd()
        try:
            os.chdir(self.path)
            id = None
            state = 0
            filerevids = {}

            if self.builtin:
                # builtin cvsps code
                self.ui.status(_('using builtin cvsps\n'))

                cache = 'update'
                if not self.ui.configbool('convert', 'cvsps.cache', True):
                    cache = None
                db = cvsps.createlog(self.ui, cache=cache)
                db = cvsps.createchangeset(self.ui, db,
                    fuzz=int(self.ui.config('convert', 'cvsps.fuzz', 60)),
                    mergeto=self.ui.config('convert', 'cvsps.mergeto', None),
                    mergefrom=self.ui.config('convert', 'cvsps.mergefrom', None))

                for cs in db:
                    if maxrev and cs.id>maxrev:
                        break
                    id = str(cs.id)
                    cs.author = self.recode(cs.author)
                    self.lastbranch[cs.branch] = id
                    cs.comment = self.recode(cs.comment)
                    date = util.datestr(cs.date)
                    self.tags.update(dict.fromkeys(cs.tags, id))

                    files = {}
                    for f in cs.entries:
                        files[f.file] = "%s%s" % ('.'.join([str(x) for x in f.revision]),
                                                  ['', '(DEAD)'][f.dead])

                    # add current commit to set
                    c = commit(author=cs.author, date=date,
                               parents=[str(p.id) for p in cs.parents],
                               desc=cs.comment, branch=cs.branch or '')
                    self.changeset[id] = c
98 self.changeset[id] = c
99 self.files[id] = files
99 self.files[id] = files
100 else:
100 else:
101 # external cvsps
101 # external cvsps
102 for l in util.popen(cmd):
102 for l in util.popen(cmd):
103 if state == 0: # header
103 if state == 0: # header
104 if l.startswith("PatchSet"):
104 if l.startswith("PatchSet"):
105 id = l[9:-2]
105 id = l[9:-2]
106 if maxrev and int(id) > maxrev:
106 if maxrev and int(id) > maxrev:
107 # ignore everything
107 # ignore everything
108 state = 3
108 state = 3
109 elif l.startswith("Date:"):
109 elif l.startswith("Date:"):
110 date = util.parsedate(l[6:-1], ["%Y/%m/%d %H:%M:%S"])
110 date = util.parsedate(l[6:-1], ["%Y/%m/%d %H:%M:%S"])
111 date = util.datestr(date)
111 date = util.datestr(date)
112 elif l.startswith("Branch:"):
112 elif l.startswith("Branch:"):
113 branch = l[8:-1]
113 branch = l[8:-1]
114 self.parent[id] = self.lastbranch.get(branch, 'bad')
114 self.parent[id] = self.lastbranch.get(branch, 'bad')
115 self.lastbranch[branch] = id
115 self.lastbranch[branch] = id
116 elif l.startswith("Ancestor branch:"):
116 elif l.startswith("Ancestor branch:"):
117 ancestor = l[17:-1]
117 ancestor = l[17:-1]
118 # figure out the parent later
118 # figure out the parent later
119 self.parent[id] = self.lastbranch[ancestor]
119 self.parent[id] = self.lastbranch[ancestor]
120 elif l.startswith("Author:"):
120 elif l.startswith("Author:"):
121 author = self.recode(l[8:-1])
121 author = self.recode(l[8:-1])
122 elif l.startswith("Tag:") or l.startswith("Tags:"):
122 elif l.startswith("Tag:") or l.startswith("Tags:"):
123 t = l[l.index(':')+1:]
123 t = l[l.index(':')+1:]
124 t = [ut.strip() for ut in t.split(',')]
124 t = [ut.strip() for ut in t.split(',')]
125 if (len(t) > 1) or (t[0] and (t[0] != "(none)")):
125 if (len(t) > 1) or (t[0] and (t[0] != "(none)")):
126 self.tags.update(dict.fromkeys(t, id))
126 self.tags.update(dict.fromkeys(t, id))
127 elif l.startswith("Log:"):
127 elif l.startswith("Log:"):
128 # switch to gathering log
128 # switch to gathering log
129 state = 1
129 state = 1
130 log = ""
130 log = ""
131 elif state == 1: # log
131 elif state == 1: # log
132 if l == "Members: \n":
132 if l == "Members: \n":
133 # switch to gathering members
133 # switch to gathering members
134 files = {}
134 files = {}
135 oldrevs = []
135 oldrevs = []
136 log = self.recode(log[:-1])
136 log = self.recode(log[:-1])
137 state = 2
137 state = 2
138 else:
138 else:
139 # gather log
139 # gather log
140 log += l
140 log += l
141 elif state == 2: # members
141 elif state == 2: # members
142 if l == "\n": # start of next entry
142 if l == "\n": # start of next entry
143 state = 0
143 state = 0
144 p = [self.parent[id]]
144 p = [self.parent[id]]
145 if id == "1":
145 if id == "1":
146 p = []
146 p = []
147 if branch == "HEAD":
147 if branch == "HEAD":
148 branch = ""
148 branch = ""
149 if branch:
149 if branch:
150 latest = 0
150 latest = 0
151 # the last changeset that contains a base
151 # the last changeset that contains a base
152 # file is our parent
152 # file is our parent
153 for r in oldrevs:
153 for r in oldrevs:
154 latest = max(filerevids.get(r, 0), latest)
154 latest = max(filerevids.get(r, 0), latest)
155 if latest:
155 if latest:
156 p = [latest]
156 p = [latest]
157
157
158 # add current commit to set
158 # add current commit to set
159 c = commit(author=author, date=date, parents=p,
159 c = commit(author=author, date=date, parents=p,
160 desc=log, branch=branch)
160 desc=log, branch=branch)
161 self.changeset[id] = c
161 self.changeset[id] = c
162 self.files[id] = files
162 self.files[id] = files
163 else:
163 else:
164 colon = l.rfind(':')
164 colon = l.rfind(':')
165 file = l[1:colon]
165 file = l[1:colon]
166 rev = l[colon+1:-2]
166 rev = l[colon+1:-2]
167 oldrev, rev = rev.split("->")
167 oldrev, rev = rev.split("->")
168 files[file] = rev
168 files[file] = rev
169
169
170 # save some information for identifying branch points
170 # save some information for identifying branch points
171 oldrevs.append("%s:%s" % (oldrev, file))
171 oldrevs.append("%s:%s" % (oldrev, file))
172 filerevids["%s:%s" % (rev, file)] = id
172 filerevids["%s:%s" % (rev, file)] = id
173 elif state == 3:
173 elif state == 3:
174 # swallow all input
174 # swallow all input
175 continue
175 continue
176
176
177 self.heads = self.lastbranch.values()
177 self.heads = self.lastbranch.values()
178 finally:
178 finally:
179 os.chdir(d)
179 os.chdir(d)
180
180
181 def _connect(self):
181 def _connect(self):
182 root = self.cvsroot
182 root = self.cvsroot
183 conntype = None
183 conntype = None
184 user, host = None, None
184 user, host = None, None
185 cmd = ['cvs', 'server']
185 cmd = ['cvs', 'server']
186
186
187 self.ui.status(_("connecting to %s\n") % root)
187 self.ui.status(_("connecting to %s\n") % root)
188
188
189 if root.startswith(":pserver:"):
189 if root.startswith(":pserver:"):
190 root = root[9:]
190 root = root[9:]
191 m = re.match(r'(?:(.*?)(?::(.*?))?@)?([^:\/]*)(?::(\d*))?(.*)',
191 m = re.match(r'(?:(.*?)(?::(.*?))?@)?([^:\/]*)(?::(\d*))?(.*)',
192 root)
192 root)
193 if m:
193 if m:
194 conntype = "pserver"
194 conntype = "pserver"
195 user, passw, serv, port, root = m.groups()
195 user, passw, serv, port, root = m.groups()
196 if not user:
196 if not user:
197 user = "anonymous"
197 user = "anonymous"
198 if not port:
198 if not port:
199 port = 2401
199 port = 2401
200 else:
200 else:
201 port = int(port)
201 port = int(port)
202 format0 = ":pserver:%s@%s:%s" % (user, serv, root)
202 format0 = ":pserver:%s@%s:%s" % (user, serv, root)
203 format1 = ":pserver:%s@%s:%d%s" % (user, serv, port, root)
203 format1 = ":pserver:%s@%s:%d%s" % (user, serv, port, root)
204
204
205 if not passw:
205 if not passw:
206 passw = "A"
206 passw = "A"
207 cvspass = os.path.expanduser("~/.cvspass")
207 cvspass = os.path.expanduser("~/.cvspass")
208 try:
208 try:
209 pf = open(cvspass)
209 pf = open(cvspass)
210 for line in pf.read().splitlines():
210 for line in pf.read().splitlines():
211 part1, part2 = line.split(' ', 1)
211 part1, part2 = line.split(' ', 1)
212 if part1 == '/1':
212 if part1 == '/1':
213 # /1 :pserver:user@example.com:2401/cvsroot/foo Ah<Z
213 # /1 :pserver:user@example.com:2401/cvsroot/foo Ah<Z
214 part1, part2 = part2.split(' ', 1)
214 part1, part2 = part2.split(' ', 1)
215 format = format1
215 format = format1
216 else:
216 else:
217 # :pserver:user@example.com:/cvsroot/foo Ah<Z
217 # :pserver:user@example.com:/cvsroot/foo Ah<Z
218 format = format0
218 format = format0
219 if part1 == format:
219 if part1 == format:
220 passw = part2
220 passw = part2
221 break
221 break
222 pf.close()
222 pf.close()
223 except IOError, inst:
223 except IOError, inst:
224 if inst.errno != errno.ENOENT:
224 if inst.errno != errno.ENOENT:
225 if not getattr(inst, 'filename', None):
225 if not getattr(inst, 'filename', None):
226 inst.filename = cvspass
226 inst.filename = cvspass
227 raise
227 raise
228
228
229 sck = socket.socket()
229 sck = socket.socket()
230 sck.connect((serv, port))
230 sck.connect((serv, port))
231 sck.send("\n".join(["BEGIN AUTH REQUEST", root, user, passw,
231 sck.send("\n".join(["BEGIN AUTH REQUEST", root, user, passw,
232 "END AUTH REQUEST", ""]))
232 "END AUTH REQUEST", ""]))
233 if sck.recv(128) != "I LOVE YOU\n":
233 if sck.recv(128) != "I LOVE YOU\n":
234 raise util.Abort(_("CVS pserver authentication failed"))
234 raise util.Abort(_("CVS pserver authentication failed"))
235
235
236 self.writep = self.readp = sck.makefile('r+')
236 self.writep = self.readp = sck.makefile('r+')
237
237
238 if not conntype and root.startswith(":local:"):
238 if not conntype and root.startswith(":local:"):
239 conntype = "local"
239 conntype = "local"
240 root = root[7:]
240 root = root[7:]
241
241
242 if not conntype:
242 if not conntype:
243 # :ext:user@host/home/user/path/to/cvsroot
243 # :ext:user@host/home/user/path/to/cvsroot
244 if root.startswith(":ext:"):
244 if root.startswith(":ext:"):
245 root = root[5:]
245 root = root[5:]
246 m = re.match(r'(?:([^@:/]+)@)?([^:/]+):?(.*)', root)
246 m = re.match(r'(?:([^@:/]+)@)?([^:/]+):?(.*)', root)
247 # Do not take a Windows path "c:\foo\bar" for a connection string
247 # Do not take a Windows path "c:\foo\bar" for a connection string
248 if os.path.isdir(root) or not m:
248 if os.path.isdir(root) or not m:
249 conntype = "local"
249 conntype = "local"
250 else:
250 else:
251 conntype = "rsh"
251 conntype = "rsh"
252 user, host, root = m.group(1), m.group(2), m.group(3)
252 user, host, root = m.group(1), m.group(2), m.group(3)
253
253
254 if conntype != "pserver":
254 if conntype != "pserver":
255 if conntype == "rsh":
255 if conntype == "rsh":
256 rsh = os.environ.get("CVS_RSH") or "ssh"
256 rsh = os.environ.get("CVS_RSH") or "ssh"
257 if user:
257 if user:
258 cmd = [rsh, '-l', user, host] + cmd
258 cmd = [rsh, '-l', user, host] + cmd
259 else:
259 else:
260 cmd = [rsh, host] + cmd
260 cmd = [rsh, host] + cmd
261
261
262 # popen2 does not support argument lists under Windows
262 # popen2 does not support argument lists under Windows
263 cmd = [util.shellquote(arg) for arg in cmd]
263 cmd = [util.shellquote(arg) for arg in cmd]
264 cmd = util.quotecommand(' '.join(cmd))
264 cmd = util.quotecommand(' '.join(cmd))
265 self.writep, self.readp = util.popen2(cmd, 'b')
265 self.writep, self.readp = util.popen2(cmd, 'b')
266
266
267 self.realroot = root
267 self.realroot = root
268
268
269 self.writep.write("Root %s\n" % root)
269 self.writep.write("Root %s\n" % root)
270 self.writep.write("Valid-responses ok error Valid-requests Mode"
270 self.writep.write("Valid-responses ok error Valid-requests Mode"
271 " M Mbinary E Checked-in Created Updated"
271 " M Mbinary E Checked-in Created Updated"
272 " Merged Removed\n")
272 " Merged Removed\n")
273 self.writep.write("valid-requests\n")
273 self.writep.write("valid-requests\n")
274 self.writep.flush()
274 self.writep.flush()
275 r = self.readp.readline()
275 r = self.readp.readline()
276 if not r.startswith("Valid-requests"):
276 if not r.startswith("Valid-requests"):
277 raise util.Abort(_("server sucks"))
277 raise util.Abort(_("server sucks"))
278 if "UseUnchanged" in r:
278 if "UseUnchanged" in r:
279 self.writep.write("UseUnchanged\n")
279 self.writep.write("UseUnchanged\n")
280 self.writep.flush()
280 self.writep.flush()
281 r = self.readp.readline()
281 r = self.readp.readline()
282
282
283 def getheads(self):
283 def getheads(self):
284 self._parse()
284 self._parse()
285 return self.heads
285 return self.heads
286
286
287 def _getfile(self, name, rev):
287 def _getfile(self, name, rev):
288
288
289 def chunkedread(fp, count):
289 def chunkedread(fp, count):
290 # file objects returned by socket.makefile() do not handle
290 # file objects returned by socket.makefile() do not handle
291 # large read() requests very well.
291 # large read() requests very well.
292 chunksize = 65536
292 chunksize = 65536
293 output = StringIO()
293 output = StringIO()
294 while count > 0:
294 while count > 0:
295 data = fp.read(min(count, chunksize))
295 data = fp.read(min(count, chunksize))
296 if not data:
296 if not data:
297 raise util.Abort(_("%d bytes missing from remote file") % count)
297 raise util.Abort(_("%d bytes missing from remote file") % count)
298 count -= len(data)
298 count -= len(data)
299 output.write(data)
299 output.write(data)
300 return output.getvalue()
300 return output.getvalue()
301
301
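The `chunkedread` pattern above — accumulating a known byte count in bounded reads so that a socket-backed file object is never asked for a huge single `read()` — can be exercised standalone. This sketch mirrors the helper shown in the hunk; `io.BytesIO` stands in for the socket file object, and the plain `IOError` replaces `util.Abort` for the demo:

```python
import io

def chunkedread(fp, count, chunksize=65536):
    # accumulate exactly `count` bytes in reads of at most `chunksize`,
    # failing loudly if the stream ends early (mirrors the helper above)
    output = io.BytesIO()
    while count > 0:
        data = fp.read(min(count, chunksize))
        if not data:
            raise IOError("%d bytes missing from remote file" % count)
        count -= len(data)
        output.write(data)
    return output.getvalue()

assert chunkedread(io.BytesIO(b"hello world"), 5) == b"hello"
```

Bounding each read also keeps peak memory proportional to `chunksize` rather than to the full file size.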
302 if rev.endswith("(DEAD)"):
302 if rev.endswith("(DEAD)"):
303 raise IOError
303 raise IOError
304
304
305 args = ("-N -P -kk -r %s --" % rev).split()
305 args = ("-N -P -kk -r %s --" % rev).split()
306 args.append(self.cvsrepo + '/' + name)
306 args.append(self.cvsrepo + '/' + name)
307 for x in args:
307 for x in args:
308 self.writep.write("Argument %s\n" % x)
308 self.writep.write("Argument %s\n" % x)
309 self.writep.write("Directory .\n%s\nco\n" % self.realroot)
309 self.writep.write("Directory .\n%s\nco\n" % self.realroot)
310 self.writep.flush()
310 self.writep.flush()
311
311
312 data = ""
312 data = ""
313 while 1:
313 while 1:
314 line = self.readp.readline()
314 line = self.readp.readline()
315 if line.startswith("Created ") or line.startswith("Updated "):
315 if line.startswith("Created ") or line.startswith("Updated "):
316 self.readp.readline() # path
316 self.readp.readline() # path
317 self.readp.readline() # entries
317 self.readp.readline() # entries
318 mode = self.readp.readline()[:-1]
318 mode = self.readp.readline()[:-1]
319 count = int(self.readp.readline()[:-1])
319 count = int(self.readp.readline()[:-1])
320 data = chunkedread(self.readp, count)
320 data = chunkedread(self.readp, count)
321 elif line.startswith(" "):
321 elif line.startswith(" "):
322 data += line[1:]
322 data += line[1:]
323 elif line.startswith("M "):
323 elif line.startswith("M "):
324 pass
324 pass
325 elif line.startswith("Mbinary "):
325 elif line.startswith("Mbinary "):
326 count = int(self.readp.readline()[:-1])
326 count = int(self.readp.readline()[:-1])
327 data = chunkedread(self.readp, count)
327 data = chunkedread(self.readp, count)
328 else:
328 else:
329 if line == "ok\n":
329 if line == "ok\n":
330 return (data, "x" in mode and "x" or "")
330 return (data, "x" in mode and "x" or "")
331 elif line.startswith("E "):
331 elif line.startswith("E "):
332 self.ui.warn(_("cvs server: %s\n") % line[2:])
332 self.ui.warn(_("cvs server: %s\n") % line[2:])
333 elif line.startswith("Remove"):
333 elif line.startswith("Remove"):
334 self.readp.readline()
334 self.readp.readline()
335 else:
335 else:
336 raise util.Abort(_("unknown CVS response: %s") % line)
336 raise util.Abort(_("unknown CVS response: %s") % line)
337
337
338 def getfile(self, file, rev):
338 def getfile(self, file, rev):
339 self._parse()
339 self._parse()
340 data, mode = self._getfile(file, rev)
340 data, mode = self._getfile(file, rev)
341 self.modecache[(file, rev)] = mode
341 self.modecache[(file, rev)] = mode
342 return data
342 return data
343
343
344 def getmode(self, file, rev):
344 def getmode(self, file, rev):
345 return self.modecache[(file, rev)]
345 return self.modecache[(file, rev)]
346
346
347 def getchanges(self, rev):
347 def getchanges(self, rev):
348 self._parse()
348 self._parse()
349 self.modecache = {}
349 self.modecache = {}
350 return util.sort(self.files[rev].items()), {}
350 return sorted(self.files[rev].iteritems()), {}
351
351
352 def getcommit(self, rev):
352 def getcommit(self, rev):
353 self._parse()
353 self._parse()
354 return self.changeset[rev]
354 return self.changeset[rev]
355
355
356 def gettags(self):
356 def gettags(self):
357 self._parse()
357 self._parse()
358 return self.tags
358 return self.tags
359
359
360 def getchangedfiles(self, rev, i):
360 def getchangedfiles(self, rev, i):
361 self._parse()
361 self._parse()
362 return util.sort(self.files[rev].keys())
362 return sorted(self.files[rev])
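The two changed lines in `getchanges` and `getchangedfiles` are the point of this commit: the old `util.sort` helper is dropped in favor of the `sorted()` built-in. A minimal sketch of the equivalence — `util_sort` here is a paraphrase of the removed helper, not its exact historical source, and `items()` is used where the new code uses the Python 2 `iteritems()`:

```python
def util_sort(iterable):
    # paraphrase of the removed helper (assumption): copy, sort in
    # place, return the new list -- exactly what sorted() does
    lst = list(iterable)
    lst.sort()
    return lst

files = {"b.c": "1.2", "a.c": "1.1(DEAD)"}
assert util_sort(files.items()) == sorted(files.items())
# iterating a dict yields its keys, so sorted(files) replaces
# sorted(files.keys()) in getchangedfiles
assert sorted(files) == ["a.c", "b.c"]
```

Since `sorted()` accepts any iterable, the call sites no longer need to materialize `.items()` or `.keys()` lists first.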
@@ -1,782 +1,782 b''
1 #
1 #
2 # Mercurial built-in replacement for cvsps.
2 # Mercurial built-in replacement for cvsps.
3 #
3 #
4 # Copyright 2008, Frank Kingswood <frank@kingswood-consulting.co.uk>
4 # Copyright 2008, Frank Kingswood <frank@kingswood-consulting.co.uk>
5 #
5 #
6 # This software may be used and distributed according to the terms
6 # This software may be used and distributed according to the terms
7 # of the GNU General Public License, incorporated herein by reference.
7 # of the GNU General Public License, incorporated herein by reference.
8
8
9 import os
9 import os
10 import re
10 import re
11 import cPickle as pickle
11 import cPickle as pickle
12 from mercurial import util
12 from mercurial import util
13 from mercurial.i18n import _
13 from mercurial.i18n import _
14
14
15 def listsort(list, key):
15 def listsort(list, key):
16 "helper to sort by key in Python 2.3"
16 "helper to sort by key in Python 2.3"
17 try:
17 try:
18 list.sort(key=key)
18 list.sort(key=key)
19 except TypeError:
19 except TypeError:
20 list.sort(lambda l, r: cmp(key(l), key(r)))
20 list.sort(lambda l, r: cmp(key(l), key(r)))
21
21
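The `listsort` helper above exists only for Python 2.3, whose `list.sort` lacked the `key=` argument; on 2.4 and later the `try` branch always succeeds, so the call is equivalent to a plain key-based sort. A small sketch of the modern form, using illustrative entries (the field names are hypothetical, chosen to echo `logentry`):

```python
# sort file log entries by filename, as listsort(log, key=...) would
entries = [
    {"file": "b.c", "revision": (1, 3)},
    {"file": "a.c", "revision": (1, 1)},
]
entries.sort(key=lambda e: e["file"])
assert [e["file"] for e in entries] == ["a.c", "b.c"]
```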
22 class logentry(object):
22 class logentry(object):
23 '''Class logentry has the following attributes:
23 '''Class logentry has the following attributes:
24 .author - author name as CVS knows it
24 .author - author name as CVS knows it
25 .branch - name of branch this revision is on
25 .branch - name of branch this revision is on
26 .branches - revision tuple of branches starting at this revision
26 .branches - revision tuple of branches starting at this revision
27 .comment - commit message
27 .comment - commit message
28 .date - the commit date as a (time, tz) tuple
28 .date - the commit date as a (time, tz) tuple
29 .dead - true if file revision is dead
29 .dead - true if file revision is dead
30 .file - Name of file
30 .file - Name of file
31 .lines - a tuple (+lines, -lines) or None
31 .lines - a tuple (+lines, -lines) or None
32 .parent - Previous revision of this entry
32 .parent - Previous revision of this entry
33 .rcs - name of file as returned from CVS
33 .rcs - name of file as returned from CVS
34 .revision - revision number as tuple
34 .revision - revision number as tuple
35 .tags - list of tags on the file
35 .tags - list of tags on the file
36 .synthetic - is this a synthetic "file ... added on ..." revision?
36 .synthetic - is this a synthetic "file ... added on ..." revision?
37 .mergepoint- the branch that has been merged from (if present in rlog output)
37 .mergepoint- the branch that has been merged from (if present in rlog output)
38 '''
38 '''
39 def __init__(self, **entries):
39 def __init__(self, **entries):
40 self.__dict__.update(entries)
40 self.__dict__.update(entries)
41
41
42 def __repr__(self):
42 def __repr__(self):
43 return "<%s at 0x%x: %s %s>" % (self.__class__.__name__,
43 return "<%s at 0x%x: %s %s>" % (self.__class__.__name__,
44 id(self),
44 id(self),
45 self.file,
45 self.file,
46 ".".join(map(str, self.revision)))
46 ".".join(map(str, self.revision)))
47
47
48 class logerror(Exception):
48 class logerror(Exception):
49 pass
49 pass
50
50
51 def getrepopath(cvspath):
51 def getrepopath(cvspath):
52 """Return the repository path from a CVS path.
52 """Return the repository path from a CVS path.
53
53
54 >>> getrepopath('/foo/bar')
54 >>> getrepopath('/foo/bar')
55 '/foo/bar'
55 '/foo/bar'
56 >>> getrepopath('c:/foo/bar')
56 >>> getrepopath('c:/foo/bar')
57 'c:/foo/bar'
57 'c:/foo/bar'
58 >>> getrepopath(':pserver:10/foo/bar')
58 >>> getrepopath(':pserver:10/foo/bar')
59 '/foo/bar'
59 '/foo/bar'
60 >>> getrepopath(':pserver:10c:/foo/bar')
60 >>> getrepopath(':pserver:10c:/foo/bar')
61 '/foo/bar'
61 '/foo/bar'
62 >>> getrepopath(':pserver:/foo/bar')
62 >>> getrepopath(':pserver:/foo/bar')
63 '/foo/bar'
63 '/foo/bar'
64 >>> getrepopath(':pserver:c:/foo/bar')
64 >>> getrepopath(':pserver:c:/foo/bar')
65 'c:/foo/bar'
65 'c:/foo/bar'
66 >>> getrepopath(':pserver:truc@foo.bar:/foo/bar')
66 >>> getrepopath(':pserver:truc@foo.bar:/foo/bar')
67 '/foo/bar'
67 '/foo/bar'
68 >>> getrepopath(':pserver:truc@foo.bar:c:/foo/bar')
68 >>> getrepopath(':pserver:truc@foo.bar:c:/foo/bar')
69 'c:/foo/bar'
69 'c:/foo/bar'
70 """
70 """
71 # According to CVS manual, CVS paths are expressed like:
71 # According to CVS manual, CVS paths are expressed like:
72 # [:method:][[user][:password]@]hostname[:[port]]/path/to/repository
72 # [:method:][[user][:password]@]hostname[:[port]]/path/to/repository
73 #
73 #
74 # Unfortunately, Windows absolute paths start with a drive letter
74 # Unfortunately, Windows absolute paths start with a drive letter
75 # like 'c:' making it harder to parse. Here we assume that drive
75 # like 'c:' making it harder to parse. Here we assume that drive
76 # letters are only one character long and any CVS component before
76 # letters are only one character long and any CVS component before
77 # the repository path is at least 2 characters long, and use this
77 # the repository path is at least 2 characters long, and use this
78 # to disambiguate.
78 # to disambiguate.
79 parts = cvspath.split(':')
79 parts = cvspath.split(':')
80 if len(parts) == 1:
80 if len(parts) == 1:
81 return parts[0]
81 return parts[0]
82 # Here there is an ambiguous case if we have a port number
82 # Here there is an ambiguous case if we have a port number
83 # immediately followed by a Windows drive letter. We assume this
83 # immediately followed by a Windows drive letter. We assume this
84 # never happens and decide it must be a CVS path component,
84 # never happens and decide it must be a CVS path component,
85 # therefore ignoring it.
85 # therefore ignoring it.
86 if len(parts[-2]) > 1:
86 if len(parts[-2]) > 1:
87 return parts[-1].lstrip('0123456789')
87 return parts[-1].lstrip('0123456789')
88 return parts[-2] + ':' + parts[-1]
88 return parts[-2] + ':' + parts[-1]
89
89
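`getrepopath` is shown in full in the hunk above, so its parsing rules can be exercised standalone; this copies the function body and checks it against a few of its own doctest cases:

```python
def getrepopath(cvspath):
    # strip the optional ':method:' and 'user@host:[port]' prefixes
    # from a CVSROOT string, leaving the repository path (logic copied
    # from the hunk above, including the one-character drive-letter
    # heuristic for Windows paths)
    parts = cvspath.split(':')
    if len(parts) == 1:
        return parts[0]
    if len(parts[-2]) > 1:
        return parts[-1].lstrip('0123456789')
    return parts[-2] + ':' + parts[-1]

# cases taken from the function's docstring
assert getrepopath('/foo/bar') == '/foo/bar'
assert getrepopath('c:/foo/bar') == 'c:/foo/bar'
assert getrepopath(':pserver:10/foo/bar') == '/foo/bar'
assert getrepopath(':pserver:truc@foo.bar:/foo/bar') == '/foo/bar'
assert getrepopath(':pserver:truc@foo.bar:c:/foo/bar') == 'c:/foo/bar'
```

The `lstrip('0123456789')` drops a port number that `split(':')` leaves fused to the path, e.g. the `10` in `:pserver:10/foo/bar`.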
90 def createlog(ui, directory=None, root="", rlog=True, cache=None):
90 def createlog(ui, directory=None, root="", rlog=True, cache=None):
91 '''Collect the CVS rlog'''
91 '''Collect the CVS rlog'''
92
92
93 # Because we store many duplicate commit log messages, reusing strings
93 # Because we store many duplicate commit log messages, reusing strings
94 # saves a lot of memory and pickle storage space.
94 # saves a lot of memory and pickle storage space.
95 _scache = {}
95 _scache = {}
96 def scache(s):
96 def scache(s):
97 "return a shared version of a string"
97 "return a shared version of a string"
98 return _scache.setdefault(s, s)
98 return _scache.setdefault(s, s)
99
99
100 ui.status(_('collecting CVS rlog\n'))
100 ui.status(_('collecting CVS rlog\n'))
101
101
102 log = [] # list of logentry objects containing the CVS state
102 log = [] # list of logentry objects containing the CVS state
103
103
104 # patterns to match in CVS (r)log output, by state of use
104 # patterns to match in CVS (r)log output, by state of use
105 re_00 = re.compile('RCS file: (.+)$')
105 re_00 = re.compile('RCS file: (.+)$')
106 re_01 = re.compile('cvs \\[r?log aborted\\]: (.+)$')
106 re_01 = re.compile('cvs \\[r?log aborted\\]: (.+)$')
107 re_02 = re.compile('cvs (r?log|server): (.+)\n$')
107 re_02 = re.compile('cvs (r?log|server): (.+)\n$')
108 re_03 = re.compile("(Cannot access.+CVSROOT)|(can't create temporary directory.+)$")
108 re_03 = re.compile("(Cannot access.+CVSROOT)|(can't create temporary directory.+)$")
109 re_10 = re.compile('Working file: (.+)$')
109 re_10 = re.compile('Working file: (.+)$')
110 re_20 = re.compile('symbolic names:')
110 re_20 = re.compile('symbolic names:')
111 re_30 = re.compile('\t(.+): ([\\d.]+)$')
111 re_30 = re.compile('\t(.+): ([\\d.]+)$')
112 re_31 = re.compile('----------------------------$')
112 re_31 = re.compile('----------------------------$')
113 re_32 = re.compile('=============================================================================$')
113 re_32 = re.compile('=============================================================================$')
114 re_50 = re.compile('revision ([\\d.]+)(\s+locked by:\s+.+;)?$')
114 re_50 = re.compile('revision ([\\d.]+)(\s+locked by:\s+.+;)?$')
115 re_60 = re.compile(r'date:\s+(.+);\s+author:\s+(.+);\s+state:\s+(.+?);(\s+lines:\s+(\+\d+)?\s+(-\d+)?;)?(.*mergepoint:\s+([^;]+);)?')
115 re_60 = re.compile(r'date:\s+(.+);\s+author:\s+(.+);\s+state:\s+(.+?);(\s+lines:\s+(\+\d+)?\s+(-\d+)?;)?(.*mergepoint:\s+([^;]+);)?')
116 re_70 = re.compile('branches: (.+);$')
116 re_70 = re.compile('branches: (.+);$')
117
117
118 file_added_re = re.compile(r'file [^/]+ was (initially )?added on branch')
118 file_added_re = re.compile(r'file [^/]+ was (initially )?added on branch')
119
119
120 prefix = '' # leading path to strip off what we get from CVS
120 prefix = '' # leading path to strip off what we get from CVS
121
121
122 if directory is None:
122 if directory is None:
123 # Current working directory
123 # Current working directory
124
124
125 # Get the real directory in the repository
125 # Get the real directory in the repository
126 try:
126 try:
127 prefix = file(os.path.join('CVS','Repository')).read().strip()
127 prefix = file(os.path.join('CVS','Repository')).read().strip()
128 if prefix == ".":
128 if prefix == ".":
129 prefix = ""
129 prefix = ""
130 directory = prefix
130 directory = prefix
131 except IOError:
131 except IOError:
132 raise logerror('Not a CVS sandbox')
132 raise logerror('Not a CVS sandbox')
133
133
134 if prefix and not prefix.endswith(os.sep):
134 if prefix and not prefix.endswith(os.sep):
135 prefix += os.sep
135 prefix += os.sep
136
136
137 # Use the Root file in the sandbox, if it exists
137 # Use the Root file in the sandbox, if it exists
138 try:
138 try:
139 root = file(os.path.join('CVS','Root')).read().strip()
139 root = file(os.path.join('CVS','Root')).read().strip()
140 except IOError:
140 except IOError:
141 pass
141 pass
142
142
143 if not root:
143 if not root:
144 root = os.environ.get('CVSROOT', '')
144 root = os.environ.get('CVSROOT', '')
145
145
146 # read log cache if one exists
146 # read log cache if one exists
147 oldlog = []
147 oldlog = []
148 date = None
148 date = None
149
149
150 if cache:
150 if cache:
151 cachedir = os.path.expanduser('~/.hg.cvsps')
151 cachedir = os.path.expanduser('~/.hg.cvsps')
152 if not os.path.exists(cachedir):
152 if not os.path.exists(cachedir):
153 os.mkdir(cachedir)
153 os.mkdir(cachedir)
154
154
155 # The cvsps cache pickle needs a uniquified name, based on the
155 # The cvsps cache pickle needs a uniquified name, based on the
156 # repository location. The address may have all sort of nasties
156 # repository location. The address may have all sort of nasties
157 # in it, slashes, colons and such. So here we take just the
157 # in it, slashes, colons and such. So here we take just the
158 # alphanumerics, concatenated in a way that does not mix up the
158 # alphanumerics, concatenated in a way that does not mix up the
159 # various components, so that
159 # various components, so that
160 # :pserver:user@server:/path
160 # :pserver:user@server:/path
161 # and
161 # and
162 # /pserver/user/server/path
162 # /pserver/user/server/path
163 # are mapped to different cache file names.
163 # are mapped to different cache file names.
164 cachefile = root.split(":") + [directory, "cache"]
164 cachefile = root.split(":") + [directory, "cache"]
165 cachefile = ['-'.join(re.findall(r'\w+', s)) for s in cachefile if s]
165 cachefile = ['-'.join(re.findall(r'\w+', s)) for s in cachefile if s]
166 cachefile = os.path.join(cachedir,
166 cachefile = os.path.join(cachedir,
167 '.'.join([s for s in cachefile if s]))
167 '.'.join([s for s in cachefile if s]))
168
168
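The cache-name construction above keeps only alphanumeric runs of each CVSROOT component, joined so that differently punctuated roots cannot collide. A standalone sketch of just that naming step (`cachefilename` is an illustrative wrapper; the real code also prepends `~/.hg.cvsps`):

```python
import re

def cachefilename(root, directory):
    # split the root on ':', reduce each component to '-'-joined
    # alphanumeric runs, and join components with '.' (mirrors the
    # cachefile construction above, without the cache directory)
    parts = root.split(":") + [directory, "cache"]
    parts = ['-'.join(re.findall(r'\w+', s)) for s in parts if s]
    return '.'.join([s for s in parts if s])

# the two roots from the comment above map to distinct names
a = cachefilename(':pserver:user@server:/path', 'proj')
b = cachefilename('/pserver/user/server/path', 'proj')
assert a != b
```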
169 if cache == 'update':
169 if cache == 'update':
170 try:
170 try:
171 ui.note(_('reading cvs log cache %s\n') % cachefile)
171 ui.note(_('reading cvs log cache %s\n') % cachefile)
172 oldlog = pickle.load(file(cachefile))
172 oldlog = pickle.load(file(cachefile))
173 ui.note(_('cache has %d log entries\n') % len(oldlog))
173 ui.note(_('cache has %d log entries\n') % len(oldlog))
174 except Exception, e:
174 except Exception, e:
175 ui.note(_('error reading cache: %r\n') % e)
175 ui.note(_('error reading cache: %r\n') % e)
176
176
177 if oldlog:
177 if oldlog:
178 date = oldlog[-1].date # last commit date as a (time,tz) tuple
178 date = oldlog[-1].date # last commit date as a (time,tz) tuple
179 date = util.datestr(date, '%Y/%m/%d %H:%M:%S %1%2')
179 date = util.datestr(date, '%Y/%m/%d %H:%M:%S %1%2')
180
180
181 # build the CVS commandline
181 # build the CVS commandline
182 cmd = ['cvs', '-q']
182 cmd = ['cvs', '-q']
183 if root:
183 if root:
184 cmd.append('-d%s' % root)
184 cmd.append('-d%s' % root)
185 p = util.normpath(getrepopath(root))
185 p = util.normpath(getrepopath(root))
186 if not p.endswith('/'):
186 if not p.endswith('/'):
187 p += '/'
187 p += '/'
188 prefix = p + util.normpath(prefix)
188 prefix = p + util.normpath(prefix)
189 cmd.append(['log', 'rlog'][rlog])
189 cmd.append(['log', 'rlog'][rlog])
190 if date:
190 if date:
    # no space between option and date string
    cmd.append('-d>%s' % date)
    cmd.append(directory)

    # state machine begins here
    tags = {}      # dictionary of revisions on current file with their tags
    branchmap = {} # mapping between branch names and revision numbers
    state = 0
    store = False  # set when a new record can be appended

    cmd = [util.shellquote(arg) for arg in cmd]
    ui.note(_("running %s\n") % (' '.join(cmd)))
    ui.debug(_("prefix=%r directory=%r root=%r\n") % (prefix, directory, root))

    pfp = util.popen(' '.join(cmd))
    peek = pfp.readline()
    while True:
        line = peek
        if line == '':
            break
        peek = pfp.readline()
        if line.endswith('\n'):
            line = line[:-1]
        #ui.debug('state=%d line=%r\n' % (state, line))

        if state == 0:
            # initial state, consume input until we see 'RCS file'
            match = re_00.match(line)
            if match:
                rcs = match.group(1)
                tags = {}
                if rlog:
                    filename = util.normpath(rcs[:-2])
                    if filename.startswith(prefix):
                        filename = filename[len(prefix):]
                    if filename.startswith('/'):
                        filename = filename[1:]
                    if filename.startswith('Attic/'):
                        filename = filename[6:]
                    else:
                        filename = filename.replace('/Attic/', '/')
                    state = 2
                    continue
                state = 1
                continue
            match = re_01.match(line)
            if match:
                raise Exception(match.group(1))
            match = re_02.match(line)
            if match:
                raise Exception(match.group(2))
            if re_03.match(line):
                raise Exception(line)

        elif state == 1:
            # expect 'Working file' (only when using log instead of rlog)
            match = re_10.match(line)
            assert match, _('RCS file must be followed by working file')
            filename = util.normpath(match.group(1))
            state = 2

        elif state == 2:
            # expect 'symbolic names'
            if re_20.match(line):
                branchmap = {}
                state = 3

        elif state == 3:
            # read the symbolic names and store as tags
            match = re_30.match(line)
            if match:
                rev = [int(x) for x in match.group(2).split('.')]

                # Convert magic branch number to an odd-numbered one
                revn = len(rev)
                if revn > 3 and (revn % 2) == 0 and rev[-2] == 0:
                    rev = rev[:-2] + rev[-1:]
                rev = tuple(rev)

                if rev not in tags:
                    tags[rev] = []
                tags[rev].append(match.group(1))
                branchmap[match.group(1)] = match.group(2)

            elif re_31.match(line):
                state = 5
            elif re_32.match(line):
                state = 0

        elif state == 4:
            # expecting '------' separator before first revision
            if re_31.match(line):
                state = 5
            else:
                assert not re_32.match(line), _('must have at least some revisions')

        elif state == 5:
            # expecting revision number and possibly (ignored) lock indication
            # we create the logentry here from values stored in states 0 to 4,
            # as this state is re-entered for subsequent revisions of a file.
            match = re_50.match(line)
            assert match, _('expected revision number')
            e = logentry(rcs=scache(rcs), file=scache(filename),
                         revision=tuple([int(x) for x in match.group(1).split('.')]),
                         branches=[], parent=None,
                         synthetic=False)
            state = 6

        elif state == 6:
            # expecting date, author, state, lines changed
            match = re_60.match(line)
            assert match, _('revision must be followed by date line')
            d = match.group(1)
            if d[2] == '/':
                # Y2K
                d = '19' + d

            if len(d.split()) != 3:
                # cvs log dates always in GMT
                d = d + ' UTC'
            e.date = util.parsedate(d, ['%y/%m/%d %H:%M:%S', '%Y/%m/%d %H:%M:%S', '%Y-%m-%d %H:%M:%S'])
            e.author = scache(match.group(2))
            e.dead = match.group(3).lower() == 'dead'

            if match.group(5):
                if match.group(6):
                    e.lines = (int(match.group(5)), int(match.group(6)))
                else:
                    e.lines = (int(match.group(5)), 0)
            elif match.group(6):
                e.lines = (0, int(match.group(6)))
            else:
                e.lines = None

            if match.group(7): # cvsnt mergepoint
                myrev = match.group(8).split('.')
                if len(myrev) == 2: # head
                    e.mergepoint = 'HEAD'
                else:
                    myrev = '.'.join(myrev[:-2] + ['0', myrev[-2]])
                    branches = [b for b in branchmap if branchmap[b] == myrev]
                    assert len(branches) == 1, 'unknown branch: %s' % e.mergepoint
                    e.mergepoint = branches[0]
            else:
                e.mergepoint = None
            e.comment = []
            state = 7

        elif state == 7:
            # read the revision numbers of branches that start at this revision
            # or store the commit log message otherwise
            m = re_70.match(line)
            if m:
                e.branches = [tuple([int(y) for y in x.strip().split('.')])
                              for x in m.group(1).split(';')]
                state = 8
            elif re_31.match(line) and re_50.match(peek):
                state = 5
                store = True
            elif re_32.match(line):
                state = 0
                store = True
            else:
                e.comment.append(line)

        elif state == 8:
            # store commit log message
            if re_31.match(line):
                state = 5
                store = True
            elif re_32.match(line):
                state = 0
                store = True
            else:
                e.comment.append(line)

        # When a file is added on a branch B1, CVS creates a synthetic
        # dead trunk revision 1.1 so that the branch has a root.
        # Likewise, if you merge such a file to a later branch B2 (one
        # that already existed when the file was added on B1), CVS
        # creates a synthetic dead revision 1.1.x.1 on B2. Don't drop
        # these revisions now, but mark them synthetic so
        # createchangeset() can take care of them.
        if (store and
            e.dead and
            e.revision[-1] == 1 and # 1.1 or 1.1.x.1
            len(e.comment) == 1 and
            file_added_re.match(e.comment[0])):
            ui.debug(_('found synthetic revision in %s: %r\n')
                     % (e.rcs, e.comment[0]))
            e.synthetic = True

        if store:
            # clean up the results and save in the log.
            store = False
            e.tags = sorted([scache(x) for x in tags.get(e.revision, [])])
            e.comment = scache('\n'.join(e.comment))

            revn = len(e.revision)
            if revn > 3 and (revn % 2) == 0:
                e.branch = tags.get(e.revision[:-1], [None])[0]
            else:
                e.branch = None

            log.append(e)

            if len(log) % 100 == 0:
                ui.status(util.ellipsis('%d %s' % (len(log), e.file), 80) + '\n')

    listsort(log, key=lambda x: (x.rcs, x.revision))

    # find parent revisions of individual files
    versions = {}
    for e in log:
        branch = e.revision[:-1]
        p = versions.get((e.rcs, branch), None)
        if p is None:
            p = e.revision[:-2]
        e.parent = p
        versions[(e.rcs, branch)] = e.revision

    # update the log cache
    if cache:
        if log:
            # join up the old and new logs
            listsort(log, key=lambda x: x.date)

            if oldlog and oldlog[-1].date >= log[0].date:
                raise logerror('Log cache overlaps with new log entries,'
                               ' re-run without cache.')

            log = oldlog + log

            # write the new cachefile
            ui.note(_('writing cvs log cache %s\n') % cachefile)
            pickle.dump(log, file(cachefile, 'w'))
        else:
            log = oldlog

    ui.status(_('%d log entries\n') % len(log))

    return log


class changeset(object):
    '''Class changeset has the following attributes:
        .id         - integer identifying this changeset (list index)
        .author     - author name as CVS knows it
        .branch     - name of branch this changeset is on, or None
        .comment    - commit message
        .date       - the commit date as a (time,tz) tuple
        .entries    - list of logentry objects in this changeset
        .parents    - list of one or two parent changesets
        .tags       - list of tags on this changeset
        .synthetic  - from synthetic revision "file ... added on branch ..."
        .mergepoint - the branch that has been merged from (if present in
                      rlog output)
    '''
    def __init__(self, **entries):
        self.__dict__.update(entries)

    def __repr__(self):
        return "<%s at 0x%x: %s>" % (self.__class__.__name__,
                                     id(self),
                                     getattr(self, 'id', "(no id)"))

def createchangeset(ui, log, fuzz=60, mergefrom=None, mergeto=None):
    '''Convert log into changesets.'''

    ui.status(_('creating changesets\n'))

    # Merge changesets

    listsort(log, key=lambda x: (x.comment, x.author, x.branch, x.date))

    changesets = []
    files = {}
    c = None
    for i, e in enumerate(log):

        # Check if log entry belongs to the current changeset or not.
        if not (c and
                e.comment == c.comment and
                e.author == c.author and
                e.branch == c.branch and
                ((c.date[0] + c.date[1]) <=
                 (e.date[0] + e.date[1]) <=
                 (c.date[0] + c.date[1]) + fuzz) and
                e.file not in files):
            c = changeset(comment=e.comment, author=e.author,
                          branch=e.branch, date=e.date, entries=[],
                          mergepoint=getattr(e, 'mergepoint', None))
            changesets.append(c)
            files = {}
            if len(changesets) % 100 == 0:
                t = '%d %s' % (len(changesets), repr(e.comment)[1:-1])
                ui.status(util.ellipsis(t, 80) + '\n')

        c.entries.append(e)
        files[e.file] = True
        c.date = e.date # changeset date is date of latest commit in it

    # Mark synthetic changesets

    for c in changesets:
        # Synthetic revisions always get their own changeset, because
        # the log message includes the filename. E.g. if you add file3
        # and file4 on a branch, you get four log entries and three
        # changesets:
        #   "File file3 was added on branch ..." (synthetic, 1 entry)
        #   "File file4 was added on branch ..." (synthetic, 1 entry)
        #   "Add file3 and file4 to fix ..."     (real, 2 entries)
        # Hence the check for 1 entry here.
        synth = getattr(c.entries[0], 'synthetic', None)
        c.synthetic = (len(c.entries) == 1 and synth)

    # Sort files in each changeset

    for c in changesets:
        def pathcompare(l, r):
            'Mimic cvsps sorting order'
            l = l.split('/')
            r = r.split('/')
            nl = len(l)
            nr = len(r)
            n = min(nl, nr)
            for i in range(n):
                if i + 1 == nl and nl < nr:
                    return -1
                elif i + 1 == nr and nl > nr:
                    return +1
                elif l[i] < r[i]:
                    return -1
                elif l[i] > r[i]:
                    return +1
            return 0
        def entitycompare(l, r):
            return pathcompare(l.file, r.file)

        c.entries.sort(entitycompare)

    # Sort changesets by date

    def cscmp(l, r):
        d = sum(l.date) - sum(r.date)
        if d:
            return d

        # detect vendor branches and initial commits on a branch
        le = {}
        for e in l.entries:
            le[e.rcs] = e.revision
        re = {}
        for e in r.entries:
            re[e.rcs] = e.revision

        d = 0
        for e in l.entries:
            if re.get(e.rcs, None) == e.parent:
                assert not d
                d = 1
                break

        for e in r.entries:
            if le.get(e.rcs, None) == e.parent:
                assert not d
                d = -1
                break

        return d

    changesets.sort(cscmp)

    # Collect tags

    globaltags = {}
    for c in changesets:
        tags = {}
        for e in c.entries:
            for tag in e.tags:
                # remember which is the latest changeset to have this tag
                globaltags[tag] = c

    for c in changesets:
        tags = {}
        for e in c.entries:
            for tag in e.tags:
                tags[tag] = True
        # remember tags only if this is the latest changeset to have it
        c.tags = sorted([tag for tag in tags if globaltags[tag] is c])

    # Find parent changesets, handle {{mergetobranch BRANCHNAME}}
    # by inserting dummy changesets with two parents, and handle
    # {{mergefrombranch BRANCHNAME}} by setting two parents.

    if mergeto is None:
        mergeto = r'{{mergetobranch ([-\w]+)}}'
    if mergeto:
        mergeto = re.compile(mergeto)

    if mergefrom is None:
        mergefrom = r'{{mergefrombranch ([-\w]+)}}'
    if mergefrom:
        mergefrom = re.compile(mergefrom)

    versions = {}  # changeset index where we saw any particular file version
    branches = {}  # changeset index where we saw a branch
    n = len(changesets)
    i = 0
    while i < n:
        c = changesets[i]

        for f in c.entries:
            versions[(f.rcs, f.revision)] = i

        p = None
        if c.branch in branches:
            p = branches[c.branch]
        else:
            for f in c.entries:
                p = max(p, versions.get((f.rcs, f.parent), None))

        c.parents = []
        if p is not None:
            p = changesets[p]

            # Ensure no changeset has a synthetic changeset as a parent.
            while p.synthetic:
                assert len(p.parents) <= 1, \
                       _('synthetic changeset cannot have multiple parents')
                if p.parents:
                    p = p.parents[0]
                else:
                    p = None
                    break

            if p is not None:
                c.parents.append(p)

        if c.mergepoint:
            if c.mergepoint == 'HEAD':
                c.mergepoint = None
            c.parents.append(changesets[branches[c.mergepoint]])

        if mergefrom:
            m = mergefrom.search(c.comment)
            if m:
                m = m.group(1)
                if m == 'HEAD':
                    m = None
                try:
                    candidate = changesets[branches[m]]
                except KeyError:
                    ui.warn(_("warning: CVS commit message references "
                              "non-existent branch %r:\n%s\n")
                            % (m, c.comment))
                if m in branches and c.branch != m and not candidate.synthetic:
                    c.parents.append(candidate)

        if mergeto:
            m = mergeto.search(c.comment)
            if m:
                try:
                    m = m.group(1)
                    if m == 'HEAD':
                        m = None
                except:
                    m = None # if no group found then merge to HEAD
                if m in branches and c.branch != m:
                    # insert empty changeset for merge
                    cc = changeset(author=c.author, branch=m, date=c.date,
                                   comment='convert-repo: CVS merge from branch %s' % c.branch,
                                   entries=[], tags=[],
                                   parents=[changesets[branches[m]], c])
                    changesets.insert(i + 1, cc)
                    branches[m] = i + 1

                    # adjust our loop counters now we have inserted a new entry
                    n += 1
                    i += 2
                    continue

        branches[c.branch] = i
        i += 1

    # Drop synthetic changesets (safe now that we have ensured no other
    # changesets can have them as parents).
    i = 0
    while i < len(changesets):
        if changesets[i].synthetic:
            del changesets[i]
        else:
            i += 1

    # Number changesets

    for i, c in enumerate(changesets):
        c.id = i + 1

    ui.status(_('%d changeset entries\n') % len(changesets))

    return changesets


def debugcvsps(ui, *args, **opts):
    '''Read CVS rlog for current directory or named path in repository, and
    convert the log to changesets based on matching commit log entries and
    dates.'''

    if opts["new_cache"]:
        cache = "write"
    elif opts["update_cache"]:
        cache = "update"
    else:
        cache = None

    revisions = opts["revisions"]

    try:
        if args:
            log = []
            for d in args:
                log += createlog(ui, d, root=opts["root"], cache=cache)
        else:
            log = createlog(ui, root=opts["root"], cache=cache)
    except logerror, e:
        ui.write("%r\n" % e)
        return

    changesets = createchangeset(ui, log, opts["fuzz"])
    del log

    # Print changesets (optionally filtered)

    off = len(revisions)
    branches = {}  # latest version number in each branch
    ancestors = {} # parent branch
    for cs in changesets:

        if opts["ancestors"]:
            if cs.branch not in branches and cs.parents and cs.parents[0].id:
                ancestors[cs.branch] = (changesets[cs.parents[0].id - 1].branch,
                                        cs.parents[0].id)
            branches[cs.branch] = cs.id

        # limit by branches
        if opts["branches"] and (cs.branch or 'HEAD') not in opts["branches"]:
            continue

        if not off:
            # Note: trailing spaces on several lines here are needed to have
            #       bug-for-bug compatibility with cvsps.
            ui.write('---------------------\n')
            ui.write('PatchSet %d \n' % cs.id)
            ui.write('Date: %s\n' % util.datestr(cs.date, '%Y/%m/%d %H:%M:%S %1%2'))
            ui.write('Author: %s\n' % cs.author)
            ui.write('Branch: %s\n' % (cs.branch or 'HEAD'))
            ui.write('Tag%s: %s \n' % (['', 's'][len(cs.tags) > 1],
                                       ','.join(cs.tags) or '(none)'))
            if opts["parents"] and cs.parents:
                if len(cs.parents) > 1:
                    ui.write('Parents: %s\n' % (','.join([str(p.id) for p in cs.parents])))
                else:
                    ui.write('Parent: %d\n' % cs.parents[0].id)

            if opts["ancestors"]:
                b = cs.branch
                r = []
                while b:
                    b, c = ancestors[b]
                    r.append('%s:%d:%d' % (b or "HEAD", c, branches[b]))
                if r:
                    ui.write('Ancestors: %s\n' % (','.join(r)))

            ui.write('Log:\n')
            ui.write('%s\n\n' % cs.comment)
            ui.write('Members: \n')
            for f in cs.entries:
                fn = f.file
                if fn.startswith(opts["prefix"]):
                    fn = fn[len(opts["prefix"]):]
                ui.write('\t%s:%s->%s%s \n' % (
                    fn,
                    '.'.join([str(x) for x in f.parent]) or 'INITIAL',
                    '.'.join([str(x) for x in f.revision]),
                    ['', '(DEAD)'][f.dead]))
            ui.write('\n')

        # have we seen the start tag?
        if revisions and off:
            if revisions[0] == str(cs.id) or \
               revisions[0] in cs.tags:
                off = False

        # see if we reached the end tag
        if len(revisions) > 1 and not off:
            if revisions[1] == str(cs.id) or \
               revisions[1] in cs.tags:
                break
@@ -1,126 +1,126 @@
1 # darcs support for the convert extension
2
3 from common import NoRepo, checktool, commandline, commit, converter_source
4 from mercurial.i18n import _
5 from mercurial import util
6 import os, shutil, tempfile
7
8 # The naming drift of ElementTree is fun!
9
10 try: from xml.etree.cElementTree import ElementTree
11 except ImportError:
12 try: from xml.etree.ElementTree import ElementTree
13 except ImportError:
14 try: from elementtree.cElementTree import ElementTree
15 except ImportError:
16 try: from elementtree.ElementTree import ElementTree
17 except ImportError: ElementTree = None
18
19
20 class darcs_source(converter_source, commandline):
21 def __init__(self, ui, path, rev=None):
22 converter_source.__init__(self, ui, path, rev=rev)
23 commandline.__init__(self, ui, 'darcs')
24
25 # check for _darcs, ElementTree, _darcs/inventory so that we can
26 # easily skip test-convert-darcs if ElementTree is not around
27 if not os.path.exists(os.path.join(path, '_darcs', 'inventories')):
28 raise NoRepo("%s does not look like a darcs repo" % path)
29
30 if not os.path.exists(os.path.join(path, '_darcs')):
31 raise NoRepo("%s does not look like a darcs repo" % path)
32
33 checktool('darcs')
34
35 if ElementTree is None:
36 raise util.Abort(_("Python ElementTree module is not available"))
37
38 self.path = os.path.realpath(path)
39
40 self.lastrev = None
41 self.changes = {}
42 self.parents = {}
43 self.tags = {}
44
45 def before(self):
46 self.tmppath = tempfile.mkdtemp(
47 prefix='convert-' + os.path.basename(self.path) + '-')
48 output, status = self.run('init', repodir=self.tmppath)
49 self.checkexit(status)
50
51 tree = self.xml('changes', xml_output=True, summary=True,
52 repodir=self.path)
53 tagname = None
54 child = None
55 for elt in tree.findall('patch'):
56 node = elt.get('hash')
57 name = elt.findtext('name', '')
58 if name.startswith('TAG '):
59 tagname = name[4:].strip()
60 elif tagname is not None:
61 self.tags[tagname] = node
62 tagname = None
63 self.changes[node] = elt
64 self.parents[child] = [node]
65 child = node
66 self.parents[child] = []
67
68 def after(self):
69 self.ui.debug(_('cleaning up %s\n') % self.tmppath)
70 shutil.rmtree(self.tmppath, ignore_errors=True)
71
72 def xml(self, cmd, **kwargs):
73 etree = ElementTree()
74 fp = self._run(cmd, **kwargs)
75 etree.parse(fp)
76 self.checkexit(fp.close())
77 return etree.getroot()
78
79 def getheads(self):
80 return self.parents[None]
81
82 def getcommit(self, rev):
83 elt = self.changes[rev]
84 date = util.strdate(elt.get('local_date'), '%a %b %d %H:%M:%S %Z %Y')
85 desc = elt.findtext('name') + '\n' + elt.findtext('comment', '')
86 return commit(author=elt.get('author'), date=util.datestr(date),
87 desc=desc.strip(), parents=self.parents[rev])
88
89 def pull(self, rev):
90 output, status = self.run('pull', self.path, all=True,
91 match='hash %s' % rev,
92 no_test=True, no_posthook=True,
93 external_merge='/bin/false',
94 repodir=self.tmppath)
95 if status:
96 if output.find('We have conflicts in') == -1:
97 self.checkexit(status, output)
98 output, status = self.run('revert', all=True, repodir=self.tmppath)
99 self.checkexit(status, output)
100
101 def getchanges(self, rev):
102 self.pull(rev)
103 copies = {}
104 changes = []
105 for elt in self.changes[rev].find('summary').getchildren():
106 if elt.tag in ('add_directory', 'remove_directory'):
107 continue
108 if elt.tag == 'move':
109 changes.append((elt.get('from'), rev))
110 copies[elt.get('from')] = elt.get('to')
111 else:
112 changes.append((elt.text.strip(), rev))
113 self.lastrev = rev
114 - return util.sort(changes), copies
114 + return sorted(changes), copies
115
116 def getfile(self, name, rev):
117 if rev != self.lastrev:
118 raise util.Abort(_('internal calling inconsistency'))
119 return open(os.path.join(self.tmppath, name), 'rb').read()
120
121 def getmode(self, name, rev):
122 mode = os.lstat(os.path.join(self.tmppath, name)).st_mode
123 return (mode & 0111) and 'x' or ''
124
125 def gettags(self):
126 return self.tags
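The darcs hunk above swaps `util.sort(changes)` for the built-in `sorted(changes)`. As a rough sketch of why the two are interchangeable here: `util.sort` was a small Mercurial helper that copied an iterable into a list, sorted it, and returned it (the minimal form below is an assumption for illustration, not a verbatim copy of `mercurial/util.py`), which is exactly what Python's built-in `sorted()` (available since 2.4) does in one expression:

```python
# Hypothetical minimal form of the retired helper (assumption, for illustration).
def sort(iterable):
    lst = list(iterable)
    lst.sort()
    return lst

# Sample (file, rev) pairs like those built by darcs_source.getchanges().
changes = [('setup.py', 'rev1'), ('README', 'rev1')]

# The built-in does the same job, so the call site can be replaced directly.
assert sort(changes) == sorted(changes) == [('README', 'rev1'), ('setup.py', 'rev1')]
```

Unlike `list.sort()`, which sorts in place and returns `None`, both forms return a new sorted list, so the surrounding `return ..., copies` expression keeps working unchanged.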
@@ -1,335 +1,335 @@
1 # GNU Arch support for the convert extension
2
3 from common import NoRepo, commandline, commit, converter_source
4 from mercurial.i18n import _
5 from mercurial import util
6 import os, shutil, tempfile, stat, locale
7 from email.Parser import Parser
8
9 class gnuarch_source(converter_source, commandline):
10
11 class gnuarch_rev:
12 def __init__(self, rev):
13 self.rev = rev
14 self.summary = ''
15 self.date = None
16 self.author = ''
17 self.continuationof = None
18 self.add_files = []
19 self.mod_files = []
20 self.del_files = []
21 self.ren_files = {}
22 self.ren_dirs = {}
23
24 def __init__(self, ui, path, rev=None):
25 super(gnuarch_source, self).__init__(ui, path, rev=rev)
26
27 if not os.path.exists(os.path.join(path, '{arch}')):
28 raise NoRepo(_("%s does not look like a GNU Arch repo") % path)
29
30 # Could use checktool, but we want to check for baz or tla.
31 self.execmd = None
32 if util.find_exe('baz'):
33 self.execmd = 'baz'
34 else:
35 if util.find_exe('tla'):
36 self.execmd = 'tla'
37 else:
38 raise util.Abort(_('cannot find a GNU Arch tool'))
39
40 commandline.__init__(self, ui, self.execmd)
41
42 self.path = os.path.realpath(path)
43 self.tmppath = None
44
45 self.treeversion = None
46 self.lastrev = None
47 self.changes = {}
48 self.parents = {}
49 self.tags = {}
50 self.modecache = {}
51 self.catlogparser = Parser()
52 self.locale = locale.getpreferredencoding()
53 self.archives = []
54
55 def before(self):
56 # Get registered archives
57 self.archives = [i.rstrip('\n')
58 for i in self.runlines0('archives', '-n')]
59
60 if self.execmd == 'tla':
61 output = self.run0('tree-version', self.path)
62 else:
63 output = self.run0('tree-version', '-d', self.path)
64 self.treeversion = output.strip()
65
66 # Get name of temporary directory
67 version = self.treeversion.split('/')
68 self.tmppath = os.path.join(tempfile.gettempdir(),
69 'hg-%s' % version[1])
70
71 # Generate parents dictionary
72 self.parents[None] = []
73 treeversion = self.treeversion
74 child = None
75 while treeversion:
76 self.ui.status(_('analyzing tree version %s...\n') % treeversion)
77
78 archive = treeversion.split('/')[0]
79 if archive not in self.archives:
80 self.ui.status(_('tree analysis stopped because it points to an unregistered archive %s...\n') % archive)
81 break
82
83 # Get the complete list of revisions for that tree version
84 output, status = self.runlines('revisions', '-r', '-f', treeversion)
85 self.checkexit(status, 'failed retrieveing revisions for %s' % treeversion)
86
87 # No new iteration unless a revision has a continuation-of header
88 treeversion = None
89
90 for l in output:
91 rev = l.strip()
92 self.changes[rev] = self.gnuarch_rev(rev)
93 self.parents[rev] = []
94
95 # Read author, date and summary
96 catlog, status = self.run('cat-log', '-d', self.path, rev)
97 if status:
98 catlog = self.run0('cat-archive-log', rev)
99 self._parsecatlog(catlog, rev)
100
101 # Populate the parents map
102 self.parents[child].append(rev)
103
104 # Keep track of the current revision as the child of the next
105 # revision scanned
106 child = rev
107
108 # Check if we have to follow the usual incremental history
109 # or if we have to 'jump' to a different treeversion given
110 # by the continuation-of header.
111 if self.changes[rev].continuationof:
112 treeversion = '--'.join(self.changes[rev].continuationof.split('--')[:-1])
113 break
114
115 # If we reached a base-0 revision w/o any continuation-of
116 # header, it means the tree history ends here.
117 if rev[-6:] == 'base-0':
118 break
119
120 def after(self):
121 self.ui.debug(_('cleaning up %s\n') % self.tmppath)
122 shutil.rmtree(self.tmppath, ignore_errors=True)
123
124 def getheads(self):
125 return self.parents[None]
126
127 def getfile(self, name, rev):
128 if rev != self.lastrev:
129 raise util.Abort(_('internal calling inconsistency'))
130
131 # Raise IOError if necessary (i.e. deleted files).
132 if not os.path.exists(os.path.join(self.tmppath, name)):
133 raise IOError
134
135 data, mode = self._getfile(name, rev)
136 self.modecache[(name, rev)] = mode
137
138 return data
139
140 def getmode(self, name, rev):
141 return self.modecache[(name, rev)]
142
143 def getchanges(self, rev):
144 self.modecache = {}
145 self._update(rev)
146 changes = []
147 copies = {}
148
149 for f in self.changes[rev].add_files:
150 changes.append((f, rev))
151
152 for f in self.changes[rev].mod_files:
153 changes.append((f, rev))
154
155 for f in self.changes[rev].del_files:
156 changes.append((f, rev))
157
158 for src in self.changes[rev].ren_files:
159 to = self.changes[rev].ren_files[src]
160 changes.append((src, rev))
161 changes.append((to, rev))
162 copies[to] = src
163
164 for src in self.changes[rev].ren_dirs:
165 to = self.changes[rev].ren_dirs[src]
166 chgs, cps = self._rendirchanges(src, to);
167 changes += [(f, rev) for f in chgs]
168 copies.update(cps)
169
170 self.lastrev = rev
171 - return util.sort(set(changes)), copies
171 + return sorted(set(changes)), copies
172
173 def getcommit(self, rev):
174 changes = self.changes[rev]
175 return commit(author = changes.author, date = changes.date,
176 desc = changes.summary, parents = self.parents[rev], rev=rev)
177
178 def gettags(self):
179 return self.tags
180
181 def _execute(self, cmd, *args, **kwargs):
182 cmdline = [self.execmd, cmd]
183 cmdline += args
184 cmdline = [util.shellquote(arg) for arg in cmdline]
185 cmdline += ['>', util.nulldev, '2>', util.nulldev]
186 cmdline = util.quotecommand(' '.join(cmdline))
187 self.ui.debug(cmdline, '\n')
188 return os.system(cmdline)
189
190 def _update(self, rev):
191 self.ui.debug(_('applying revision %s...\n') % rev)
192 changeset, status = self.runlines('replay', '-d', self.tmppath,
193 rev)
194 if status:
195 # Something went wrong while merging (baz or tla
196 # issue?), get latest revision and try from there
197 shutil.rmtree(self.tmppath, ignore_errors=True)
198 self._obtainrevision(rev)
199 else:
200 old_rev = self.parents[rev][0]
201 self.ui.debug(_('computing changeset between %s and %s...\n')
202 % (old_rev, rev))
203 self._parsechangeset(changeset, rev)
204
205 def _getfile(self, name, rev):
206 mode = os.lstat(os.path.join(self.tmppath, name)).st_mode
207 if stat.S_ISLNK(mode):
208 data = os.readlink(os.path.join(self.tmppath, name))
209 mode = mode and 'l' or ''
210 else:
211 data = open(os.path.join(self.tmppath, name), 'rb').read()
212 mode = (mode & 0111) and 'x' or ''
213 return data, mode
214
215 def _exclude(self, name):
216 exclude = [ '{arch}', '.arch-ids', '.arch-inventory' ]
217 for exc in exclude:
218 if name.find(exc) != -1:
219 return True
220 return False
221
222 def _readcontents(self, path):
223 files = []
224 contents = os.listdir(path)
225 while len(contents) > 0:
226 c = contents.pop()
227 p = os.path.join(path, c)
228 # os.walk could be used, but here we avoid internal GNU
229 # Arch files and directories, thus saving a lot time.
230 if not self._exclude(p):
231 if os.path.isdir(p):
232 contents += [os.path.join(c, f) for f in os.listdir(p)]
233 else:
234 files.append(c)
235 return files
236
237 def _rendirchanges(self, src, dest):
238 changes = []
239 copies = {}
240 files = self._readcontents(os.path.join(self.tmppath, dest))
241 for f in files:
242 s = os.path.join(src, f)
243 d = os.path.join(dest, f)
244 changes.append(s)
245 changes.append(d)
246 copies[d] = s
247 return changes, copies
248
249 def _obtainrevision(self, rev):
250 self.ui.debug(_('obtaining revision %s...\n') % rev)
251 output = self._execute('get', rev, self.tmppath)
252 self.checkexit(output)
253 self.ui.debug(_('analysing revision %s...\n') % rev)
254 files = self._readcontents(self.tmppath)
255 self.changes[rev].add_files += files
256
257 def _stripbasepath(self, path):
258 if path.startswith('./'):
259 return path[2:]
260 return path
261
262 def _parsecatlog(self, data, rev):
263 try:
264 catlog = self.catlogparser.parsestr(data)
265
266 # Commit date
267 self.changes[rev].date = util.datestr(
268 util.strdate(catlog['Standard-date'],
269 '%Y-%m-%d %H:%M:%S'))
270
271 # Commit author
272 self.changes[rev].author = self.recode(catlog['Creator'])
273
274 # Commit description
275 self.changes[rev].summary = '\n\n'.join((catlog['Summary'],
276 catlog.get_payload()))
277 self.changes[rev].summary = self.recode(self.changes[rev].summary)
278
279 # Commit revision origin when dealing with a branch or tag
280 if catlog.has_key('Continuation-of'):
281 self.changes[rev].continuationof = self.recode(catlog['Continuation-of'])
282 except Exception:
283 raise util.Abort(_('could not parse cat-log of %s') % rev)
284
285 def _parsechangeset(self, data, rev):
286 for l in data:
287 l = l.strip()
288 # Added file (ignore added directory)
289 if l.startswith('A') and not l.startswith('A/'):
290 file = self._stripbasepath(l[1:].strip())
291 if not self._exclude(file):
292 self.changes[rev].add_files.append(file)
293 # Deleted file (ignore deleted directory)
294 elif l.startswith('D') and not l.startswith('D/'):
295 file = self._stripbasepath(l[1:].strip())
296 if not self._exclude(file):
297 self.changes[rev].del_files.append(file)
298 # Modified binary file
299 elif l.startswith('Mb'):
300 file = self._stripbasepath(l[2:].strip())
301 if not self._exclude(file):
302 self.changes[rev].mod_files.append(file)
303 # Modified link
304 elif l.startswith('M->'):
305 file = self._stripbasepath(l[3:].strip())
306 if not self._exclude(file):
307 self.changes[rev].mod_files.append(file)
308 # Modified file
309 elif l.startswith('M'):
310 file = self._stripbasepath(l[1:].strip())
311 if not self._exclude(file):
312 self.changes[rev].mod_files.append(file)
313 # Renamed file (or link)
314 elif l.startswith('=>'):
315 files = l[2:].strip().split(' ')
316 if len(files) == 1:
317 files = l[2:].strip().split('\t')
318 src = self._stripbasepath(files[0])
319 dst = self._stripbasepath(files[1])
320 if not self._exclude(src) and not self._exclude(dst):
321 self.changes[rev].ren_files[src] = dst
322 # Conversion from file to link or from link to file (modified)
323 elif l.startswith('ch'):
324 file = self._stripbasepath(l[2:].strip())
325 if not self._exclude(file):
326 self.changes[rev].mod_files.append(file)
327 # Renamed directory
328 elif l.startswith('/>'):
329 dirs = l[2:].strip().split(' ')
330 if len(dirs) == 1:
331 dirs = l[2:].strip().split('\t')
332 src = self._stripbasepath(dirs[0])
333 dst = self._stripbasepath(dirs[1])
334 if not self._exclude(src) and not self._exclude(dst):
335 self.changes[rev].ren_dirs[src] = dst
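The GNU Arch hunk changes `util.sort(set(changes))` to `sorted(set(changes))`, relying on the fact that `sorted()` accepts any iterable (including a set) and always returns a new list. A short sketch of that deduplicate-then-sort pattern, with hypothetical sample data:

```python
# Hypothetical (file, rev) pairs as built in getchanges(); a rename can add
# the same path twice, so the set() pass removes duplicates before sorting.
changes = [('lib/a.py', 'patch-1'), ('README', 'patch-1'), ('lib/a.py', 'patch-1')]

unique_sorted = sorted(set(changes))
assert unique_sorted == [('README', 'patch-1'), ('lib/a.py', 'patch-1')]
```

Because sets are unordered, the `sorted()` call is what restores the deterministic ordering the converter needs.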
@@ -1,333 +1,333 b''
1 # hg backend for convert extension
1 # hg backend for convert extension
2
2
3 # Notes for hg->hg conversion:
3 # Notes for hg->hg conversion:
4 #
4 #
5 # * Old versions of Mercurial didn't trim the whitespace from the ends
5 # * Old versions of Mercurial didn't trim the whitespace from the ends
6 # of commit messages, but new versions do. Changesets created by
6 # of commit messages, but new versions do. Changesets created by
7 # those older versions, then converted, may thus have different
7 # those older versions, then converted, may thus have different
8 # hashes for changesets that are otherwise identical.
8 # hashes for changesets that are otherwise identical.
9 #
9 #
10 # * By default, the source revision is stored in the converted
10 # * By default, the source revision is stored in the converted
11 # revision. This will cause the converted revision to have a
11 # revision. This will cause the converted revision to have a
12 # different identity than the source. To avoid this, use the
12 # different identity than the source. To avoid this, use the
13 # following option: "--config convert.hg.saverev=false"
13 # following option: "--config convert.hg.saverev=false"
14
14
15
15
16 import os, time
16 import os, time
17 from mercurial.i18n import _
17 from mercurial.i18n import _
from mercurial.node import bin, hex, nullid
from mercurial import hg, util, context, error

from common import NoRepo, commit, converter_source, converter_sink

class mercurial_sink(converter_sink):
    def __init__(self, ui, path):
        converter_sink.__init__(self, ui, path)
        self.branchnames = ui.configbool('convert', 'hg.usebranchnames', True)
        self.clonebranches = ui.configbool('convert', 'hg.clonebranches', False)
        self.tagsbranch = ui.config('convert', 'hg.tagsbranch', 'default')
        self.lastbranch = None
        if os.path.isdir(path) and len(os.listdir(path)) > 0:
            try:
                self.repo = hg.repository(self.ui, path)
                if not self.repo.local():
                    raise NoRepo(_('%s is not a local Mercurial repo') % path)
            except error.RepoError, err:
                ui.traceback()
                raise NoRepo(err.args[0])
        else:
            try:
                ui.status(_('initializing destination %s repository\n') % path)
                self.repo = hg.repository(self.ui, path, create=True)
                if not self.repo.local():
                    raise NoRepo(_('%s is not a local Mercurial repo') % path)
                self.created.append(path)
            except error.RepoError:
                ui.traceback()
                raise NoRepo("could not create hg repo %s as sink" % path)
        self.lock = None
        self.wlock = None
        self.filemapmode = False

    def before(self):
        self.ui.debug(_('run hg sink pre-conversion action\n'))
        self.wlock = self.repo.wlock()
        self.lock = self.repo.lock()

    def after(self):
        self.ui.debug(_('run hg sink post-conversion action\n'))
        self.lock.release()
        self.wlock.release()

    def revmapfile(self):
        return os.path.join(self.path, ".hg", "shamap")

    def authorfile(self):
        return os.path.join(self.path, ".hg", "authormap")

    def getheads(self):
        h = self.repo.changelog.heads()
        return [ hex(x) for x in h ]

    def setbranch(self, branch, pbranches):
        if not self.clonebranches:
            return

        setbranch = (branch != self.lastbranch)
        self.lastbranch = branch
        if not branch:
            branch = 'default'
        pbranches = [(b[0], b[1] and b[1] or 'default') for b in pbranches]
        pbranch = pbranches and pbranches[0][1] or 'default'

        branchpath = os.path.join(self.path, branch)
        if setbranch:
            self.after()
            try:
                self.repo = hg.repository(self.ui, branchpath)
            except:
                self.repo = hg.repository(self.ui, branchpath, create=True)
            self.before()

        # pbranches may bring revisions from other branches (merge parents)
        # Make sure we have them, or pull them.
        missings = {}
        for b in pbranches:
            try:
                self.repo.lookup(b[0])
            except:
                missings.setdefault(b[1], []).append(b[0])

        if missings:
            self.after()
            for pbranch, heads in missings.iteritems():
                pbranchpath = os.path.join(self.path, pbranch)
                prepo = hg.repository(self.ui, pbranchpath)
                self.ui.note(_('pulling from %s into %s\n') % (pbranch, branch))
                self.repo.pull(prepo, [prepo.lookup(h) for h in heads])
            self.before()

    def putcommit(self, files, copies, parents, commit, source):

        files = dict(files)
        def getfilectx(repo, memctx, f):
            v = files[f]
            data = source.getfile(f, v)
            e = source.getmode(f, v)
            return context.memfilectx(f, data, 'l' in e, 'x' in e, copies.get(f))

        pl = []
        for p in parents:
            if p not in pl:
                pl.append(p)
        parents = pl
        nparents = len(parents)
        if self.filemapmode and nparents == 1:
            m1node = self.repo.changelog.read(bin(parents[0]))[0]
            parent = parents[0]

        if len(parents) < 2: parents.append("0" * 40)
        if len(parents) < 2: parents.append("0" * 40)
        p2 = parents.pop(0)

        text = commit.desc
        extra = commit.extra.copy()
        if self.branchnames and commit.branch:
            extra['branch'] = commit.branch
        if commit.rev:
            extra['convert_revision'] = commit.rev

        while parents:
            p1 = p2
            p2 = parents.pop(0)
            ctx = context.memctx(self.repo, (p1, p2), text, files.keys(), getfilectx,
                                 commit.author, commit.date, extra)
            self.repo.commitctx(ctx)
            text = "(octopus merge fixup)\n"
            p2 = hex(self.repo.changelog.tip())

        if self.filemapmode and nparents == 1:
            man = self.repo.manifest
            mnode = self.repo.changelog.read(bin(p2))[0]
            if not man.cmp(m1node, man.revision(mnode)):
                self.repo.rollback()
                return parent
        return p2

    def puttags(self, tags):
        try:
            parentctx = self.repo[self.tagsbranch]
            tagparent = parentctx.node()
        except error.RepoError:
            parentctx = None
            tagparent = nullid

        try:
-            oldlines = util.sort(parentctx['.hgtags'].data().splitlines(1))
+            oldlines = sorted(parentctx['.hgtags'].data().splitlines(1))
        except:
            oldlines = []

-        newlines = util.sort([("%s %s\n" % (tags[tag], tag)) for tag in tags])
+        newlines = sorted([("%s %s\n" % (tags[tag], tag)) for tag in tags])
        if newlines == oldlines:
            return None
        data = "".join(newlines)
        def getfilectx(repo, memctx, f):
            return context.memfilectx(f, data, False, False, None)

        self.ui.status(_("updating tags\n"))
        date = "%s 0" % int(time.mktime(time.gmtime()))
        extra = {'branch': self.tagsbranch}
        ctx = context.memctx(self.repo, (tagparent, None), "update tags",
                             [".hgtags"], getfilectx, "convert-repo", date,
                             extra)
        self.repo.commitctx(ctx)
        return hex(self.repo.changelog.tip())

    def setfilemapmode(self, active):
        self.filemapmode = active

class mercurial_source(converter_source):
    def __init__(self, ui, path, rev=None):
        converter_source.__init__(self, ui, path, rev)
        self.ignoreerrors = ui.configbool('convert', 'hg.ignoreerrors', False)
        self.ignored = {}
        self.saverev = ui.configbool('convert', 'hg.saverev', False)
        try:
            self.repo = hg.repository(self.ui, path)
            # try to provoke an exception if this isn't really a hg
            # repo, but some other bogus compatible-looking url
            if not self.repo.local():
                raise error.RepoError()
        except error.RepoError:
            ui.traceback()
            raise NoRepo("%s is not a local Mercurial repo" % path)
        self.lastrev = None
        self.lastctx = None
        self._changescache = None
        self.convertfp = None
        # Restrict converted revisions to startrev descendants
        startnode = ui.config('convert', 'hg.startrev')
        if startnode is not None:
            try:
                startnode = self.repo.lookup(startnode)
            except error.RepoError:
                raise util.Abort(_('%s is not a valid start revision')
                                 % startnode)
            startrev = self.repo.changelog.rev(startnode)
            children = {startnode: 1}
            for rev in self.repo.changelog.descendants(startrev):
                children[self.repo.changelog.node(rev)] = 1
            self.keep = children.__contains__
        else:
            self.keep = util.always

    def changectx(self, rev):
        if self.lastrev != rev:
            self.lastctx = self.repo[rev]
            self.lastrev = rev
        return self.lastctx

    def parents(self, ctx):
        return [p.node() for p in ctx.parents()
                if p and self.keep(p.node())]

    def getheads(self):
        if self.rev:
            heads = [self.repo[self.rev].node()]
        else:
            heads = self.repo.heads()
        return [hex(h) for h in heads if self.keep(h)]

    def getfile(self, name, rev):
        try:
            return self.changectx(rev)[name].data()
        except error.LookupError, err:
            raise IOError(err)

    def getmode(self, name, rev):
        return self.changectx(rev).manifest().flags(name)

    def getchanges(self, rev):
        ctx = self.changectx(rev)
        parents = self.parents(ctx)
        if not parents:
-            files = util.sort(ctx.manifest().keys())
+            files = sorted(ctx.manifest())
            if self.ignoreerrors:
                # calling getcopies() is a simple way to detect missing
                # revlogs and populate self.ignored
                self.getcopies(ctx, files)
            return [(f, rev) for f in files if f not in self.ignored], {}
        if self._changescache and self._changescache[0] == rev:
            m, a, r = self._changescache[1]
        else:
            m, a, r = self.repo.status(parents[0], ctx.node())[:3]
        # getcopies() detects missing revlogs early, run it before
        # filtering the changes.
        copies = self.getcopies(ctx, m + a)
        changes = [(name, rev) for name in m + a + r
                   if name not in self.ignored]
-        return util.sort(changes), copies
+        return sorted(changes), copies

    def getcopies(self, ctx, files):
        copies = {}
        for name in files:
            if name in self.ignored:
                continue
            try:
                copysource, copynode = ctx.filectx(name).renamed()
                if copysource in self.ignored or not self.keep(copynode):
                    continue
                copies[name] = copysource
            except TypeError:
                pass
            except error.LookupError, e:
                if not self.ignoreerrors:
                    raise
                self.ignored[name] = 1
                self.ui.warn(_('ignoring: %s\n') % e)
        return copies

    def getcommit(self, rev):
        ctx = self.changectx(rev)
        parents = [hex(p) for p in self.parents(ctx)]
        if self.saverev:
            crev = rev
        else:
            crev = None
        return commit(author=ctx.user(), date=util.datestr(ctx.date()),
                      desc=ctx.description(), rev=crev, parents=parents,
                      branch=ctx.branch(), extra=ctx.extra())

    def gettags(self):
        tags = [t for t in self.repo.tagslist() if t[0] != 'tip']
        return dict([(name, hex(node)) for name, node in tags
                     if self.keep(node)])

    def getchangedfiles(self, rev, i):
        ctx = self.changectx(rev)
        parents = self.parents(ctx)
        if not parents and i is None:
            i = 0
            changes = [], ctx.manifest().keys(), []
        else:
            i = i or 0
            changes = self.repo.status(parents[i], ctx.node())[:3]
        changes = [[f for f in l if f not in self.ignored] for l in changes]

        if i == 0:
            self._changescache = (rev, changes)

        return changes[0] + changes[1] + changes[2]

    def converted(self, rev, destrev):
        if self.convertfp is None:
            self.convertfp = open(os.path.join(self.path, '.hg', 'shamap'),
                                  'a')
        self.convertfp.write('%s %s\n' % (destrev, rev))
        self.convertfp.flush()

    def before(self):
        self.ui.debug(_('run hg source pre-conversion action\n'))

    def after(self):
        self.ui.debug(_('run hg source post-conversion action\n'))
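The hunks above are part of commit r8209, which replaces Mercurial's old `util.sort` helper with the `sorted` builtin (for example in `getchanges`, where lists of `(name, rev)` tuples are sorted). A minimal sketch of why the two are interchangeable, using a hypothetical `util_sort` stand-in for the removed helper and made-up sample data:

```python
# Hypothetical stand-in for the removed helper: util.sort copied the input
# into a list, sorted it, and returned it -- which is exactly what the
# sorted() builtin does for any iterable.
def util_sort(l):
    l = list(l)
    l.sort()
    return l

# Sample (name, rev) change tuples like those returned by getchanges().
changes = [("b.txt", "rev1"), ("a.txt", "rev1")]

# Both spellings produce the same ordering, so the substitution is safe.
assert util_sort(changes) == sorted(changes)
assert sorted(changes)[0][0] == "a.txt"
```

Tuples sort lexicographically, so sorting by `(name, rev)` orders primarily by filename, which is all the convert code relies on.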
@@ -1,179 +1,179 @@
#
# Perforce source for convert extension.
#
# Copyright 2009, Frank Kingswood <frank@kingswood-consulting.co.uk>
#
# This software may be used and distributed according to the terms
# of the GNU General Public License, incorporated herein by reference.
#

from mercurial import util
from mercurial.i18n import _

from common import commit, converter_source, checktool, NoRepo
import marshal

def loaditer(f):
    "Yield the dictionary objects generated by p4"
    try:
        while True:
            d = marshal.load(f)
            if not d:
                break
            yield d
    except EOFError:
        pass

class p4_source(converter_source):
    def __init__(self, ui, path, rev=None):
        super(p4_source, self).__init__(ui, path, rev=rev)

        if not path.startswith('//'):
            raise NoRepo('%s does not look like a P4 repo' % path)

        checktool('p4', abort=False)

        self.p4changes = {}
        self.heads = {}
        self.changeset = {}
        self.files = {}
        self.tags = {}
        self.lastbranch = {}
        self.parent = {}
        self.encoding = "latin_1"
        self.depotname = {}   # mapping from local name to depot name
        self.modecache = {}

        self._parse(ui, path)

    def _parse_view(self, path):
        "Read changes affecting the path"
        cmd = 'p4 -G changes -s submitted "%s"' % path
        stdout = util.popen(cmd)
        for d in loaditer(stdout):
            c = d.get("change", None)
            if c:
                self.p4changes[c] = True

    def _parse(self, ui, path):
        "Prepare list of P4 filenames and revisions to import"
        ui.status(_('reading p4 views\n'))

        # read client spec or view
        if "/" in path:
            self._parse_view(path)
            if path.startswith("//") and path.endswith("/..."):
                views = {path[:-3]:""}
            else:
                views = {"//": ""}
        else:
            cmd = 'p4 -G client -o "%s"' % path
            clientspec = marshal.load(util.popen(cmd))

            views = {}
            for client in clientspec:
                if client.startswith("View"):
                    sview, cview = clientspec[client].split()
                    self._parse_view(sview)
                    if sview.endswith("...") and cview.endswith("..."):
                        sview = sview[:-3]
                        cview = cview[:-3]
                    cview = cview[2:]
                    cview = cview[cview.find("/") + 1:]
                    views[sview] = cview

        # list of changes that affect our source files
        self.p4changes = self.p4changes.keys()
        self.p4changes.sort(key=int)

        # list with depot pathnames, longest first
        vieworder = views.keys()
        vieworder.sort(key=lambda x: -len(x))

        # handle revision limiting
        startrev = self.ui.config('convert', 'p4.startrev', default=0)
        self.p4changes = [x for x in self.p4changes
                          if ((not startrev or int(x) >= int(startrev)) and
                              (not self.rev or int(x) <= int(self.rev)))]

        # now read the full changelists to get the list of file revisions
        ui.status(_('collecting p4 changelists\n'))
        lastid = None
        for change in self.p4changes:
            cmd = "p4 -G describe %s" % change
            stdout = util.popen(cmd)
            d = marshal.load(stdout)

            desc = self.recode(d["desc"])
            shortdesc = desc.split("\n", 1)[0]
            t = '%s %s' % (d["change"], repr(shortdesc)[1:-1])
            ui.status(util.ellipsis(t, 80) + '\n')

            if lastid:
                parents = [lastid]
            else:
                parents = []

            date = (int(d["time"]), 0)     # timezone not set
            c = commit(author=self.recode(d["user"]), date=util.datestr(date),
                       parents=parents, desc=desc, branch='', extra={"p4": change})

            files = []
            i = 0
            while ("depotFile%d" % i) in d and ("rev%d" % i) in d:
                oldname = d["depotFile%d" % i]
                filename = None
                for v in vieworder:
                    if oldname.startswith(v):
                        filename = views[v] + oldname[len(v):]
                        break
                if filename:
                    files.append((filename, d["rev%d" % i]))
                    self.depotname[filename] = oldname
                i += 1
            self.changeset[change] = c
            self.files[change] = files
            lastid = change

        if lastid:
            self.heads = [lastid]

    def getheads(self):
        return self.heads

    def getfile(self, name, rev):
        cmd = 'p4 -G print "%s#%s"' % (self.depotname[name], rev)
        stdout = util.popen(cmd)

        mode = None
        data = ""

        for d in loaditer(stdout):
            if d["code"] == "stat":
                if "+x" in d["type"]:
                    mode = "x"
                else:
                    mode = ""
            elif d["code"] == "text":
                data += d["data"]

        if mode is None:
            raise IOError()

        self.modecache[(name, rev)] = mode
        return data

    def getmode(self, name, rev):
        return self.modecache[(name, rev)]

    def getchanges(self, rev):
        return self.files[rev], {}

    def getcommit(self, rev):
        return self.changeset[rev]

    def gettags(self):
        return self.tags

    def getchangedfiles(self, rev, i):
-        return util.sort([x[0] for x in self.files[rev]])
+        return sorted([x[0] for x in self.files[rev]])
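The p4 source above drives every Perforce query through `p4 -G`, whose output is a stream of consecutive marshal-encoded dictionaries, and `loaditer` unwraps that stream. A minimal sketch of the same framing (modern Python, with an in-memory buffer standing in for the `p4` pipe):

```python
import io
import marshal

def loaditer(f):
    """Yield consecutive marshal-encoded objects from a stream, stopping
    at EOF or at a falsy value -- the same loop the p4 source uses on
    `p4 -G` output."""
    try:
        while True:
            d = marshal.load(f)
            if not d:
                break
            yield d
    except EOFError:
        pass

# Simulated `p4 -G changes` output: two dictionaries back to back.
buf = io.BytesIO()
marshal.dump({"change": "101"}, buf)
marshal.dump({"change": "102"}, buf)
buf.seek(0)

assert [d["change"] for d in loaditer(buf)] == ["101", "102"]
```

Because `marshal.load` reads exactly one object per call, no explicit length prefix or delimiter is needed between records.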
@@ -1,1205 +1,1203 @@
1 # Subversion 1.4/1.5 Python API backend
1 # Subversion 1.4/1.5 Python API backend
2 #
2 #
3 # Copyright(C) 2007 Daniel Holth et al
3 # Copyright(C) 2007 Daniel Holth et al
4 #
4 #
5 # Configuration options:
5 # Configuration options:
6 #
6 #
7 # convert.svn.trunk
7 # convert.svn.trunk
8 # Relative path to the trunk (default: "trunk")
8 # Relative path to the trunk (default: "trunk")
9 # convert.svn.branches
9 # convert.svn.branches
10 # Relative path to tree of branches (default: "branches")
10 # Relative path to tree of branches (default: "branches")
11 # convert.svn.tags
11 # convert.svn.tags
12 # Relative path to tree of tags (default: "tags")
12 # Relative path to tree of tags (default: "tags")
13 #
13 #
14 # Set these in a hgrc, or on the command line as follows:
14 # Set these in a hgrc, or on the command line as follows:
15 #
15 #
16 # hg convert --config convert.svn.trunk=wackoname [...]
16 # hg convert --config convert.svn.trunk=wackoname [...]
17
17
18 import locale
18 import locale
19 import os
19 import os
20 import re
20 import re
21 import sys
21 import sys
22 import cPickle as pickle
22 import cPickle as pickle
23 import tempfile
23 import tempfile
24 import urllib
24 import urllib
25
25
26 from mercurial import strutil, util
26 from mercurial import strutil, util
27 from mercurial.i18n import _
27 from mercurial.i18n import _
28
28
29 # Subversion stuff. Works best with very recent Python SVN bindings
29 # Subversion stuff. Works best with very recent Python SVN bindings
30 # e.g. SVN 1.5 or backports. Thanks to the bzr folks for enhancing
30 # e.g. SVN 1.5 or backports. Thanks to the bzr folks for enhancing
31 # these bindings.
31 # these bindings.
32
32
33 from cStringIO import StringIO
33 from cStringIO import StringIO
34
34
35 from common import NoRepo, MissingTool, commit, encodeargs, decodeargs
35 from common import NoRepo, MissingTool, commit, encodeargs, decodeargs
36 from common import commandline, converter_source, converter_sink, mapfile
36 from common import commandline, converter_source, converter_sink, mapfile
37
37
38 try:
38 try:
39 from svn.core import SubversionException, Pool
39 from svn.core import SubversionException, Pool
40 import svn
40 import svn
41 import svn.client
41 import svn.client
42 import svn.core
42 import svn.core
43 import svn.ra
43 import svn.ra
44 import svn.delta
44 import svn.delta
45 import transport
45 import transport
46 except ImportError:
46 except ImportError:
47 pass
47 pass
48
48
49 class SvnPathNotFound(Exception):
49 class SvnPathNotFound(Exception):
50 pass
50 pass
51
51
52 def geturl(path):
52 def geturl(path):
53 try:
53 try:
54 return svn.client.url_from_path(svn.core.svn_path_canonicalize(path))
54 return svn.client.url_from_path(svn.core.svn_path_canonicalize(path))
55 except SubversionException:
55 except SubversionException:
56 pass
56 pass
57 if os.path.isdir(path):
57 if os.path.isdir(path):
58 path = os.path.normpath(os.path.abspath(path))
58 path = os.path.normpath(os.path.abspath(path))
59 if os.name == 'nt':
59 if os.name == 'nt':
60 path = '/' + util.normpath(path)
60 path = '/' + util.normpath(path)
61 return 'file://%s' % urllib.quote(path)
61 return 'file://%s' % urllib.quote(path)
62 return path
62 return path

def optrev(number):
    optrev = svn.core.svn_opt_revision_t()
    optrev.kind = svn.core.svn_opt_revision_number
    optrev.value.number = number
    return optrev

class changedpath(object):
    def __init__(self, p):
        self.copyfrom_path = p.copyfrom_path
        self.copyfrom_rev = p.copyfrom_rev
        self.action = p.action

def get_log_child(fp, url, paths, start, end, limit=0, discover_changed_paths=True,
                  strict_node_history=False):
    protocol = -1
    def receiver(orig_paths, revnum, author, date, message, pool):
        if orig_paths is not None:
            for k, v in orig_paths.iteritems():
                orig_paths[k] = changedpath(v)
        pickle.dump((orig_paths, revnum, author, date, message),
                    fp, protocol)

    try:
        # Use an ra of our own so that our parent can consume
        # our results without confusing the server.
        t = transport.SvnRaTransport(url=url)
        svn.ra.get_log(t.ra, paths, start, end, limit,
                       discover_changed_paths,
                       strict_node_history,
                       receiver)
    except SubversionException, (inst, num):
        pickle.dump(num, fp, protocol)
    except IOError:
        # Caller may interrupt the iteration
        pickle.dump(None, fp, protocol)
    else:
        pickle.dump(None, fp, protocol)
    fp.close()
    # With large history, cleanup process goes crazy and suddenly
    # consumes *huge* amount of memory. The output file being closed,
    # there is no need for clean termination.
    os._exit(0)

def debugsvnlog(ui, **opts):
    """Fetch SVN log in a subprocess and channel them back to parent to
    avoid memory collection issues.
    """
    util.set_binary(sys.stdin)
    util.set_binary(sys.stdout)
    args = decodeargs(sys.stdin.read())
    get_log_child(sys.stdout, *args)

class logstream:
    """Interruptible revision log iterator."""
    def __init__(self, stdout):
        self._stdout = stdout

    def __iter__(self):
        while True:
            entry = pickle.load(self._stdout)
            try:
                orig_paths, revnum, author, date, message = entry
            except:
                if entry is None:
                    break
                raise SubversionException("child raised exception", entry)
            yield entry

    def close(self):
        if self._stdout:
            self._stdout.close()
            self._stdout = None
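The parent/child framing used by `get_log_child` and `logstream` — one pickle per log entry, terminated by a `None` sentinel — can be exercised with an in-memory stream. This is a toy sketch, not Mercurial code; the entry tuples and names are made up.

```python
import pickle
from io import BytesIO

def write_entries(fp, entries):
    # One pickle.dump() per log entry, then a None sentinel to mark the end.
    for e in entries:
        pickle.dump(e, fp)
    pickle.dump(None, fp)

def read_entries(fp):
    # Load pickles until the None sentinel appears, yielding each entry.
    while True:
        entry = pickle.load(fp)
        if entry is None:
            break
        yield entry

buf = BytesIO()
write_entries(buf, [({}, 1, 'alice', '2009-01-01', 'msg'),
                    ({}, 2, 'bob', '2009-01-02', 'msg2')])
buf.seek(0)
entries = list(read_entries(buf))
```

The sentinel lets the reader stop without knowing the entry count in advance, which is why the real child also dumps `None` on error paths before exiting.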


# Check to see if the given path is a local Subversion repo. Verify this by
# looking for several svn-specific files and directories in the given
# directory.
def filecheck(path, proto):
    for x in ('locks', 'hooks', 'format', 'db', ):
        if not os.path.exists(os.path.join(path, x)):
            return False
    return True

# Check to see if a given path is the root of an svn repo over http. We verify
# this by requesting a version-controlled URL we know can't exist and looking
# for the svn-specific "not found" XML.
def httpcheck(path, proto):
    return ('<m:human-readable errcode="160013">' in
            urllib.urlopen('%s://%s/!svn/ver/0/.svn' % (proto, path)).read())

protomap = {'http': httpcheck,
            'https': httpcheck,
            'file': filecheck,
            }
def issvnurl(url):
    if not '://' in url:
        return False
    proto, path = url.split('://', 1)
    check = protomap.get(proto, lambda p, p2: False)
    while '/' in path:
        if check(path, proto):
            return True
        path = path.rsplit('/', 1)[0]
    return False
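`issvnurl` probes successively shorter prefixes of the path until a protocol checker recognizes a repository root. A minimal standalone sketch of that loop, with a stand-in for `filecheck`/`httpcheck` (the paths below are made up):

```python
def find_repo_root(path, check):
    # Walk from the full path up toward the host, dropping one trailing
    # component per iteration, until `check` accepts a candidate root.
    while '/' in path:
        if check(path):
            return path
        path = path.rsplit('/', 1)[0]
    return None
```

With `check` bound to a real protocol probe this mirrors `issvnurl`; note that, as in the original, a path with no `/` left is never tested.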

# SVN conversion code stolen from bzr-svn and tailor
#
# Subversion looks like a versioned filesystem, branches structures
# are defined by conventions and not enforced by the tool. First,
# we define the potential branches (modules) as "trunk" and "branches"
# children directories. Revisions are then identified by their
# module and revision number (and a repository identifier).
#
# The revision graph is really a tree (or a forest). By default, a
# revision parent is the previous revision in the same module. If the
# module directory is copied/moved from another module then the
# revision is the module root and its parent the source revision in
# the parent module. A revision has at most one parent.
#
class svn_source(converter_source):
    def __init__(self, ui, url, rev=None):
        super(svn_source, self).__init__(ui, url, rev=rev)

        if not (url.startswith('svn://') or url.startswith('svn+ssh://') or
                (os.path.exists(url) and
                 os.path.exists(os.path.join(url, '.svn'))) or
                issvnurl(url)):
            raise NoRepo("%s does not look like a Subversion repo" % url)

        try:
            SubversionException
        except NameError:
            raise MissingTool(_('Subversion python bindings could not be loaded'))

        try:
            version = svn.core.SVN_VER_MAJOR, svn.core.SVN_VER_MINOR
            if version < (1, 4):
                raise MissingTool(_('Subversion python bindings %d.%d found, '
                                    '1.4 or later required') % version)
        except AttributeError:
            raise MissingTool(_('Subversion python bindings are too old, 1.4 '
                                'or later required'))

        self.encoding = locale.getpreferredencoding()
        self.lastrevs = {}

        latest = None
        try:
            # Support file://path@rev syntax. Useful e.g. to convert
            # deleted branches.
            at = url.rfind('@')
            if at >= 0:
                latest = int(url[at+1:])
                url = url[:at]
        except ValueError:
            pass
        self.url = geturl(url)
        self.encoding = 'UTF-8' # Subversion is always nominal UTF-8
        try:
            self.transport = transport.SvnRaTransport(url=self.url)
            self.ra = self.transport.ra
            self.ctx = self.transport.client
            self.baseurl = svn.ra.get_repos_root(self.ra)
            # Module is either empty or a repository path starting with
            # a slash and not ending with a slash.
            self.module = urllib.unquote(self.url[len(self.baseurl):])
            self.prevmodule = None
            self.rootmodule = self.module
            self.commits = {}
            self.paths = {}
            self.uuid = svn.ra.get_uuid(self.ra).decode(self.encoding)
        except SubversionException:
            ui.traceback()
            raise NoRepo("%s does not look like a Subversion repo" % self.url)

        if rev:
            try:
                latest = int(rev)
            except ValueError:
                raise util.Abort(_('svn: revision %s is not an integer') % rev)

        self.startrev = self.ui.config('convert', 'svn.startrev', default=0)
        try:
            self.startrev = int(self.startrev)
            if self.startrev < 0:
                self.startrev = 0
        except ValueError:
            raise util.Abort(_('svn: start revision %s is not an integer')
                             % self.startrev)

        try:
            self.get_blacklist()
        except IOError:
            pass

        self.head = self.latest(self.module, latest)
        if not self.head:
            raise util.Abort(_('no revision found in module %s') %
                             self.module.encode(self.encoding))
        self.last_changed = self.revnum(self.head)

        self._changescache = None

        if os.path.exists(os.path.join(url, '.svn/entries')):
            self.wc = url
        else:
            self.wc = None
        self.convertfp = None

    def setrevmap(self, revmap):
        lastrevs = {}
        for revid in revmap.iterkeys():
            uuid, module, revnum = self.revsplit(revid)
            lastrevnum = lastrevs.setdefault(module, revnum)
            if revnum > lastrevnum:
                lastrevs[module] = revnum
        self.lastrevs = lastrevs
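`setrevmap` keeps, per module, the highest revision number seen in the map. The reduction can be sketched standalone on plain `(module, revnum)` pairs; module names below are made up.

```python
def highest_by_module(pairs):
    # setdefault seeds each module with the first revnum seen;
    # later, larger revnums overwrite it.
    lastrevs = {}
    for module, revnum in pairs:
        lastrevnum = lastrevs.setdefault(module, revnum)
        if revnum > lastrevnum:
            lastrevs[module] = revnum
    return lastrevs
```

The `setdefault` call does double duty: it inserts the first value and returns the current maximum for the comparison in one step.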

    def exists(self, path, optrev):
        try:
            svn.client.ls(self.url.rstrip('/') + '/' + urllib.quote(path),
                          optrev, False, self.ctx)
            return True
        except SubversionException:
            return False

    def getheads(self):

        def isdir(path, revnum):
            kind = self._checkpath(path, revnum)
            return kind == svn.core.svn_node_dir

        def getcfgpath(name, rev):
            cfgpath = self.ui.config('convert', 'svn.' + name)
            if cfgpath is not None and cfgpath.strip() == '':
                return None
            path = (cfgpath or name).strip('/')
            if not self.exists(path, rev):
                if cfgpath:
                    raise util.Abort(_('expected %s to be at %r, but not found')
                                     % (name, path))
                return None
            self.ui.note(_('found %s at %r\n') % (name, path))
            return path

        rev = optrev(self.last_changed)
        oldmodule = ''
        trunk = getcfgpath('trunk', rev)
        self.tags = getcfgpath('tags', rev)
        branches = getcfgpath('branches', rev)

        # If the project has a trunk or branches, we will extract heads
        # from them. We keep the project root otherwise.
        if trunk:
            oldmodule = self.module or ''
            self.module += '/' + trunk
            self.head = self.latest(self.module, self.last_changed)
            if not self.head:
                raise util.Abort(_('no revision found in module %s') %
                                 self.module.encode(self.encoding))

        # First head in the list is the module's head
        self.heads = [self.head]
        if self.tags is not None:
            self.tags = '%s/%s' % (oldmodule , (self.tags or 'tags'))

        # Check if branches bring a few more heads to the list
        if branches:
            rpath = self.url.strip('/')
            branchnames = svn.client.ls(rpath + '/' + urllib.quote(branches),
                                        rev, False, self.ctx)
            for branch in branchnames.keys():
                module = '%s/%s/%s' % (oldmodule, branches, branch)
                if not isdir(module, self.last_changed):
                    continue
                brevid = self.latest(module, self.last_changed)
                if not brevid:
                    self.ui.note(_('ignoring empty branch %s\n') %
                                 branch.encode(self.encoding))
                    continue
                self.ui.note(_('found branch %s at %d\n') %
                             (branch, self.revnum(brevid)))
                self.heads.append(brevid)

        if self.startrev and self.heads:
            if len(self.heads) > 1:
                raise util.Abort(_('svn: start revision is not supported '
                                   'with more than one branch'))
            revnum = self.revnum(self.heads[0])
            if revnum < self.startrev:
                raise util.Abort(_('svn: no revision found after start revision %d')
                                 % self.startrev)

        return self.heads

    def getfile(self, file, rev):
        data, mode = self._getfile(file, rev)
        self.modecache[(file, rev)] = mode
        return data

    def getmode(self, file, rev):
        return self.modecache[(file, rev)]

    def getchanges(self, rev):
        if self._changescache and self._changescache[0] == rev:
            return self._changescache[1]
        self._changescache = None
        self.modecache = {}
        (paths, parents) = self.paths[rev]
        if parents:
            files, copies = self.expandpaths(rev, paths, parents)
        else:
            # Perform a full checkout on roots
            uuid, module, revnum = self.revsplit(rev)
            entries = svn.client.ls(self.baseurl + urllib.quote(module),
                                    optrev(revnum), True, self.ctx)
            files = [n for n,e in entries.iteritems()
                     if e.kind == svn.core.svn_node_file]
            copies = {}

        files.sort()
        files = zip(files, [rev] * len(files))

        # caller caches the result, so free it here to release memory
        del self.paths[rev]
        return (files, copies)

    def getchangedfiles(self, rev, i):
        changes = self.getchanges(rev)
        self._changescache = (rev, changes)
        return [f[0] for f in changes[0]]

    def getcommit(self, rev):
        if rev not in self.commits:
            uuid, module, revnum = self.revsplit(rev)
            self.module = module
            self.reparent(module)
            # We assume that:
            # - requests for revisions after "stop" come from the
            #   revision graph backward traversal. Cache all of them
            #   down to stop, they will be used eventually.
            # - requests for revisions before "stop" come to get
            #   isolated branches parents. Just fetch what is needed.
            stop = self.lastrevs.get(module, 0)
            if revnum < stop:
                stop = revnum + 1
            self._fetch_revisions(revnum, stop)
        commit = self.commits[rev]
        # caller caches the result, so free it here to release memory
        del self.commits[rev]
        return commit

    def gettags(self):
        tags = {}
        if self.tags is None:
            return tags

        # svn tags are just a convention, project branches left in a
        # 'tags' directory. There is no other relationship than
        # ancestry, which is expensive to discover and makes them hard
        # to update incrementally. Worse, past revisions may be
        # referenced by tags far away in the future, requiring a deep
        # history traversal on every calculation. Current code
        # performs a single backward traversal, tracking moves within
        # the tags directory (tag renaming) and recording a new tag
        # everytime a project is copied from outside the tags
        # directory. It also lists deleted tags, this behaviour may
        # change in the future.
        pendings = []
        tagspath = self.tags
        start = svn.ra.get_latest_revnum(self.ra)
        try:
            for entry in self._getlog([self.tags], start, self.startrev):
                origpaths, revnum, author, date, message = entry
                copies = [(e.copyfrom_path, e.copyfrom_rev, p) for p, e
                          in origpaths.iteritems() if e.copyfrom_path]
                copies.sort()
                # Apply moves/copies from more specific to general
                copies.reverse()

                srctagspath = tagspath
                if copies and copies[-1][2] == tagspath:
                    # Track tags directory moves
                    srctagspath = copies.pop()[0]

                for source, sourcerev, dest in copies:
                    if not dest.startswith(tagspath + '/'):
                        continue
                    for tag in pendings:
                        if tag[0].startswith(dest):
                            tagpath = source + tag[0][len(dest):]
                            tag[:2] = [tagpath, sourcerev]
                            break
                    else:
                        pendings.append([source, sourcerev, dest.split('/')[-1]])

                # Tell tag renamings from tag creations
                remainings = []
                for source, sourcerev, tagname in pendings:
                    if source.startswith(srctagspath):
                        remainings.append([source, sourcerev, tagname])
                        continue
                    # From revision may be fake, get one with changes
                    try:
                        tagid = self.latest(source, sourcerev)
                        if tagid:
                            tags[tagname] = tagid
                    except SvnPathNotFound:
                        # It happens when we are following directories we assumed
                        # were copied with their parents but were really created
                        # in the tag directory.
                        pass
                pendings = remainings
                tagspath = srctagspath

        except SubversionException:
            self.ui.note(_('no tags found at revision %d\n') % start)
        return tags

    def converted(self, rev, destrev):
        if not self.wc:
            return
        if self.convertfp is None:
            self.convertfp = open(os.path.join(self.wc, '.svn', 'hg-shamap'),
                                  'a')
        self.convertfp.write('%s %d\n' % (destrev, self.revnum(rev)))
        self.convertfp.flush()

    # -- helper functions --

    def revid(self, revnum, module=None):
        if not module:
            module = self.module
        return u"svn:%s%s@%s" % (self.uuid, module.decode(self.encoding),
                                 revnum)

    def revnum(self, rev):
        return int(rev.split('@')[-1])

    def revsplit(self, rev):
        url, revnum = rev.encode(self.encoding).rsplit('@', 1)
        revnum = int(revnum)
        parts = url.split('/', 1)
        uuid = parts.pop(0)[4:]
        mod = ''
        if parts:
            mod = '/' + parts[0]
        return uuid, mod, revnum
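`revid` and `revsplit` round-trip identifiers of the form `svn:<uuid><module>@<revnum>`, where the module part is either empty or starts with `/`. A byte-free sketch of the split (the UUID below is made up):

```python
def split_revid(rev):
    # "svn:<uuid><module>@<revnum>" -> (uuid, module, revnum)
    url, revnum = rev.rsplit('@', 1)
    parts = url.split('/', 1)       # the module, if any, starts at the first '/'
    uuid = parts.pop(0)[4:]         # strip the "svn:" prefix
    mod = '/' + parts[0] if parts else ''
    return uuid, mod, int(revnum)
```

Splitting on the last `@` matters because `rsplit('@', 1)` tolerates an `@` appearing earlier in the module path.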

    def latest(self, path, stop=0):
        """Find the latest revid affecting path, up to stop. It may return
        a revision in a different module, since a branch may be moved without
        a change being reported. Return None if computed module does not
        belong to rootmodule subtree.
        """
        if not path.startswith(self.rootmodule):
            # Requests on foreign branches may be forbidden at server level
            self.ui.debug(_('ignoring foreign branch %r\n') % path)
            return None

        if not stop:
            stop = svn.ra.get_latest_revnum(self.ra)
        try:
            prevmodule = self.reparent('')
            dirent = svn.ra.stat(self.ra, path.strip('/'), stop)
            self.reparent(prevmodule)
        except SubversionException:
            dirent = None
        if not dirent:
            raise SvnPathNotFound(_('%s not found up to revision %d') % (path, stop))

        # stat() gives us the previous revision on this line of development, but
        # it might be in *another module*. Fetch the log and detect renames down
        # to the latest revision.
        stream = self._getlog([path], stop, dirent.created_rev)
        try:
            for entry in stream:
                paths, revnum, author, date, message = entry
                if revnum <= dirent.created_rev:
                    break

                for p in paths:
                    if not path.startswith(p) or not paths[p].copyfrom_path:
                        continue
                    newpath = paths[p].copyfrom_path + path[len(p):]
                    self.ui.debug(_("branch renamed from %s to %s at %d\n") %
                                  (path, newpath, revnum))
                    path = newpath
                    break
        finally:
            stream.close()

        if not path.startswith(self.rootmodule):
            self.ui.debug(_('ignoring foreign branch %r\n') % path)
            return None
        return self.revid(dirent.created_rev, path)

    def get_blacklist(self):
        """Avoid certain revision numbers.
        It is not uncommon for two nearby revisions to cancel each other
        out, e.g. 'I copied trunk into a subdirectory of itself instead
        of making a branch'. The converted repository is significantly
        smaller if we ignore such revisions."""
        self.blacklist = set()
        blacklist = self.blacklist
        for line in file("blacklist.txt", "r"):
            if not line.startswith("#"):
                try:
                    svn_rev = int(line.strip())
                    blacklist.add(svn_rev)
                except ValueError:
                    pass # not an integer or a comment
576
576
577 def is_blacklisted(self, svn_rev):
577 def is_blacklisted(self, svn_rev):
578 return svn_rev in self.blacklist
578 return svn_rev in self.blacklist
579
579
    def reparent(self, module):
        """Reparent the svn transport and return the previous parent."""
        if self.prevmodule == module:
            return module
        svnurl = self.baseurl + urllib.quote(module)
        prevmodule = self.prevmodule
        if prevmodule is None:
            prevmodule = ''
        self.ui.debug(_("reparent to %s\n") % svnurl)
        svn.ra.reparent(self.ra, svnurl)
        self.prevmodule = module
        return prevmodule

    def expandpaths(self, rev, paths, parents):
        entries = []
        copyfrom = {} # Map of entrypath, revision for finding source of deleted revisions.
        copies = {}

        new_module, revnum = self.revsplit(rev)[1:]
        if new_module != self.module:
            self.module = new_module
            self.reparent(self.module)

        for path, ent in paths:
            entrypath = self.getrelpath(path)
            entry = entrypath.decode(self.encoding)

            kind = self._checkpath(entrypath, revnum)
            if kind == svn.core.svn_node_file:
                entries.append(self.recode(entry))
                if not ent.copyfrom_path or not parents:
                    continue
                # Copy sources not in parent revisions cannot be represented,
                # ignore their origin for now
                pmodule, prevnum = self.revsplit(parents[0])[1:]
                if ent.copyfrom_rev < prevnum:
                    continue
                copyfrom_path = self.getrelpath(ent.copyfrom_path, pmodule)
                if not copyfrom_path:
                    continue
                self.ui.debug(_("copied to %s from %s@%s\n") %
                              (entrypath, copyfrom_path, ent.copyfrom_rev))
                copies[self.recode(entry)] = self.recode(copyfrom_path)
            elif kind == 0: # gone, but had better be a deleted *file*
                self.ui.debug(_("gone from %s\n") % ent.copyfrom_rev)

                # if a branch is created but entries are removed in the same
                # changeset, get the right fromrev
                # parents cannot be empty here, you cannot remove things from
                # a root revision.
                uuid, old_module, fromrev = self.revsplit(parents[0])

                basepath = old_module + "/" + self.getrelpath(path)
                entrypath = basepath

                def lookup_parts(p):
                    rc = None
                    parts = p.split("/")
                    for i in range(len(parts)):
                        part = "/".join(parts[:i])
                        info = part, copyfrom.get(part, None)
                        if info[1] is not None:
                            self.ui.debug(_("found parent directory %s\n") % info[1])
                            rc = info
                    return rc

                self.ui.debug(_("base, entry %s %s\n") % (basepath, entrypath))

                frompath, froment = lookup_parts(entrypath) or (None, revnum - 1)

                # need to remove fragment from lookup_parts and replace with copyfrom_path
                if frompath is not None:
                    self.ui.debug(_("munge-o-matic\n"))
                    self.ui.debug(entrypath + '\n')
                    self.ui.debug(entrypath[len(frompath):] + '\n')
                    entrypath = froment.copyfrom_path + entrypath[len(frompath):]
                    fromrev = froment.copyfrom_rev
                    self.ui.debug(_("info: %s %s %s %s\n") % (frompath, froment, ent, entrypath))

                # We could avoid the reparent calls if the module has not
                # changed, but it is probably not worth the pain.
                prevmodule = self.reparent('')
                fromkind = svn.ra.check_path(self.ra, entrypath.strip('/'), fromrev)
                self.reparent(prevmodule)

                if fromkind == svn.core.svn_node_file: # a deleted file
                    entries.append(self.recode(entry))
                elif fromkind == svn.core.svn_node_dir:
                    # print "Deleted/moved non-file:", revnum, path, ent
                    # children = self._find_children(path, revnum - 1)
                    # print "find children %s@%d from %d action %s" % (path, revnum, ent.copyfrom_rev, ent.action)
                    # Sometimes this is tricky. For example, in Subversion
                    # repository revision 6940 a directory was copied and one
                    # of its files was deleted from the new location in the
                    # same commit. This code can't deal with that yet.
                    if ent.action == 'C':
                        children = self._find_children(path, fromrev)
                    else:
                        oroot = entrypath.strip('/')
                        nroot = path.strip('/')
                        children = self._find_children(oroot, fromrev)
                        children = [s.replace(oroot, nroot) for s in children]
                    # Mark all [files, not directories] as deleted.
                    for child in children:
                        # Can we move a child directory and its
                        # parent in the same commit? (probably can). Could
                        # cause problems if instead of revnum - 1,
                        # we have to look in (copyfrom_path, revnum - 1)
                        entrypath = self.getrelpath("/" + child, module=old_module)
                        if entrypath:
                            entry = self.recode(entrypath.decode(self.encoding))
                            if entry in copies:
                                # deleted file within a copy
                                del copies[entry]
                            else:
                                entries.append(entry)
                else:
                    self.ui.debug(_('unknown path in revision %d: %s\n') %
                                  (revnum, path))
            elif kind == svn.core.svn_node_dir:
                # Should probably synthesize normal file entries
                # and handle as above to clean up copy/rename handling.

                # If the directory just had a prop change,
                # then we shouldn't need to look for its children.
                if ent.action == 'M':
                    continue

                # Also this could create duplicate entries. Not sure
                # whether this will matter. Maybe should make entries a set.
                # print "Changed directory", revnum, path, ent.action, ent.copyfrom_path, ent.copyfrom_rev
                # This will fail if a directory was copied
                # from another branch and then some of its files
                # were deleted in the same transaction.
                children = sorted(self._find_children(path, revnum))
                for child in children:
                    # Can we move a child directory and its
                    # parent in the same commit? (probably can). Could
                    # cause problems if instead of revnum - 1,
                    # we have to look in (copyfrom_path, revnum - 1)
                    entrypath = self.getrelpath("/" + child)
                    # print child, self.module, entrypath
                    if entrypath:
                        # Need to filter out directories here...
                        kind = self._checkpath(entrypath, revnum)
                        if kind != svn.core.svn_node_dir:
                            entries.append(self.recode(entrypath))

                # Copies here (must copy all from source)
                # Probably not a real problem for us if
                # source does not exist
                if not ent.copyfrom_path or not parents:
                    continue
                # Copy sources not in parent revisions cannot be represented,
                # ignore their origin for now
                pmodule, prevnum = self.revsplit(parents[0])[1:]
                if ent.copyfrom_rev < prevnum:
                    continue
                copyfrompath = ent.copyfrom_path.decode(self.encoding)
                copyfrompath = self.getrelpath(copyfrompath, pmodule)
                if not copyfrompath:
                    continue
                copyfrom[path] = ent
                self.ui.debug(_("mark %s came from %s:%d\n")
                              % (path, copyfrompath, ent.copyfrom_rev))
                children = self._find_children(ent.copyfrom_path, ent.copyfrom_rev)
                children.sort()
                for child in children:
                    entrypath = self.getrelpath("/" + child, pmodule)
                    if not entrypath:
                        continue
                    entry = entrypath.decode(self.encoding)
                    copytopath = path + entry[len(copyfrompath):]
                    copytopath = self.getrelpath(copytopath)
                    copies[self.recode(copytopath)] = self.recode(entry, pmodule)

        return (list(set(entries)), copies)

    def _fetch_revisions(self, from_revnum, to_revnum):
        if from_revnum < to_revnum:
            from_revnum, to_revnum = to_revnum, from_revnum

        self.child_cset = None

        def parselogentry(orig_paths, revnum, author, date, message):
            """Return the parsed commit object or None, and True if
            the revision is a branch root.
            """
            self.ui.debug(_("parsing revision %d (%d changes)\n") %
                          (revnum, len(orig_paths)))

            branched = False
            rev = self.revid(revnum)
            # branch log might return entries for a parent we already have

            if rev in self.commits or revnum < to_revnum:
                return None, branched

            parents = []
            # check whether this revision is the start of a branch or part
            # of a branch renaming
            orig_paths = sorted(orig_paths.iteritems())
            root_paths = [(p, e) for p, e in orig_paths
                          if self.module.startswith(p)]
            if root_paths:
                path, ent = root_paths[-1]
                if ent.copyfrom_path:
                    branched = True
                    newpath = ent.copyfrom_path + self.module[len(path):]
                    # ent.copyfrom_rev may not be the actual last revision
                    previd = self.latest(newpath, ent.copyfrom_rev)
                    if previd is not None:
                        prevmodule, prevnum = self.revsplit(previd)[1:]
                        if prevnum >= self.startrev:
                            parents = [previd]
                            self.ui.note(_('found parent of branch %s at %d: %s\n') %
                                         (self.module, prevnum, prevmodule))
                else:
                    self.ui.debug(_("no copyfrom path, don't know what to do.\n"))

            paths = []
            # filter out unrelated paths
            for path, ent in orig_paths:
                if self.getrelpath(path) is None:
                    continue
                paths.append((path, ent))

            # Example SVN datetime, ISO-8601 conformant, with microseconds:
            # '2007-01-04T17:35:00.902377Z'
            date = util.parsedate(date[:19] + " UTC", ["%Y-%m-%dT%H:%M:%S"])

            log = message and self.recode(message) or ''
            author = author and self.recode(author) or ''
            try:
                branch = self.module.split("/")[-1]
                if branch == 'trunk':
                    branch = ''
            except IndexError:
                branch = None

            cset = commit(author=author,
                          date=util.datestr(date),
                          desc=log,
                          parents=parents,
                          branch=branch,
                          rev=rev.encode('utf-8'))

            self.commits[rev] = cset
            # The parents list is *shared* among self.paths and the
            # commit object. Both will be updated below.
            self.paths[rev] = (paths, cset.parents)
            if self.child_cset and not self.child_cset.parents:
                self.child_cset.parents[:] = [rev]
            self.child_cset = cset
            return cset, branched

        self.ui.note(_('fetching revision log for "%s" from %d to %d\n') %
                     (self.module, from_revnum, to_revnum))

        try:
            firstcset = None
            lastonbranch = False
            stream = self._getlog([self.module], from_revnum, to_revnum)
            try:
                for entry in stream:
                    paths, revnum, author, date, message = entry
                    if revnum < self.startrev:
                        lastonbranch = True
                        break
                    if self.is_blacklisted(revnum):
                        self.ui.note(_('skipping blacklisted revision %d\n')
                                     % revnum)
                        continue
                    if not paths:
                        self.ui.debug(_('revision %d has no entries\n') % revnum)
                        continue
                    cset, lastonbranch = parselogentry(paths, revnum, author,
                                                       date, message)
                    if cset:
                        firstcset = cset
                    if lastonbranch:
                        break
            finally:
                stream.close()

            if not lastonbranch and firstcset and not firstcset.parents:
                # The first revision of the sequence (the last fetched one)
                # has invalid parents if not a branch root. Find the parent
                # revision now, if any.
                try:
                    firstrevnum = self.revnum(firstcset.rev)
                    if firstrevnum > 1:
                        latest = self.latest(self.module, firstrevnum - 1)
                        if latest:
                            firstcset.parents.append(latest)
                except SvnPathNotFound:
                    pass
        except SubversionException, (inst, num):
            if num == svn.core.SVN_ERR_FS_NO_SUCH_REVISION:
                raise util.Abort(_('svn: branch has no revision %s') % to_revnum)
            raise

    def _getfile(self, file, rev):
        # TODO: ra.get_file transmits the whole file instead of diffs.
        mode = ''
        try:
            new_module, revnum = self.revsplit(rev)[1:]
            if self.module != new_module:
                self.module = new_module
                self.reparent(self.module)
            io = StringIO()
            info = svn.ra.get_file(self.ra, file, revnum, io)
            data = io.getvalue()
            # ra.get_file() seems to keep a reference on the input buffer,
            # preventing collection. Release it explicitly.
            io.close()
            if isinstance(info, list):
                info = info[-1]
            mode = ("svn:executable" in info) and 'x' or ''
            mode = ("svn:special" in info) and 'l' or mode
        except SubversionException, e:
            notfound = (svn.core.SVN_ERR_FS_NOT_FOUND,
                        svn.core.SVN_ERR_RA_DAV_PATH_NOT_FOUND)
            if e.apr_err in notfound: # File not found
                raise IOError()
            raise
        if mode == 'l':
            link_prefix = "link "
            if data.startswith(link_prefix):
                data = data[len(link_prefix):]
        return data, mode

    def _find_children(self, path, revnum):
        path = path.strip('/')
        pool = Pool()
        rpath = '/'.join([self.baseurl, urllib.quote(path)]).strip('/')
        return ['%s/%s' % (path, x) for x in
                svn.client.ls(rpath, optrev(revnum), True, self.ctx, pool).keys()]

    def getrelpath(self, path, module=None):
        if module is None:
            module = self.module
        # Given the repository url of this wc, say
        # "http://server/plone/CMFPlone/branches/Plone-2_0-branch"
        # extract the "entry" portion (a relative path) from what
        # svn log --xml says, i.e.
        # "/CMFPlone/branches/Plone-2_0-branch/tests/PloneTestCase.py"
        # that is to say "tests/PloneTestCase.py"
        if path.startswith(module):
            relative = path.rstrip('/')[len(module):]
            if relative.startswith('/'):
                return relative[1:]
            elif relative == '':
                return relative

        # The path is outside our tracked tree...
        self.ui.debug(_('%r is not under %r, ignoring\n') % (path, module))
        return None

    def _checkpath(self, path, revnum):
        # ra.check_path does not like leading slashes very much; they lead
        # to PROPFIND subversion errors
        return svn.ra.check_path(self.ra, path.strip('/'), revnum)

    def _getlog(self, paths, start, end, limit=0, discover_changed_paths=True,
                strict_node_history=False):
        # Normalize path names; svn >= 1.5 only wants paths relative to
        # the supplied URL
        relpaths = []
        for p in paths:
            if not p.startswith('/'):
                p = self.module + '/' + p
            relpaths.append(p.strip('/'))
        args = [self.baseurl, relpaths, start, end, limit,
                discover_changed_paths, strict_node_history]
        arg = encodeargs(args)
        hgexe = util.hgexecutable()
        cmd = '%s debugsvnlog' % util.shellquote(hgexe)
        stdin, stdout = util.popen2(cmd, 'b')
        stdin.write(arg)
        stdin.close()
        return logstream(stdout)

pre_revprop_change = '''#!/bin/sh

REPOS="$1"
REV="$2"
USER="$3"
PROPNAME="$4"
ACTION="$5"

if [ "$ACTION" = "M" -a "$PROPNAME" = "svn:log" ]; then exit 0; fi
if [ "$ACTION" = "A" -a "$PROPNAME" = "hg:convert-branch" ]; then exit 0; fi
if [ "$ACTION" = "A" -a "$PROPNAME" = "hg:convert-rev" ]; then exit 0; fi

echo "Changing prohibited revision property" >&2
exit 1
'''

class svn_sink(converter_sink, commandline):
    commit_re = re.compile(r'Committed revision (\d+).', re.M)

    def prerun(self):
        if self.wc:
            os.chdir(self.wc)

    def postrun(self):
        if self.wc:
            os.chdir(self.cwd)

    def join(self, name):
        return os.path.join(self.wc, '.svn', name)

    def revmapfile(self):
        return self.join('hg-shamap')

    def authorfile(self):
        return self.join('hg-authormap')

    def __init__(self, ui, path):
        converter_sink.__init__(self, ui, path)
        commandline.__init__(self, ui, 'svn')
        self.delete = []
        self.setexec = []
        self.delexec = []
        self.copies = []
        self.wc = None
        self.cwd = os.getcwd()

        path = os.path.realpath(path)

        created = False
        if os.path.isfile(os.path.join(path, '.svn', 'entries')):
            self.wc = path
            self.run0('update')
        else:
            wcpath = os.path.join(os.getcwd(), os.path.basename(path) + '-wc')

            if os.path.isdir(os.path.dirname(path)):
                if not os.path.exists(os.path.join(path, 'db', 'fs-type')):
                    ui.status(_('initializing svn repo %r\n') %
                              os.path.basename(path))
                    commandline(ui, 'svnadmin').run0('create', path)
                    created = path
                path = util.normpath(path)
                if not path.startswith('/'):
                    path = '/' + path
                path = 'file://' + path

            ui.status(_('initializing svn wc %r\n') % os.path.basename(wcpath))
            self.run0('checkout', path, wcpath)

            self.wc = wcpath
        self.opener = util.opener(self.wc)
        self.wopener = util.opener(self.wc)
        self.childmap = mapfile(ui, self.join('hg-childmap'))
        self.is_exec = util.checkexec(self.wc) and util.is_exec or None

        if created:
            hook = os.path.join(created, 'hooks', 'pre-revprop-change')
            fp = open(hook, 'w')
            fp.write(pre_revprop_change)
            fp.close()
            util.set_flags(hook, False, True)

        xport = transport.SvnRaTransport(url=geturl(path))
        self.uuid = svn.ra.get_uuid(xport.ra)

1049 def wjoin(self, *names):
1049 def wjoin(self, *names):
1050 return os.path.join(self.wc, *names)
1050 return os.path.join(self.wc, *names)
1051
1051
1052 def putfile(self, filename, flags, data):
1052 def putfile(self, filename, flags, data):
1053 if 'l' in flags:
1053 if 'l' in flags:
1054 self.wopener.symlink(data, filename)
1054 self.wopener.symlink(data, filename)
1055 else:
1055 else:
1056 try:
1056 try:
1057 if os.path.islink(self.wjoin(filename)):
1057 if os.path.islink(self.wjoin(filename)):
1058 os.unlink(filename)
1058 os.unlink(filename)
1059 except OSError:
1059 except OSError:
1060 pass
1060 pass
1061 self.wopener(filename, 'w').write(data)
1061 self.wopener(filename, 'w').write(data)
1062
1062
1063 if self.is_exec:
1063 if self.is_exec:
1064 was_exec = self.is_exec(self.wjoin(filename))
1064 was_exec = self.is_exec(self.wjoin(filename))
1065 else:
1065 else:
1066 # On filesystems not supporting execute-bit, there is no way
1066 # On filesystems not supporting execute-bit, there is no way
1067 # to know if it is set but asking subversion. Setting it
1067 # to know if it is set but asking subversion. Setting it
1068 # systematically is just as expensive and much simpler.
1068 # systematically is just as expensive and much simpler.
1069 was_exec = 'x' not in flags
1069 was_exec = 'x' not in flags
1070
1070
1071 util.set_flags(self.wjoin(filename), False, 'x' in flags)
1071 util.set_flags(self.wjoin(filename), False, 'x' in flags)
1072 if was_exec:
1072 if was_exec:
1073 if 'x' not in flags:
1073 if 'x' not in flags:
1074 self.delexec.append(filename)
1074 self.delexec.append(filename)
1075 else:
1075 else:
1076 if 'x' in flags:
1076 if 'x' in flags:
1077 self.setexec.append(filename)
1077 self.setexec.append(filename)
1078
1078
1079 def _copyfile(self, source, dest):
1079 def _copyfile(self, source, dest):
1080 # SVN's copy command pukes if the destination file exists, but
1080 # SVN's copy command pukes if the destination file exists, but
1081 # our copyfile method expects to record a copy that has
1081 # our copyfile method expects to record a copy that has
1082 # already occurred. Cross the semantic gap.
1082 # already occurred. Cross the semantic gap.
1083 wdest = self.wjoin(dest)
1083 wdest = self.wjoin(dest)
1084 exists = os.path.exists(wdest)
1084 exists = os.path.exists(wdest)
1085 if exists:
1085 if exists:
1086 fd, tempname = tempfile.mkstemp(
1086 fd, tempname = tempfile.mkstemp(
1087 prefix='hg-copy-', dir=os.path.dirname(wdest))
1087 prefix='hg-copy-', dir=os.path.dirname(wdest))
1088 os.close(fd)
1088 os.close(fd)
1089 os.unlink(tempname)
1089 os.unlink(tempname)
1090 os.rename(wdest, tempname)
1090 os.rename(wdest, tempname)
1091 try:
1091 try:
1092 self.run0('copy', source, dest)
1092 self.run0('copy', source, dest)
1093 finally:
1093 finally:
1094 if exists:
1094 if exists:
1095 try:
1095 try:
1096 os.unlink(wdest)
1096 os.unlink(wdest)
1097 except OSError:
1097 except OSError:
1098 pass
1098 pass
1099 os.rename(tempname, wdest)
1099 os.rename(tempname, wdest)
1100
1100
1101 def dirs_of(self, files):
1101 def dirs_of(self, files):
1102 dirs = set()
1102 dirs = set()
1103 for f in files:
1103 for f in files:
1104 if os.path.isdir(self.wjoin(f)):
1104 if os.path.isdir(self.wjoin(f)):
1105 dirs.add(f)
1105 dirs.add(f)
1106 for i in strutil.rfindall(f, '/'):
1106 for i in strutil.rfindall(f, '/'):
1107 dirs.add(f[:i])
1107 dirs.add(f[:i])
1108 return dirs
1108 return dirs
1109
1109
1110 def add_dirs(self, files):
1110 def add_dirs(self, files):
1111 add_dirs = [d for d in util.sort(self.dirs_of(files))
1111 add_dirs = [d for d in sorted(self.dirs_of(files))
1112 if not os.path.exists(self.wjoin(d, '.svn', 'entries'))]
1112 if not os.path.exists(self.wjoin(d, '.svn', 'entries'))]
1113 if add_dirs:
1113 if add_dirs:
1114 self.xargs(add_dirs, 'add', non_recursive=True, quiet=True)
1114 self.xargs(add_dirs, 'add', non_recursive=True, quiet=True)
1115 return add_dirs
1115 return add_dirs
1116
1116
1117 def add_files(self, files):
1117 def add_files(self, files):
1118 if files:
1118 if files:
1119 self.xargs(files, 'add', quiet=True)
1119 self.xargs(files, 'add', quiet=True)
1120 return files
1120 return files
1121
1121
1122 def tidy_dirs(self, names):
1122 def tidy_dirs(self, names):
1123 dirs = util.sort(self.dirs_of(names))
1124 dirs.reverse()
1125 deleted = []
1123 deleted = []
1126 for d in dirs:
1124 for d in sorted(self.dirs_of(names), reverse=True):
1127 wd = self.wjoin(d)
1125 wd = self.wjoin(d)
1128 if os.listdir(wd) == '.svn':
1126 if os.listdir(wd) == '.svn':
1129 self.run0('delete', d)
1127 self.run0('delete', d)
1130 deleted.append(d)
1128 deleted.append(d)
1131 return deleted
1129 return deleted
1132
1130
1133 def addchild(self, parent, child):
1131 def addchild(self, parent, child):
1134 self.childmap[parent] = child
1132 self.childmap[parent] = child
1135
1133
1136 def revid(self, rev):
1134 def revid(self, rev):
1137 return u"svn:%s@%s" % (self.uuid, rev)
1135 return u"svn:%s@%s" % (self.uuid, rev)
1138
1136
1139 def putcommit(self, files, copies, parents, commit, source):
1137 def putcommit(self, files, copies, parents, commit, source):
1140 # Apply changes to working copy
1138 # Apply changes to working copy
1141 for f, v in files:
1139 for f, v in files:
1142 try:
1140 try:
1143 data = source.getfile(f, v)
1141 data = source.getfile(f, v)
1144 except IOError:
1142 except IOError:
1145 self.delete.append(f)
1143 self.delete.append(f)
1146 else:
1144 else:
1147 e = source.getmode(f, v)
1145 e = source.getmode(f, v)
1148 self.putfile(f, e, data)
1146 self.putfile(f, e, data)
1149 if f in copies:
1147 if f in copies:
1150 self.copies.append([copies[f], f])
1148 self.copies.append([copies[f], f])
1151 files = [f[0] for f in files]
1149 files = [f[0] for f in files]
1152
1150
1153 for parent in parents:
1151 for parent in parents:
1154 try:
1152 try:
1155 return self.revid(self.childmap[parent])
1153 return self.revid(self.childmap[parent])
1156 except KeyError:
1154 except KeyError:
1157 pass
1155 pass
1158 entries = set(self.delete)
1156 entries = set(self.delete)
1159 files = frozenset(files)
1157 files = frozenset(files)
1160 entries.update(self.add_dirs(files.difference(entries)))
1158 entries.update(self.add_dirs(files.difference(entries)))
1161 if self.copies:
1159 if self.copies:
1162 for s, d in self.copies:
1160 for s, d in self.copies:
1163 self._copyfile(s, d)
1161 self._copyfile(s, d)
1164 self.copies = []
1162 self.copies = []
1165 if self.delete:
1163 if self.delete:
1166 self.xargs(self.delete, 'delete')
1164 self.xargs(self.delete, 'delete')
1167 self.delete = []
1165 self.delete = []
1168 entries.update(self.add_files(files.difference(entries)))
1166 entries.update(self.add_files(files.difference(entries)))
1169 entries.update(self.tidy_dirs(entries))
1167 entries.update(self.tidy_dirs(entries))
1170 if self.delexec:
1168 if self.delexec:
1171 self.xargs(self.delexec, 'propdel', 'svn:executable')
1169 self.xargs(self.delexec, 'propdel', 'svn:executable')
1172 self.delexec = []
1170 self.delexec = []
1173 if self.setexec:
1171 if self.setexec:
1174 self.xargs(self.setexec, 'propset', 'svn:executable', '*')
1172 self.xargs(self.setexec, 'propset', 'svn:executable', '*')
1175 self.setexec = []
1173 self.setexec = []
1176
1174
1177 fd, messagefile = tempfile.mkstemp(prefix='hg-convert-')
1175 fd, messagefile = tempfile.mkstemp(prefix='hg-convert-')
1178 fp = os.fdopen(fd, 'w')
1176 fp = os.fdopen(fd, 'w')
1179 fp.write(commit.desc)
1177 fp.write(commit.desc)
1180 fp.close()
1178 fp.close()
1181 try:
1179 try:
1182 output = self.run0('commit',
1180 output = self.run0('commit',
1183 username=util.shortuser(commit.author),
1181 username=util.shortuser(commit.author),
1184 file=messagefile,
1182 file=messagefile,
1185 encoding='utf-8')
1183 encoding='utf-8')
1186 try:
1184 try:
1187 rev = self.commit_re.search(output).group(1)
1185 rev = self.commit_re.search(output).group(1)
1188 except AttributeError:
1186 except AttributeError:
1189 self.ui.warn(_('unexpected svn output:\n'))
1187 self.ui.warn(_('unexpected svn output:\n'))
1190 self.ui.warn(output)
1188 self.ui.warn(output)
1191 raise util.Abort(_('unable to cope with svn output'))
1189 raise util.Abort(_('unable to cope with svn output'))
1192 if commit.rev:
1190 if commit.rev:
1193 self.run('propset', 'hg:convert-rev', commit.rev,
1191 self.run('propset', 'hg:convert-rev', commit.rev,
1194 revprop=True, revision=rev)
1192 revprop=True, revision=rev)
1195 if commit.branch and commit.branch != 'default':
1193 if commit.branch and commit.branch != 'default':
1196 self.run('propset', 'hg:convert-branch', commit.branch,
1194 self.run('propset', 'hg:convert-branch', commit.branch,
1197 revprop=True, revision=rev)
1195 revprop=True, revision=rev)
1198 for parent in parents:
1196 for parent in parents:
1199 self.addchild(parent, rev)
1197 self.addchild(parent, rev)
1200 return self.revid(rev)
1198 return self.revid(rev)
1201 finally:
1199 finally:
1202 os.unlink(messagefile)
1200 os.unlink(messagefile)
1203
1201
1204 def puttags(self, tags):
1202 def puttags(self, tags):
1205 self.ui.warn(_('XXX TAGS NOT IMPLEMENTED YET\n'))
1203 self.ui.warn(_('XXX TAGS NOT IMPLEMENTED YET\n'))
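This changeset's theme is mechanical: Mercurial's old `util.sort` helper returned a new sorted list, so call sites like `tidy_dirs` that did `util.sort(...)` followed by `.reverse()` collapse into a single `sorted(..., reverse=True)` call. A minimal sketch of that equivalence, where `util_sort` is a stand-in for the removed helper (an assumption, not the real Mercurial function):

```python
def util_sort(iterable):
    # stand-in for the removed util.sort helper:
    # copy the iterable into a list and sort it
    lst = list(iterable)
    lst.sort()
    return lst

names = {'a/b', 'a', 'a/b/c'}

# old pattern, as in tidy_dirs before this change
dirs = util_sort(names)
dirs.reverse()

# new pattern, as in tidy_dirs after this change
new_dirs = sorted(names, reverse=True)
```

Both produce the same reverse-lexicographic ordering, which is what lets `tidy_dirs` visit child directories before their parents.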
# server.py - inotify status server
#
# Copyright 2006, 2007, 2008 Bryan O'Sullivan <bos@serpentine.com>
# Copyright 2007, 2008 Brendan Cully <brendan@kublai.com>
#
# This software may be used and distributed according to the terms
# of the GNU General Public License, incorporated herein by reference.

from mercurial.i18n import _
from mercurial import osutil, util
import common
import errno, os, select, socket, stat, struct, sys, tempfile, time

try:
    import linux as inotify
    from linux import watcher
except ImportError:
    raise

class AlreadyStartedException(Exception): pass

def join(a, b):
    if a:
        if a[-1] == '/':
            return a + b
        return a + '/' + b
    return b

walk_ignored_errors = (errno.ENOENT, errno.ENAMETOOLONG)

def walkrepodirs(repo):
    '''Iterate over all subdirectories of this repo.
    Exclude the .hg directory, any nested repos, and ignored dirs.'''
    rootslash = repo.root + os.sep
    def walkit(dirname, top):
        hginside = False
        try:
            for name, kind in osutil.listdir(rootslash + dirname):
                if kind == stat.S_IFDIR:
                    if name == '.hg':
                        hginside = True
                        if not top: break
                    else:
                        d = join(dirname, name)
                        if repo.dirstate._ignore(d):
                            continue
                        for subdir, hginsub in walkit(d, False):
                            if not hginsub:
                                yield subdir, False
        except OSError, err:
            if err.errno not in walk_ignored_errors:
                raise
        yield rootslash + dirname, hginside
    for dirname, hginside in walkit('', True):
        yield dirname

def walk(repo, root):
    '''Like os.walk, but only yields regular files.'''

    # This function is critical to performance during startup.

    reporoot = root == ''
    rootslash = repo.root + os.sep

    def walkit(root, reporoot):
        files, dirs = [], []
        hginside = False

        try:
            fullpath = rootslash + root
            for name, kind in osutil.listdir(fullpath):
                if kind == stat.S_IFDIR:
                    if name == '.hg':
                        hginside = True
                        if reporoot:
                            continue
                        else:
                            break
                    dirs.append(name)
                elif kind in (stat.S_IFREG, stat.S_IFLNK):
                    path = join(root, name)
                    files.append((name, kind))

            yield hginside, fullpath, dirs, files

            for subdir in dirs:
                path = join(root, subdir)
                if repo.dirstate._ignore(path):
                    continue
                for result in walkit(path, False):
                    if not result[0]:
                        yield result
        except OSError, err:
            if err.errno not in walk_ignored_errors:
                raise
    for result in walkit(root, reporoot):
        yield result[1:]

def _explain_watch_limit(ui, repo, count):
    path = '/proc/sys/fs/inotify/max_user_watches'
    try:
        limit = int(file(path).read())
    except IOError, err:
        if err.errno != errno.ENOENT:
            raise
        raise util.Abort(_('this system does not seem to '
                           'support inotify'))
    ui.warn(_('*** the current per-user limit on the number '
              'of inotify watches is %s\n') % limit)
    ui.warn(_('*** this limit is too low to watch every '
              'directory in this repository\n'))
    ui.warn(_('*** counting directories: '))
    ndirs = len(list(walkrepodirs(repo)))
    ui.warn(_('found %d\n') % ndirs)
    newlimit = min(limit, 1024)
    while newlimit < ((limit + ndirs) * 1.1):
        newlimit *= 2
    ui.warn(_('*** to raise the limit from %d to %d (run as root):\n') %
            (limit, newlimit))
    ui.warn(_('*** echo %d > %s\n') % (newlimit, path))
    raise util.Abort(_('cannot watch %s until inotify watch limit is raised')
                     % repo.root)

class Watcher(object):
    poll_events = select.POLLIN
    statuskeys = 'almr!?'

    def __init__(self, ui, repo, master):
        self.ui = ui
        self.repo = repo
        self.wprefix = self.repo.wjoin('')
        self.timeout = None
        self.master = master
        self.mask = (
            inotify.IN_ATTRIB |
            inotify.IN_CREATE |
            inotify.IN_DELETE |
            inotify.IN_DELETE_SELF |
            inotify.IN_MODIFY |
            inotify.IN_MOVED_FROM |
            inotify.IN_MOVED_TO |
            inotify.IN_MOVE_SELF |
            inotify.IN_ONLYDIR |
            inotify.IN_UNMOUNT |
            0)
        try:
            self.watcher = watcher.Watcher()
        except OSError, err:
            raise util.Abort(_('inotify service not available: %s') %
                             err.strerror)
        self.threshold = watcher.Threshold(self.watcher)
        self.registered = True
        self.fileno = self.watcher.fileno

        self.repo.dirstate.__class__.inotifyserver = True

        self.tree = {}
        self.statcache = {}
        self.statustrees = dict([(s, {}) for s in self.statuskeys])

        self.watches = 0
        self.last_event = None

        self.eventq = {}
        self.deferred = 0

        self.ds_info = self.dirstate_info()
        self.scan()

    def event_time(self):
        last = self.last_event
        now = time.time()
        self.last_event = now

        if last is None:
            return 'start'
        delta = now - last
        if delta < 5:
            return '+%.3f' % delta
        if delta < 50:
            return '+%.2f' % delta
        return '+%.1f' % delta

    def dirstate_info(self):
        try:
            st = os.lstat(self.repo.join('dirstate'))
            return st.st_mtime, st.st_ino
        except OSError, err:
            if err.errno != errno.ENOENT:
                raise
            return 0, 0

    def add_watch(self, path, mask):
        if not path:
            return
        if self.watcher.path(path) is None:
            if self.ui.debugflag:
                self.ui.note(_('watching %r\n') % path[len(self.wprefix):])
            try:
                self.watcher.add(path, mask)
                self.watches += 1
            except OSError, err:
                if err.errno in (errno.ENOENT, errno.ENOTDIR):
                    return
                if err.errno != errno.ENOSPC:
                    raise
                _explain_watch_limit(self.ui, self.repo, self.watches)

    def setup(self):
        self.ui.note(_('watching directories under %r\n') % self.repo.root)
        self.add_watch(self.repo.path, inotify.IN_DELETE)
        self.check_dirstate()

    def wpath(self, evt):
        path = evt.fullpath
        if path == self.repo.root:
            return ''
        if path.startswith(self.wprefix):
            return path[len(self.wprefix):]
        raise 'wtf? ' + path

    def dir(self, tree, path):
        if path:
            for name in path.split('/'):
                tree.setdefault(name, {})
                tree = tree[name]
        return tree

    def lookup(self, path, tree):
        if path:
            try:
                for name in path.split('/'):
                    tree = tree[name]
            except KeyError:
                return 'x'
            except TypeError:
                return 'd'
        return tree

    def split(self, path):
        c = path.rfind('/')
        if c == -1:
            return '', path
        return path[:c], path[c+1:]

    def filestatus(self, fn, st):
        try:
            type_, mode, size, time = self.repo.dirstate._map[fn][:4]
        except KeyError:
            type_ = '?'
        if type_ == 'n':
            if not st:
                return '!'
            st_mode, st_size, st_mtime = st
            if size == -1:
                return 'l'
            if size and (size != st_size or (mode ^ st_mode) & 0100):
                return 'm'
            if time != int(st_mtime):
                return 'l'
            return 'n'
        if type_ in 'ma' and not st:
            return '!'
        if type_ == '?' and self.repo.dirstate._ignore(fn):
            return 'i'
        return type_

    def updatestatus(self, wfn, st=None, status=None):
        if st:
            status = self.filestatus(wfn, st)
        else:
            self.statcache.pop(wfn, None)
        root, fn = self.split(wfn)
        d = self.dir(self.tree, root)
        oldstatus = d.get(fn)
        isdir = False
        if oldstatus:
            try:
                if not status:
                    if oldstatus in 'almn':
                        status = '!'
                    elif oldstatus == 'r':
                        status = 'r'
            except TypeError:
                # oldstatus may be a dict left behind by a deleted
                # directory
                isdir = True
            else:
                if oldstatus in self.statuskeys and oldstatus != status:
                    del self.dir(self.statustrees[oldstatus], root)[fn]
        if self.ui.debugflag and oldstatus != status:
            if isdir:
                self.ui.note(_('status: %r dir(%d) -> %s\n') %
                             (wfn, len(oldstatus), status))
            else:
                self.ui.note(_('status: %r %s -> %s\n') %
                             (wfn, oldstatus, status))
        if not isdir:
            if status and status != 'i':
                d[fn] = status
                if status in self.statuskeys:
                    dd = self.dir(self.statustrees[status], root)
                    if oldstatus != status or fn not in dd:
                        dd[fn] = status
            else:
                d.pop(fn, None)
        elif not status:
            # a directory is being removed, check its contents
            for subfile, b in oldstatus.copy().iteritems():
                self.updatestatus(wfn + '/' + subfile, None)


313 def check_deleted(self, key):
313 def check_deleted(self, key):
314 # Files that had been deleted but were present in the dirstate
314 # Files that had been deleted but were present in the dirstate
315 # may have vanished from the dirstate; we must clean them up.
315 # may have vanished from the dirstate; we must clean them up.
316 nuke = []
316 nuke = []
317 for wfn, ignore in self.walk(key, self.statustrees[key]):
317 for wfn, ignore in self.walk(key, self.statustrees[key]):
318 if wfn not in self.repo.dirstate:
318 if wfn not in self.repo.dirstate:
319 nuke.append(wfn)
319 nuke.append(wfn)
320 for wfn in nuke:
320 for wfn in nuke:
321 root, fn = self.split(wfn)
321 root, fn = self.split(wfn)
322 del self.dir(self.statustrees[key], root)[fn]
322 del self.dir(self.statustrees[key], root)[fn]
323 del self.dir(self.tree, root)[fn]
323 del self.dir(self.tree, root)[fn]
324
324
325 def scan(self, topdir=''):
325 def scan(self, topdir=''):
326 self.handle_timeout()
326 self.handle_timeout()
327 ds = self.repo.dirstate._map.copy()
327 ds = self.repo.dirstate._map.copy()
328 self.add_watch(join(self.repo.root, topdir), self.mask)
328 self.add_watch(join(self.repo.root, topdir), self.mask)
329 for root, dirs, entries in walk(self.repo, topdir):
329 for root, dirs, entries in walk(self.repo, topdir):
330 for d in dirs:
330 for d in dirs:
331 self.add_watch(join(root, d), self.mask)
331 self.add_watch(join(root, d), self.mask)
332 wroot = root[len(self.wprefix):]
332 wroot = root[len(self.wprefix):]
333 d = self.dir(self.tree, wroot)
333 d = self.dir(self.tree, wroot)
334 for fn, kind in entries:
334 for fn, kind in entries:
335 wfn = join(wroot, fn)
335 wfn = join(wroot, fn)
336 self.updatestatus(wfn, self.getstat(wfn))
336 self.updatestatus(wfn, self.getstat(wfn))
337 ds.pop(wfn, None)
337 ds.pop(wfn, None)
338 wtopdir = topdir
338 wtopdir = topdir
339 if wtopdir and wtopdir[-1] != '/':
339 if wtopdir and wtopdir[-1] != '/':
340 wtopdir += '/'
340 wtopdir += '/'
341 for wfn, state in ds.iteritems():
341 for wfn, state in ds.iteritems():
342 if not wfn.startswith(wtopdir):
342 if not wfn.startswith(wtopdir):
343 continue
343 continue
344 try:
344 try:
345 st = self.stat(wfn)
345 st = self.stat(wfn)
346 except OSError:
346 except OSError:
347 status = state[0]
347 status = state[0]
348 self.updatestatus(wfn, None, status=status)
348 self.updatestatus(wfn, None, status=status)
349 else:
349 else:
350 self.updatestatus(wfn, st)
350 self.updatestatus(wfn, st)
351 self.check_deleted('!')
351 self.check_deleted('!')
352 self.check_deleted('r')
352 self.check_deleted('r')
353
353
354 def check_dirstate(self):
354 def check_dirstate(self):
355 ds_info = self.dirstate_info()
355 ds_info = self.dirstate_info()
356 if ds_info == self.ds_info:
356 if ds_info == self.ds_info:
357 return
357 return
358 self.ds_info = ds_info
358 self.ds_info = ds_info
359 if not self.ui.debugflag:
359 if not self.ui.debugflag:
360 self.last_event = None
360 self.last_event = None
361 self.ui.note(_('%s dirstate reload\n') % self.event_time())
361 self.ui.note(_('%s dirstate reload\n') % self.event_time())
362 self.repo.dirstate.invalidate()
362 self.repo.dirstate.invalidate()
363 self.scan()
363 self.scan()
364 self.ui.note(_('%s end dirstate reload\n') % self.event_time())
364 self.ui.note(_('%s end dirstate reload\n') % self.event_time())
365
365
366 def walk(self, states, tree, prefix=''):
366 def walk(self, states, tree, prefix=''):
367 # This is the "inner loop" when talking to the client.
        # This is the "inner loop" when talking to the client.

        for name, val in tree.iteritems():
            path = join(prefix, name)
            try:
                if val in states:
                    yield path, val
            except TypeError:
                for p in self.walk(states, val, path):
                    yield p

    def update_hgignore(self):
        # An update of the ignore file can potentially change the
        # states of all unknown and ignored files.

        # XXX If the user has other ignore files outside the repo, or
        # changes their list of ignore files at run time, we'll
        # potentially never see changes to them.  We could get the
        # client to report to us what ignore data they're using.
        # But it's easier to do nothing than to open that can of
        # worms.

        if '_ignore' in self.repo.dirstate.__dict__:
            delattr(self.repo.dirstate, '_ignore')
            self.ui.note(_('rescanning due to .hgignore change\n'))
            self.scan()

    def getstat(self, wpath):
        try:
            return self.statcache[wpath]
        except KeyError:
            try:
                return self.stat(wpath)
            except OSError, err:
                if err.errno != errno.ENOENT:
                    raise

    def stat(self, wpath):
        try:
            st = os.lstat(join(self.wprefix, wpath))
            ret = st.st_mode, st.st_size, st.st_mtime
            self.statcache[wpath] = ret
            return ret
        except OSError:
            self.statcache.pop(wpath, None)
            raise

    def created(self, wpath):
        if wpath == '.hgignore':
            self.update_hgignore()
        try:
            st = self.stat(wpath)
            if stat.S_ISREG(st[0]):
                self.updatestatus(wpath, st)
        except OSError:
            pass

    def modified(self, wpath):
        if wpath == '.hgignore':
            self.update_hgignore()
        try:
            st = self.stat(wpath)
            if stat.S_ISREG(st[0]):
                if self.repo.dirstate[wpath] in 'lmn':
                    self.updatestatus(wpath, st)
        except OSError:
            pass

    def deleted(self, wpath):
        if wpath == '.hgignore':
            self.update_hgignore()
        elif wpath.startswith('.hg/'):
            if wpath == '.hg/wlock':
                self.check_dirstate()
            return

        self.updatestatus(wpath, None)

    def schedule_work(self, wpath, evt):
        self.eventq.setdefault(wpath, [])
        prev = self.eventq[wpath]
        try:
            if prev and evt == 'm' and prev[-1] in 'cm':
                return
            self.eventq[wpath].append(evt)
        finally:
            self.deferred += 1
            self.timeout = 250

    def deferred_event(self, wpath, evt):
        if evt == 'c':
            self.created(wpath)
        elif evt == 'm':
            self.modified(wpath)
        elif evt == 'd':
            self.deleted(wpath)

    def process_create(self, wpath, evt):
        if self.ui.debugflag:
            self.ui.note(_('%s event: created %s\n') %
                         (self.event_time(), wpath))

        if evt.mask & inotify.IN_ISDIR:
            self.scan(wpath)
        else:
            self.schedule_work(wpath, 'c')

    def process_delete(self, wpath, evt):
        if self.ui.debugflag:
            self.ui.note(_('%s event: deleted %s\n') %
                         (self.event_time(), wpath))

        if evt.mask & inotify.IN_ISDIR:
            self.scan(wpath)
        self.schedule_work(wpath, 'd')

    def process_modify(self, wpath, evt):
        if self.ui.debugflag:
            self.ui.note(_('%s event: modified %s\n') %
                         (self.event_time(), wpath))

        if not (evt.mask & inotify.IN_ISDIR):
            self.schedule_work(wpath, 'm')

    def process_unmount(self, evt):
        self.ui.warn(_('filesystem containing %s was unmounted\n') %
                     evt.fullpath)
        sys.exit(0)

    def handle_event(self, fd, event):
        if self.ui.debugflag:
            self.ui.note(_('%s readable: %d bytes\n') %
                         (self.event_time(), self.threshold.readable()))
        if not self.threshold():
            if self.registered:
                if self.ui.debugflag:
                    self.ui.note(_('%s below threshold - unhooking\n') %
                                 (self.event_time()))
                self.master.poll.unregister(fd)
                self.registered = False
                self.timeout = 250
        else:
            self.read_events()

    def read_events(self, bufsize=None):
        events = self.watcher.read(bufsize)
        if self.ui.debugflag:
            self.ui.note(_('%s reading %d events\n') %
                         (self.event_time(), len(events)))
        for evt in events:
            wpath = self.wpath(evt)
            if evt.mask & inotify.IN_UNMOUNT:
                # process_unmount() takes only the event; passing wpath
                # too would raise a TypeError.
                self.process_unmount(evt)
            elif evt.mask & (inotify.IN_MODIFY | inotify.IN_ATTRIB):
                self.process_modify(wpath, evt)
            elif evt.mask & (inotify.IN_DELETE | inotify.IN_DELETE_SELF |
                             inotify.IN_MOVED_FROM):
                self.process_delete(wpath, evt)
            elif evt.mask & (inotify.IN_CREATE | inotify.IN_MOVED_TO):
                self.process_create(wpath, evt)

    def handle_timeout(self):
        if not self.registered:
            if self.ui.debugflag:
                self.ui.note(_('%s hooking back up with %d bytes readable\n') %
                             (self.event_time(), self.threshold.readable()))
            self.read_events(0)
            self.master.poll.register(self, select.POLLIN)
            self.registered = True

        if self.eventq:
            if self.ui.debugflag:
                self.ui.note(_('%s processing %d deferred events as %d\n') %
                             (self.event_time(), self.deferred,
                              len(self.eventq)))
            for wpath, evts in sorted(self.eventq.iteritems()):
                for evt in evts:
                    self.deferred_event(wpath, evt)
            self.eventq.clear()
            self.deferred = 0
            self.timeout = None

    def shutdown(self):
        self.watcher.close()

class Server(object):
    poll_events = select.POLLIN

    def __init__(self, ui, repo, watcher, timeout):
        self.ui = ui
        self.repo = repo
        self.watcher = watcher
        self.timeout = timeout
        self.sock = socket.socket(socket.AF_UNIX)
        self.sockpath = self.repo.join('inotify.sock')
        self.realsockpath = None
        try:
            self.sock.bind(self.sockpath)
        except socket.error, err:
            if err[0] == errno.EADDRINUSE:
                raise AlreadyStartedException(_('could not start server: %s')
                                              % err[1])
            if err[0] == "AF_UNIX path too long":
                tempdir = tempfile.mkdtemp(prefix="hg-inotify-")
                self.realsockpath = os.path.join(tempdir, "inotify.sock")
                try:
                    self.sock.bind(self.realsockpath)
                    os.symlink(self.realsockpath, self.sockpath)
                except (OSError, socket.error), inst:
                    try:
                        os.unlink(self.realsockpath)
                    except:
                        pass
                    os.rmdir(tempdir)
                    if inst.errno == errno.EEXIST:
                        raise AlreadyStartedException(_('could not start server: %s')
                                                      % inst.strerror)
                    raise
            else:
                raise
        self.sock.listen(5)
        self.fileno = self.sock.fileno

    def handle_timeout(self):
        pass

    def handle_event(self, fd, event):
        sock, addr = self.sock.accept()

        cs = common.recvcs(sock)
        version = ord(cs.read(1))

        sock.sendall(chr(common.version))

        if version != common.version:
            self.ui.warn(_('received query from incompatible client '
                           'version %d\n') % version)
            return

        names = cs.read().split('\0')

        states = names.pop()

        self.ui.note(_('answering query for %r\n') % states)

        if self.watcher.timeout:
            # We got a query while a rescan is pending.  Make sure we
            # rescan before responding, or we could give back a wrong
            # answer.
            self.watcher.handle_timeout()

        if not names:
            def genresult(states, tree):
                for fn, state in self.watcher.walk(states, tree):
                    yield fn
        else:
            def genresult(states, tree):
                for fn in names:
                    l = self.watcher.lookup(fn, tree)
                    try:
                        if l in states:
                            yield fn
                    except TypeError:
                        for f, s in self.watcher.walk(states, l, fn):
                            yield f

        results = ['\0'.join(r) for r in [
            genresult('l', self.watcher.statustrees['l']),
            genresult('m', self.watcher.statustrees['m']),
            genresult('a', self.watcher.statustrees['a']),
            genresult('r', self.watcher.statustrees['r']),
            genresult('!', self.watcher.statustrees['!']),
            '?' in states and genresult('?', self.watcher.statustrees['?']) or [],
            [],
            'c' in states and genresult('n', self.watcher.tree) or [],
            ]]

        try:
            try:
                sock.sendall(struct.pack(common.resphdrfmt,
                                         *map(len, results)))
                sock.sendall(''.join(results))
            finally:
                sock.shutdown(socket.SHUT_WR)
        except socket.error, err:
            if err[0] != errno.EPIPE:
                raise

    def shutdown(self):
        self.sock.close()
        try:
            os.unlink(self.sockpath)
            if self.realsockpath:
                os.unlink(self.realsockpath)
                os.rmdir(os.path.dirname(self.realsockpath))
        except OSError, err:
            if err.errno != errno.ENOENT:
                raise

class Master(object):
    def __init__(self, ui, repo, timeout=None):
        self.ui = ui
        self.repo = repo
        self.poll = select.poll()
        self.watcher = Watcher(ui, repo, self)
        self.server = Server(ui, repo, self.watcher, timeout)
        self.table = {}
        for obj in (self.watcher, self.server):
            fd = obj.fileno()
            self.table[fd] = obj
            self.poll.register(fd, obj.poll_events)

    def register(self, fd, mask):
        self.poll.register(fd, mask)

    def shutdown(self):
        for obj in self.table.itervalues():
            obj.shutdown()

    def run(self):
        self.watcher.setup()
        self.ui.note(_('finished setup\n'))
        if os.getenv('TIME_STARTUP'):
            sys.exit(0)
        while True:
            timeout = None
            timeobj = None
            for obj in self.table.itervalues():
                if obj.timeout is not None and (timeout is None or obj.timeout < timeout):
                    timeout, timeobj = obj.timeout, obj
            try:
                if self.ui.debugflag:
                    if timeout is None:
                        self.ui.note(_('polling: no timeout\n'))
                    else:
                        self.ui.note(_('polling: %sms timeout\n') % timeout)
                events = self.poll.poll(timeout)
            except select.error, err:
                if err[0] == errno.EINTR:
                    continue
                raise
            if events:
                for fd, event in events:
                    self.table[fd].handle_event(fd, event)
            elif timeobj:
                timeobj.handle_timeout()

def start(ui, repo):
    def closefds(ignore):
        # (from python bug #1177468)
        # close all inherited file descriptors
        # Python 2.4.1 and later use /dev/urandom to seed the random module's RNG
        # a file descriptor is kept internally as os._urandomfd (created on demand
        # the first time os.urandom() is called), and should not be closed
        try:
            os.urandom(4)
            urandom_fd = getattr(os, '_urandomfd', None)
        except AttributeError:
            urandom_fd = None
        ignore.append(urandom_fd)
        for fd in range(3, 256):
            if fd in ignore:
                continue
            try:
                os.close(fd)
            except OSError:
                pass

    m = Master(ui, repo)
    sys.stdout.flush()
    sys.stderr.flush()

    pid = os.fork()
    if pid:
        return pid

    closefds([m.server.fileno(), m.watcher.fileno()])
    os.setsid()

    fd = os.open('/dev/null', os.O_RDONLY)
    os.dup2(fd, 0)
    if fd > 0:
        os.close(fd)

    fd = os.open(ui.config('inotify', 'log', '/dev/null'),
                 os.O_RDWR | os.O_CREAT | os.O_TRUNC)
    os.dup2(fd, 1)
    os.dup2(fd, 2)
    if fd > 2:
        os.close(fd)

    try:
        m.run()
    finally:
        m.shutdown()
        os._exit(0)
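The commit's one functional change is in the event loop's handle_timeout: Mercurial's util.sort helper is replaced by the sorted built-in, which became available once the minimum supported Python had it. A minimal sketch of the equivalent iteration (Python 3 syntax with items(); the original uses iteritems()):

```python
# Deferred events are keyed by working-directory path; iterating them
# in sorted path order keeps processing deterministic.
eventq = {'b/file2': ['m'], 'a/file1': ['c', 'm']}

# sorted() accepts any iterable of (key, value) pairs and orders them
# by key, which is exactly what util.sort(eventq.items()) did via a helper.
processed = []
for wpath, evts in sorted(eventq.items()):
    for evt in evts:
        processed.append((wpath, evt))

# processed is now [('a/file1', 'c'), ('a/file1', 'm'), ('b/file2', 'm')]
```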
@@ -1,539 +1,539 b''
# keyword.py - $Keyword$ expansion for Mercurial
#
# Copyright 2007, 2008 Christian Ebert <blacktrash@gmx.net>
#
# This software may be used and distributed according to the terms
# of the GNU General Public License, incorporated herein by reference.
#
# $Id$
#
# Keyword expansion hack against the grain of a DSCM
#
# There are many good reasons why this is not needed in a distributed
# SCM, still it may be useful in very small projects based on single
# files (like LaTeX packages), that are mostly addressed to an
# audience not running a version control system.
#
# For in-depth discussion refer to
# <http://www.selenic.com/mercurial/wiki/index.cgi/KeywordPlan>.
#
# Keyword expansion is based on Mercurial's changeset template mappings.
#
# Binary files are not touched.
#
# Setup in hgrc:
#
# [extensions]
# # enable extension
# hgext.keyword =
#
# Files to act upon/ignore are specified in the [keyword] section.
# Customized keyword template mappings go in the [keywordmaps] section.
#
# Run "hg help keyword" and "hg kwdemo" to get info on configuration.

'''keyword expansion in local repositories

This extension expands RCS/CVS-like or self-customized $Keywords$ in
tracked text files selected by your configuration.

Keywords are only expanded in local repositories and not stored in the
change history. The mechanism can be regarded as a convenience for the
current user or for archive distribution.

Configuration is done in the [keyword] and [keywordmaps] sections of
hgrc files.

Example:

[keyword]
# expand keywords in every python file except those matching "x*"
**.py =
x* = ignore

Note: the more specific your filename patterns are, the less speed you
lose in huge repositories.

For [keywordmaps] template mapping and expansion demonstration and
control, run "hg kwdemo".

An additional date template filter {date|utcdate} is provided.

The default template mappings (view with "hg kwdemo -d") can be
replaced with customized keywords and templates. Again, run "hg
kwdemo" to control the results of your config changes.

Before changing/disabling active keywords, run "hg kwshrink" to avoid
the risk of inadvertently storing expanded keywords in the change
history.

To force expansion after enabling it, or after a configuration change,
run "hg kwexpand".

Also, when committing with the record extension or using mq's qrecord,
be aware that keywords cannot be updated. Again, run "hg kwexpand" on
the files in question to update keyword expansions after all changes
have been checked in.

Expansions spanning more than one line and incremental expansions,
like CVS' $Log$, are not supported. A keyword template map
"Log = {desc}" expands to the first line of the changeset description.
'''

from mercurial import commands, cmdutil, dispatch, filelog, revlog, extensions
from mercurial import patch, localrepo, templater, templatefilters, util
from mercurial.hgweb import webcommands
from mercurial.lock import release
from mercurial.node import nullid, hex
from mercurial.i18n import _
import re, shutil, tempfile, time

commands.optionalrepo += ' kwdemo'

# hg commands that do not act on keywords
nokwcommands = ('add addremove annotate bundle copy export grep incoming init'
                ' log outgoing push rename rollback tip verify'
                ' convert email glog')

# hg commands that trigger expansion only when writing to working dir,
# not when reading filelog, and unexpand when reading from working dir
restricted = 'merge record resolve qfold qimport qnew qpush qrefresh qrecord'

def utcdate(date):
    '''Returns hgdate in cvs-like UTC format.'''
    return time.strftime('%Y/%m/%d %H:%M:%S', time.gmtime(date[0]))

# make keyword tools accessible
kwtools = {'templater': None, 'hgcmd': '', 'inc': [], 'exc': ['.hg*']}


class kwtemplater(object):
    '''
    Sets up keyword templates, corresponding keyword regex, and
    provides keyword substitution functions.
    '''
    templates = {
        'Revision': '{node|short}',
        'Author': '{author|user}',
        'Date': '{date|utcdate}',
        'RCSFile': '{file|basename},v',
        'Source': '{root}/{file},v',
        'Id': '{file|basename},v {node|short} {date|utcdate} {author|user}',
        'Header': '{root}/{file},v {node|short} {date|utcdate} {author|user}',
    }

    def __init__(self, ui, repo):
        self.ui = ui
        self.repo = repo
        self.matcher = util.matcher(repo.root,
                                    inc=kwtools['inc'], exc=kwtools['exc'])[1]
        self.restrict = kwtools['hgcmd'] in restricted.split()

        kwmaps = self.ui.configitems('keywordmaps')
        if kwmaps: # override default templates
            kwmaps = [(k, templater.parsestring(v, False))
                      for (k, v) in kwmaps]
            self.templates = dict(kwmaps)
        escaped = map(re.escape, self.templates.keys())
        kwpat = r'\$(%s)(: [^$\n\r]*? )??\$' % '|'.join(escaped)
        self.re_kw = re.compile(kwpat)

        templatefilters.filters['utcdate'] = utcdate
        self.ct = cmdutil.changeset_templater(self.ui, self.repo,
                                              False, None, '', False)

    def substitute(self, data, path, ctx, subfunc):
        '''Replaces keywords in data with expanded template.'''
        def kwsub(mobj):
            kw = mobj.group(1)
            self.ct.use_template(self.templates[kw])
            self.ui.pushbuffer()
            self.ct.show(ctx, root=self.repo.root, file=path)
            ekw = templatefilters.firstline(self.ui.popbuffer())
            return '$%s: %s $' % (kw, ekw)
        return subfunc(kwsub, data)

    def expand(self, path, node, data):
        '''Returns data with keywords expanded.'''
        if not self.restrict and self.matcher(path) and not util.binary(data):
            ctx = self.repo.filectx(path, fileid=node).changectx()
            return self.substitute(data, path, ctx, self.re_kw.sub)
        return data

    def iskwfile(self, path, flagfunc):
        '''Returns true if path matches [keyword] pattern
        and is not a symbolic link.
        Caveat: localrepository._link fails on Windows.'''
        return self.matcher(path) and not 'l' in flagfunc(path)

    def overwrite(self, node, expand, files):
        '''Overwrites selected files expanding/shrinking keywords.'''
        ctx = self.repo[node]
        mf = ctx.manifest()
        if node is not None: # commit
            files = [f for f in ctx.files() if f in mf]
            notify = self.ui.debug
        else: # kwexpand/kwshrink
            notify = self.ui.note
        candidates = [f for f in files if self.iskwfile(f, ctx.flags)]
        if candidates:
            self.restrict = True # do not expand when reading
            msg = (expand and _('overwriting %s expanding keywords\n')
                   or _('overwriting %s shrinking keywords\n'))
            for f in candidates:
                fp = self.repo.file(f)
                data = fp.read(mf[f])
                if util.binary(data):
                    continue
                if expand:
                    if node is None:
                        ctx = self.repo.filectx(f, fileid=mf[f]).changectx()
                    data, found = self.substitute(data, f, ctx,
                                                  self.re_kw.subn)
                else:
                    found = self.re_kw.search(data)
                if found:
                    notify(msg % f)
                    self.repo.wwrite(f, data, mf.flags(f))
                    self.repo.dirstate.normal(f)
198 self.repo.dirstate.normal(f)
198 self.repo.dirstate.normal(f)
199 self.restrict = False
199 self.restrict = False
200
200
201 def shrinktext(self, text):
201 def shrinktext(self, text):
202 '''Unconditionally removes all keyword substitutions from text.'''
202 '''Unconditionally removes all keyword substitutions from text.'''
203 return self.re_kw.sub(r'$\1$', text)
203 return self.re_kw.sub(r'$\1$', text)
204
204
205 def shrink(self, fname, text):
205 def shrink(self, fname, text):
206 '''Returns text with all keyword substitutions removed.'''
206 '''Returns text with all keyword substitutions removed.'''
207 if self.matcher(fname) and not util.binary(text):
207 if self.matcher(fname) and not util.binary(text):
208 return self.shrinktext(text)
208 return self.shrinktext(text)
209 return text
209 return text
210
210
211 def shrinklines(self, fname, lines):
211 def shrinklines(self, fname, lines):
212 '''Returns lines with keyword substitutions removed.'''
212 '''Returns lines with keyword substitutions removed.'''
213 if self.matcher(fname):
213 if self.matcher(fname):
214 text = ''.join(lines)
214 text = ''.join(lines)
215 if not util.binary(text):
215 if not util.binary(text):
216 return self.shrinktext(text).splitlines(True)
216 return self.shrinktext(text).splitlines(True)
217 return lines
217 return lines
218
218
219 def wread(self, fname, data):
219 def wread(self, fname, data):
220 '''If in restricted mode returns data read from wdir with
220 '''If in restricted mode returns data read from wdir with
221 keyword substitutions removed.'''
221 keyword substitutions removed.'''
222 return self.restrict and self.shrink(fname, data) or data
222 return self.restrict and self.shrink(fname, data) or data
223
223
224 class kwfilelog(filelog.filelog):
224 class kwfilelog(filelog.filelog):
225 '''
225 '''
226 Subclass of filelog to hook into its read, add, cmp methods.
226 Subclass of filelog to hook into its read, add, cmp methods.
227 Keywords are "stored" unexpanded, and processed on reading.
227 Keywords are "stored" unexpanded, and processed on reading.
228 '''
228 '''
229 def __init__(self, opener, kwt, path):
229 def __init__(self, opener, kwt, path):
230 super(kwfilelog, self).__init__(opener, path)
230 super(kwfilelog, self).__init__(opener, path)
231 self.kwt = kwt
231 self.kwt = kwt
232 self.path = path
232 self.path = path
233
233
234 def read(self, node):
234 def read(self, node):
235 '''Expands keywords when reading filelog.'''
235 '''Expands keywords when reading filelog.'''
236 data = super(kwfilelog, self).read(node)
236 data = super(kwfilelog, self).read(node)
237 return self.kwt.expand(self.path, node, data)
237 return self.kwt.expand(self.path, node, data)
238
238
239 def add(self, text, meta, tr, link, p1=None, p2=None):
239 def add(self, text, meta, tr, link, p1=None, p2=None):
240 '''Removes keyword substitutions when adding to filelog.'''
240 '''Removes keyword substitutions when adding to filelog.'''
241 text = self.kwt.shrink(self.path, text)
241 text = self.kwt.shrink(self.path, text)
242 return super(kwfilelog, self).add(text, meta, tr, link, p1, p2)
242 return super(kwfilelog, self).add(text, meta, tr, link, p1, p2)
243
243
244 def cmp(self, node, text):
244 def cmp(self, node, text):
245 '''Removes keyword substitutions for comparison.'''
245 '''Removes keyword substitutions for comparison.'''
246 text = self.kwt.shrink(self.path, text)
246 text = self.kwt.shrink(self.path, text)
247 if self.renamed(node):
247 if self.renamed(node):
248 t2 = super(kwfilelog, self).read(node)
248 t2 = super(kwfilelog, self).read(node)
249 return t2 != text
249 return t2 != text
250 return revlog.revlog.cmp(self, node, text)
250 return revlog.revlog.cmp(self, node, text)

def _status(ui, repo, kwt, unknown, *pats, **opts):
    '''Bails out if [keyword] configuration is not active.
    Returns status of working directory.'''
    if kwt:
        matcher = cmdutil.match(repo, pats, opts)
        return repo.status(match=matcher, unknown=unknown, clean=True)
    if ui.configitems('keyword'):
        raise util.Abort(_('[keyword] patterns cannot match'))
    raise util.Abort(_('no [keyword] patterns configured'))

def _kwfwrite(ui, repo, expand, *pats, **opts):
    '''Selects files and passes them to kwtemplater.overwrite.'''
    if repo.dirstate.parents()[1] != nullid:
        raise util.Abort(_('outstanding uncommitted merge'))
    kwt = kwtools['templater']
    status = _status(ui, repo, kwt, False, *pats, **opts)
    modified, added, removed, deleted = status[:4]
    if modified or added or removed or deleted:
        raise util.Abort(_('outstanding uncommitted changes'))
    wlock = lock = None
    try:
        wlock = repo.wlock()
        lock = repo.lock()
        kwt.overwrite(None, expand, status[6])
    finally:
        release(lock, wlock)

def demo(ui, repo, *args, **opts):
    '''print [keywordmaps] configuration and an expansion example

    Show current, custom, or default keyword template maps and their
    expansion.

    Extend current configuration by specifying maps as arguments and
    optionally by reading from an additional hgrc file.

    Override current keyword template maps with "default" option.
    '''
    def demostatus(stat):
        ui.status(_('\n\t%s\n') % stat)

    def demoitems(section, items):
        ui.write('[%s]\n' % section)
        for k, v in items:
            ui.write('%s = %s\n' % (k, v))

    msg = 'hg keyword config and expansion example'
    kwstatus = 'current'
    fn = 'demo.txt'
    branchname = 'demobranch'
    tmpdir = tempfile.mkdtemp('', 'kwdemo.')
    ui.note(_('creating temporary repository at %s\n') % tmpdir)
    repo = localrepo.localrepository(ui, tmpdir, True)
    ui.setconfig('keyword', fn, '')
    if args or opts.get('rcfile'):
        kwstatus = 'custom'
    if opts.get('rcfile'):
        ui.readconfig(opts.get('rcfile'))
    if opts.get('default'):
        kwstatus = 'default'
        kwmaps = kwtemplater.templates
        if ui.configitems('keywordmaps'):
            # override maps from optional rcfile
            for k, v in kwmaps.iteritems():
                ui.setconfig('keywordmaps', k, v)
    elif args:
        # simulate hgrc parsing
        rcmaps = ['[keywordmaps]\n'] + [a + '\n' for a in args]
        fp = repo.opener('hgrc', 'w')
        fp.writelines(rcmaps)
        fp.close()
        ui.readconfig(repo.join('hgrc'))
    if not opts.get('default'):
        kwmaps = dict(ui.configitems('keywordmaps')) or kwtemplater.templates
    uisetup(ui)
    reposetup(ui, repo)
    for k, v in ui.configitems('extensions'):
        if k.endswith('keyword'):
            extension = '%s = %s' % (k, v)
            break
    demostatus('config using %s keyword template maps' % kwstatus)
    ui.write('[extensions]\n%s\n' % extension)
    demoitems('keyword', ui.configitems('keyword'))
    demoitems('keywordmaps', kwmaps.iteritems())
    keywords = '$' + '$\n$'.join(kwmaps.keys()) + '$\n'
    repo.wopener(fn, 'w').write(keywords)
    repo.add([fn])
    path = repo.wjoin(fn)
    ui.note(_('\n%s keywords written to %s:\n') % (kwstatus, path))
    ui.note(keywords)
    ui.note('\nhg -R "%s" branch "%s"\n' % (tmpdir, branchname))
    # silence branch command if not verbose
    quiet = ui.quiet
    ui.quiet = not ui.verbose
    commands.branch(ui, repo, branchname)
    ui.quiet = quiet
    for name, cmd in ui.configitems('hooks'):
        if name.split('.', 1)[0].find('commit') > -1:
            repo.ui.setconfig('hooks', name, '')
    ui.note(_('unhooked all commit hooks\n'))
    ui.note('hg -R "%s" ci -m "%s"\n' % (tmpdir, msg))
    repo.commit(text=msg)
    fmt = ui.verbose and ' in %s' % path or ''
    demostatus('%s keywords expanded%s' % (kwstatus, fmt))
    ui.write(repo.wread(fn))
    ui.debug(_('\nremoving temporary repository %s\n') % tmpdir)
    shutil.rmtree(tmpdir, ignore_errors=True)

def expand(ui, repo, *pats, **opts):
    '''expand keywords in working directory

    Run after (re)enabling keyword expansion.

    kwexpand refuses to run if given files contain local changes.
    '''
    # 3rd argument sets expansion to True
    _kwfwrite(ui, repo, True, *pats, **opts)

def files(ui, repo, *pats, **opts):
    '''print files currently configured for keyword expansion

    Crosscheck which files in working directory are potential targets
    for keyword expansion. That is, files matched by [keyword] config
    patterns but not symlinks.
    '''
    kwt = kwtools['templater']
    status = _status(ui, repo, kwt, opts.get('untracked'), *pats, **opts)
    modified, added, removed, deleted, unknown, ignored, clean = status
    files = sorted(modified + added + clean + unknown)
    wctx = repo[None]
    kwfiles = [f for f in files if kwt.iskwfile(f, wctx.flags)]
    cwd = pats and repo.getcwd() or ''
    kwfstats = not opts.get('ignore') and (('K', kwfiles),) or ()
    if opts.get('all') or opts.get('ignore'):
        kwfstats += (('I', [f for f in files if f not in kwfiles]),)
    for char, filenames in kwfstats:
        fmt = (opts.get('all') or ui.verbose) and '%s %%s\n' % char or '%s\n'
        for f in filenames:
            ui.write(fmt % repo.pathto(f, cwd))

def shrink(ui, repo, *pats, **opts):
    '''revert expanded keywords in working directory

    Run before changing/disabling active keywords or if you experience
    problems with "hg import" or "hg merge".

    kwshrink refuses to run if given files contain local changes.
    '''
    # 3rd argument sets expansion to False
    _kwfwrite(ui, repo, False, *pats, **opts)

def uisetup(ui):
    '''Collects [keyword] config in kwtools.
    Monkeypatches dispatch._parse if needed.'''

    for pat, opt in ui.configitems('keyword'):
        if opt != 'ignore':
            kwtools['inc'].append(pat)
        else:
            kwtools['exc'].append(pat)

    if kwtools['inc']:
        def kwdispatch_parse(orig, ui, args):
            '''Monkeypatch dispatch._parse to obtain running hg command.'''
            cmd, func, args, options, cmdoptions = orig(ui, args)
            kwtools['hgcmd'] = cmd
            return cmd, func, args, options, cmdoptions

        extensions.wrapfunction(dispatch, '_parse', kwdispatch_parse)

def reposetup(ui, repo):
    '''Sets up repo as kwrepo for keyword substitution.
    Overrides file method to return kwfilelog instead of filelog
    if file matches user configuration.
    Wraps commit to overwrite configured files with updated
    keyword substitutions.
    Monkeypatches patch and webcommands.'''

    try:
        if (not repo.local() or not kwtools['inc']
            or kwtools['hgcmd'] in nokwcommands.split()
            or '.hg' in util.splitpath(repo.root)
            or repo._url.startswith('bundle:')):
            return
    except AttributeError:
        pass

    kwtools['templater'] = kwt = kwtemplater(ui, repo)

    class kwrepo(repo.__class__):
        def file(self, f):
            if f[0] == '/':
                f = f[1:]
            return kwfilelog(self.sopener, kwt, f)

        def wread(self, filename):
            data = super(kwrepo, self).wread(filename)
            return kwt.wread(filename, data)

        def commit(self, files=None, text='', user=None, date=None,
                   match=None, force=False, force_editor=False,
                   p1=None, p2=None, extra={}, empty_ok=False):
            wlock = lock = None
            _p1 = _p2 = None
            try:
                wlock = self.wlock()
                lock = self.lock()
                # store and postpone commit hooks
                commithooks = {}
                for name, cmd in ui.configitems('hooks'):
                    if name.split('.', 1)[0] == 'commit':
                        commithooks[name] = cmd
                        ui.setconfig('hooks', name, None)
                if commithooks:
                    # store parents for commit hook environment
                    if p1 is None:
                        _p1, _p2 = repo.dirstate.parents()
                    else:
                        _p1, _p2 = p1, p2 or nullid
                    _p1 = hex(_p1)
                    if _p2 == nullid:
                        _p2 = ''
                    else:
                        _p2 = hex(_p2)

                n = super(kwrepo, self).commit(files, text, user, date, match,
                                               force, force_editor, p1, p2,
                                               extra, empty_ok)

                # restore commit hooks
                for name, cmd in commithooks.iteritems():
                    ui.setconfig('hooks', name, cmd)
                if n is not None:
                    kwt.overwrite(n, True, None)
                repo.hook('commit', node=n, parent1=_p1, parent2=_p2)
                return n
            finally:
                release(lock, wlock)

    # monkeypatches
    def kwpatchfile_init(orig, self, ui, fname, opener, missing=False):
        '''Monkeypatch/wrap patch.patchfile.__init__ to avoid
        rejects or conflicts due to expanded keywords in working dir.'''
        orig(self, ui, fname, opener, missing)
        # shrink keywords read from working dir
        self.lines = kwt.shrinklines(self.fname, self.lines)

    def kw_diff(orig, repo, node1=None, node2=None, match=None, changes=None,
                opts=None):
        '''Monkeypatch patch.diff to avoid expansion except when
        comparing against working dir.'''
        if node2 is not None:
            kwt.matcher = util.never
        elif node1 is not None and node1 != repo['.'].node():
            kwt.restrict = True
        return orig(repo, node1, node2, match, changes, opts)

    def kwweb_skip(orig, web, req, tmpl):
        '''Wraps webcommands.x turning off keyword expansion.'''
        kwt.matcher = util.never
        return orig(web, req, tmpl)

    repo.__class__ = kwrepo

    extensions.wrapfunction(patch.patchfile, '__init__', kwpatchfile_init)
    extensions.wrapfunction(patch, 'diff', kw_diff)
    for c in 'annotate changeset rev filediff diff'.split():
        extensions.wrapfunction(webcommands, c, kwweb_skip)

cmdtable = {
    'kwdemo':
        (demo,
         [('d', 'default', None, _('show default keyword template maps')),
          ('f', 'rcfile', [], _('read maps from rcfile'))],
         _('hg kwdemo [-d] [-f RCFILE] [TEMPLATEMAP]...')),
    'kwexpand': (expand, commands.walkopts,
                 _('hg kwexpand [OPTION]... [FILE]...')),
    'kwfiles':
        (files,
         [('a', 'all', None, _('show keyword status flags of all files')),
          ('i', 'ignore', None, _('show files excluded from expansion')),
          ('u', 'untracked', None, _('additionally show untracked files')),
         ] + commands.walkopts,
         _('hg kwfiles [OPTION]... [FILE]...')),
    'kwshrink': (shrink, commands.walkopts,
                 _('hg kwshrink [OPTION]... [FILE]...')),
}
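The one functional change in this file is in `files()` above: Mercurial's `util.sort` helper is replaced by Python's `sorted` built-in, per the commit message. As a standalone sketch (plain Python, with hypothetical file names chosen only for illustration), the call site behaves like this: `sorted` returns a new sorted list and leaves its inputs untouched, which is exactly what the helper provided.

```python
# Sketch of the changed line in files(): sorting the concatenation of
# the status lists. sorted() returns a fresh list; the inputs are not
# mutated. The file names here are illustrative, not from the source.
modified = ['b.txt', 'a.txt']
added = ['d.txt']
clean = ['c.txt']
unknown = []

files = sorted(modified + added + clean + unknown)
print(files)  # ['a.txt', 'b.txt', 'c.txt', 'd.txt']
```

Since `sorted` has been a built-in since Python 2.4, the extra helper was redundant, which is presumably why this changeset removes its use.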
@@ -1,2610 +1,2608 @@
1 # mq.py - patch queues for mercurial
1 # mq.py - patch queues for mercurial
2 #
2 #
3 # Copyright 2005, 2006 Chris Mason <mason@suse.com>
3 # Copyright 2005, 2006 Chris Mason <mason@suse.com>
4 #
4 #
5 # This software may be used and distributed according to the terms
5 # This software may be used and distributed according to the terms
6 # of the GNU General Public License, incorporated herein by reference.
6 # of the GNU General Public License, incorporated herein by reference.
7
7
8 '''patch management and development
8 '''patch management and development
9
9
10 This extension lets you work with a stack of patches in a Mercurial
10 This extension lets you work with a stack of patches in a Mercurial
11 repository. It manages two stacks of patches - all known patches, and
11 repository. It manages two stacks of patches - all known patches, and
12 applied patches (subset of known patches).
12 applied patches (subset of known patches).
13
13
14 Known patches are represented as patch files in the .hg/patches
14 Known patches are represented as patch files in the .hg/patches
15 directory. Applied patches are both patch files and changesets.
15 directory. Applied patches are both patch files and changesets.
16
16
17 Common tasks (use "hg help command" for more details):
17 Common tasks (use "hg help command" for more details):
18
18
19 prepare repository to work with patches qinit
19 prepare repository to work with patches qinit
20 create new patch qnew
20 create new patch qnew
21 import existing patch qimport
21 import existing patch qimport
22
22
23 print patch series qseries
23 print patch series qseries
24 print applied patches qapplied
24 print applied patches qapplied
25 print name of top applied patch qtop
25 print name of top applied patch qtop
26
26
27 add known patch to applied stack qpush
27 add known patch to applied stack qpush
28 remove patch from applied stack qpop
28 remove patch from applied stack qpop
29 refresh contents of top applied patch qrefresh
29 refresh contents of top applied patch qrefresh
30 '''
30 '''
31
31
32 from mercurial.i18n import _
32 from mercurial.i18n import _
33 from mercurial.node import bin, hex, short, nullid, nullrev
33 from mercurial.node import bin, hex, short, nullid, nullrev
34 from mercurial.lock import release
34 from mercurial.lock import release
35 from mercurial import commands, cmdutil, hg, patch, util
35 from mercurial import commands, cmdutil, hg, patch, util
36 from mercurial import repair, extensions, url, error
36 from mercurial import repair, extensions, url, error
37 import os, sys, re, errno
37 import os, sys, re, errno
38
38
39 commands.norepo += " qclone"
39 commands.norepo += " qclone"
40
40
41 # Patch names looks like unix-file names.
41 # Patch names looks like unix-file names.
42 # They must be joinable with queue directory and result in the patch path.
42 # They must be joinable with queue directory and result in the patch path.
43 normname = util.normpath
43 normname = util.normpath
44
44
45 class statusentry:
45 class statusentry:
46 def __init__(self, rev, name=None):
46 def __init__(self, rev, name=None):
47 if not name:
47 if not name:
48 fields = rev.split(':', 1)
48 fields = rev.split(':', 1)
49 if len(fields) == 2:
49 if len(fields) == 2:
50 self.rev, self.name = fields
50 self.rev, self.name = fields
51 else:
51 else:
52 self.rev, self.name = None, None
52 self.rev, self.name = None, None
53 else:
53 else:
54 self.rev, self.name = rev, name
54 self.rev, self.name = rev, name
55
55
56 def __str__(self):
56 def __str__(self):
57 return self.rev + ':' + self.name
57 return self.rev + ':' + self.name
58
58
59 class patchheader(object):
59 class patchheader(object):
60 def __init__(self, message, comments, user, date, haspatch):
60 def __init__(self, message, comments, user, date, haspatch):
61 self.message = message
61 self.message = message
62 self.comments = comments
62 self.comments = comments
63 self.user = user
63 self.user = user
64 self.date = date
64 self.date = date
65 self.haspatch = haspatch
65 self.haspatch = haspatch
66
66
67 def setuser(self, user):
67 def setuser(self, user):
68 if not self.setheader(['From: ', '# User '], user):
68 if not self.setheader(['From: ', '# User '], user):
69 try:
69 try:
70 patchheaderat = self.comments.index('# HG changeset patch')
70 patchheaderat = self.comments.index('# HG changeset patch')
71 self.comments.insert(patchheaderat + 1,'# User ' + user)
71 self.comments.insert(patchheaderat + 1,'# User ' + user)
72 except ValueError:
72 except ValueError:
73 self.comments = ['From: ' + user, ''] + self.comments
73 self.comments = ['From: ' + user, ''] + self.comments
74 self.user = user
74 self.user = user
75
75
76 def setdate(self, date):
76 def setdate(self, date):
77 if self.setheader(['# Date '], date):
77 if self.setheader(['# Date '], date):
78 self.date = date
78 self.date = date
79
79
80 def setmessage(self, message):
80 def setmessage(self, message):
81 if self.comments:
81 if self.comments:
82 self._delmsg()
82 self._delmsg()
83 self.message = [message]
83 self.message = [message]
84 self.comments += self.message
84 self.comments += self.message
85
85
86 def setheader(self, prefixes, new):
86 def setheader(self, prefixes, new):
87 '''Update all references to a field in the patch header.
87 '''Update all references to a field in the patch header.
88 If none found, add it email style.'''
88 If none found, add it email style.'''
89 res = False
89 res = False
90 for prefix in prefixes:
90 for prefix in prefixes:
91 for i in xrange(len(self.comments)):
91 for i in xrange(len(self.comments)):
92 if self.comments[i].startswith(prefix):
92 if self.comments[i].startswith(prefix):
93 self.comments[i] = prefix + new
93 self.comments[i] = prefix + new
94 res = True
94 res = True
95 break
95 break
96 return res
96 return res
97
97
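setheader above rewrites the first comment line starting with any of the given prefixes and reports whether it found one; setuser uses that result to fall back to inserting a '# User' line or an email-style 'From:' header. A minimal standalone sketch of the prefix-rewrite step (function name and sample headers are illustrative, not part of mq):

```python
def setheader(comments, prefixes, new):
    """Standalone copy of patchheader.setheader's loop (illustrative)."""
    res = False
    for prefix in prefixes:
        for i, line in enumerate(comments):
            if line.startswith(prefix):
                # Replace only the value, keeping the recognized prefix.
                comments[i] = prefix + new
                res = True
                break
    return res

comments = ['# HG changeset patch', '# User alice', '# Date 0 0']
assert setheader(comments, ['From: ', '# User '], 'bob')
assert comments[1] == '# User bob'
```

Note that the inner break only stops scanning for the current prefix; each prefix in the list gets its first matching line rewritten.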
98 def __str__(self):
98 def __str__(self):
99 if not self.comments:
99 if not self.comments:
100 return ''
100 return ''
101 return '\n'.join(self.comments) + '\n\n'
101 return '\n'.join(self.comments) + '\n\n'
102
102
103 def _delmsg(self):
103 def _delmsg(self):
104 '''Remove existing message, keeping the rest of the comments fields.
104 '''Remove existing message, keeping the rest of the comments fields.
105 If comments contains 'subject: ', message will prepend
105 If comments contains 'subject: ', message will prepend
106 the field and a blank line.'''
106 the field and a blank line.'''
107 if self.message:
107 if self.message:
108 subj = 'subject: ' + self.message[0].lower()
108 subj = 'subject: ' + self.message[0].lower()
109 for i in xrange(len(self.comments)):
109 for i in xrange(len(self.comments)):
110 if subj == self.comments[i].lower():
110 if subj == self.comments[i].lower():
111 del self.comments[i]
111 del self.comments[i]
112 self.message = self.message[2:]
112 self.message = self.message[2:]
113 break
113 break
114 ci = 0
114 ci = 0
115 for mi in xrange(len(self.message)):
115 for mi in xrange(len(self.message)):
116 while self.message[mi] != self.comments[ci]:
116 while self.message[mi] != self.comments[ci]:
117 ci += 1
117 ci += 1
118 del self.comments[ci]
118 del self.comments[ci]
119
119
120 class queue:
120 class queue:
121 def __init__(self, ui, path, patchdir=None):
121 def __init__(self, ui, path, patchdir=None):
122 self.basepath = path
122 self.basepath = path
123 self.path = patchdir or os.path.join(path, "patches")
123 self.path = patchdir or os.path.join(path, "patches")
124 self.opener = util.opener(self.path)
124 self.opener = util.opener(self.path)
125 self.ui = ui
125 self.ui = ui
126 self.applied = []
126 self.applied = []
127 self.full_series = []
127 self.full_series = []
128 self.applied_dirty = 0
128 self.applied_dirty = 0
129 self.series_dirty = 0
129 self.series_dirty = 0
130 self.series_path = "series"
130 self.series_path = "series"
131 self.status_path = "status"
131 self.status_path = "status"
132 self.guards_path = "guards"
132 self.guards_path = "guards"
133 self.active_guards = None
133 self.active_guards = None
134 self.guards_dirty = False
134 self.guards_dirty = False
135 self._diffopts = None
135 self._diffopts = None
136
136
137 if os.path.exists(self.join(self.series_path)):
137 if os.path.exists(self.join(self.series_path)):
138 self.full_series = self.opener(self.series_path).read().splitlines()
138 self.full_series = self.opener(self.series_path).read().splitlines()
139 self.parse_series()
139 self.parse_series()
140
140
141 if os.path.exists(self.join(self.status_path)):
141 if os.path.exists(self.join(self.status_path)):
142 lines = self.opener(self.status_path).read().splitlines()
142 lines = self.opener(self.status_path).read().splitlines()
143 self.applied = [statusentry(l) for l in lines]
143 self.applied = [statusentry(l) for l in lines]
144
144
145 def diffopts(self):
145 def diffopts(self):
146 if self._diffopts is None:
146 if self._diffopts is None:
147 self._diffopts = patch.diffopts(self.ui)
147 self._diffopts = patch.diffopts(self.ui)
148 return self._diffopts
148 return self._diffopts
149
149
150 def join(self, *p):
150 def join(self, *p):
151 return os.path.join(self.path, *p)
151 return os.path.join(self.path, *p)
152
152
153 def find_series(self, patch):
153 def find_series(self, patch):
154 pre = re.compile("(\s*)([^#]+)")
154 pre = re.compile("(\s*)([^#]+)")
155 index = 0
155 index = 0
156 for l in self.full_series:
156 for l in self.full_series:
157 m = pre.match(l)
157 m = pre.match(l)
158 if m:
158 if m:
159 s = m.group(2)
159 s = m.group(2)
160 s = s.rstrip()
160 s = s.rstrip()
161 if s == patch:
161 if s == patch:
162 return index
162 return index
163 index += 1
163 index += 1
164 return None
164 return None
165
165
166 guard_re = re.compile(r'\s?#([-+][^-+# \t\r\n\f][^# \t\r\n\f]*)')
166 guard_re = re.compile(r'\s?#([-+][^-+# \t\r\n\f][^# \t\r\n\f]*)')
167
167
168 def parse_series(self):
168 def parse_series(self):
169 self.series = []
169 self.series = []
170 self.series_guards = []
170 self.series_guards = []
171 for l in self.full_series:
171 for l in self.full_series:
172 h = l.find('#')
172 h = l.find('#')
173 if h == -1:
173 if h == -1:
174 patch = l
174 patch = l
175 comment = ''
175 comment = ''
176 elif h == 0:
176 elif h == 0:
177 continue
177 continue
178 else:
178 else:
179 patch = l[:h]
179 patch = l[:h]
180 comment = l[h:]
180 comment = l[h:]
181 patch = patch.strip()
181 patch = patch.strip()
182 if patch:
182 if patch:
183 if patch in self.series:
183 if patch in self.series:
184 raise util.Abort(_('%s appears more than once in %s') %
184 raise util.Abort(_('%s appears more than once in %s') %
185 (patch, self.join(self.series_path)))
185 (patch, self.join(self.series_path)))
186 self.series.append(patch)
186 self.series.append(patch)
187 self.series_guards.append(self.guard_re.findall(comment))
187 self.series_guards.append(self.guard_re.findall(comment))
188
188
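parse_series splits each series-file line into a patch name and a trailing '#' comment, and guard_re then extracts '#+guard' / '#-guard' tokens from that comment. A hedged sketch of that parsing on a made-up series line (the patch and guard names are invented for illustration):

```python
import re

# Same pattern as queue.guard_re above.
guard_re = re.compile(r'\s?#([-+][^-+# \t\r\n\f][^# \t\r\n\f]*)')

line = 'fix-encoding.patch #+unicode #-legacy'
h = line.find('#')
# Mirrors the h > 0 branch of parse_series: name before '#', comment after.
patch, comment = line[:h].strip(), line[h:]
assert patch == 'fix-encoding.patch'
assert guard_re.findall(comment) == ['+unicode', '-legacy']
```

A '#' followed by a space or a bare word does not match the pattern, which is what lets ordinary comments coexist with guards on a series line.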
189 def check_guard(self, guard):
189 def check_guard(self, guard):
190 if not guard:
190 if not guard:
191 return _('guard cannot be an empty string')
191 return _('guard cannot be an empty string')
192 bad_chars = '# \t\r\n\f'
192 bad_chars = '# \t\r\n\f'
193 first = guard[0]
193 first = guard[0]
194 for c in '-+':
194 for c in '-+':
195 if first == c:
195 if first == c:
196 return (_('guard %r starts with invalid character: %r') %
196 return (_('guard %r starts with invalid character: %r') %
197 (guard, c))
197 (guard, c))
198 for c in bad_chars:
198 for c in bad_chars:
199 if c in guard:
199 if c in guard:
200 return _('invalid character in guard %r: %r') % (guard, c)
200 return _('invalid character in guard %r: %r') % (guard, c)
201
201
202 def set_active(self, guards):
202 def set_active(self, guards):
203 for guard in guards:
203 for guard in guards:
204 bad = self.check_guard(guard)
204 bad = self.check_guard(guard)
205 if bad:
205 if bad:
206 raise util.Abort(bad)
206 raise util.Abort(bad)
207 guards = util.sort(set(guards))
207 guards = sorted(set(guards))
208 self.ui.debug(_('active guards: %s\n') % ' '.join(guards))
208 self.ui.debug(_('active guards: %s\n') % ' '.join(guards))
209 self.active_guards = guards
209 self.active_guards = guards
210 self.guards_dirty = True
210 self.guards_dirty = True
211
211
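The hunk above is this changeset's theme: the mq-local util.sort helper is replaced by Python's sorted built-in (available since Python 2.4). A small sketch of the equivalence, assuming util.sort copied its input into a list, sorted it, and returned it (a reconstruction, not the exact removed code):

```python
def util_sort(iterable):
    """Rough stand-in for the removed util.sort helper (an assumption:
    the real helper sorted a fresh list and returned it)."""
    lst = list(iterable)
    lst.sort()
    return lst

guards = {'+stable', '-experimental', '+stable'}
# sorted() does the copy-and-sort in one builtin call and accepts any
# iterable, so a set like active guards needs no list() conversion first.
assert util_sort(guards) == sorted(guards)
assert sorted(set(guards)) == ['+stable', '-experimental']
```

Using the builtin also makes it explicit that the caller's collection is never mutated, unlike list.sort().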
212 def active(self):
212 def active(self):
213 if self.active_guards is None:
213 if self.active_guards is None:
214 self.active_guards = []
214 self.active_guards = []
215 try:
215 try:
216 guards = self.opener(self.guards_path).read().split()
216 guards = self.opener(self.guards_path).read().split()
217 except IOError, err:
217 except IOError, err:
218 if err.errno != errno.ENOENT: raise
218 if err.errno != errno.ENOENT: raise
219 guards = []
219 guards = []
220 for i, guard in enumerate(guards):
220 for i, guard in enumerate(guards):
221 bad = self.check_guard(guard)
221 bad = self.check_guard(guard)
222 if bad:
222 if bad:
223 self.ui.warn('%s:%d: %s\n' %
223 self.ui.warn('%s:%d: %s\n' %
224 (self.join(self.guards_path), i + 1, bad))
224 (self.join(self.guards_path), i + 1, bad))
225 else:
225 else:
226 self.active_guards.append(guard)
226 self.active_guards.append(guard)
227 return self.active_guards
227 return self.active_guards
228
228
229 def set_guards(self, idx, guards):
229 def set_guards(self, idx, guards):
230 for g in guards:
230 for g in guards:
231 if len(g) < 2:
231 if len(g) < 2:
232 raise util.Abort(_('guard %r too short') % g)
232 raise util.Abort(_('guard %r too short') % g)
233 if g[0] not in '-+':
233 if g[0] not in '-+':
234 raise util.Abort(_('guard %r starts with invalid char') % g)
234 raise util.Abort(_('guard %r starts with invalid char') % g)
235 bad = self.check_guard(g[1:])
235 bad = self.check_guard(g[1:])
236 if bad:
236 if bad:
237 raise util.Abort(bad)
237 raise util.Abort(bad)
238 drop = self.guard_re.sub('', self.full_series[idx])
238 drop = self.guard_re.sub('', self.full_series[idx])
239 self.full_series[idx] = drop + ''.join([' #' + g for g in guards])
239 self.full_series[idx] = drop + ''.join([' #' + g for g in guards])
240 self.parse_series()
240 self.parse_series()
241 self.series_dirty = True
241 self.series_dirty = True
242
242
243 def pushable(self, idx):
243 def pushable(self, idx):
244 if isinstance(idx, str):
244 if isinstance(idx, str):
245 idx = self.series.index(idx)
245 idx = self.series.index(idx)
246 patchguards = self.series_guards[idx]
246 patchguards = self.series_guards[idx]
247 if not patchguards:
247 if not patchguards:
248 return True, None
248 return True, None
249 guards = self.active()
249 guards = self.active()
250 exactneg = [g for g in patchguards if g[0] == '-' and g[1:] in guards]
250 exactneg = [g for g in patchguards if g[0] == '-' and g[1:] in guards]
251 if exactneg:
251 if exactneg:
252 return False, exactneg[0]
252 return False, exactneg[0]
253 pos = [g for g in patchguards if g[0] == '+']
253 pos = [g for g in patchguards if g[0] == '+']
254 exactpos = [g for g in pos if g[1:] in guards]
254 exactpos = [g for g in pos if g[1:] in guards]
255 if pos:
255 if pos:
256 if exactpos:
256 if exactpos:
257 return True, exactpos[0]
257 return True, exactpos[0]
258 return False, pos
258 return False, pos
259 return True, ''
259 return True, ''
260
260
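pushable encodes the guard rules: any matching negative guard blocks the patch; if positive guards exist, at least one must be active; otherwise the patch is pushable. A simplified copy of that decision logic outside the class (the function takes the guard lists as arguments instead of reading them from self; guard names are illustrative):

```python
def pushable(patchguards, active):
    """Standalone sketch of queue.pushable's guard logic."""
    if not patchguards:
        return True, None            # no guards at all
    exactneg = [g for g in patchguards if g[0] == '-' and g[1:] in active]
    if exactneg:
        return False, exactneg[0]    # a negative guard matched
    pos = [g for g in patchguards if g[0] == '+']
    exactpos = [g for g in pos if g[1:] in active]
    if pos:
        if exactpos:
            return True, exactpos[0]  # a positive guard matched
        return False, pos             # positive guards, none active
    return True, ''                   # only non-matching negatives

assert pushable([], ['stable']) == (True, None)
assert pushable(['-stable'], ['stable']) == (False, '-stable')
assert pushable(['+stable'], ['stable']) == (True, '+stable')
assert pushable(['+exp'], ['stable']) == (False, ['+exp'])
```

The second element of the tuple is the "why", which explain_pushable turns into the allowing/skipping messages shown just below.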
261 def explain_pushable(self, idx, all_patches=False):
261 def explain_pushable(self, idx, all_patches=False):
262 write = all_patches and self.ui.write or self.ui.warn
262 write = all_patches and self.ui.write or self.ui.warn
263 if all_patches or self.ui.verbose:
263 if all_patches or self.ui.verbose:
264 if isinstance(idx, str):
264 if isinstance(idx, str):
265 idx = self.series.index(idx)
265 idx = self.series.index(idx)
266 pushable, why = self.pushable(idx)
266 pushable, why = self.pushable(idx)
267 if all_patches and pushable:
267 if all_patches and pushable:
268 if why is None:
268 if why is None:
269 write(_('allowing %s - no guards in effect\n') %
269 write(_('allowing %s - no guards in effect\n') %
270 self.series[idx])
270 self.series[idx])
271 else:
271 else:
272 if not why:
272 if not why:
273 write(_('allowing %s - no matching negative guards\n') %
273 write(_('allowing %s - no matching negative guards\n') %
274 self.series[idx])
274 self.series[idx])
275 else:
275 else:
276 write(_('allowing %s - guarded by %r\n') %
276 write(_('allowing %s - guarded by %r\n') %
277 (self.series[idx], why))
277 (self.series[idx], why))
278 if not pushable:
278 if not pushable:
279 if why:
279 if why:
280 write(_('skipping %s - guarded by %r\n') %
280 write(_('skipping %s - guarded by %r\n') %
281 (self.series[idx], why))
281 (self.series[idx], why))
282 else:
282 else:
283 write(_('skipping %s - no matching guards\n') %
283 write(_('skipping %s - no matching guards\n') %
284 self.series[idx])
284 self.series[idx])
285
285
286 def save_dirty(self):
286 def save_dirty(self):
287 def write_list(items, path):
287 def write_list(items, path):
288 fp = self.opener(path, 'w')
288 fp = self.opener(path, 'w')
289 for i in items:
289 for i in items:
290 fp.write("%s\n" % i)
290 fp.write("%s\n" % i)
291 fp.close()
291 fp.close()
292 if self.applied_dirty: write_list(map(str, self.applied), self.status_path)
292 if self.applied_dirty: write_list(map(str, self.applied), self.status_path)
293 if self.series_dirty: write_list(self.full_series, self.series_path)
293 if self.series_dirty: write_list(self.full_series, self.series_path)
294 if self.guards_dirty: write_list(self.active_guards, self.guards_path)
294 if self.guards_dirty: write_list(self.active_guards, self.guards_path)
295
295
296 def readheaders(self, patch):
296 def readheaders(self, patch):
297 def eatdiff(lines):
297 def eatdiff(lines):
298 while lines:
298 while lines:
299 l = lines[-1]
299 l = lines[-1]
300 if (l.startswith("diff -") or
300 if (l.startswith("diff -") or
301 l.startswith("Index:") or
301 l.startswith("Index:") or
302 l.startswith("===========")):
302 l.startswith("===========")):
303 del lines[-1]
303 del lines[-1]
304 else:
304 else:
305 break
305 break
306 def eatempty(lines):
306 def eatempty(lines):
307 while lines:
307 while lines:
308 l = lines[-1]
308 l = lines[-1]
309 if re.match('\s*$', l):
309 if re.match('\s*$', l):
310 del lines[-1]
310 del lines[-1]
311 else:
311 else:
312 break
312 break
313
313
314 pf = self.join(patch)
314 pf = self.join(patch)
315 message = []
315 message = []
316 comments = []
316 comments = []
317 user = None
317 user = None
318 date = None
318 date = None
319 format = None
319 format = None
320 subject = None
320 subject = None
321 diffstart = 0
321 diffstart = 0
322
322
323 for line in file(pf):
323 for line in file(pf):
324 line = line.rstrip()
324 line = line.rstrip()
325 if line.startswith('diff --git'):
325 if line.startswith('diff --git'):
326 diffstart = 2
326 diffstart = 2
327 break
327 break
328 if diffstart:
328 if diffstart:
329 if line.startswith('+++ '):
329 if line.startswith('+++ '):
330 diffstart = 2
330 diffstart = 2
331 break
331 break
332 if line.startswith("--- "):
332 if line.startswith("--- "):
333 diffstart = 1
333 diffstart = 1
334 continue
334 continue
335 elif format == "hgpatch":
335 elif format == "hgpatch":
336 # parse values when importing the result of an hg export
336 # parse values when importing the result of an hg export
337 if line.startswith("# User "):
337 if line.startswith("# User "):
338 user = line[7:]
338 user = line[7:]
339 elif line.startswith("# Date "):
339 elif line.startswith("# Date "):
340 date = line[7:]
340 date = line[7:]
341 elif not line.startswith("# ") and line:
341 elif not line.startswith("# ") and line:
342 message.append(line)
342 message.append(line)
343 format = None
343 format = None
344 elif line == '# HG changeset patch':
344 elif line == '# HG changeset patch':
345 format = "hgpatch"
345 format = "hgpatch"
346 elif (format != "tagdone" and (line.startswith("Subject: ") or
346 elif (format != "tagdone" and (line.startswith("Subject: ") or
347 line.startswith("subject: "))):
347 line.startswith("subject: "))):
348 subject = line[9:]
348 subject = line[9:]
349 format = "tag"
349 format = "tag"
350 elif (format != "tagdone" and (line.startswith("From: ") or
350 elif (format != "tagdone" and (line.startswith("From: ") or
351 line.startswith("from: "))):
351 line.startswith("from: "))):
352 user = line[6:]
352 user = line[6:]
353 format = "tag"
353 format = "tag"
354 elif format == "tag" and line == "":
354 elif format == "tag" and line == "":
355 # when looking for tags (subject: from: etc) they
355 # when looking for tags (subject: from: etc) they
356 # end once you find a blank line in the source
356 # end once you find a blank line in the source
357 format = "tagdone"
357 format = "tagdone"
358 elif message or line:
358 elif message or line:
359 message.append(line)
359 message.append(line)
360 comments.append(line)
360 comments.append(line)
361
361
362 eatdiff(message)
362 eatdiff(message)
363 eatdiff(comments)
363 eatdiff(comments)
364 eatempty(message)
364 eatempty(message)
365 eatempty(comments)
365 eatempty(comments)
366
366
367 # make sure message isn't empty
367 # make sure message isn't empty
368 if format and format.startswith("tag") and subject:
368 if format and format.startswith("tag") and subject:
369 message.insert(0, "")
369 message.insert(0, "")
370 message.insert(0, subject)
370 message.insert(0, subject)
371 return patchheader(message, comments, user, date, diffstart > 1)
371 return patchheader(message, comments, user, date, diffstart > 1)
372
372
373 def removeundo(self, repo):
373 def removeundo(self, repo):
374 undo = repo.sjoin('undo')
374 undo = repo.sjoin('undo')
375 if not os.path.exists(undo):
375 if not os.path.exists(undo):
376 return
376 return
377 try:
377 try:
378 os.unlink(undo)
378 os.unlink(undo)
379 except OSError, inst:
379 except OSError, inst:
380 self.ui.warn(_('error removing undo: %s\n') % str(inst))
380 self.ui.warn(_('error removing undo: %s\n') % str(inst))
381
381
382 def printdiff(self, repo, node1, node2=None, files=None,
382 def printdiff(self, repo, node1, node2=None, files=None,
383 fp=None, changes=None, opts={}):
383 fp=None, changes=None, opts={}):
384 m = cmdutil.match(repo, files, opts)
384 m = cmdutil.match(repo, files, opts)
385 chunks = patch.diff(repo, node1, node2, m, changes, self.diffopts())
385 chunks = patch.diff(repo, node1, node2, m, changes, self.diffopts())
386 write = fp is None and repo.ui.write or fp.write
386 write = fp is None and repo.ui.write or fp.write
387 for chunk in chunks:
387 for chunk in chunks:
388 write(chunk)
388 write(chunk)
389
389
390 def mergeone(self, repo, mergeq, head, patch, rev):
390 def mergeone(self, repo, mergeq, head, patch, rev):
391 # first try just applying the patch
391 # first try just applying the patch
392 (err, n) = self.apply(repo, [ patch ], update_status=False,
392 (err, n) = self.apply(repo, [ patch ], update_status=False,
393 strict=True, merge=rev)
393 strict=True, merge=rev)
394
394
395 if err == 0:
395 if err == 0:
396 return (err, n)
396 return (err, n)
397
397
398 if n is None:
398 if n is None:
399 raise util.Abort(_("apply failed for patch %s") % patch)
399 raise util.Abort(_("apply failed for patch %s") % patch)
400
400
401 self.ui.warn(_("patch didn't work out, merging %s\n") % patch)
401 self.ui.warn(_("patch didn't work out, merging %s\n") % patch)
402
402
403 # apply failed, strip away that rev and merge.
403 # apply failed, strip away that rev and merge.
404 hg.clean(repo, head)
404 hg.clean(repo, head)
405 self.strip(repo, n, update=False, backup='strip')
405 self.strip(repo, n, update=False, backup='strip')
406
406
407 ctx = repo[rev]
407 ctx = repo[rev]
408 ret = hg.merge(repo, rev)
408 ret = hg.merge(repo, rev)
409 if ret:
409 if ret:
410 raise util.Abort(_("update returned %d") % ret)
410 raise util.Abort(_("update returned %d") % ret)
411 n = repo.commit(None, ctx.description(), ctx.user(), force=1)
411 n = repo.commit(None, ctx.description(), ctx.user(), force=1)
412 if n == None:
412 if n == None:
413 raise util.Abort(_("repo commit failed"))
413 raise util.Abort(_("repo commit failed"))
414 try:
414 try:
415 ph = mergeq.readheaders(patch)
415 ph = mergeq.readheaders(patch)
416 except:
416 except:
417 raise util.Abort(_("unable to read %s") % patch)
417 raise util.Abort(_("unable to read %s") % patch)
418
418
419 patchf = self.opener(patch, "w")
419 patchf = self.opener(patch, "w")
420 comments = str(ph)
420 comments = str(ph)
421 if comments:
421 if comments:
422 patchf.write(comments)
422 patchf.write(comments)
423 self.printdiff(repo, head, n, fp=patchf)
423 self.printdiff(repo, head, n, fp=patchf)
424 patchf.close()
424 patchf.close()
425 self.removeundo(repo)
425 self.removeundo(repo)
426 return (0, n)
426 return (0, n)
427
427
428 def qparents(self, repo, rev=None):
428 def qparents(self, repo, rev=None):
429 if rev is None:
429 if rev is None:
430 (p1, p2) = repo.dirstate.parents()
430 (p1, p2) = repo.dirstate.parents()
431 if p2 == nullid:
431 if p2 == nullid:
432 return p1
432 return p1
433 if len(self.applied) == 0:
433 if len(self.applied) == 0:
434 return None
434 return None
435 return bin(self.applied[-1].rev)
435 return bin(self.applied[-1].rev)
436 pp = repo.changelog.parents(rev)
436 pp = repo.changelog.parents(rev)
437 if pp[1] != nullid:
437 if pp[1] != nullid:
438 arevs = [ x.rev for x in self.applied ]
438 arevs = [ x.rev for x in self.applied ]
439 p0 = hex(pp[0])
439 p0 = hex(pp[0])
440 p1 = hex(pp[1])
440 p1 = hex(pp[1])
441 if p0 in arevs:
441 if p0 in arevs:
442 return pp[0]
442 return pp[0]
443 if p1 in arevs:
443 if p1 in arevs:
444 return pp[1]
444 return pp[1]
445 return pp[0]
445 return pp[0]
446
446
447 def mergepatch(self, repo, mergeq, series):
447 def mergepatch(self, repo, mergeq, series):
448 if len(self.applied) == 0:
448 if len(self.applied) == 0:
449 # each of the patches merged in will have two parents. This
449 # each of the patches merged in will have two parents. This
450 # can confuse the qrefresh, qdiff, and strip code because it
450 # can confuse the qrefresh, qdiff, and strip code because it
451 # needs to know which parent is actually in the patch queue.
451 # needs to know which parent is actually in the patch queue.
452 # so, we insert a merge marker with only one parent. This way
452 # so, we insert a merge marker with only one parent. This way
453 # the first patch in the queue is never a merge patch
453 # the first patch in the queue is never a merge patch
454 #
454 #
455 pname = ".hg.patches.merge.marker"
455 pname = ".hg.patches.merge.marker"
456 n = repo.commit(None, '[mq]: merge marker', user=None, force=1)
456 n = repo.commit(None, '[mq]: merge marker', user=None, force=1)
457 self.removeundo(repo)
457 self.removeundo(repo)
458 self.applied.append(statusentry(hex(n), pname))
458 self.applied.append(statusentry(hex(n), pname))
459 self.applied_dirty = 1
459 self.applied_dirty = 1
460
460
461 head = self.qparents(repo)
461 head = self.qparents(repo)
462
462
463 for patch in series:
463 for patch in series:
464 patch = mergeq.lookup(patch, strict=True)
464 patch = mergeq.lookup(patch, strict=True)
465 if not patch:
465 if not patch:
466 self.ui.warn(_("patch %s does not exist\n") % patch)
466 self.ui.warn(_("patch %s does not exist\n") % patch)
467 return (1, None)
467 return (1, None)
468 pushable, reason = self.pushable(patch)
468 pushable, reason = self.pushable(patch)
469 if not pushable:
469 if not pushable:
470 self.explain_pushable(patch, all_patches=True)
470 self.explain_pushable(patch, all_patches=True)
471 continue
471 continue
472 info = mergeq.isapplied(patch)
472 info = mergeq.isapplied(patch)
473 if not info:
473 if not info:
474 self.ui.warn(_("patch %s is not applied\n") % patch)
474 self.ui.warn(_("patch %s is not applied\n") % patch)
475 return (1, None)
475 return (1, None)
476 rev = bin(info[1])
476 rev = bin(info[1])
477 (err, head) = self.mergeone(repo, mergeq, head, patch, rev)
477 (err, head) = self.mergeone(repo, mergeq, head, patch, rev)
478 if head:
478 if head:
479 self.applied.append(statusentry(hex(head), patch))
479 self.applied.append(statusentry(hex(head), patch))
480 self.applied_dirty = 1
480 self.applied_dirty = 1
481 if err:
481 if err:
482 return (err, head)
482 return (err, head)
483 self.save_dirty()
483 self.save_dirty()
484 return (0, head)
484 return (0, head)
485
485
486 def patch(self, repo, patchfile):
486 def patch(self, repo, patchfile):
487 '''Apply patchfile to the working directory.
487 '''Apply patchfile to the working directory.
488 patchfile: file name of patch'''
488 patchfile: file name of patch'''
489 files = {}
489 files = {}
490 try:
490 try:
491 fuzz = patch.patch(patchfile, self.ui, strip=1, cwd=repo.root,
491 fuzz = patch.patch(patchfile, self.ui, strip=1, cwd=repo.root,
492 files=files)
492 files=files)
493 except Exception, inst:
493 except Exception, inst:
494 self.ui.note(str(inst) + '\n')
494 self.ui.note(str(inst) + '\n')
495 if not self.ui.verbose:
495 if not self.ui.verbose:
496 self.ui.warn(_("patch failed, unable to continue (try -v)\n"))
496 self.ui.warn(_("patch failed, unable to continue (try -v)\n"))
497 return (False, files, False)
497 return (False, files, False)
498
498
499 return (True, files, fuzz)
499 return (True, files, fuzz)
500
500
501 def apply(self, repo, series, list=False, update_status=True,
501 def apply(self, repo, series, list=False, update_status=True,
502 strict=False, patchdir=None, merge=None, all_files={}):
502 strict=False, patchdir=None, merge=None, all_files={}):
503 wlock = lock = tr = None
503 wlock = lock = tr = None
504 try:
504 try:
505 wlock = repo.wlock()
505 wlock = repo.wlock()
506 lock = repo.lock()
506 lock = repo.lock()
507 tr = repo.transaction()
507 tr = repo.transaction()
508 try:
508 try:
509 ret = self._apply(repo, series, list, update_status,
509 ret = self._apply(repo, series, list, update_status,
510 strict, patchdir, merge, all_files=all_files)
510 strict, patchdir, merge, all_files=all_files)
511 tr.close()
511 tr.close()
512 self.save_dirty()
512 self.save_dirty()
513 return ret
513 return ret
514 except:
514 except:
515 try:
515 try:
516 tr.abort()
516 tr.abort()
517 finally:
517 finally:
518 repo.invalidate()
518 repo.invalidate()
519 repo.dirstate.invalidate()
519 repo.dirstate.invalidate()
520 raise
520 raise
521 finally:
521 finally:
522 del tr
522 del tr
523 release(lock, wlock)
523 release(lock, wlock)
524 self.removeundo(repo)
524 self.removeundo(repo)
525
525
526 def _apply(self, repo, series, list=False, update_status=True,
526 def _apply(self, repo, series, list=False, update_status=True,
527 strict=False, patchdir=None, merge=None, all_files={}):
527 strict=False, patchdir=None, merge=None, all_files={}):
528 # TODO unify with commands.py
528 # TODO unify with commands.py
529 if not patchdir:
529 if not patchdir:
530 patchdir = self.path
530 patchdir = self.path
531 err = 0
531 err = 0
532 n = None
532 n = None
533 for patchname in series:
533 for patchname in series:
534 pushable, reason = self.pushable(patchname)
534 pushable, reason = self.pushable(patchname)
535 if not pushable:
535 if not pushable:
536 self.explain_pushable(patchname, all_patches=True)
536 self.explain_pushable(patchname, all_patches=True)
537 continue
537 continue
538 self.ui.warn(_("applying %s\n") % patchname)
538 self.ui.warn(_("applying %s\n") % patchname)
539 pf = os.path.join(patchdir, patchname)
539 pf = os.path.join(patchdir, patchname)
540
540
541 try:
541 try:
542 ph = self.readheaders(patchname)
542 ph = self.readheaders(patchname)
543 except:
543 except:
544 self.ui.warn(_("Unable to read %s\n") % patchname)
544 self.ui.warn(_("Unable to read %s\n") % patchname)
545 err = 1
545 err = 1
546 break
546 break
547
547
548 message = ph.message
548 message = ph.message
549 if not message:
549 if not message:
550 message = _("imported patch %s\n") % patchname
550 message = _("imported patch %s\n") % patchname
551 else:
551 else:
552 if list:
552 if list:
553 message.append(_("\nimported patch %s") % patchname)
553 message.append(_("\nimported patch %s") % patchname)
554 message = '\n'.join(message)
554 message = '\n'.join(message)
555
555
556 if ph.haspatch:
556 if ph.haspatch:
557 (patcherr, files, fuzz) = self.patch(repo, pf)
557 (patcherr, files, fuzz) = self.patch(repo, pf)
558 all_files.update(files)
558 all_files.update(files)
559 patcherr = not patcherr
559 patcherr = not patcherr
560 else:
560 else:
561 self.ui.warn(_("patch %s is empty\n") % patchname)
561 self.ui.warn(_("patch %s is empty\n") % patchname)
562 patcherr, files, fuzz = 0, [], 0
562 patcherr, files, fuzz = 0, [], 0
563
563
564 if merge and files:
564 if merge and files:
565 # Mark as removed/merged and update dirstate parent info
565 # Mark as removed/merged and update dirstate parent info
566 removed = []
566 removed = []
567 merged = []
567 merged = []
568 for f in files:
568 for f in files:
569 if os.path.exists(repo.wjoin(f)):
569 if os.path.exists(repo.wjoin(f)):
570 merged.append(f)
570 merged.append(f)
571 else:
571 else:
572 removed.append(f)
572 removed.append(f)
573 for f in removed:
573 for f in removed:
574 repo.dirstate.remove(f)
574 repo.dirstate.remove(f)
575 for f in merged:
575 for f in merged:
576 repo.dirstate.merge(f)
576 repo.dirstate.merge(f)
577 p1, p2 = repo.dirstate.parents()
577 p1, p2 = repo.dirstate.parents()
578 repo.dirstate.setparents(p1, merge)
578 repo.dirstate.setparents(p1, merge)
579
579
580 files = patch.updatedir(self.ui, repo, files)
580 files = patch.updatedir(self.ui, repo, files)
581 match = cmdutil.matchfiles(repo, files or [])
581 match = cmdutil.matchfiles(repo, files or [])
582 n = repo.commit(files, message, ph.user, ph.date, match=match,
582 n = repo.commit(files, message, ph.user, ph.date, match=match,
583 force=True)
583 force=True)
584
584
585 if n == None:
585 if n == None:
586 raise util.Abort(_("repo commit failed"))
586 raise util.Abort(_("repo commit failed"))
587
587
588 if update_status:
588 if update_status:
589 self.applied.append(statusentry(hex(n), patchname))
589 self.applied.append(statusentry(hex(n), patchname))
590
590
591 if patcherr:
591 if patcherr:
592 self.ui.warn(_("patch failed, rejects left in working dir\n"))
592 self.ui.warn(_("patch failed, rejects left in working dir\n"))
593 err = 1
593 err = 1
594 break
594 break
595
595
596 if fuzz and strict:
596 if fuzz and strict:
597 self.ui.warn(_("fuzz found when applying patch, stopping\n"))
597 self.ui.warn(_("fuzz found when applying patch, stopping\n"))
598 err = 1
598 err = 1
599 break
599 break
600 return (err, n)
600 return (err, n)
601
601
602 def _clean_series(self, patches):
602 def _clean_series(self, patches):
603 indices = util.sort([self.find_series(p) for p in patches])
603 for i in sorted([self.find_series(p) for p in patches], reverse=True):
604 for i in indices[-1::-1]:
605 del self.full_series[i]
604 del self.full_series[i]
606 self.parse_series()
605 self.parse_series()
607 self.series_dirty = 1
606 self.series_dirty = 1
608
607
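The _clean_series hunk above folds the old two-step (util.sort the indices, then walk a reversed slice) into a single sorted(..., reverse=True) call. Deleting from highest index to lowest matters because each deletion shifts every later entry down by one. A hedged illustration with invented patch names:

```python
series = ['a.patch', 'b.patch', 'c.patch', 'd.patch']
to_drop = [0, 2]  # hypothetical indices, as find_series would return

# Highest index first: positions not yet deleted stay valid.
for i in sorted(to_drop, reverse=True):
    del series[i]
assert series == ['b.patch', 'd.patch']

# Ascending order would shift later entries and delete the wrong one:
series = ['a.patch', 'b.patch', 'c.patch', 'd.patch']
for i in sorted(to_drop):
    del series[i]
assert series == ['b.patch', 'c.patch']  # 'c' survived, 'd' was lost
```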
609 def finish(self, repo, revs):
608 def finish(self, repo, revs):
610 revs.sort()
611 firstrev = repo[self.applied[0].rev].rev()
609 firstrev = repo[self.applied[0].rev].rev()
612 appliedbase = 0
610 appliedbase = 0
613 patches = []
611 patches = []
614 for rev in util.sort(revs):
612 for rev in sorted(revs):
615 if rev < firstrev:
613 if rev < firstrev:
616 raise util.Abort(_('revision %d is not managed') % rev)
614 raise util.Abort(_('revision %d is not managed') % rev)
617 base = bin(self.applied[appliedbase].rev)
615 base = bin(self.applied[appliedbase].rev)
618 node = repo.changelog.node(rev)
616 node = repo.changelog.node(rev)
619 if node != base:
617 if node != base:
620 raise util.Abort(_('cannot delete revision %d above '
618 raise util.Abort(_('cannot delete revision %d above '
621 'applied patches') % rev)
619 'applied patches') % rev)
622 patches.append(self.applied[appliedbase].name)
620 patches.append(self.applied[appliedbase].name)
623 appliedbase += 1
621 appliedbase += 1
624
622
625 r = self.qrepo()
623 r = self.qrepo()
626 if r:
624 if r:
627 r.remove(patches, True)
625 r.remove(patches, True)
628 else:
626 else:
629 for p in patches:
627 for p in patches:
630 os.unlink(self.join(p))
628 os.unlink(self.join(p))
631
629
632 del self.applied[:appliedbase]
630 del self.applied[:appliedbase]
633 self.applied_dirty = 1
631 self.applied_dirty = 1
634 self._clean_series(patches)
632 self._clean_series(patches)
635
633
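In the old finish() the revisions were sorted twice: revs.sort() mutated the caller's list, and the loop then called util.sort(revs) anyway. The new code drops the in-place sort and iterates sorted(revs) once, leaving the argument untouched. A small sketch of that behavioral difference, with made-up revision numbers:

```python
revs = [5, 3, 4]

# Old style: list.sort() reorders the caller's list in place.
mutated = list(revs)
mutated.sort()
assert mutated == [3, 4, 5]

# New style: sorted() yields a new ordered list; the input is unchanged,
# so callers of finish() no longer see their revs list reordered.
ordered = [rev for rev in sorted(revs)]
assert ordered == [3, 4, 5]
assert revs == [5, 3, 4]
```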
    def delete(self, repo, patches, opts):
        if not patches and not opts.get('rev'):
            raise util.Abort(_('qdelete requires at least one revision or '
                               'patch name'))

        realpatches = []
        for patch in patches:
            patch = self.lookup(patch, strict=True)
            info = self.isapplied(patch)
            if info:
                raise util.Abort(_("cannot delete applied patch %s") % patch)
            if patch not in self.series:
                raise util.Abort(_("patch %s not in series file") % patch)
            realpatches.append(patch)

        appliedbase = 0
        if opts.get('rev'):
            if not self.applied:
                raise util.Abort(_('no patches applied'))
            revs = cmdutil.revrange(repo, opts['rev'])
            if len(revs) > 1 and revs[0] > revs[1]:
                revs.reverse()
            for rev in revs:
                if appliedbase >= len(self.applied):
                    raise util.Abort(_("revision %d is not managed") % rev)

                base = bin(self.applied[appliedbase].rev)
                node = repo.changelog.node(rev)
                if node != base:
                    raise util.Abort(_("cannot delete revision %d above "
                                       "applied patches") % rev)
                realpatches.append(self.applied[appliedbase].name)
                appliedbase += 1

        if not opts.get('keep'):
            r = self.qrepo()
            if r:
                r.remove(realpatches, True)
            else:
                for p in realpatches:
                    os.unlink(self.join(p))

        if appliedbase:
            del self.applied[:appliedbase]
            self.applied_dirty = 1
        self._clean_series(realpatches)

    def check_toppatch(self, repo):
        if len(self.applied) > 0:
            top = bin(self.applied[-1].rev)
            pp = repo.dirstate.parents()
            if top not in pp:
                raise util.Abort(_("working directory revision is not qtip"))
            return top
        return None
    def check_localchanges(self, repo, force=False, refresh=True):
        m, a, r, d = repo.status()[:4]
        if m or a or r or d:
            if not force:
                if refresh:
                    raise util.Abort(_("local changes found, refresh first"))
                else:
                    raise util.Abort(_("local changes found"))
        return m, a, r, d

    _reserved = ('series', 'status', 'guards')
    def check_reserved_name(self, name):
        if (name in self._reserved or name.startswith('.hg')
            or name.startswith('.mq')):
            raise util.Abort(_('"%s" cannot be used as the name of a patch')
                             % name)

    def new(self, repo, patchfn, *pats, **opts):
        """options:
        msg: a string or a no-argument function returning a string
        """
        msg = opts.get('msg')
        force = opts.get('force')
        user = opts.get('user')
        date = opts.get('date')
        if date:
            date = util.parsedate(date)
        self.check_reserved_name(patchfn)
        if os.path.exists(self.join(patchfn)):
            raise util.Abort(_('patch "%s" already exists') % patchfn)
        if opts.get('include') or opts.get('exclude') or pats:
            match = cmdutil.match(repo, pats, opts)
            # detect missing files in pats
            def badfn(f, msg):
                raise util.Abort('%s: %s' % (f, msg))
            match.bad = badfn
            m, a, r, d = repo.status(match=match)[:4]
        else:
            m, a, r, d = self.check_localchanges(repo, force)
            match = cmdutil.matchfiles(repo, m + a + r)
        commitfiles = m + a + r
        self.check_toppatch(repo)
        insert = self.full_series_end()
        wlock = repo.wlock()
        try:
            # if patch file write fails, abort early
            p = self.opener(patchfn, "w")
            try:
                if date:
                    p.write("# HG changeset patch\n")
                    if user:
                        p.write("# User " + user + "\n")
                    p.write("# Date %d %d\n\n" % date)
                elif user:
                    p.write("From: " + user + "\n\n")

                if callable(msg):
                    msg = msg()
                commitmsg = msg and msg or ("[mq]: %s" % patchfn)
                n = repo.commit(commitfiles, commitmsg, user, date, match=match, force=True)
                if n == None:
                    raise util.Abort(_("repo commit failed"))
                try:
                    self.full_series[insert:insert] = [patchfn]
                    self.applied.append(statusentry(hex(n), patchfn))
                    self.parse_series()
                    self.series_dirty = 1
                    self.applied_dirty = 1
                    if msg:
                        msg = msg + "\n\n"
                        p.write(msg)
                    if commitfiles:
                        diffopts = self.diffopts()
                        if opts.get('git'): diffopts.git = True
                        parent = self.qparents(repo, n)
                        chunks = patch.diff(repo, node1=parent, node2=n,
                                            match=match, opts=diffopts)
                        for chunk in chunks:
                            p.write(chunk)
                    p.close()
                    wlock.release()
                    wlock = None
                    r = self.qrepo()
                    if r: r.add([patchfn])
                except:
                    repo.rollback()
                    raise
            except Exception:
                patchpath = self.join(patchfn)
                try:
                    os.unlink(patchpath)
                except:
                    self.ui.warn(_('error unlinking %s\n') % patchpath)
                raise
            self.removeundo(repo)
        finally:
            release(wlock)

    def strip(self, repo, rev, update=True, backup="all", force=None):
        wlock = lock = None
        try:
            wlock = repo.wlock()
            lock = repo.lock()

            if update:
                self.check_localchanges(repo, force=force, refresh=False)
                urev = self.qparents(repo, rev)
                hg.clean(repo, urev)
                repo.dirstate.write()

            self.removeundo(repo)
            repair.strip(self.ui, repo, rev, backup)
            # strip may have unbundled a set of backed up revisions after
            # the actual strip
            self.removeundo(repo)
        finally:
            release(lock, wlock)

    def isapplied(self, patch):
        """returns (index, rev, patch)"""
        for i in xrange(len(self.applied)):
            a = self.applied[i]
            if a.name == patch:
                return (i, a.rev, a.name)
        return None

    # if the exact patch name does not exist, we try a few
    # variations. If strict is passed, we try only #1
    #
    # 1) a number to indicate an offset in the series file
    # 2) a unique substring of the patch name was given
    # 3) patchname[-+]num to indicate an offset in the series file
    def lookup(self, patch, strict=False):
        patch = patch and str(patch)

        def partial_name(s):
            if s in self.series:
                return s
            matches = [x for x in self.series if s in x]
            if len(matches) > 1:
                self.ui.warn(_('patch name "%s" is ambiguous:\n') % s)
                for m in matches:
                    self.ui.warn(' %s\n' % m)
                return None
            if matches:
                return matches[0]
            if len(self.series) > 0 and len(self.applied) > 0:
                if s == 'qtip':
                    return self.series[self.series_end(True)-1]
                if s == 'qbase':
                    return self.series[0]
            return None

        if patch == None:
            return None
        if patch in self.series:
            return patch

        if not os.path.isfile(self.join(patch)):
            try:
                sno = int(patch)
            except(ValueError, OverflowError):
                pass
            else:
                if -len(self.series) <= sno < len(self.series):
                    return self.series[sno]

            if not strict:
                res = partial_name(patch)
                if res:
                    return res
                minus = patch.rfind('-')
                if minus >= 0:
                    res = partial_name(patch[:minus])
                    if res:
                        i = self.series.index(res)
                        try:
                            off = int(patch[minus+1:] or 1)
                        except(ValueError, OverflowError):
                            pass
                        else:
                            if i - off >= 0:
                                return self.series[i - off]
                plus = patch.rfind('+')
                if plus >= 0:
                    res = partial_name(patch[:plus])
                    if res:
                        i = self.series.index(res)
                        try:
                            off = int(patch[plus+1:] or 1)
                        except(ValueError, OverflowError):
                            pass
                        else:
                            if i + off < len(self.series):
                                return self.series[i + off]
        raise util.Abort(_("patch %s not in series") % patch)

    def push(self, repo, patch=None, force=False, list=False,
             mergeq=None, all=False):
        wlock = repo.wlock()
        if repo.dirstate.parents()[0] != repo.changelog.tip():
            self.ui.status(_("(working directory not at tip)\n"))

        if not self.series:
            self.ui.warn(_('no patches in series\n'))
            return 0

        try:
            patch = self.lookup(patch)
            # Suppose our series file is: A B C and the current 'top'
            # patch is B. qpush C should be performed (moving forward)
            # qpush B is a NOP (no change) qpush A is an error (can't
            # go backwards with qpush)
            if patch:
                info = self.isapplied(patch)
                if info:
                    if info[0] < len(self.applied) - 1:
                        raise util.Abort(
                            _("cannot push to a previous patch: %s") % patch)
                    self.ui.warn(
                        _('qpush: %s is already at the top\n') % patch)
                    return
                pushable, reason = self.pushable(patch)
                if not pushable:
                    if reason:
                        reason = _('guarded by %r') % reason
                    else:
                        reason = _('no matching guards')
                    self.ui.warn(_("cannot push '%s' - %s\n") % (patch, reason))
                    return 1
            elif all:
                patch = self.series[-1]
                if self.isapplied(patch):
                    self.ui.warn(_('all patches are currently applied\n'))
                    return 0

            # Following the above example, starting at 'top' of B:
            # qpush should be performed (pushes C), but a subsequent
            # qpush without an argument is an error (nothing to
            # apply). This allows a loop of "...while hg qpush..." to
            # work as it detects an error when done
            start = self.series_end()
            if start == len(self.series):
                self.ui.warn(_('patch series already fully applied\n'))
                return 1
            if not force:
                self.check_localchanges(repo)

            self.applied_dirty = 1
            if start > 0:
                self.check_toppatch(repo)
            if not patch:
                patch = self.series[start]
                end = start + 1
            else:
                end = self.series.index(patch, start) + 1
            s = self.series[start:end]
            all_files = {}
            try:
                if mergeq:
                    ret = self.mergepatch(repo, mergeq, s)
                else:
                    ret = self.apply(repo, s, list, all_files=all_files)
            except:
                self.ui.warn(_('cleaning up working directory...'))
                node = repo.dirstate.parents()[0]
                hg.revert(repo, node, None)
                unknown = repo.status(unknown=True)[4]
                # only remove unknown files that we know we touched or
                # created while patching
                for f in unknown:
                    if f in all_files:
                        util.unlink(repo.wjoin(f))
                self.ui.warn(_('done\n'))
                raise
            top = self.applied[-1].name
            if ret[0]:
                self.ui.write(_("errors during apply, please fix and "
                                "refresh %s\n") % top)
            else:
                self.ui.write(_("now at: %s\n") % top)
            return ret[0]
        finally:
            wlock.release()

    def pop(self, repo, patch=None, force=False, update=True, all=False):
        def getfile(f, rev, flags):
            t = repo.file(f).read(rev)
            repo.wwrite(f, t, flags)

        wlock = repo.wlock()
        try:
            if patch:
                # index, rev, patch
                info = self.isapplied(patch)
                if not info:
                    patch = self.lookup(patch)
                info = self.isapplied(patch)
                if not info:
                    raise util.Abort(_("patch %s is not applied") % patch)

            if len(self.applied) == 0:
                # Allow qpop -a to work repeatedly,
                # but not qpop without an argument
                self.ui.warn(_("no patches applied\n"))
                return not all

            if all:
                start = 0
            elif patch:
                start = info[0] + 1
            else:
                start = len(self.applied) - 1

            if start >= len(self.applied):
                self.ui.warn(_("qpop: %s is already at the top\n") % patch)
                return

            if not update:
                parents = repo.dirstate.parents()
                rr = [ bin(x.rev) for x in self.applied ]
                for p in parents:
                    if p in rr:
                        self.ui.warn(_("qpop: forcing dirstate update\n"))
                        update = True
            else:
                parents = [p.hex() for p in repo[None].parents()]
                needupdate = False
                for entry in self.applied[start:]:
                    if entry.rev in parents:
                        needupdate = True
                        break
                update = needupdate

            if not force and update:
                self.check_localchanges(repo)

            self.applied_dirty = 1
            end = len(self.applied)
            rev = bin(self.applied[start].rev)
            if update:
                top = self.check_toppatch(repo)

            try:
                heads = repo.changelog.heads(rev)
            except error.LookupError:
                node = short(rev)
                raise util.Abort(_('trying to pop unknown node %s') % node)

            if heads != [bin(self.applied[-1].rev)]:
                raise util.Abort(_("popping would remove a revision not "
                                   "managed by this patch queue"))

            # we know there are no local changes, so we can make a simplified
            # form of hg.update.
            if update:
                qp = self.qparents(repo, rev)
                changes = repo.changelog.read(qp)
                mmap = repo.manifest.read(changes[0])
                m, a, r, d = repo.status(qp, top)[:4]
                if d:
                    raise util.Abort(_("deletions found between repo revs"))
                for f in m:
                    getfile(f, mmap[f], mmap.flags(f))
                for f in r:
                    getfile(f, mmap[f], mmap.flags(f))
                for f in m + r:
                    repo.dirstate.normal(f)
                for f in a:
                    try:
                        os.unlink(repo.wjoin(f))
                    except OSError, e:
                        if e.errno != errno.ENOENT:
                            raise
                    try: os.removedirs(os.path.dirname(repo.wjoin(f)))
                    except: pass
                    repo.dirstate.forget(f)
                repo.dirstate.setparents(qp, nullid)
            del self.applied[start:end]
            self.strip(repo, rev, update=False, backup='strip')
            if len(self.applied):
                self.ui.write(_("now at: %s\n") % self.applied[-1].name)
            else:
                self.ui.write(_("patch queue now empty\n"))
        finally:
            wlock.release()

    def diff(self, repo, pats, opts):
        top = self.check_toppatch(repo)
        if not top:
            self.ui.write(_("no patches applied\n"))
            return
        qp = self.qparents(repo, top)
        self._diffopts = patch.diffopts(self.ui, opts)
        self.printdiff(repo, qp, files=pats, opts=opts)

    def refresh(self, repo, pats=None, **opts):
        if len(self.applied) == 0:
            self.ui.write(_("no patches applied\n"))
            return 1
        msg = opts.get('msg', '').rstrip()
        newuser = opts.get('user')
        newdate = opts.get('date')
        if newdate:
            newdate = '%d %d' % util.parsedate(newdate)
        wlock = repo.wlock()
        try:
            self.check_toppatch(repo)
            (top, patchfn) = (self.applied[-1].rev, self.applied[-1].name)
            top = bin(top)
            if repo.changelog.heads(top) != [top]:
                raise util.Abort(_("cannot refresh a revision with children"))
            cparents = repo.changelog.parents(top)
            patchparent = self.qparents(repo, top)
            ph = self.readheaders(patchfn)

            patchf = self.opener(patchfn, 'r')

            # if the patch was a git patch, refresh it as a git patch
            for line in patchf:
                if line.startswith('diff --git'):
                    self.diffopts().git = True
                    break

            if msg:
                ph.setmessage(msg)
            if newuser:
                ph.setuser(newuser)
            if newdate:
                ph.setdate(newdate)

            # only commit new patch when write is complete
            patchf = self.opener(patchfn, 'w', atomictemp=True)

            patchf.seek(0)
            patchf.truncate()

            comments = str(ph)
            if comments:
                patchf.write(comments)

            if opts.get('git'):
                self.diffopts().git = True
            tip = repo.changelog.tip()
            if top == tip:
                # if the top of our patch queue is also the tip, there is an
                # optimization here. We update the dirstate in place and strip
                # off the tip commit. Then just commit the current directory
                # tree. We can also send repo.commit the list of files
                # changed to speed up the diff
                #
                # in short mode, we only diff the files included in the
                # patch already plus specified files
                #
                # this should really read:
                #   mm, dd, aa, aa2 = repo.status(tip, patchparent)[:4]
                # but we do it backwards to take advantage of manifest/chlog
                # caching against the next repo.status call
                #
                mm, aa, dd, aa2 = repo.status(patchparent, tip)[:4]
                changes = repo.changelog.read(tip)
                man = repo.manifest.read(changes[0])
                aaa = aa[:]
                matchfn = cmdutil.match(repo, pats, opts)
                if opts.get('short'):
                    # if amending a patch, we start with existing
                    # files plus specified files - unfiltered
                    match = cmdutil.matchfiles(repo, mm + aa + dd + matchfn.files())
                    # filter with inc/exl options
                    matchfn = cmdutil.match(repo, opts=opts)
                else:
                    match = cmdutil.matchall(repo)
                m, a, r, d = repo.status(match=match)[:4]

                # we might end up with files that were added between
                # tip and the dirstate parent, but then changed in the
                # local dirstate. in this case, we want them to only
                # show up in the added section
                for x in m:
                    if x not in aa:
                        mm.append(x)
                # we might end up with files added by the local dirstate that
                # were deleted by the patch. In this case, they should only
                # show up in the changed section.
                for x in a:
                    if x in dd:
                        del dd[dd.index(x)]
                        mm.append(x)
                    else:
                        aa.append(x)
                # make sure any files deleted in the local dirstate
                # are not in the add or change column of the patch
                forget = []
                for x in d + r:
                    if x in aa:
                        del aa[aa.index(x)]
                        forget.append(x)
                        continue
                    elif x in mm:
                        del mm[mm.index(x)]
                    dd.append(x)

                m = list(set(mm))
                r = list(set(dd))
                a = list(set(aa))
                c = [filter(matchfn, l) for l in (m, a, r)]
                match = cmdutil.matchfiles(repo, set(c[0] + c[1] + c[2]))
                chunks = patch.diff(repo, patchparent, match=match,
                                    changes=c, opts=self.diffopts())
                for chunk in chunks:
                    patchf.write(chunk)

                try:
                    if self.diffopts().git:
                        copies = {}
                        for dst in a:
                            src = repo.dirstate.copied(dst)
                            # during qfold, the source file for copies may
                            # be removed. Treat this as a simple add.
                            if src is not None and src in repo.dirstate:
                                copies.setdefault(src, []).append(dst)
                            repo.dirstate.add(dst)
                        # remember the copies between patchparent and tip
                        for dst in aaa:
                            f = repo.file(dst)
                            src = f.renamed(man[dst])
                            if src:
                                copies.setdefault(src[0], []).extend(copies.get(dst, []))
                                if dst in a:
                                    copies[src[0]].append(dst)
                            # we can't copy a file created by the patch itself
                            if dst in copies:
                                del copies[dst]
                        for src, dsts in copies.iteritems():
1225 for dst in dsts:
1223 for dst in dsts:
1226 repo.dirstate.copy(src, dst)
1224 repo.dirstate.copy(src, dst)
1227 else:
1225 else:
1228 for dst in a:
1226 for dst in a:
1229 repo.dirstate.add(dst)
1227 repo.dirstate.add(dst)
1230 # Drop useless copy information
1228 # Drop useless copy information
1231 for f in list(repo.dirstate.copies()):
1229 for f in list(repo.dirstate.copies()):
1232 repo.dirstate.copy(None, f)
1230 repo.dirstate.copy(None, f)
1233 for f in r:
1231 for f in r:
1234 repo.dirstate.remove(f)
1232 repo.dirstate.remove(f)
1235 # if the patch excludes a modified file, mark that
1233 # if the patch excludes a modified file, mark that
1236 # file with mtime=0 so status can see it.
1234 # file with mtime=0 so status can see it.
1237 mm = []
1235 mm = []
1238 for i in xrange(len(m)-1, -1, -1):
1236 for i in xrange(len(m)-1, -1, -1):
1239 if not matchfn(m[i]):
1237 if not matchfn(m[i]):
1240 mm.append(m[i])
1238 mm.append(m[i])
1241 del m[i]
1239 del m[i]
1242 for f in m:
1240 for f in m:
1243 repo.dirstate.normal(f)
1241 repo.dirstate.normal(f)
1244 for f in mm:
1242 for f in mm:
1245 repo.dirstate.normallookup(f)
1243 repo.dirstate.normallookup(f)
1246 for f in forget:
1244 for f in forget:
1247 repo.dirstate.forget(f)
1245 repo.dirstate.forget(f)
1248
1246
1249 if not msg:
1247 if not msg:
1250 if not ph.message:
1248 if not ph.message:
1251 message = "[mq]: %s\n" % patchfn
1249 message = "[mq]: %s\n" % patchfn
1252 else:
1250 else:
1253 message = "\n".join(ph.message)
1251 message = "\n".join(ph.message)
1254 else:
1252 else:
1255 message = msg
1253 message = msg
1256
1254
1257 user = ph.user or changes[1]
1255 user = ph.user or changes[1]
1258
1256
1259 # assumes strip can roll itself back if interrupted
1257 # assumes strip can roll itself back if interrupted
1260 repo.dirstate.setparents(*cparents)
1258 repo.dirstate.setparents(*cparents)
1261 self.applied.pop()
1259 self.applied.pop()
1262 self.applied_dirty = 1
1260 self.applied_dirty = 1
1263 self.strip(repo, top, update=False,
1261 self.strip(repo, top, update=False,
1264 backup='strip')
1262 backup='strip')
1265 except:
1263 except:
1266 repo.dirstate.invalidate()
1264 repo.dirstate.invalidate()
1267 raise
1265 raise
1268
1266
1269 try:
1267 try:
1270 # might be nice to attempt to roll back strip after this
1268 # might be nice to attempt to roll back strip after this
1271 patchf.rename()
1269 patchf.rename()
1272 n = repo.commit(match.files(), message, user, ph.date,
1270 n = repo.commit(match.files(), message, user, ph.date,
1273 match=match, force=1)
1271 match=match, force=1)
1274 self.applied.append(statusentry(hex(n), patchfn))
1272 self.applied.append(statusentry(hex(n), patchfn))
1275 except:
1273 except:
1276 ctx = repo[cparents[0]]
1274 ctx = repo[cparents[0]]
1277 repo.dirstate.rebuild(ctx.node(), ctx.manifest())
1275 repo.dirstate.rebuild(ctx.node(), ctx.manifest())
1278 self.save_dirty()
1276 self.save_dirty()
1279 self.ui.warn(_('refresh interrupted while patch was popped! '
1277 self.ui.warn(_('refresh interrupted while patch was popped! '
1280 '(revert --all, qpush to recover)\n'))
1278 '(revert --all, qpush to recover)\n'))
1281 raise
1279 raise
1282 else:
1280 else:
1283 self.printdiff(repo, patchparent, fp=patchf)
1281 self.printdiff(repo, patchparent, fp=patchf)
1284 patchf.rename()
1282 patchf.rename()
1285 added = repo.status()[1]
1283 added = repo.status()[1]
1286 for a in added:
1284 for a in added:
1287 f = repo.wjoin(a)
1285 f = repo.wjoin(a)
1288 try:
1286 try:
1289 os.unlink(f)
1287 os.unlink(f)
1290 except OSError, e:
1288 except OSError, e:
1291 if e.errno != errno.ENOENT:
1289 if e.errno != errno.ENOENT:
1292 raise
1290 raise
1293 try: os.removedirs(os.path.dirname(f))
1291 try: os.removedirs(os.path.dirname(f))
1294 except: pass
1292 except: pass
1295 # forget the file copies in the dirstate
1293 # forget the file copies in the dirstate
1296 # push should readd the files later on
1294 # push should readd the files later on
1297 repo.dirstate.forget(a)
1295 repo.dirstate.forget(a)
1298 self.pop(repo, force=True)
1296 self.pop(repo, force=True)
1299 self.push(repo, force=True)
1297 self.push(repo, force=True)
1300 finally:
1298 finally:
1301 wlock.release()
1299 wlock.release()
1302 self.removeundo(repo)
1300 self.removeundo(repo)
1303
1301
    def init(self, repo, create=False):
        if not create and os.path.isdir(self.path):
            raise util.Abort(_("patch queue directory already exists"))
        try:
            os.mkdir(self.path)
        except OSError, inst:
            if inst.errno != errno.EEXIST or not create:
                raise
        if create:
            return self.qrepo(create=True)

    def unapplied(self, repo, patch=None):
        if patch and patch not in self.series:
            raise util.Abort(_("patch %s is not in series file") % patch)
        if not patch:
            start = self.series_end()
        else:
            start = self.series.index(patch) + 1
        unapplied = []
        for i in xrange(start, len(self.series)):
            pushable, reason = self.pushable(i)
            if pushable:
                unapplied.append((i, self.series[i]))
            self.explain_pushable(i)
        return unapplied

    def qseries(self, repo, missing=None, start=0, length=None, status=None,
                summary=False):
        def displayname(patchname):
            if summary:
                ph = self.readheaders(patchname)
                msg = ph.message
                msg = msg and ': ' + msg[0] or ': '
            else:
                msg = ''
            return '%s%s' % (patchname, msg)

        applied = set([p.name for p in self.applied])
        if length is None:
            length = len(self.series) - start
        if not missing:
            for i in xrange(start, start+length):
                patch = self.series[i]
                if patch in applied:
                    stat = 'A'
                elif self.pushable(i)[0]:
                    stat = 'U'
                else:
                    stat = 'G'
                pfx = ''
                if self.ui.verbose:
                    pfx = '%d %s ' % (i, stat)
                elif status and status != stat:
                    continue
                self.ui.write('%s%s\n' % (pfx, displayname(patch)))
        else:
            msng_list = []
            for root, dirs, files in os.walk(self.path):
                d = root[len(self.path) + 1:]
                for f in files:
                    fl = os.path.join(d, f)
                    if (fl not in self.series and
                        fl not in (self.status_path, self.series_path,
                                   self.guards_path)
                        and not fl.startswith('.')):
                        msng_list.append(fl)
            for x in sorted(msng_list):
                pfx = self.ui.verbose and ('D ') or ''
                self.ui.write("%s%s\n" % (pfx, displayname(x)))

    def issaveline(self, l):
        if l.name == '.hg.patches.save.line':
            return True

    def qrepo(self, create=False):
        if create or os.path.isdir(self.join(".hg")):
            return hg.repository(self.ui, path=self.path, create=create)

    def restore(self, repo, rev, delete=None, qupdate=None):
        c = repo.changelog.read(rev)
        desc = c[4].strip()
        lines = desc.splitlines()
        i = 0
        datastart = None
        series = []
        applied = []
        qpp = None
        for i in xrange(0, len(lines)):
            if lines[i] == 'Patch Data:':
                datastart = i + 1
            elif lines[i].startswith('Dirstate:'):
                l = lines[i].rstrip()
                l = l[10:].split(' ')
                qpp = [ bin(x) for x in l ]
            elif datastart != None:
                l = lines[i].rstrip()
                se = statusentry(l)
                file_ = se.name
                if se.rev:
                    applied.append(se)
                else:
                    series.append(file_)
        if datastart == None:
            self.ui.warn(_("No saved patch data found\n"))
            return 1
        self.ui.warn(_("restoring status: %s\n") % lines[0])
        self.full_series = series
        self.applied = applied
        self.parse_series()
        self.series_dirty = 1
        self.applied_dirty = 1
        heads = repo.changelog.heads()
        if delete:
            if rev not in heads:
                self.ui.warn(_("save entry has children, leaving it alone\n"))
            else:
                self.ui.warn(_("removing save entry %s\n") % short(rev))
                pp = repo.dirstate.parents()
                if rev in pp:
                    update = True
                else:
                    update = False
                self.strip(repo, rev, update=update, backup='strip')
        if qpp:
            self.ui.warn(_("saved queue repository parents: %s %s\n") %
                         (short(qpp[0]), short(qpp[1])))
            if qupdate:
                self.ui.status(_("queue directory updating\n"))
                r = self.qrepo()
                if not r:
                    self.ui.warn(_("Unable to load queue repository\n"))
                    return 1
                hg.clean(r, qpp[0])

    def save(self, repo, msg=None):
        if len(self.applied) == 0:
            self.ui.warn(_("save: no patches applied, exiting\n"))
            return 1
        if self.issaveline(self.applied[-1]):
            self.ui.warn(_("status is already saved\n"))
            return 1

        ar = [ ':' + x for x in self.full_series ]
        if not msg:
            msg = _("hg patches saved state")
        else:
            msg = "hg patches: " + msg.rstrip('\r\n')
        r = self.qrepo()
        if r:
            pp = r.dirstate.parents()
            msg += "\nDirstate: %s %s" % (hex(pp[0]), hex(pp[1]))
        msg += "\n\nPatch Data:\n"
        text = msg + "\n".join([str(x) for x in self.applied]) + '\n' + (ar and
                   "\n".join(ar) + '\n' or "")
        n = repo.commit(None, text, user=None, force=1)
        if not n:
            self.ui.warn(_("repo commit failed\n"))
            return 1
        self.applied.append(statusentry(hex(n),'.hg.patches.save.line'))
        self.applied_dirty = 1
        self.removeundo(repo)

    def full_series_end(self):
        if len(self.applied) > 0:
            p = self.applied[-1].name
            end = self.find_series(p)
            if end == None:
                return len(self.full_series)
            return end + 1
        return 0

    def series_end(self, all_patches=False):
        """If all_patches is False, return the index of the next pushable patch
        in the series, or the series length. If all_patches is True, return the
        index of the first patch past the last applied one.
        """
        end = 0
        def next(start):
            if all_patches:
                return start
            i = start
            while i < len(self.series):
                p, reason = self.pushable(i)
                if p:
                    break
                self.explain_pushable(i)
                i += 1
            return i
        if len(self.applied) > 0:
            p = self.applied[-1].name
            try:
                end = self.series.index(p)
            except ValueError:
                return 0
            return next(end + 1)
        return next(end)

    def appliedname(self, index):
        pname = self.applied[index].name
        if not self.ui.verbose:
            p = pname
        else:
            p = str(self.series.index(pname)) + " " + pname
        return p

    def qimport(self, repo, files, patchname=None, rev=None, existing=None,
                force=None, git=False):
        def checkseries(patchname):
            if patchname in self.series:
                raise util.Abort(_('patch %s is already in the series file')
                                 % patchname)
        def checkfile(patchname):
            if not force and os.path.exists(self.join(patchname)):
                raise util.Abort(_('patch "%s" already exists')
                                 % patchname)

        if rev:
            if files:
                raise util.Abort(_('option "-r" not valid when importing '
                                   'files'))
            rev = cmdutil.revrange(repo, rev)
            rev.sort(lambda x, y: cmp(y, x))
        if (len(files) > 1 or len(rev) > 1) and patchname:
            raise util.Abort(_('option "-n" not valid when importing multiple '
                               'patches'))
        i = 0
        added = []
        if rev:
            # If mq patches are applied, we can only import revisions
            # that form a linear path to qbase.
            # Otherwise, they should form a linear path to a head.
            heads = repo.changelog.heads(repo.changelog.node(rev[-1]))
            if len(heads) > 1:
                raise util.Abort(_('revision %d is the root of more than one '
                                   'branch') % rev[-1])
            if self.applied:
                base = hex(repo.changelog.node(rev[0]))
                if base in [n.rev for n in self.applied]:
                    raise util.Abort(_('revision %d is already managed')
                                     % rev[0])
                if heads != [bin(self.applied[-1].rev)]:
                    raise util.Abort(_('revision %d is not the parent of '
                                       'the queue') % rev[0])
                base = repo.changelog.rev(bin(self.applied[0].rev))
                lastparent = repo.changelog.parentrevs(base)[0]
            else:
                if heads != [repo.changelog.node(rev[0])]:
                    raise util.Abort(_('revision %d has unmanaged children')
                                     % rev[0])
                lastparent = None

            if git:
                self.diffopts().git = True

            for r in rev:
                p1, p2 = repo.changelog.parentrevs(r)
                n = repo.changelog.node(r)
                if p2 != nullrev:
                    raise util.Abort(_('cannot import merge revision %d') % r)
                if lastparent and lastparent != r:
                    raise util.Abort(_('revision %d is not the parent of %d')
                                     % (r, lastparent))
                lastparent = p1

                if not patchname:
                    patchname = normname('%d.diff' % r)
                self.check_reserved_name(patchname)
                checkseries(patchname)
                checkfile(patchname)
                self.full_series.insert(0, patchname)

                patchf = self.opener(patchname, "w")
                patch.export(repo, [n], fp=patchf, opts=self.diffopts())
                patchf.close()

                se = statusentry(hex(n), patchname)
                self.applied.insert(0, se)

                added.append(patchname)
                patchname = None
            self.parse_series()
            self.applied_dirty = 1

        for filename in files:
            if existing:
                if filename == '-':
                    raise util.Abort(_('-e is incompatible with import from -'))
                if not patchname:
                    patchname = normname(filename)
                self.check_reserved_name(patchname)
                if not os.path.isfile(self.join(patchname)):
                    raise util.Abort(_("patch %s does not exist") % patchname)
            else:
                try:
                    if filename == '-':
                        if not patchname:
                            raise util.Abort(_('need --name to import a patch from -'))
                        text = sys.stdin.read()
                    else:
                        text = url.open(self.ui, filename).read()
                except (OSError, IOError):
                    raise util.Abort(_("unable to read %s") % filename)
                if not patchname:
                    patchname = normname(os.path.basename(filename))
                self.check_reserved_name(patchname)
                checkfile(patchname)
                patchf = self.opener(patchname, "w")
                patchf.write(text)
            if not force:
                checkseries(patchname)
            if patchname not in self.series:
                index = self.full_series_end() + i
                self.full_series[index:index] = [patchname]
            self.parse_series()
            self.ui.warn(_("adding %s to series file\n") % patchname)
            i += 1
            added.append(patchname)
            patchname = None
        self.series_dirty = 1
        qrepo = self.qrepo()
        if qrepo:
            qrepo.add(added)

def delete(ui, repo, *patches, **opts):
    """remove patches from queue

    The patches must not be applied, unless they are arguments to the
    -r/--rev parameter. At least one patch or revision is required.

    With --rev, mq will stop managing the named revisions (converting
    them to regular mercurial changesets). The qfinish command should
    be used as an alternative for qdelete -r, as the latter option is
    deprecated.

    With -k/--keep, the patch files are preserved in the patch
    directory."""
    q = repo.mq
    q.delete(repo, patches, opts)
    q.save_dirty()
    return 0

def applied(ui, repo, patch=None, **opts):
    """print the patches already applied"""
    q = repo.mq
    if patch:
        if patch not in q.series:
            raise util.Abort(_("patch %s is not in series file") % patch)
        end = q.series.index(patch) + 1
    else:
        end = q.series_end(True)
    return q.qseries(repo, length=end, status='A', summary=opts.get('summary'))

def unapplied(ui, repo, patch=None, **opts):
    """print the patches not yet applied"""
    q = repo.mq
    if patch:
        if patch not in q.series:
            raise util.Abort(_("patch %s is not in series file") % patch)
        start = q.series.index(patch) + 1
    else:
        start = q.series_end(True)
    q.qseries(repo, start=start, status='U', summary=opts.get('summary'))

def qimport(ui, repo, *filename, **opts):
    """import a patch

    The patch is inserted into the series after the last applied
    patch. If no patches have been applied, qimport prepends the patch
    to the series.

    The patch will have the same name as its source file unless you
    give it a new one with -n/--name.

    You can register an existing patch inside the patch directory with
    the -e/--existing flag.

    With -f/--force, an existing patch of the same name will be
    overwritten.

    An existing changeset may be placed under mq control with -r/--rev
    (e.g. qimport --rev tip -n patch will place tip under mq control).
    With -g/--git, patches imported with --rev will use the git diff
    format. See the diffs help topic for information on why this is
    important for preserving rename/copy information and permission
    changes.

    To import a patch from standard input, pass - as the patch file.
    When importing from standard input, a patch name must be specified
    using the --name flag.
    """
    q = repo.mq
    q.qimport(repo, filename, patchname=opts['name'],
              existing=opts['existing'], force=opts['force'], rev=opts['rev'],
              git=opts['git'])
    q.save_dirty()
    return 0

def init(ui, repo, **opts):
    """init a new queue repository

    The queue repository is unversioned by default. If
    -c/--create-repo is specified, qinit will create a separate nested
    repository for patches (qinit -c may also be run later to convert
    an unversioned patch repository into a versioned one). You can use
    qcommit to commit changes to this queue repository."""
    q = repo.mq
    r = q.init(repo, create=opts['create_repo'])
    q.save_dirty()
    if r:
        if not os.path.exists(r.wjoin('.hgignore')):
            fp = r.wopener('.hgignore', 'w')
            fp.write('^\\.hg\n')
            fp.write('^\\.mq\n')
            fp.write('syntax: glob\n')
            fp.write('status\n')
            fp.write('guards\n')
            fp.close()
        if not os.path.exists(r.wjoin('series')):
            r.wopener('series', 'w').close()
        r.add(['.hgignore', 'series'])
        commands.add(ui, r)
    return 0

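The ignore rules that qinit -c writes into a fresh patch repository can be sketched on their own; `mq_hgignore` is a hypothetical helper name, and the contents simply mirror the `fp.write` calls above:

```python
def mq_hgignore():
    # The ignore file mixes syntaxes: the first two lines are regexp
    # rules (the default .hgignore syntax), then 'syntax: glob'
    # switches to glob rules for the mq bookkeeping files.
    return ''.join([
        '^\\.hg\n',
        '^\\.mq\n',
        'syntax: glob\n',
        'status\n',
        'guards\n',
    ])
```

This keeps the nested patch repository from tracking its own `.hg` directory or mq's `status` and `guards` files.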
def clone(ui, source, dest=None, **opts):
    '''clone main and patch repository at same time

    If source is local, destination will have no patches applied. If
    source is remote, this command can not check if patches are
    applied in source, so cannot guarantee that patches are not
    applied in destination. If you clone remote repository, be sure
    before that it has no patches applied.

    Source patch repository is looked for in <src>/.hg/patches by
    default. Use -p <url> to change.

    The patch directory must be a nested mercurial repository, as
    would be created by qinit -c.
    '''
    def patchdir(repo):
        url = repo.url()
        if url.endswith('/'):
            url = url[:-1]
        return url + '/.hg/patches'
    if dest is None:
        dest = hg.defaultdest(source)
    sr = hg.repository(cmdutil.remoteui(ui, opts), ui.expandpath(source))
    if opts['patches']:
        patchespath = ui.expandpath(opts['patches'])
    else:
        patchespath = patchdir(sr)
    try:
        hg.repository(ui, patchespath)
    except error.RepoError:
        raise util.Abort(_('versioned patch repository not found'
                           ' (see qinit -c)'))
    qbase, destrev = None, None
    if sr.local():
        if sr.mq.applied:
            qbase = bin(sr.mq.applied[0].rev)
            if not hg.islocal(dest):
                heads = set(sr.heads())
                destrev = list(heads.difference(sr.heads(qbase)))
                destrev.append(sr.changelog.parents(qbase)[0])
    elif sr.capable('lookup'):
        try:
            qbase = sr.lookup('qbase')
        except error.RepoError:
            pass
    ui.note(_('cloning main repository\n'))
    sr, dr = hg.clone(ui, sr.url(), dest,
                      pull=opts['pull'],
                      rev=destrev,
                      update=False,
                      stream=opts['uncompressed'])
    ui.note(_('cloning patch repository\n'))
    hg.clone(ui, opts['patches'] or patchdir(sr), patchdir(dr),
             pull=opts['pull'], update=not opts['noupdate'],
             stream=opts['uncompressed'])
    if dr.local():
        if qbase:
            ui.note(_('stripping applied patches from destination '
                      'repository\n'))
            dr.mq.strip(dr, qbase, update=False, backup=None)
        if not opts['noupdate']:
            ui.note(_('updating destination repository\n'))
            hg.update(dr, dr.changelog.tip())

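The default patch-repository location described in the docstring above (`<src>/.hg/patches`) is computed by the small `patchdir` helper. The URL handling can be sketched in isolation; `patchdir_url` is a hypothetical standalone version that takes the URL string directly instead of a repo object:

```python
def patchdir_url(url):
    # Strip at most one trailing slash, then append the conventional
    # nested patch-repository path that qclone looks for.
    if url.endswith('/'):
        url = url[:-1]
    return url + '/.hg/patches'
```

This is why `-p <url>` exists: when the patches live anywhere other than this conventional path, the default lookup fails and the caller must point at them explicitly.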
def commit(ui, repo, *pats, **opts):
    """commit changes in the queue repository"""
    q = repo.mq
    r = q.qrepo()
    if not r: raise util.Abort('no queue repository')
    commands.commit(r.ui, r, *pats, **opts)

def series(ui, repo, **opts):
    """print the entire series file"""
    repo.mq.qseries(repo, missing=opts['missing'], summary=opts['summary'])
    return 0

def top(ui, repo, **opts):
    """print the name of the current patch"""
    q = repo.mq
    t = q.applied and q.series_end(True) or 0
    if t:
        return q.qseries(repo, start=t-1, length=1, status='A',
                         summary=opts.get('summary'))
    else:
        ui.write(_("no patches applied\n"))
        return 1

def next(ui, repo, **opts):
    """print the name of the next patch"""
    q = repo.mq
    end = q.series_end()
    if end == len(q.series):
        ui.write(_("all patches applied\n"))
        return 1
    return q.qseries(repo, start=end, length=1, summary=opts.get('summary'))

def prev(ui, repo, **opts):
    """print the name of the previous patch"""
    q = repo.mq
    l = len(q.applied)
    if l == 1:
        ui.write(_("only one patch applied\n"))
        return 1
    if not l:
        ui.write(_("no patches applied\n"))
        return 1
    return q.qseries(repo, start=l-2, length=1, status='A',
                     summary=opts.get('summary'))

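The qtop/qnext/qprev trio above all index into the same model: the series is an ordered list of patches, and the applied patches are a leading slice of it. A minimal sketch of that model (`stack_neighbors` is a hypothetical name; series-file guards and comments are ignored here):

```python
def stack_neighbors(series, applied):
    # series: the full ordered patch list from the series file.
    # applied: the leading slice of series currently pushed.
    # Returns (top, next, prev) patch names, None where undefined,
    # mirroring qtop/qnext/qprev in simplified form.
    top = applied[-1] if applied else None
    nxt = series[len(applied)] if len(applied) < len(series) else None
    prv = applied[-2] if len(applied) > 1 else None
    return top, nxt, prv
```

For example, with series `['a', 'b', 'c']` and `['a', 'b']` applied, the top is `b`, the next patch is `c`, and the previous one is `a`.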
def setupheaderopts(ui, opts):
    def do(opt, val):
        if not opts[opt] and opts['current' + opt]:
            opts[opt] = val
    do('user', ui.username())
    do('date', "%d %d" % util.makedate())

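The fallback logic in setupheaderopts is easy to miss: an explicit -u/-d value always wins, and the current user/date is filled in only when -U/--currentuser or -D/--currentdate was requested. A standalone sketch of that rule (`apply_header_defaults` is a hypothetical name; `username` and `now` stand in for `ui.username()` and `util.makedate()`):

```python
def apply_header_defaults(opts, username, now):
    # Fill 'user'/'date' from the current environment only when the
    # corresponding 'currentuser'/'currentdate' flag is set and no
    # explicit value was given on the command line.
    for opt, val in (('user', username), ('date', now)):
        if not opts.get(opt) and opts.get('current' + opt):
            opts[opt] = val
    return opts
```

So `qnew -U -u alice` keeps `alice`, while `qnew -U` alone picks up the configured username.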
def new(ui, repo, patch, *args, **opts):
    """create a new patch

    qnew creates a new patch on top of the currently-applied patch (if
    any). It will refuse to run if there are any outstanding changes
    unless -f/--force is specified, in which case the patch will be
    initialized with them. You may also use -I/--include,
    -X/--exclude, and/or a list of files after the patch name to add
    only changes to matching files to the new patch, leaving the rest
    as uncommitted modifications.

    -u/--user and -d/--date can be used to set the (given) user and
    date, respectively. -U/--currentuser and -D/--currentdate set user
    to current user and date to current date.

    -e/--edit, -m/--message or -l/--logfile set the patch header as
    well as the commit message. If none is specified, the header is
    empty and the commit message is '[mq]: PATCH'.

    Use the -g/--git option to keep the patch in the git extended diff
    format. Read the diffs help topic for more information on why this
    is important for preserving permission changes and copy/rename
    information.
    """
    msg = cmdutil.logmessage(opts)
    def getmsg(): return ui.edit(msg, ui.username())
    q = repo.mq
    opts['msg'] = msg
    if opts.get('edit'):
        opts['msg'] = getmsg
    else:
        opts['msg'] = msg
    setupheaderopts(ui, opts)
    q.new(repo, patch, *args, **opts)
    q.save_dirty()
    return 0

def refresh(ui, repo, *pats, **opts):
    """update the current patch

    If any file patterns are provided, the refreshed patch will
    contain only the modifications that match those patterns; the
    remaining modifications will remain in the working directory.

    If -s/--short is specified, files currently included in the patch
    will be refreshed just like matched files and remain in the patch.

    hg add/remove/copy/rename work as usual, though you might want to
    use git-style patches (-g/--git or [diff] git=1) to track copies
    and renames. See the diffs help topic for more information on the
    git diff format.
    """
    q = repo.mq
    message = cmdutil.logmessage(opts)
    if opts['edit']:
        if not q.applied:
            ui.write(_("no patches applied\n"))
            return 1
        if message:
            raise util.Abort(_('option "-e" incompatible with "-m" or "-l"'))
        patch = q.applied[-1].name
        ph = q.readheaders(patch)
        message = ui.edit('\n'.join(ph.message), ph.user or ui.username())
    setupheaderopts(ui, opts)
    ret = q.refresh(repo, pats, msg=message, **opts)
    q.save_dirty()
    return ret

def diff(ui, repo, *pats, **opts):
    """diff of the current patch and subsequent modifications

    Shows a diff which includes the current patch as well as any
    changes which have been made in the working directory since the
    last refresh (thus showing what the current patch would become
    after a qrefresh).

    Use 'hg diff' if you only want to see the changes made since the
    last qrefresh, or 'hg export qtip' if you want to see changes made
    by the current patch without including changes made since the
    qrefresh.
    """
    repo.mq.diff(repo, pats, opts)
    return 0

def fold(ui, repo, *files, **opts):
    """fold the named patches into the current patch

    Patches must not yet be applied. Each patch will be successively
    applied to the current patch in the order given. If all the
    patches apply successfully, the current patch will be refreshed
    with the new cumulative patch, and the folded patches will be
    deleted. With -k/--keep, the folded patch files will not be
    removed afterwards.

    The header for each folded patch will be concatenated with the
    current patch header, separated by a line of '* * *'."""

    q = repo.mq

    if not files:
        raise util.Abort(_('qfold requires at least one patch name'))
    if not q.check_toppatch(repo):
        raise util.Abort(_('No patches applied'))

    message = cmdutil.logmessage(opts)
    if opts['edit']:
        if message:
            raise util.Abort(_('option "-e" incompatible with "-m" or "-l"'))

    parent = q.lookup('qtip')
    patches = []
    messages = []
    for f in files:
        p = q.lookup(f)
        if p in patches or p == parent:
            ui.warn(_('Skipping already folded patch %s') % p)
        if q.isapplied(p):
            raise util.Abort(_('qfold cannot fold already applied patch %s') % p)
        patches.append(p)

    for p in patches:
        if not message:
            ph = q.readheaders(p)
            if ph.message:
                messages.append(ph.message)
        pf = q.join(p)
        (patchsuccess, files, fuzz) = q.patch(repo, pf)
        if not patchsuccess:
            raise util.Abort(_('Error folding patch %s') % p)
        patch.updatedir(ui, repo, files)

    if not message:
        ph = q.readheaders(parent)
        message, user = ph.message, ph.user
        for msg in messages:
            message.append('* * *')
            message.extend(msg)
        message = '\n'.join(message)

    if opts['edit']:
        message = ui.edit(message, user or ui.username())

    q.refresh(repo, msg=message)
    q.delete(repo, patches, opts)
    q.save_dirty()

def goto(ui, repo, patch, **opts):
    '''push or pop patches until named patch is at top of stack'''
    q = repo.mq
    patch = q.lookup(patch)
    if q.isapplied(patch):
        ret = q.pop(repo, patch, force=opts['force'])
    else:
        ret = q.push(repo, patch, force=opts['force'])
    q.save_dirty()
    return ret

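qgoto's branch above reduces to list slicing once the stack model is fixed: if the target is applied, pop everything above it; otherwise push up to and including it. A sketch of the pop half under that assumption (`pop_to` is a hypothetical name; force/guard handling is omitted):

```python
def pop_to(series, applied, target):
    # target must already be applied; everything pushed after it
    # comes off the stack. Returns (still_applied, popped).
    i = applied.index(target)
    return applied[:i + 1], applied[i + 1:]
```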
def guard(ui, repo, *args, **opts):
    '''set or print guards for a patch

    Guards control whether a patch can be pushed. A patch with no
    guards is always pushed. A patch with a positive guard ("+foo") is
    pushed only if the qselect command has activated it. A patch with
    a negative guard ("-foo") is never pushed if the qselect command
    has activated it.

    With no arguments, print the currently active guards.
    With arguments, set guards for the named patch.
    NOTE: Specifying negative guards now requires '--'.

    To set guards on another patch:
      hg qguard -- other.patch +2.6.17 -stable
    '''
    def status(idx):
        guards = q.series_guards[idx] or ['unguarded']
        ui.write('%s: %s\n' % (q.series[idx], ' '.join(guards)))
    q = repo.mq
    patch = None
    args = list(args)
    if opts['list']:
        if args or opts['none']:
            raise util.Abort(_('cannot mix -l/--list with options or arguments'))
        for i in xrange(len(q.series)):
            status(i)
        return
    if not args or args[0][0:1] in '-+':
        if not q.applied:
            raise util.Abort(_('no patches applied'))
        patch = q.applied[-1].name
    if patch is None and args[0][0:1] not in '-+':
        patch = args.pop(0)
    if patch is None:
        raise util.Abort(_('no patch to work with'))
    if args or opts['none']:
        idx = q.find_series(patch)
        if idx is None:
            raise util.Abort(_('no patch named %s') % patch)
        q.set_guards(idx, args)
        q.save_dirty()
    else:
        status(q.series.index(q.lookup(patch)))

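The guard semantics in the docstring above ("+foo" pushed only if active, "-foo" never pushed if active) can be stated as a small predicate. This is a simplified model, not mq's actual `pushable()` implementation; `pushable` takes a patch's guard strings and the set of guard names activated by qselect:

```python
def pushable(patch_guards, active):
    # No guards: always pushable. An active negative guard blocks the
    # push outright. Otherwise, if any positive guards exist, at least
    # one of them must be active.
    if not patch_guards:
        return True
    if any(g[1:] in active for g in patch_guards if g.startswith('-')):
        return False
    pos = [g[1:] for g in patch_guards if g.startswith('+')]
    return not pos or any(g in active for g in pos)
```

So a patch guarded `+2.6.17 -stable` is pushed when `2.6.17` is selected, but skipped whenever `stable` is, even if `2.6.17` is also selected.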
def header(ui, repo, patch=None):
    """print the header of the topmost or specified patch"""
    q = repo.mq

    if patch:
        patch = q.lookup(patch)
    else:
        if not q.applied:
            ui.write('no patches applied\n')
            return 1
        patch = q.lookup('qtip')
    ph = repo.mq.readheaders(patch)

    ui.write('\n'.join(ph.message) + '\n')

def lastsavename(path):
    (directory, base) = os.path.split(path)
    names = os.listdir(directory)
    namere = re.compile("%s.([0-9]+)" % base)
    maxindex = None
    maxname = None
    for f in names:
        m = namere.match(f)
        if m:
            index = int(m.group(1))
            if maxindex is None or index > maxindex:
                maxindex = index
                maxname = f
    if maxname:
        return (os.path.join(directory, maxname), maxindex)
    return (None, None)

def savename(path):
    (last, index) = lastsavename(path)
    if last is None:
        index = 0
    newpath = path + ".%d" % (index + 1)
    return newpath

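Together, lastsavename and savename implement a simple numbered-backup scheme: scan for existing `<base>.<N>` siblings and pick N+1. A filesystem-free sketch of that scheme (`next_savename` is a hypothetical name; `existing` stands in for the `os.listdir` result):

```python
import re

def next_savename(path, existing):
    # Find the highest numeric suffix among names like "<base>.<N>"
    # and return path with the next number appended; starts at .1
    # when no saved copy exists yet.
    base = path.rsplit('/', 1)[-1]
    namere = re.compile(r"%s\.([0-9]+)$" % re.escape(base))
    indexes = [int(m.group(1)) for m in map(namere.match, existing) if m]
    return "%s.%d" % (path, (max(indexes) if indexes else 0) + 1)
```

Note this sketch anchors the regex and escapes the base name; the original's unanchored `"%s.([0-9]+)"` pattern is looser, treating the dot as a wildcard.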
def push(ui, repo, patch=None, **opts):
    """push the next patch onto the stack

    When -f/--force is applied, all local changes in patched files
    will be lost.
    """
    q = repo.mq
    mergeq = None

    if opts['merge']:
        if opts['name']:
            newpath = repo.join(opts['name'])
        else:
            newpath, i = lastsavename(q.path)
        if not newpath:
            ui.warn(_("no saved queues found, please use -n\n"))
            return 1
        mergeq = queue(ui, repo.join(""), newpath)
        ui.warn(_("merging with queue at: %s\n") % mergeq.path)
    ret = q.push(repo, patch, force=opts['force'], list=opts['list'],
                 mergeq=mergeq, all=opts.get('all'))
    return ret

def pop(ui, repo, patch=None, **opts):
    """pop the current patch off the stack

    By default, pops off the top of the patch stack. If given a patch
    name, keeps popping off patches until the named patch is at the
    top of the stack.
    """
    localupdate = True
    if opts['name']:
        q = queue(ui, repo.join(""), repo.join(opts['name']))
        ui.warn(_('using patch queue: %s\n') % q.path)
        localupdate = False
    else:
        q = repo.mq
    ret = q.pop(repo, patch, force=opts['force'], update=localupdate,
                all=opts['all'])
    q.save_dirty()
    return ret

def rename(ui, repo, patch, name=None, **opts):
    """rename a patch

    With one argument, renames the current patch to PATCH1.
    With two arguments, renames PATCH1 to PATCH2."""

    q = repo.mq

    if not name:
        name = patch
        patch = None

    if patch:
        patch = q.lookup(patch)
    else:
        if not q.applied:
            ui.write(_('no patches applied\n'))
            return
        patch = q.lookup('qtip')
    absdest = q.join(name)
    if os.path.isdir(absdest):
        name = normname(os.path.join(name, os.path.basename(patch)))
        absdest = q.join(name)
    if os.path.exists(absdest):
        raise util.Abort(_('%s already exists') % absdest)

    if name in q.series:
        raise util.Abort(_('A patch named %s already exists in the series file') % name)

    if ui.verbose:
        ui.write('renaming %s to %s\n' % (patch, name))
    i = q.find_series(patch)
    guards = q.guard_re.findall(q.full_series[i])
    q.full_series[i] = name + ''.join([' #' + g for g in guards])
    q.parse_series()
    q.series_dirty = 1

    info = q.isapplied(patch)
    if info:
        q.applied[info[0]] = statusentry(info[1], name)
        q.applied_dirty = 1

    util.rename(q.join(patch), absdest)
    r = q.qrepo()
    if r:
        wlock = r.wlock()
        try:
            if r.dirstate[patch] == 'a':
                r.dirstate.forget(patch)
                r.dirstate.add(name)
            else:
                if r.dirstate[name] == 'r':
                    r.undelete([name])
                r.copy(patch, name)
                r.remove([patch], False)
        finally:
            wlock.release()

    q.save_dirty()

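The series-file rewrite in rename preserves guards by extracting them with a regex and re-appending them after the new name. A standalone sketch of that step (`rename_series_line` is a hypothetical name, and the pattern approximates mq's `guard_re`):

```python
import re

# Approximation of mq's guard pattern: optional space, '#', then a
# '+' or '-' followed by a guard name with no whitespace or '#'.
guard_re = re.compile(r'\s?#([-+][^-+# \t\r\n\f][^# \t\r\n\f]*)')

def rename_series_line(line, newname):
    # Rewrite one series-file entry to the new patch name while
    # carrying over every guard annotation unchanged.
    guards = guard_re.findall(line)
    return newname + ''.join(' #' + g for g in guards)
```

This is why a renamed patch keeps behaving the same under qselect: the guards travel with the series entry, not with the patch file.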
def restore(ui, repo, rev, **opts):
    """restore the queue state saved by a revision"""
    rev = repo.lookup(rev)
    q = repo.mq
    q.restore(repo, rev, delete=opts['delete'],
              qupdate=opts['update'])
    q.save_dirty()
    return 0

def save(ui, repo, **opts):
    """save current queue state"""
    q = repo.mq
    message = cmdutil.logmessage(opts)
    ret = q.save(repo, msg=message)
    if ret:
        return ret
    q.save_dirty()
    if opts['copy']:
        path = q.path
        if opts['name']:
            newpath = os.path.join(q.basepath, opts['name'])
            if os.path.exists(newpath):
                if not os.path.isdir(newpath):
                    raise util.Abort(_('destination %s exists and is not '
                                       'a directory') % newpath)
                if not opts['force']:
                    raise util.Abort(_('destination %s exists, '
                                       'use -f to force') % newpath)
        else:
            newpath = savename(path)
        ui.warn(_("copy %s to %s\n") % (path, newpath))
        util.copyfiles(path, newpath)
    if opts['empty']:
        try:
            os.unlink(q.join(q.status_path))
        except:
            pass
    return 0

def strip(ui, repo, rev, **opts):
    """strip a revision and all its descendants from the repository

    If one of the working directory's parent revisions is stripped, the
    working directory will be updated to the parent of the stripped
    revision.
    """
    backup = 'all'
    if opts['backup']:
        backup = 'strip'
    elif opts['nobackup']:
        backup = 'none'

    rev = repo.lookup(rev)
    p = repo.dirstate.parents()
    cl = repo.changelog
    update = True
    if p[0] == nullid:
        update = False
    elif p[1] == nullid and rev != cl.ancestor(p[0], rev):
        update = False
    elif rev not in (cl.ancestor(p[0], rev), cl.ancestor(p[1], rev)):
        update = False

    repo.mq.strip(repo, rev, backup=backup, update=update, force=opts['force'])
    return 0

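The `update` checks in `strip` above decide whether the working directory must be moved: it needs updating only when the stripped revision is an ancestor of one of the working directory's parents (which is exactly when `cl.ancestor(p, rev) == rev`). A toy standalone sketch of that rule, using a hypothetical `parents` map in place of Mercurial's changelog:

```python
# Toy illustration of the update decision in strip(); `ancestors` and
# `needs_update` are hypothetical helpers, not part of mq itself.
def ancestors(parents, node):
    """Return node plus all its ancestors in a {node: [parent, ...]} map."""
    seen = set()
    stack = [node]
    while stack:
        n = stack.pop()
        if n is not None and n not in seen:
            seen.add(n)
            stack.extend(parents.get(n, ()))
    return seen

def needs_update(parents, wd_parents, rev):
    """True if stripping rev removes a working-directory parent,
    i.e. rev is an ancestor of (or equal to) a non-null parent."""
    return any(p is not None and rev in ancestors(parents, p)
               for p in wd_parents)
```

For example, with a linear history a -> b -> c and the working directory at c, stripping a (and thus b and c) forces an update, while stripping an unrelated node does not.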
def select(ui, repo, *args, **opts):
    '''set or print guarded patches to push

    Use the qguard command to set or print guards on patch, then use
    qselect to tell mq which guards to use. A patch will be pushed if
    it has no guards or any positive guards match the currently
    selected guard, but will not be pushed if any negative guards
    match the current guard. For example:

        qguard foo.patch -stable    (negative guard)
        qguard bar.patch +stable    (positive guard)
        qselect stable

    This activates the "stable" guard. mq will skip foo.patch (because
    it has a negative match) but push bar.patch (because it has a
    positive match).

    With no arguments, prints the currently active guards.
    With one argument, sets the active guard.

    Use -n/--none to deactivate guards (no other arguments needed).
    When no guards are active, patches with positive guards are
    skipped and patches with negative guards are pushed.

    qselect can change the guards on applied patches. It does not pop
    guarded patches by default. Use --pop to pop back to the last
    applied patch that is not guarded. Use --reapply (which implies
    --pop) to push back to the current patch afterwards, but skip
    guarded patches.

    Use -s/--series to print a list of all guards in the series file
    (no other arguments needed). Use -v for more information.'''

    q = repo.mq
    guards = q.active()
    if args or opts['none']:
        old_unapplied = q.unapplied(repo)
        old_guarded = [i for i in xrange(len(q.applied)) if
                       not q.pushable(i)[0]]
        q.set_active(args)
        q.save_dirty()
        if not args:
            ui.status(_('guards deactivated\n'))
        if not opts['pop'] and not opts['reapply']:
            unapplied = q.unapplied(repo)
            guarded = [i for i in xrange(len(q.applied))
                       if not q.pushable(i)[0]]
            if len(unapplied) != len(old_unapplied):
                ui.status(_('number of unguarded, unapplied patches has '
                            'changed from %d to %d\n') %
                          (len(old_unapplied), len(unapplied)))
            if len(guarded) != len(old_guarded):
                ui.status(_('number of guarded, applied patches has changed '
                            'from %d to %d\n') %
                          (len(old_guarded), len(guarded)))
    elif opts['series']:
        guards = {}
        noguards = 0
        for gs in q.series_guards:
            if not gs:
                noguards += 1
            for g in gs:
                guards.setdefault(g, 0)
                guards[g] += 1
        if ui.verbose:
            guards['NONE'] = noguards
        guards = guards.items()
        guards.sort(lambda a, b: cmp(a[0][1:], b[0][1:]))
        if guards:
            ui.note(_('guards in series file:\n'))
            for guard, count in guards:
                ui.note('%2d ' % count)
                ui.write(guard, '\n')
        else:
            ui.note(_('no guards in series file\n'))
    else:
        if guards:
            ui.note(_('active guards:\n'))
            for g in guards:
                ui.write(g, '\n')
        else:
            ui.write(_('no active guards\n'))
        reapply = opts['reapply'] and q.applied and q.appliedname(-1)
        popped = False
        if opts['pop'] or opts['reapply']:
            for i in xrange(len(q.applied)):
                pushable, reason = q.pushable(i)
                if not pushable:
                    ui.status(_('popping guarded patches\n'))
                    popped = True
                    if i == 0:
                        q.pop(repo, all=True)
                    else:
                        q.pop(repo, i-1)
                    break
        if popped:
            try:
                if reapply:
                    ui.status(_('reapplying unguarded patches\n'))
                    q.push(repo, reapply)
            finally:
                q.save_dirty()

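The guard-matching rule that `select` relies on (via `q.pushable`) is spelled out in the docstring above: an unguarded patch always pushes, a matching negative guard always blocks, and a patch with positive guards pushes only if one of them matches an active guard. A minimal standalone sketch of that rule; `pushable` here is a hypothetical stand-in, not mq's own implementation:

```python
# Hedged sketch of the qselect guard rule; guard strings look like
# '+stable' (positive) or '-stable' (negative), as in the docstring.
def pushable(patch_guards, active):
    """Return True if a patch carrying patch_guards may be pushed
    while the guard names in `active` are selected."""
    if not patch_guards:
        return True  # unguarded patches always push
    positives = [g[1:] for g in patch_guards if g.startswith('+')]
    negatives = [g[1:] for g in patch_guards if g.startswith('-')]
    # any matching negative guard blocks the push
    if any(g in active for g in negatives):
        return False
    # positively guarded patches need at least one matching active guard;
    # with no active guards they are skipped
    if positives:
        return any(g in active for g in positives)
    return True
```

With `qselect stable` active this skips a `-stable` patch and pushes a `+stable` one, matching the foo.patch/bar.patch example in the docstring.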
def finish(ui, repo, *revrange, **opts):
    """move applied patches into repository history

    Finishes the specified revisions (corresponding to applied
    patches) by moving them out of mq control into regular repository
    history.

    Accepts a revision range or the -a/--applied option. If --applied
    is specified, all applied mq revisions are removed from mq
    control. Otherwise, the given revisions must be at the base of the
    stack of applied patches.

    This can be especially useful if your changes have been applied to
    an upstream repository, or if you are about to push your changes
    to upstream.
    """
    if not opts['applied'] and not revrange:
        raise util.Abort(_('no revisions specified'))
    elif opts['applied']:
        revrange = ('qbase:qtip',) + revrange

    q = repo.mq
    if not q.applied:
        ui.status(_('no patches applied\n'))
        return 0

    revs = cmdutil.revrange(repo, revrange)
    q.finish(repo, revs)
    q.save_dirty()
    return 0

def reposetup(ui, repo):
    class mqrepo(repo.__class__):
        def abort_if_wdir_patched(self, errmsg, force=False):
            if self.mq.applied and not force:
                parent = hex(self.dirstate.parents()[0])
                if parent in [s.rev for s in self.mq.applied]:
                    raise util.Abort(errmsg)

        def commit(self, *args, **opts):
            if len(args) >= 6:
                force = args[5]
            else:
                force = opts.get('force')
            self.abort_if_wdir_patched(
                _('cannot commit over an applied mq patch'),
                force)

            return super(mqrepo, self).commit(*args, **opts)

        def push(self, remote, force=False, revs=None):
            if self.mq.applied and not force and not revs:
                raise util.Abort(_('source has mq patches applied'))
            return super(mqrepo, self).push(remote, force, revs)

        def tags(self):
            if self.tagscache:
                return self.tagscache

            tagscache = super(mqrepo, self).tags()

            q = self.mq
            if not q.applied:
                return tagscache

            mqtags = [(bin(patch.rev), patch.name) for patch in q.applied]

            if mqtags[-1][0] not in self.changelog.nodemap:
                self.ui.warn(_('mq status file refers to unknown node %s\n')
                             % short(mqtags[-1][0]))
                return tagscache

            mqtags.append((mqtags[-1][0], 'qtip'))
            mqtags.append((mqtags[0][0], 'qbase'))
            mqtags.append((self.changelog.parents(mqtags[0][0])[0], 'qparent'))
            for patch in mqtags:
                if patch[1] in tagscache:
                    self.ui.warn(_('Tag %s overrides mq patch of the same name\n')
                                 % patch[1])
                else:
                    tagscache[patch[1]] = patch[0]

            return tagscache

        def _branchtags(self, partial, lrev):
            q = self.mq
            if not q.applied:
                return super(mqrepo, self)._branchtags(partial, lrev)

            cl = self.changelog
            qbasenode = bin(q.applied[0].rev)
            if qbasenode not in cl.nodemap:
                self.ui.warn(_('mq status file refers to unknown node %s\n')
                             % short(qbasenode))
                return super(mqrepo, self)._branchtags(partial, lrev)

            qbase = cl.rev(qbasenode)
            start = lrev + 1
            if start < qbase:
                # update the cache (excluding the patches) and save it
                self._updatebranchcache(partial, lrev+1, qbase)
                self._writebranchcache(partial, cl.node(qbase-1), qbase-1)
                start = qbase
            # if start = qbase, the cache is as updated as it should be.
            # if start > qbase, the cache includes (part of) the patches.
            # we might as well use it, but we won't save it.

            # update the cache up to the tip
            self._updatebranchcache(partial, start, len(cl))

            return partial

    if repo.local():
        repo.__class__ = mqrepo
        repo.mq = queue(ui, repo.join(""))

def mqimport(orig, ui, repo, *args, **kwargs):
    if hasattr(repo, 'abort_if_wdir_patched'):
        repo.abort_if_wdir_patched(_('cannot import over an applied patch'),
                                   kwargs.get('force'))
    return orig(ui, repo, *args, **kwargs)

def uisetup(ui):
    extensions.wrapcommand(commands.table, 'import', mqimport)

seriesopts = [('s', 'summary', None, _('print first line of patch header'))]

cmdtable = {
    "qapplied": (applied, [] + seriesopts, _('hg qapplied [-s] [PATCH]')),
    "qclone":
        (clone,
         [('', 'pull', None, _('use pull protocol to copy metadata')),
          ('U', 'noupdate', None, _('do not update the new working directories')),
          ('', 'uncompressed', None,
           _('use uncompressed transfer (fast over LAN)')),
          ('p', 'patches', '', _('location of source patch repository')),
         ] + commands.remoteopts,
         _('hg qclone [OPTION]... SOURCE [DEST]')),
    "qcommit|qci":
        (commit,
         commands.table["^commit|ci"][1],
         _('hg qcommit [OPTION]... [FILE]...')),
    "^qdiff":
        (diff,
         commands.diffopts + commands.diffopts2 + commands.walkopts,
         _('hg qdiff [OPTION]... [FILE]...')),
    "qdelete|qremove|qrm":
        (delete,
         [('k', 'keep', None, _('keep patch file')),
          ('r', 'rev', [], _('stop managing a revision'))],
         _('hg qdelete [-k] [-r REV]... [PATCH]...')),
    'qfold':
        (fold,
         [('e', 'edit', None, _('edit patch header')),
          ('k', 'keep', None, _('keep folded patch files')),
         ] + commands.commitopts,
         _('hg qfold [-e] [-k] [-m TEXT] [-l FILE] PATCH...')),
    'qgoto':
        (goto,
         [('f', 'force', None, _('overwrite any local changes'))],
         _('hg qgoto [OPTION]... PATCH')),
    'qguard':
        (guard,
         [('l', 'list', None, _('list all patches and guards')),
          ('n', 'none', None, _('drop all guards'))],
         _('hg qguard [-l] [-n] -- [PATCH] [+GUARD]... [-GUARD]...')),
    'qheader': (header, [], _('hg qheader [PATCH]')),
    "^qimport":
        (qimport,
         [('e', 'existing', None, _('import file in patch directory')),
          ('n', 'name', '', _('patch file name')),
          ('f', 'force', None, _('overwrite existing files')),
          ('r', 'rev', [], _('place existing revisions under mq control')),
          ('g', 'git', None, _('use git extended diff format'))],
         _('hg qimport [-e] [-n NAME] [-f] [-g] [-r REV]... FILE...')),
    "^qinit":
        (init,
         [('c', 'create-repo', None, _('create queue repository'))],
         _('hg qinit [-c]')),
    "qnew":
        (new,
         [('e', 'edit', None, _('edit commit message')),
          ('f', 'force', None, _('import uncommitted changes into patch')),
          ('g', 'git', None, _('use git extended diff format')),
          ('U', 'currentuser', None, _('add "From: <current user>" to patch')),
          ('u', 'user', '', _('add "From: <given user>" to patch')),
          ('D', 'currentdate', None, _('add "Date: <current date>" to patch')),
          ('d', 'date', '', _('add "Date: <given date>" to patch'))
         ] + commands.walkopts + commands.commitopts,
         _('hg qnew [-e] [-m TEXT] [-l FILE] [-f] PATCH [FILE]...')),
    "qnext": (next, [] + seriesopts, _('hg qnext [-s]')),
    "qprev": (prev, [] + seriesopts, _('hg qprev [-s]')),
    "^qpop":
        (pop,
         [('a', 'all', None, _('pop all patches')),
          ('n', 'name', '', _('queue name to pop')),
          ('f', 'force', None, _('forget any local changes'))],
         _('hg qpop [-a] [-n NAME] [-f] [PATCH | INDEX]')),
    "^qpush":
        (push,
         [('f', 'force', None, _('apply if the patch has rejects')),
          ('l', 'list', None, _('list patch name in commit text')),
          ('a', 'all', None, _('apply all patches')),
          ('m', 'merge', None, _('merge from another queue')),
          ('n', 'name', '', _('merge queue name'))],
         _('hg qpush [-f] [-l] [-a] [-m] [-n NAME] [PATCH | INDEX]')),
    "^qrefresh":
        (refresh,
         [('e', 'edit', None, _('edit commit message')),
          ('g', 'git', None, _('use git extended diff format')),
          ('s', 'short', None, _('refresh only files already in the patch and specified files')),
          ('U', 'currentuser', None, _('add/update "From: <current user>" in patch')),
          ('u', 'user', '', _('add/update "From: <given user>" in patch')),
          ('D', 'currentdate', None, _('update "Date: <current date>" in patch (if present)')),
          ('d', 'date', '', _('update "Date: <given date>" in patch (if present)'))
         ] + commands.walkopts + commands.commitopts,
         _('hg qrefresh [-I] [-X] [-e] [-m TEXT] [-l FILE] [-s] [FILE]...')),
    'qrename|qmv':
        (rename, [], _('hg qrename PATCH1 [PATCH2]')),
    "qrestore":
        (restore,
         [('d', 'delete', None, _('delete save entry')),
          ('u', 'update', None, _('update queue working directory'))],
         _('hg qrestore [-d] [-u] REV')),
    "qsave":
        (save,
         [('c', 'copy', None, _('copy patch directory')),
          ('n', 'name', '', _('copy directory name')),
          ('e', 'empty', None, _('clear queue status file')),
          ('f', 'force', None, _('force copy'))] + commands.commitopts,
         _('hg qsave [-m TEXT] [-l FILE] [-c] [-n NAME] [-e] [-f]')),
    "qselect":
        (select,
         [('n', 'none', None, _('disable all guards')),
          ('s', 'series', None, _('list all guards in series file')),
          ('', 'pop', None, _('pop to before first guarded applied patch')),
          ('', 'reapply', None, _('pop, then reapply patches'))],
         _('hg qselect [OPTION]... [GUARD]...')),
    "qseries":
        (series,
         [('m', 'missing', None, _('print patches not in series')),
         ] + seriesopts,
         _('hg qseries [-ms]')),
    "^strip":
        (strip,
         [('f', 'force', None, _('force removal with local changes')),
          ('b', 'backup', None, _('bundle unrelated changesets')),
          ('n', 'nobackup', None, _('no backups'))],
         _('hg strip [-f] [-b] [-n] REV')),
    "qtop": (top, [] + seriesopts, _('hg qtop [-s]')),
    "qunapplied": (unapplied, [] + seriesopts, _('hg qunapplied [-s] [PATCH]')),
    "qfinish":
        (finish,
         [('a', 'applied', None, _('finish all applied changesets'))],
         _('hg qfinish [-a] [REV...]')),
}
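Each `cmdtable` value above is a `(function, options, synopsis)` tuple, where an option is a `(shortname, longname, default, help)` tuple and the key may carry `|`-separated aliases plus a leading `^` marking a frequently-used command. A minimal, hypothetical sketch of how such a table can be looked up by name or alias (this helper is illustrative only, not Mercurial's dispatcher):

```python
# Hypothetical lookup over an mq-style command table; keys like
# "^qtop|qt" carry a '^' marker and '|'-separated aliases.
def find_command(table, name):
    for key, entry in table.items():
        aliases = key.lstrip('^').split('|')
        if name in aliases:
            return entry  # (function, options, synopsis)
    raise KeyError(name)

def qtop_stub(ui, repo, **opts):
    # stand-in command function; real entries point at functions
    # taking (ui, repo, ...) and returning an exit status
    return 0

demo_table = {
    "qtop|qt": (qtop_stub,
                [('s', 'summary', None, 'print first line of patch header')],
                'hg qtop [-s]'),
}

func, options, synopsis = find_command(demo_table, 'qt')
```

Looking up either `qtop` or the `qt` alias returns the same entry; the options list feeds the command-line parser and the synopsis string feeds help output.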
@@ -1,110 +1,110 b''
# Copyright (C) 2006 - Marco Barisione <marco@barisione.org>
#
# This is a small extension for Mercurial (http://www.selenic.com/mercurial)
# that removes files not known to Mercurial
#
# This program was inspired by the "cvspurge" script contained in CVS utilities
# (http://www.red-bean.com/cvsutils/).
#
# To enable the "purge" extension put these lines in your ~/.hgrc:
#   [extensions]
#   hgext.purge =
#
# For help on the usage of "hg purge" use:
#   hg help purge
#
# This program is free software; you can redistribute it and/or modify
# it under the terms of the GNU General Public License as published by
# the Free Software Foundation; either version 2 of the License, or
# (at your option) any later version.
#
# This program is distributed in the hope that it will be useful,
# but WITHOUT ANY WARRANTY; without even the implied warranty of
# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
# GNU General Public License for more details.
#
# You should have received a copy of the GNU General Public License
# along with this program; if not, write to the Free Software
# Foundation, Inc., 59 Temple Place - Suite 330, Boston, MA 02111-1307, USA.

from mercurial import util, commands, cmdutil
from mercurial.i18n import _
import os, stat

def purge(ui, repo, *dirs, **opts):
    '''removes files not tracked by Mercurial

    Delete files not known to Mercurial. This is useful to test local
    and uncommitted changes in an otherwise-clean source tree.

    This means that purge will delete:
    - Unknown files: files marked with "?" by "hg status"
    - Empty directories: in fact Mercurial ignores directories unless
      they contain files under source control management
    But it will leave untouched:
    - Modified and unmodified tracked files
    - Ignored files (unless --all is specified)
    - New files added to the repository (with "hg add")

    If directories are given on the command line, only files in these
50 directories are considered.
50 directories are considered.
51
51
52 Be careful with purge, as you could irreversibly delete some files
52 Be careful with purge, as you could irreversibly delete some files
53 you forgot to add to the repository. If you only want to print the
53 you forgot to add to the repository. If you only want to print the
54 list of files that this program would delete, use the --print
54 list of files that this program would delete, use the --print
55 option.
55 option.
56 '''
56 '''
57 act = not opts['print']
57 act = not opts['print']
58 eol = '\n'
58 eol = '\n'
59 if opts['print0']:
59 if opts['print0']:
60 eol = '\0'
60 eol = '\0'
61 act = False # --print0 implies --print
61 act = False # --print0 implies --print
62
62
63 def remove(remove_func, name):
63 def remove(remove_func, name):
64 if act:
64 if act:
65 try:
65 try:
66 remove_func(repo.wjoin(name))
66 remove_func(repo.wjoin(name))
67 except OSError:
67 except OSError:
68 m = _('%s cannot be removed') % name
68 m = _('%s cannot be removed') % name
69 if opts['abort_on_err']:
69 if opts['abort_on_err']:
70 raise util.Abort(m)
70 raise util.Abort(m)
71 ui.warn(_('warning: %s\n') % m)
71 ui.warn(_('warning: %s\n') % m)
72 else:
72 else:
73 ui.write('%s%s' % (name, eol))
73 ui.write('%s%s' % (name, eol))
74
74
75 def removefile(path):
75 def removefile(path):
76 try:
76 try:
77 os.remove(path)
77 os.remove(path)
78 except OSError:
78 except OSError:
79 # read-only files cannot be unlinked under Windows
79 # read-only files cannot be unlinked under Windows
80 s = os.stat(path)
80 s = os.stat(path)
81 if (s.st_mode & stat.S_IWRITE) != 0:
81 if (s.st_mode & stat.S_IWRITE) != 0:
82 raise
82 raise
83 os.chmod(path, stat.S_IMODE(s.st_mode) | stat.S_IWRITE)
83 os.chmod(path, stat.S_IMODE(s.st_mode) | stat.S_IWRITE)
84 os.remove(path)
84 os.remove(path)
85
85
86 directories = []
86 directories = []
87 match = cmdutil.match(repo, dirs, opts)
87 match = cmdutil.match(repo, dirs, opts)
88 match.dir = directories.append
88 match.dir = directories.append
89 status = repo.status(match=match, ignored=opts['all'], unknown=True)
89 status = repo.status(match=match, ignored=opts['all'], unknown=True)
90
90
91 for f in util.sort(status[4] + status[5]):
91 for f in sorted(status[4] + status[5]):
92 ui.note(_('Removing file %s\n') % f)
92 ui.note(_('Removing file %s\n') % f)
93 remove(removefile, f)
93 remove(removefile, f)
94
94
95 for f in util.sort(directories)[::-1]:
95 for f in sorted(directories, reverse=True):
96 if match(f) and not os.listdir(repo.wjoin(f)):
96 if match(f) and not os.listdir(repo.wjoin(f)):
97 ui.note(_('Removing directory %s\n') % f)
97 ui.note(_('Removing directory %s\n') % f)
98 remove(os.rmdir, f)
98 remove(os.rmdir, f)
99
99
100 cmdtable = {
100 cmdtable = {
101 'purge|clean':
101 'purge|clean':
102 (purge,
102 (purge,
103 [('a', 'abort-on-err', None, _('abort if an error occurs')),
103 [('a', 'abort-on-err', None, _('abort if an error occurs')),
104 ('', 'all', None, _('purge ignored files too')),
104 ('', 'all', None, _('purge ignored files too')),
105 ('p', 'print', None, _('print the file names instead of deleting them')),
105 ('p', 'print', None, _('print the file names instead of deleting them')),
106 ('0', 'print0', None, _('end filenames with NUL, for use with xargs'
106 ('0', 'print0', None, _('end filenames with NUL, for use with xargs'
107 ' (implies -p/--print)')),
107 ' (implies -p/--print)')),
108 ] + commands.walkopts,
108 ] + commands.walkopts,
109 _('hg purge [OPTION]... [DIR]...'))
109 _('hg purge [OPTION]... [DIR]...'))
110 }
110 }
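The two purge hunks above are the substance of this changeset: Mercurial's `util.sort(seq)` helper built and returned a new sorted list, which the `sorted()` built-in now provides directly, and `sorted(seq, reverse=True)` subsumes the old `util.sort(directories)[::-1]` idiom. A minimal sketch of the pattern, using made-up file and directory names in place of the real "hg status" results:

```python
# Hypothetical stand-ins for status[4] + status[5] and the collected
# directories in purge() above.
files = ["b.txt", "a.txt", "sub/c.txt"]
dirs = ["sub", "sub/deep", "other"]

# util.sort(seq) returned a new ascending list; sorted() does the same
# without the helper, for any iterable.
assert sorted(files) == ["a.txt", "b.txt", "sub/c.txt"]

# util.sort(directories)[::-1] sorted and then copied the list in reverse;
# sorted(..., reverse=True) does both in one call, so child directories
# come before their parents when removing empty directories bottom-up.
assert sorted(dirs, reverse=True) == ["sub/deep", "sub", "other"]
```

Besides reading more clearly, `reverse=True` avoids the extra list copy that the `[::-1]` slice made.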
@@ -1,472 +1,472 @@
# rebase.py - rebasing feature for mercurial
#
# Copyright 2008 Stefano Tortarolo <stefano.tortarolo at gmail dot com>
#
# This software may be used and distributed according to the terms
# of the GNU General Public License, incorporated herein by reference.

'''move sets of revisions to a different ancestor

This extension lets you rebase changesets in an existing Mercurial
repository.

For more information:
http://www.selenic.com/mercurial/wiki/index.cgi/RebaseProject
'''

from mercurial import util, repair, merge, cmdutil, commands, error
from mercurial import extensions, ancestor, copies, patch
from mercurial.commands import templateopts
from mercurial.node import nullrev
from mercurial.lock import release
from mercurial.i18n import _
import os, errno

def rebasemerge(repo, rev, first=False):
    'return the correct ancestor'
    oldancestor = ancestor.ancestor

    def newancestor(a, b, pfunc):
        ancestor.ancestor = oldancestor
        if b == rev:
            return repo[rev].parents()[0].rev()
        return ancestor.ancestor(a, b, pfunc)

    if not first:
        ancestor.ancestor = newancestor
    else:
        repo.ui.debug(_("first revision, do not change ancestor\n"))
    stats = merge.update(repo, rev, True, True, False)
    return stats

def rebase(ui, repo, **opts):
    """move changeset (and descendants) to a different branch

    Rebase uses repeated merging to graft changesets from one part of
    history onto another. This can be useful for linearizing local
    changes relative to a master development tree.

    If a rebase is interrupted to manually resolve a merge, it can be
    continued with --continue/-c or aborted with --abort/-a.
    """
    originalwd = target = None
    external = nullrev
    state = skipped = {}

    lock = wlock = None
    try:
        lock = repo.lock()
        wlock = repo.wlock()

        # Validate input and define rebasing points
        destf = opts.get('dest', None)
        srcf = opts.get('source', None)
        basef = opts.get('base', None)
        contf = opts.get('continue')
        abortf = opts.get('abort')
        collapsef = opts.get('collapse', False)
        extrafn = opts.get('extrafn')
        keepf = opts.get('keep', False)
        keepbranchesf = opts.get('keepbranches', False)

        if contf or abortf:
            if contf and abortf:
                raise error.ParseError('rebase',
                                       _('cannot use both abort and continue'))
            if collapsef:
                raise error.ParseError(
                    'rebase', _('cannot use collapse with continue or abort'))

            if srcf or basef or destf:
                raise error.ParseError('rebase',
                    _('abort and continue do not allow specifying revisions'))

            (originalwd, target, state, collapsef, keepf,
                            keepbranchesf, external) = restorestatus(repo)
            if abortf:
                abort(repo, originalwd, target, state)
                return
        else:
            if srcf and basef:
                raise error.ParseError('rebase', _('cannot specify both a '
                                                   'revision and a base'))
            cmdutil.bail_if_changed(repo)
            result = buildstate(repo, destf, srcf, basef, collapsef)
            if result:
                originalwd, target, state, external = result
            else: # Empty state built, nothing to rebase
                repo.ui.status(_('nothing to rebase\n'))
                return

        if keepbranchesf:
            if extrafn:
                raise error.ParseError(
                    'rebase', _('cannot use both keepbranches and extrafn'))
            def extrafn(ctx, extra):
                extra['branch'] = ctx.branch()

        # Rebase
        targetancestors = list(repo.changelog.ancestors(target))
        targetancestors.append(target)

-        for rev in util.sort(state):
+        for rev in sorted(state):
            if state[rev] == -1:
                storestatus(repo, originalwd, target, state, collapsef, keepf,
                            keepbranchesf, external)
                rebasenode(repo, rev, target, state, skipped, targetancestors,
                           collapsef, extrafn)
        ui.note(_('rebase merging completed\n'))

        if collapsef:
            p1, p2 = defineparents(repo, min(state), target,
                                   state, targetancestors)
            concludenode(repo, rev, p1, external, state, collapsef,
                         last=True, skipped=skipped, extrafn=extrafn)

        if 'qtip' in repo.tags():
            updatemq(repo, state, skipped, **opts)

        if not keepf:
            # Remove revisions that are no longer useful
            if set(repo.changelog.descendants(min(state))) - set(state):
                ui.warn(_("warning: new changesets detected on source branch, "
                          "not stripping\n"))
            else:
                repair.strip(repo.ui, repo, repo[min(state)].node(), "strip")

        clearstatus(repo)
        ui.status(_("rebase completed\n"))
        if os.path.exists(repo.sjoin('undo')):
            util.unlink(repo.sjoin('undo'))
        if skipped:
            ui.note(_("%d revisions have been skipped\n") % len(skipped))
    finally:
        release(lock, wlock)

def concludenode(repo, rev, p1, p2, state, collapse, last=False, skipped={},
                 extrafn=None):
    """Skip commit if collapsing has been required and rev is not the last
    revision, commit otherwise
    """
    repo.ui.debug(_(" set parents\n"))
    if collapse and not last:
        repo.dirstate.setparents(repo[p1].node())
        return None

    repo.dirstate.setparents(repo[p1].node(), repo[p2].node())

    # Commit, record the old nodeid
    m, a, r = repo.status()[:3]
    newrev = nullrev
    try:
        if last:
            commitmsg = 'Collapsed revision'
            for rebased in state:
                if rebased not in skipped:
                    commitmsg += '\n* %s' % repo[rebased].description()
            commitmsg = repo.ui.edit(commitmsg, repo.ui.username())
        else:
            commitmsg = repo[rev].description()
        # Commit might fail if unresolved files exist
        extra = {'rebase_source': repo[rev].hex()}
        if extrafn:
            extrafn(repo[rev], extra)
        newrev = repo.commit(m+a+r,
                             text=commitmsg,
                             user=repo[rev].user(),
                             date=repo[rev].date(),
                             extra=extra)
        return newrev
    except util.Abort:
        # Invalidate the previous setparents
        repo.dirstate.invalidate()
        raise

def rebasenode(repo, rev, target, state, skipped, targetancestors, collapse,
               extrafn):
    'Rebase a single revision'
    repo.ui.debug(_("rebasing %d:%s\n") % (rev, repo[rev]))

    p1, p2 = defineparents(repo, rev, target, state, targetancestors)

    repo.ui.debug(_(" future parents are %d and %d\n") % (repo[p1].rev(),
                                                          repo[p2].rev()))

    # Merge phase
    if len(repo.parents()) != 2:
        # Update to target and merge it with local
        if repo['.'].rev() != repo[p1].rev():
            repo.ui.debug(_(" update to %d:%s\n") % (repo[p1].rev(), repo[p1]))
            merge.update(repo, p1, False, True, False)
        else:
            repo.ui.debug(_(" already in target\n"))
        repo.dirstate.write()
        repo.ui.debug(_(" merge against %d:%s\n") % (repo[rev].rev(), repo[rev]))
        first = repo[rev].rev() == repo[min(state)].rev()
        stats = rebasemerge(repo, rev, first)

        if stats[3] > 0:
            raise util.Abort(_('fix unresolved conflicts with hg resolve then '
                               'run hg rebase --continue'))
    else: # we have an interrupted rebase
        repo.ui.debug(_('resuming interrupted rebase\n'))

    # Keep track of renamed files in the revision that is going to be rebased
    # Here we simulate the copies and renames in the source changeset
    cop, diver = copies.copies(repo, repo[rev], repo[target], repo[p2], True)
    m1 = repo[rev].manifest()
    m2 = repo[target].manifest()
    for k, v in cop.iteritems():
        if k in m1:
            if v in m1 or v in m2:
                repo.dirstate.copy(v, k)
            if v in m2 and v not in m1:
                repo.dirstate.remove(v)

    newrev = concludenode(repo, rev, p1, p2, state, collapse,
                          extrafn=extrafn)

    # Update the state
    if newrev is not None:
        state[rev] = repo[newrev].rev()
    else:
        if not collapse:
            repo.ui.note(_('no changes, revision %d skipped\n') % rev)
            repo.ui.debug(_('next revision set to %s\n') % p1)
            skipped[rev] = True
        state[rev] = p1

def defineparents(repo, rev, target, state, targetancestors):
    'Return the new parent relationship of the revision that will be rebased'
    parents = repo[rev].parents()
    p1 = p2 = nullrev

    P1n = parents[0].rev()
    if P1n in targetancestors:
        p1 = target
    elif P1n in state:
        p1 = state[P1n]
    else: # P1n external
        p1 = target
        p2 = P1n

    if len(parents) == 2 and parents[1].rev() not in targetancestors:
        P2n = parents[1].rev()
        # interesting second parent
        if P2n in state:
            if p1 == target: # P1n in targetancestors or external
                p1 = state[P2n]
            else:
                p2 = state[P2n]
        else: # P2n external
            if p2 != nullrev: # P1n external too => rev is a merged revision
                raise util.Abort(_('cannot use revision %d as base, result '
                                   'would have 3 parents') % rev)
            p2 = P2n
    return p1, p2

def isagitpatch(repo, patchname):
    'Return true if the given patch is in git format'
    mqpatch = os.path.join(repo.mq.path, patchname)
    for line in patch.linereader(file(mqpatch, 'rb')):
        if line.startswith('diff --git'):
            return True
    return False

def updatemq(repo, state, skipped, **opts):
    'Update rebased mq patches - finalize and then import them'
    mqrebase = {}
    for p in repo.mq.applied:
        if repo[p.rev].rev() in state:
            repo.ui.debug(_('revision %d is an mq patch (%s), finalize it.\n') %
                          (repo[p.rev].rev(), p.name))
            mqrebase[repo[p.rev].rev()] = (p.name, isagitpatch(repo, p.name))

    if mqrebase:
        repo.mq.finish(repo, mqrebase.keys())

        # We must start import from the newest revision
        mq = mqrebase.keys()
        mq.sort()
        mq.reverse()
        for rev in mq:
            if rev not in skipped:
                repo.ui.debug(_('import mq patch %d (%s)\n')
                              % (state[rev], mqrebase[rev][0]))
                repo.mq.qimport(repo, (), patchname=mqrebase[rev][0],
                                git=mqrebase[rev][1], rev=[str(state[rev])])
        repo.mq.save_dirty()

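Note that `updatemq` above still sorts the mq revision keys with an in-place `list.sort()` followed by `reverse()`; the `sorted()` built-in adopted elsewhere in this changeset expresses the same "newest first" ordering in one call. A sketch with hypothetical revision numbers standing in for `mqrebase.keys()`:

```python
# Hypothetical revision -> patch-name mapping, as built by updatemq().
mqrebase = {12: 'p1.diff', 10: 'p2.diff', 15: 'p3.diff'}

# What updatemq does today: materialize the keys, sort ascending,
# then reverse so import starts from the newest revision.
mq = list(mqrebase.keys())
mq.sort()
mq.reverse()

# Equivalent single expression in the style of this changeset:
# sorted() over a dict iterates its keys, and reverse=True gives
# descending order directly.
assert mq == sorted(mqrebase, reverse=True)
assert mq == [15, 12, 10]
```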
300 def storestatus(repo, originalwd, target, state, collapse, keep, keepbranches,
300 def storestatus(repo, originalwd, target, state, collapse, keep, keepbranches,
301 external):
301 external):
302 'Store the current status to allow recovery'
302 'Store the current status to allow recovery'
303 f = repo.opener("rebasestate", "w")
303 f = repo.opener("rebasestate", "w")
304 f.write(repo[originalwd].hex() + '\n')
304 f.write(repo[originalwd].hex() + '\n')
305 f.write(repo[target].hex() + '\n')
305 f.write(repo[target].hex() + '\n')
306 f.write(repo[external].hex() + '\n')
306 f.write(repo[external].hex() + '\n')
307 f.write('%d\n' % int(collapse))
307 f.write('%d\n' % int(collapse))
308 f.write('%d\n' % int(keep))
308 f.write('%d\n' % int(keep))
309 f.write('%d\n' % int(keepbranches))
309 f.write('%d\n' % int(keepbranches))
310 for d, v in state.iteritems():
310 for d, v in state.iteritems():
311 oldrev = repo[d].hex()
311 oldrev = repo[d].hex()
312 newrev = repo[v].hex()
312 newrev = repo[v].hex()
313 f.write("%s:%s\n" % (oldrev, newrev))
313 f.write("%s:%s\n" % (oldrev, newrev))
314 f.close()
314 f.close()
315 repo.ui.debug(_('rebase status stored\n'))
315 repo.ui.debug(_('rebase status stored\n'))
316
316
317 def clearstatus(repo):
317 def clearstatus(repo):
318 'Remove the status files'
318 'Remove the status files'
319 if os.path.exists(repo.join("rebasestate")):
319 if os.path.exists(repo.join("rebasestate")):
320 util.unlink(repo.join("rebasestate"))
320 util.unlink(repo.join("rebasestate"))
321
321
322 def restorestatus(repo):
322 def restorestatus(repo):
323 'Restore a previously stored status'
323 'Restore a previously stored status'
324 try:
324 try:
325 target = None
325 target = None
326 collapse = False
326 collapse = False
327 external = nullrev
327 external = nullrev
328 state = {}
328 state = {}
329 f = repo.opener("rebasestate")
329 f = repo.opener("rebasestate")
330 for i, l in enumerate(f.read().splitlines()):
330 for i, l in enumerate(f.read().splitlines()):
331 if i == 0:
331 if i == 0:
332 originalwd = repo[l].rev()
332 originalwd = repo[l].rev()
333 elif i == 1:
333 elif i == 1:
334 target = repo[l].rev()
334 target = repo[l].rev()
335 elif i == 2:
335 elif i == 2:
336 external = repo[l].rev()
336 external = repo[l].rev()
337 elif i == 3:
337 elif i == 3:
338 collapse = bool(int(l))
338 collapse = bool(int(l))
339 elif i == 4:
339 elif i == 4:
340 keep = bool(int(l))
340 keep = bool(int(l))
341 elif i == 5:
341 elif i == 5:
342 keepbranches = bool(int(l))
342 keepbranches = bool(int(l))
343 else:
343 else:
344 oldrev, newrev = l.split(':')
344 oldrev, newrev = l.split(':')
345 state[repo[oldrev].rev()] = repo[newrev].rev()
345 state[repo[oldrev].rev()] = repo[newrev].rev()
346 repo.ui.debug(_('rebase status resumed\n'))
346 repo.ui.debug(_('rebase status resumed\n'))
347 return originalwd, target, state, collapse, keep, keepbranches, external
347 return originalwd, target, state, collapse, keep, keepbranches, external
348 except IOError, err:
348 except IOError, err:
349 if err.errno != errno.ENOENT:
349 if err.errno != errno.ENOENT:
350 raise
350 raise
351 raise util.Abort(_('no rebase in progress'))
351 raise util.Abort(_('no rebase in progress'))
352
352
353 def abort(repo, originalwd, target, state):
353 def abort(repo, originalwd, target, state):
354 'Restore the repository to its original state'
354 'Restore the repository to its original state'
355 if set(repo.changelog.descendants(target)) - set(state.values()):
355 if set(repo.changelog.descendants(target)) - set(state.values()):
356 repo.ui.warn(_("warning: new changesets detected on target branch, "
356 repo.ui.warn(_("warning: new changesets detected on target branch, "
357 "not stripping\n"))
357 "not stripping\n"))
358 else:
358 else:
359 # Strip from the first rebased revision
359 # Strip from the first rebased revision
360 merge.update(repo, repo[originalwd].rev(), False, True, False)
360 merge.update(repo, repo[originalwd].rev(), False, True, False)
361 rebased = filter(lambda x: x > -1, state.values())
361 rebased = filter(lambda x: x > -1, state.values())
362 if rebased:
362 if rebased:
363 strippoint = min(rebased)
363 strippoint = min(rebased)
364 repair.strip(repo.ui, repo, repo[strippoint].node(), "strip")
364 repair.strip(repo.ui, repo, repo[strippoint].node(), "strip")
365 clearstatus(repo)
365 clearstatus(repo)
366 repo.ui.status(_('rebase aborted\n'))
366 repo.ui.status(_('rebase aborted\n'))
367
367
368 def buildstate(repo, dest, src, base, collapse):
368 def buildstate(repo, dest, src, base, collapse):
369 'Define which revisions are going to be rebased and where'
369 'Define which revisions are going to be rebased and where'
370 targetancestors = set()
370 targetancestors = set()
371
371
372 if not dest:
372 if not dest:
373 # Destination defaults to the latest revision in the current branch
373 # Destination defaults to the latest revision in the current branch
374 branch = repo[None].branch()
374 branch = repo[None].branch()
375 dest = repo[branch].rev()
375 dest = repo[branch].rev()
376 else:
376 else:
377 if 'qtip' in repo.tags() and (repo[dest].hex() in
377 if 'qtip' in repo.tags() and (repo[dest].hex() in
378 [s.rev for s in repo.mq.applied]):
378 [s.rev for s in repo.mq.applied]):
379 raise util.Abort(_('cannot rebase onto an applied mq patch'))
379 raise util.Abort(_('cannot rebase onto an applied mq patch'))
380 dest = repo[dest].rev()
380 dest = repo[dest].rev()
381
381
382 if src:
382 if src:
383 commonbase = repo[src].ancestor(repo[dest])
383 commonbase = repo[src].ancestor(repo[dest])
384 if commonbase == repo[src]:
384 if commonbase == repo[src]:
385 raise util.Abort(_('cannot rebase an ancestor'))
385 raise util.Abort(_('cannot rebase an ancestor'))
386 if commonbase == repo[dest]:
386 if commonbase == repo[dest]:
387 raise util.Abort(_('cannot rebase a descendant'))
387 raise util.Abort(_('cannot rebase a descendant'))
388 source = repo[src].rev()
388 source = repo[src].rev()
389 else:
389 else:
390 if base:
390 if base:
391 cwd = repo[base].rev()
391 cwd = repo[base].rev()
392 else:
392 else:
393 cwd = repo['.'].rev()
393 cwd = repo['.'].rev()
394
394
395 if cwd == dest:
395 if cwd == dest:
396 repo.ui.debug(_('already working on current\n'))
396 repo.ui.debug(_('already working on current\n'))
397 return None
397 return None
398
398
399 targetancestors = set(repo.changelog.ancestors(dest))
399 targetancestors = set(repo.changelog.ancestors(dest))
400 if cwd in targetancestors:
400 if cwd in targetancestors:
401 repo.ui.debug(_('already working on the current branch\n'))
401 repo.ui.debug(_('already working on the current branch\n'))
402 return None
402 return None
403
403
404 cwdancestors = set(repo.changelog.ancestors(cwd))
404 cwdancestors = set(repo.changelog.ancestors(cwd))
405 cwdancestors.add(cwd)
405 cwdancestors.add(cwd)
406 rebasingbranch = cwdancestors - targetancestors
406 rebasingbranch = cwdancestors - targetancestors
407 source = min(rebasingbranch)
407 source = min(rebasingbranch)
408
408
409 repo.ui.debug(_('rebase onto %d starting from %d\n') % (dest, source))
409 repo.ui.debug(_('rebase onto %d starting from %d\n') % (dest, source))
410 state = dict.fromkeys(repo.changelog.descendants(source), nullrev)
410 state = dict.fromkeys(repo.changelog.descendants(source), nullrev)
411 external = nullrev
411 external = nullrev
412 if collapse:
412 if collapse:
413 if not targetancestors:
413 if not targetancestors:
414 targetancestors = set(repo.changelog.ancestors(dest))
414 targetancestors = set(repo.changelog.ancestors(dest))
415 for rev in state:
415 for rev in state:
416 # Check externals and fail if there is more than one
416 # Check externals and fail if there is more than one
417 for p in repo[rev].parents():
417 for p in repo[rev].parents():
418 if (p.rev() not in state and p.rev() != source
418 if (p.rev() not in state and p.rev() != source
419 and p.rev() not in targetancestors):
419 and p.rev() not in targetancestors):
420 if external != nullrev:
420 if external != nullrev:
421 raise util.Abort(_('unable to collapse, there is more '
421 raise util.Abort(_('unable to collapse, there is more '
422 'than one external parent'))
422 'than one external parent'))
423 external = p.rev()
423 external = p.rev()
424
424
425 state[source] = nullrev
425 state[source] = nullrev
426 return repo['.'].rev(), repo[dest].rev(), state, external
426 return repo['.'].rev(), repo[dest].rev(), state, external
427
427
428 def pullrebase(orig, ui, repo, *args, **opts):
428 def pullrebase(orig, ui, repo, *args, **opts):
429 'Call rebase after pull if the latter has been invoked with --rebase'
429 'Call rebase after pull if the latter has been invoked with --rebase'
430 if opts.get('rebase'):
430 if opts.get('rebase'):
431 if opts.get('update'):
431 if opts.get('update'):
432 del opts['update']
432 del opts['update']
433 ui.debug(_('--update and --rebase are not compatible, ignoring '
433 ui.debug(_('--update and --rebase are not compatible, ignoring '
434 'the update flag\n'))
434 'the update flag\n'))
435
435
436 cmdutil.bail_if_changed(repo)
436 cmdutil.bail_if_changed(repo)
437 revsprepull = len(repo)
437 revsprepull = len(repo)
438 orig(ui, repo, *args, **opts)
438 orig(ui, repo, *args, **opts)
439 revspostpull = len(repo)
439 revspostpull = len(repo)
440 if revspostpull > revsprepull:
440 if revspostpull > revsprepull:
441 rebase(ui, repo, **opts)
441 rebase(ui, repo, **opts)
442 branch = repo[None].branch()
442 branch = repo[None].branch()
443 dest = repo[branch].rev()
443 dest = repo[branch].rev()
444 if dest != repo['.'].rev():
444 if dest != repo['.'].rev():
445 # there was nothing to rebase, so force an update
445 # there was nothing to rebase, so force an update
446 merge.update(repo, dest, False, False, False)
446 merge.update(repo, dest, False, False, False)
447 else:
447 else:
448 orig(ui, repo, *args, **opts)
448 orig(ui, repo, *args, **opts)
449
449
450 def uisetup(ui):
450 def uisetup(ui):
451 'Replace pull with a decorator to provide --rebase option'
451 'Replace pull with a decorator to provide --rebase option'
452 entry = extensions.wrapcommand(commands.table, 'pull', pullrebase)
452 entry = extensions.wrapcommand(commands.table, 'pull', pullrebase)
453 entry[1].append(('', 'rebase', None,
453 entry[1].append(('', 'rebase', None,
454 _("rebase working directory to branch head"))
454 _("rebase working directory to branch head"))
455 )
455 )
456
456
457 cmdtable = {
457 cmdtable = {
458 "rebase":
458 "rebase":
459 (rebase,
459 (rebase,
460 [
460 [
461 ('s', 'source', '', _('rebase from a given revision')),
461 ('s', 'source', '', _('rebase from a given revision')),
462 ('b', 'base', '', _('rebase from the base of a given revision')),
462 ('b', 'base', '', _('rebase from the base of a given revision')),
463 ('d', 'dest', '', _('rebase onto a given revision')),
463 ('d', 'dest', '', _('rebase onto a given revision')),
464 ('', 'collapse', False, _('collapse the rebased revisions')),
464 ('', 'collapse', False, _('collapse the rebased revisions')),
465 ('', 'keep', False, _('keep original revisions')),
465 ('', 'keep', False, _('keep original revisions')),
466 ('', 'keepbranches', False, _('keep original branches')),
466 ('', 'keepbranches', False, _('keep original branches')),
467 ('c', 'continue', False, _('continue an interrupted rebase')),
467 ('c', 'continue', False, _('continue an interrupted rebase')),
468 ('a', 'abort', False, _('abort an interrupted rebase')),] +
468 ('a', 'abort', False, _('abort an interrupted rebase')),] +
469 templateopts,
469 templateopts,
470 _('hg rebase [-s REV | -b REV] [-d REV] [--collapse] [--keep] '
470 _('hg rebase [-s REV | -b REV] [-d REV] [--collapse] [--keep] '
471 '[--keepbranches] | [-c] | [-a]')),
471 '[--keepbranches] | [-c] | [-a]')),
472 }
472 }
@@ -1,602 +1,602 b''
1 # Patch transplanting extension for Mercurial
1 # Patch transplanting extension for Mercurial
2 #
2 #
3 # Copyright 2006, 2007 Brendan Cully <brendan@kublai.com>
3 # Copyright 2006, 2007 Brendan Cully <brendan@kublai.com>
4 #
4 #
5 # This software may be used and distributed according to the terms
5 # This software may be used and distributed according to the terms
6 # of the GNU General Public License, incorporated herein by reference.
6 # of the GNU General Public License, incorporated herein by reference.
7
7
8 '''patch transplanting tool
8 '''patch transplanting tool
9
9
10 This extension allows you to transplant patches from another branch.
10 This extension allows you to transplant patches from another branch.
11
11
12 Transplanted patches are recorded in .hg/transplant/transplants, as a
12 Transplanted patches are recorded in .hg/transplant/transplants, as a
13 map from a changeset hash to its hash in the source repository.
13 map from a changeset hash to its hash in the source repository.
14 '''
14 '''
15
15
16 from mercurial.i18n import _
16 from mercurial.i18n import _
17 import os, tempfile
17 import os, tempfile
18 from mercurial import bundlerepo, changegroup, cmdutil, hg, merge
18 from mercurial import bundlerepo, changegroup, cmdutil, hg, merge
19 from mercurial import patch, revlog, util, error
19 from mercurial import patch, revlog, util, error
20
20
21 class transplantentry:
21 class transplantentry:
22 def __init__(self, lnode, rnode):
22 def __init__(self, lnode, rnode):
23 self.lnode = lnode
23 self.lnode = lnode
24 self.rnode = rnode
24 self.rnode = rnode
25
25
26 class transplants:
26 class transplants:
27 def __init__(self, path=None, transplantfile=None, opener=None):
27 def __init__(self, path=None, transplantfile=None, opener=None):
28 self.path = path
28 self.path = path
29 self.transplantfile = transplantfile
29 self.transplantfile = transplantfile
30 self.opener = opener
30 self.opener = opener
31
31
32 if not opener:
32 if not opener:
33 self.opener = util.opener(self.path)
33 self.opener = util.opener(self.path)
34 self.transplants = []
34 self.transplants = []
35 self.dirty = False
35 self.dirty = False
36 self.read()
36 self.read()
37
37
38 def read(self):
38 def read(self):
39 abspath = os.path.join(self.path, self.transplantfile)
39 abspath = os.path.join(self.path, self.transplantfile)
40 if self.transplantfile and os.path.exists(abspath):
40 if self.transplantfile and os.path.exists(abspath):
41 for line in self.opener(self.transplantfile).read().splitlines():
41 for line in self.opener(self.transplantfile).read().splitlines():
42 lnode, rnode = map(revlog.bin, line.split(':'))
42 lnode, rnode = map(revlog.bin, line.split(':'))
43 self.transplants.append(transplantentry(lnode, rnode))
43 self.transplants.append(transplantentry(lnode, rnode))
44
44
45 def write(self):
45 def write(self):
46 if self.dirty and self.transplantfile:
46 if self.dirty and self.transplantfile:
47 if not os.path.isdir(self.path):
47 if not os.path.isdir(self.path):
48 os.mkdir(self.path)
48 os.mkdir(self.path)
49 fp = self.opener(self.transplantfile, 'w')
49 fp = self.opener(self.transplantfile, 'w')
50 for c in self.transplants:
50 for c in self.transplants:
51 l, r = map(revlog.hex, (c.lnode, c.rnode))
51 l, r = map(revlog.hex, (c.lnode, c.rnode))
52 fp.write(l + ':' + r + '\n')
52 fp.write(l + ':' + r + '\n')
53 fp.close()
53 fp.close()
54 self.dirty = False
54 self.dirty = False
55
55
56 def get(self, rnode):
56 def get(self, rnode):
57 return [t for t in self.transplants if t.rnode == rnode]
57 return [t for t in self.transplants if t.rnode == rnode]
58
58
59 def set(self, lnode, rnode):
59 def set(self, lnode, rnode):
60 self.transplants.append(transplantentry(lnode, rnode))
60 self.transplants.append(transplantentry(lnode, rnode))
61 self.dirty = True
61 self.dirty = True
62
62
63 def remove(self, transplant):
63 def remove(self, transplant):
64 del self.transplants[self.transplants.index(transplant)]
64 del self.transplants[self.transplants.index(transplant)]
65 self.dirty = True
65 self.dirty = True
66
66
67 class transplanter:
67 class transplanter:
68 def __init__(self, ui, repo):
68 def __init__(self, ui, repo):
69 self.ui = ui
69 self.ui = ui
70 self.path = repo.join('transplant')
70 self.path = repo.join('transplant')
71 self.opener = util.opener(self.path)
71 self.opener = util.opener(self.path)
72 self.transplants = transplants(self.path, 'transplants',
72 self.transplants = transplants(self.path, 'transplants',
73 opener=self.opener)
73 opener=self.opener)
74
74
75 def applied(self, repo, node, parent):
75 def applied(self, repo, node, parent):
76 '''returns True if a node is already an ancestor of parent
76 '''returns True if a node is already an ancestor of parent
77 or has already been transplanted'''
77 or has already been transplanted'''
78 if hasnode(repo, node):
78 if hasnode(repo, node):
79 if node in repo.changelog.reachable(parent, stop=node):
79 if node in repo.changelog.reachable(parent, stop=node):
80 return True
80 return True
81 for t in self.transplants.get(node):
81 for t in self.transplants.get(node):
82 # it might have been stripped
82 # it might have been stripped
83 if not hasnode(repo, t.lnode):
83 if not hasnode(repo, t.lnode):
84 self.transplants.remove(t)
84 self.transplants.remove(t)
85 return False
85 return False
86 if t.lnode in repo.changelog.reachable(parent, stop=t.lnode):
86 if t.lnode in repo.changelog.reachable(parent, stop=t.lnode):
87 return True
87 return True
88 return False
88 return False
89
89
90 def apply(self, repo, source, revmap, merges, opts={}):
90 def apply(self, repo, source, revmap, merges, opts={}):
91 '''apply the revisions in revmap one by one in revision order'''
91 '''apply the revisions in revmap one by one in revision order'''
92 revs = util.sort(revmap)
92 revs = sorted(revmap)
93 p1, p2 = repo.dirstate.parents()
93 p1, p2 = repo.dirstate.parents()
94 pulls = []
94 pulls = []
95 diffopts = patch.diffopts(self.ui, opts)
95 diffopts = patch.diffopts(self.ui, opts)
96 diffopts.git = True
96 diffopts.git = True
97
97
98 lock = wlock = None
98 lock = wlock = None
99 try:
99 try:
100 wlock = repo.wlock()
100 wlock = repo.wlock()
101 lock = repo.lock()
101 lock = repo.lock()
102 for rev in revs:
102 for rev in revs:
103 node = revmap[rev]
103 node = revmap[rev]
104 revstr = '%s:%s' % (rev, revlog.short(node))
104 revstr = '%s:%s' % (rev, revlog.short(node))
105
105
106 if self.applied(repo, node, p1):
106 if self.applied(repo, node, p1):
107 self.ui.warn(_('skipping already applied revision %s\n') %
107 self.ui.warn(_('skipping already applied revision %s\n') %
108 revstr)
108 revstr)
109 continue
109 continue
110
110
111 parents = source.changelog.parents(node)
111 parents = source.changelog.parents(node)
112 if not opts.get('filter'):
112 if not opts.get('filter'):
113 # If the changeset parent is the same as the
113 # If the changeset parent is the same as the
114 # wdir's parent, just pull it.
114 # wdir's parent, just pull it.
115 if parents[0] == p1:
115 if parents[0] == p1:
116 pulls.append(node)
116 pulls.append(node)
117 p1 = node
117 p1 = node
118 continue
118 continue
119 if pulls:
119 if pulls:
120 if source != repo:
120 if source != repo:
121 repo.pull(source, heads=pulls)
121 repo.pull(source, heads=pulls)
122 merge.update(repo, pulls[-1], False, False, None)
122 merge.update(repo, pulls[-1], False, False, None)
123 p1, p2 = repo.dirstate.parents()
123 p1, p2 = repo.dirstate.parents()
124 pulls = []
124 pulls = []
125
125
126 domerge = False
126 domerge = False
127 if node in merges:
127 if node in merges:
128 # pulling all the merge revs at once would mean we
128 # pulling all the merge revs at once would mean we
129 # couldn't transplant after the latest one even if
129 # couldn't transplant after the latest one even if
130 # earlier transplants fail.
130 # earlier transplants fail.
131 domerge = True
131 domerge = True
132 if not hasnode(repo, node):
132 if not hasnode(repo, node):
133 repo.pull(source, heads=[node])
133 repo.pull(source, heads=[node])
134
134
135 if parents[1] != revlog.nullid:
135 if parents[1] != revlog.nullid:
136 self.ui.note(_('skipping merge changeset %s:%s\n')
136 self.ui.note(_('skipping merge changeset %s:%s\n')
137 % (rev, revlog.short(node)))
137 % (rev, revlog.short(node)))
138 patchfile = None
138 patchfile = None
139 else:
139 else:
140 fd, patchfile = tempfile.mkstemp(prefix='hg-transplant-')
140 fd, patchfile = tempfile.mkstemp(prefix='hg-transplant-')
141 fp = os.fdopen(fd, 'w')
141 fp = os.fdopen(fd, 'w')
142 gen = patch.diff(source, parents[0], node, opts=diffopts)
142 gen = patch.diff(source, parents[0], node, opts=diffopts)
143 for chunk in gen:
143 for chunk in gen:
144 fp.write(chunk)
144 fp.write(chunk)
145 fp.close()
145 fp.close()
146
146
147 del revmap[rev]
147 del revmap[rev]
148 if patchfile or domerge:
148 if patchfile or domerge:
149 try:
149 try:
150 n = self.applyone(repo, node,
150 n = self.applyone(repo, node,
151 source.changelog.read(node),
151 source.changelog.read(node),
152 patchfile, merge=domerge,
152 patchfile, merge=domerge,
153 log=opts.get('log'),
153 log=opts.get('log'),
154 filter=opts.get('filter'))
154 filter=opts.get('filter'))
155 if n and domerge:
155 if n and domerge:
156 self.ui.status(_('%s merged at %s\n') % (revstr,
156 self.ui.status(_('%s merged at %s\n') % (revstr,
157 revlog.short(n)))
157 revlog.short(n)))
158 elif n:
158 elif n:
159 self.ui.status(_('%s transplanted to %s\n')
159 self.ui.status(_('%s transplanted to %s\n')
160 % (revlog.short(node),
160 % (revlog.short(node),
161 revlog.short(n)))
161 revlog.short(n)))
162 finally:
162 finally:
163 if patchfile:
163 if patchfile:
164 os.unlink(patchfile)
164 os.unlink(patchfile)
165 if pulls:
165 if pulls:
166 repo.pull(source, heads=pulls)
166 repo.pull(source, heads=pulls)
167 merge.update(repo, pulls[-1], False, False, None)
167 merge.update(repo, pulls[-1], False, False, None)
168 finally:
168 finally:
169 self.saveseries(revmap, merges)
169 self.saveseries(revmap, merges)
170 self.transplants.write()
170 self.transplants.write()
171 lock.release()
171 lock.release()
172 wlock.release()
172 wlock.release()
173
173
174 def filter(self, filter, changelog, patchfile):
174 def filter(self, filter, changelog, patchfile):
175 '''arbitrarily rewrite changeset before applying it'''
175 '''arbitrarily rewrite changeset before applying it'''
176
176
177 self.ui.status(_('filtering %s\n') % patchfile)
177 self.ui.status(_('filtering %s\n') % patchfile)
178 user, date, msg = (changelog[1], changelog[2], changelog[4])
178 user, date, msg = (changelog[1], changelog[2], changelog[4])
179
179
180 fd, headerfile = tempfile.mkstemp(prefix='hg-transplant-')
180 fd, headerfile = tempfile.mkstemp(prefix='hg-transplant-')
181 fp = os.fdopen(fd, 'w')
181 fp = os.fdopen(fd, 'w')
182 fp.write("# HG changeset patch\n")
182 fp.write("# HG changeset patch\n")
183 fp.write("# User %s\n" % user)
183 fp.write("# User %s\n" % user)
184 fp.write("# Date %d %d\n" % date)
184 fp.write("# Date %d %d\n" % date)
185 fp.write(changelog[4])
185 fp.write(changelog[4])
186 fp.close()
186 fp.close()
187
187
188 try:
188 try:
189 util.system('%s %s %s' % (filter, util.shellquote(headerfile),
189 util.system('%s %s %s' % (filter, util.shellquote(headerfile),
190 util.shellquote(patchfile)),
190 util.shellquote(patchfile)),
191 environ={'HGUSER': changelog[1]},
191 environ={'HGUSER': changelog[1]},
192 onerr=util.Abort, errprefix=_('filter failed'))
192 onerr=util.Abort, errprefix=_('filter failed'))
193 user, date, msg = self.parselog(file(headerfile))[1:4]
193 user, date, msg = self.parselog(file(headerfile))[1:4]
194 finally:
194 finally:
195 os.unlink(headerfile)
195 os.unlink(headerfile)
196
196
197 return (user, date, msg)
197 return (user, date, msg)
198
198
199 def applyone(self, repo, node, cl, patchfile, merge=False, log=False,
199 def applyone(self, repo, node, cl, patchfile, merge=False, log=False,
200 filter=None):
200 filter=None):
201 '''apply the patch in patchfile to the repository as a transplant'''
201 '''apply the patch in patchfile to the repository as a transplant'''
202 (manifest, user, (time, timezone), files, message) = cl[:5]
202 (manifest, user, (time, timezone), files, message) = cl[:5]
203 date = "%d %d" % (time, timezone)
203 date = "%d %d" % (time, timezone)
204 extra = {'transplant_source': node}
204 extra = {'transplant_source': node}
205 if filter:
205 if filter:
206 (user, date, message) = self.filter(filter, cl, patchfile)
206 (user, date, message) = self.filter(filter, cl, patchfile)
207
207
208 if log:
208 if log:
209 message += '\n(transplanted from %s)' % revlog.hex(node)
209 message += '\n(transplanted from %s)' % revlog.hex(node)
210
210
211 self.ui.status(_('applying %s\n') % revlog.short(node))
211 self.ui.status(_('applying %s\n') % revlog.short(node))
212 self.ui.note('%s %s\n%s\n' % (user, date, message))
212 self.ui.note('%s %s\n%s\n' % (user, date, message))
213
213
214 if not patchfile and not merge:
214 if not patchfile and not merge:
215 raise util.Abort(_('can only omit patchfile if merging'))
215 raise util.Abort(_('can only omit patchfile if merging'))
216 if patchfile:
216 if patchfile:
217 try:
217 try:
218 files = {}
218 files = {}
219 try:
219 try:
220 patch.patch(patchfile, self.ui, cwd=repo.root,
220 patch.patch(patchfile, self.ui, cwd=repo.root,
221 files=files)
221 files=files)
222 if not files:
222 if not files:
223 self.ui.warn(_('%s: empty changeset')
223 self.ui.warn(_('%s: empty changeset')
224 % revlog.hex(node))
224 % revlog.hex(node))
225 return None
225 return None
226 finally:
226 finally:
227 files = patch.updatedir(self.ui, repo, files)
227 files = patch.updatedir(self.ui, repo, files)
228 except Exception, inst:
228 except Exception, inst:
229 if filter:
229 if filter:
230 os.unlink(patchfile)
230 os.unlink(patchfile)
231 seriespath = os.path.join(self.path, 'series')
231 seriespath = os.path.join(self.path, 'series')
232 if os.path.exists(seriespath):
232 if os.path.exists(seriespath):
233 os.unlink(seriespath)
233 os.unlink(seriespath)
234 p1 = repo.dirstate.parents()[0]
234 p1 = repo.dirstate.parents()[0]
235 p2 = node
235 p2 = node
236 self.log(user, date, message, p1, p2, merge=merge)
236 self.log(user, date, message, p1, p2, merge=merge)
237 self.ui.write(str(inst) + '\n')
237 self.ui.write(str(inst) + '\n')
238 raise util.Abort(_('Fix up the merge and run '
238 raise util.Abort(_('Fix up the merge and run '
239 'hg transplant --continue'))
239 'hg transplant --continue'))
240 else:
240 else:
241 files = None
241 files = None
242 if merge:
242 if merge:
243 p1, p2 = repo.dirstate.parents()
243 p1, p2 = repo.dirstate.parents()
244 repo.dirstate.setparents(p1, node)
244 repo.dirstate.setparents(p1, node)
245
245
246 n = repo.commit(files, message, user, date, extra=extra)
246 n = repo.commit(files, message, user, date, extra=extra)
247 if not merge:
247 if not merge:
248 self.transplants.set(n, node)
248 self.transplants.set(n, node)
249
249
250 return n
250 return n
251
251
252 def resume(self, repo, source, opts=None):
252 def resume(self, repo, source, opts=None):
253 '''recover last transaction and apply remaining changesets'''
253 '''recover last transaction and apply remaining changesets'''
254 if os.path.exists(os.path.join(self.path, 'journal')):
254 if os.path.exists(os.path.join(self.path, 'journal')):
255 n, node = self.recover(repo)
255 n, node = self.recover(repo)
256 self.ui.status(_('%s transplanted as %s\n') % (revlog.short(node),
256 self.ui.status(_('%s transplanted as %s\n') % (revlog.short(node),
257 revlog.short(n)))
257 revlog.short(n)))
258 seriespath = os.path.join(self.path, 'series')
258 seriespath = os.path.join(self.path, 'series')
259 if not os.path.exists(seriespath):
259 if not os.path.exists(seriespath):
260 self.transplants.write()
260 self.transplants.write()
261 return
261 return
262 nodes, merges = self.readseries()
262 nodes, merges = self.readseries()
263 revmap = {}
263 revmap = {}
264 for n in nodes:
264 for n in nodes:
265 revmap[source.changelog.rev(n)] = n
265 revmap[source.changelog.rev(n)] = n
266 os.unlink(seriespath)
266 os.unlink(seriespath)
267
267
268 self.apply(repo, source, revmap, merges, opts)
268 self.apply(repo, source, revmap, merges, opts)
269
269
270 def recover(self, repo):
270 def recover(self, repo):
271 '''commit working directory using journal metadata'''
271 '''commit working directory using journal metadata'''
272 node, user, date, message, parents = self.readlog()
272 node, user, date, message, parents = self.readlog()
273 merge = len(parents) == 2
273 merge = len(parents) == 2
274
274
275 if not user or not date or not message or not parents[0]:
275 if not user or not date or not message or not parents[0]:
276 raise util.Abort(_('transplant log file is corrupt'))
276 raise util.Abort(_('transplant log file is corrupt'))
277
277
278 extra = {'transplant_source': node}
278 extra = {'transplant_source': node}
279 wlock = repo.wlock()
279 wlock = repo.wlock()
280 try:
280 try:
281 p1, p2 = repo.dirstate.parents()
281 p1, p2 = repo.dirstate.parents()
282 if p1 != parents[0]:
282 if p1 != parents[0]:
283 raise util.Abort(
283 raise util.Abort(
284 _('working dir not at transplant parent %s') %
284 _('working dir not at transplant parent %s') %
285 revlog.hex(parents[0]))
285 revlog.hex(parents[0]))
286 if merge:
286 if merge:
287 repo.dirstate.setparents(p1, parents[1])
287 repo.dirstate.setparents(p1, parents[1])
288 n = repo.commit(None, message, user, date, extra=extra)
288 n = repo.commit(None, message, user, date, extra=extra)
289 if not n:
289 if not n:
290 raise util.Abort(_('commit failed'))
290 raise util.Abort(_('commit failed'))
291 if not merge:
291 if not merge:
292 self.transplants.set(n, node)
292 self.transplants.set(n, node)
293 self.unlog()
293 self.unlog()
294
294
295 return n, node
295 return n, node
296 finally:
296 finally:
297 wlock.release()
297 wlock.release()
298
298
299 def readseries(self):
299 def readseries(self):
300 nodes = []
300 nodes = []
301 merges = []
301 merges = []
302 cur = nodes
302 cur = nodes
303 for line in self.opener('series').read().splitlines():
303 for line in self.opener('series').read().splitlines():
304 if line.startswith('# Merges'):
304 if line.startswith('# Merges'):
305 cur = merges
305 cur = merges
306 continue
306 continue
307 cur.append(revlog.bin(line))
307 cur.append(revlog.bin(line))
308
308
309 return (nodes, merges)
309 return (nodes, merges)
310
310
311 def saveseries(self, revmap, merges):
311 def saveseries(self, revmap, merges):
312 if not revmap:
312 if not revmap:
313 return
313 return
314
314
315 if not os.path.isdir(self.path):
315 if not os.path.isdir(self.path):
316 os.mkdir(self.path)
316 os.mkdir(self.path)
317 series = self.opener('series', 'w')
317 series = self.opener('series', 'w')
318 for rev in util.sort(revmap):
318 for rev in sorted(revmap):
319 series.write(revlog.hex(revmap[rev]) + '\n')
319 series.write(revlog.hex(revmap[rev]) + '\n')
320 if merges:
320 if merges:
321 series.write('# Merges\n')
321 series.write('# Merges\n')
322 for m in merges:
322 for m in merges:
323 series.write(revlog.hex(m) + '\n')
323 series.write(revlog.hex(m) + '\n')
324 series.close()
324 series.close()
325
325
326 def parselog(self, fp):
326 def parselog(self, fp):
327 parents = []
327 parents = []
328 message = []
328 message = []
329 node = revlog.nullid
329 node = revlog.nullid
330 inmsg = False
330 inmsg = False
331 for line in fp.read().splitlines():
331 for line in fp.read().splitlines():
332 if inmsg:
332 if inmsg:
333 message.append(line)
333 message.append(line)
334 elif line.startswith('# User '):
334 elif line.startswith('# User '):
335 user = line[7:]
335 user = line[7:]
336 elif line.startswith('# Date '):
336 elif line.startswith('# Date '):
337 date = line[7:]
337 date = line[7:]
338 elif line.startswith('# Node ID '):
338 elif line.startswith('# Node ID '):
339 node = revlog.bin(line[10:])
339 node = revlog.bin(line[10:])
340 elif line.startswith('# Parent '):
340 elif line.startswith('# Parent '):
341 parents.append(revlog.bin(line[9:]))
341 parents.append(revlog.bin(line[9:]))
342 elif not line.startswith('#'):
342 elif not line.startswith('#'):
343 inmsg = True
343 inmsg = True
344 message.append(line)
344 message.append(line)
345 return (node, user, date, '\n'.join(message), parents)
345 return (node, user, date, '\n'.join(message), parents)
346
346
347 def log(self, user, date, message, p1, p2, merge=False):
347 def log(self, user, date, message, p1, p2, merge=False):
348 '''journal changelog metadata for later recover'''
348 '''journal changelog metadata for later recover'''
349
349
350 if not os.path.isdir(self.path):
350 if not os.path.isdir(self.path):
351 os.mkdir(self.path)
351 os.mkdir(self.path)
352 fp = self.opener('journal', 'w')
352 fp = self.opener('journal', 'w')
353 fp.write('# User %s\n' % user)
353 fp.write('# User %s\n' % user)
354 fp.write('# Date %s\n' % date)
354 fp.write('# Date %s\n' % date)
355 fp.write('# Node ID %s\n' % revlog.hex(p2))
355 fp.write('# Node ID %s\n' % revlog.hex(p2))
356 fp.write('# Parent ' + revlog.hex(p1) + '\n')
356 fp.write('# Parent ' + revlog.hex(p1) + '\n')
357 if merge:
357 if merge:
358 fp.write('# Parent ' + revlog.hex(p2) + '\n')
358 fp.write('# Parent ' + revlog.hex(p2) + '\n')
359 fp.write(message.rstrip() + '\n')
359 fp.write(message.rstrip() + '\n')
360 fp.close()
360 fp.close()
361
361
362 def readlog(self):
362 def readlog(self):
363 return self.parselog(self.opener('journal'))
363 return self.parselog(self.opener('journal'))
364
364
365 def unlog(self):
365 def unlog(self):
366 '''remove changelog journal'''
366 '''remove changelog journal'''
367 absdst = os.path.join(self.path, 'journal')
367 absdst = os.path.join(self.path, 'journal')
368 if os.path.exists(absdst):
368 if os.path.exists(absdst):
369 os.unlink(absdst)
369 os.unlink(absdst)
370
370
371 def transplantfilter(self, repo, source, root):
371 def transplantfilter(self, repo, source, root):
372 def matchfn(node):
372 def matchfn(node):
373 if self.applied(repo, node, root):
373 if self.applied(repo, node, root):
374 return False
374 return False
375 if source.changelog.parents(node)[1] != revlog.nullid:
375 if source.changelog.parents(node)[1] != revlog.nullid:
376 return False
376 return False
377 extra = source.changelog.read(node)[5]
377 extra = source.changelog.read(node)[5]
378 cnode = extra.get('transplant_source')
378 cnode = extra.get('transplant_source')
379 if cnode and self.applied(repo, cnode, root):
379 if cnode and self.applied(repo, cnode, root):
380 return False
380 return False
381 return True
381 return True
382
382
383 return matchfn
383 return matchfn
384
384
385 def hasnode(repo, node):
385 def hasnode(repo, node):
386 try:
386 try:
387 return repo.changelog.rev(node) is not None
387 return repo.changelog.rev(node) is not None
388 except error.RevlogError:
388 except error.RevlogError:
389 return False
389 return False
390
390
391 def browserevs(ui, repo, nodes, opts):
391 def browserevs(ui, repo, nodes, opts):
392 '''interactively transplant changesets'''
392 '''interactively transplant changesets'''
393 def browsehelp(ui):
393 def browsehelp(ui):
394 ui.write('y: transplant this changeset\n'
394 ui.write('y: transplant this changeset\n'
395 'n: skip this changeset\n'
395 'n: skip this changeset\n'
396 'm: merge at this changeset\n'
396 'm: merge at this changeset\n'
397 'p: show patch\n'
397 'p: show patch\n'
398 'c: commit selected changesets\n'
398 'c: commit selected changesets\n'
399 'q: cancel transplant\n'
399 'q: cancel transplant\n'
400 '?: show this help\n')
400 '?: show this help\n')
401
401
402 displayer = cmdutil.show_changeset(ui, repo, opts)
402 displayer = cmdutil.show_changeset(ui, repo, opts)
403 transplants = []
403 transplants = []
404 merges = []
404 merges = []
405 for node in nodes:
405 for node in nodes:
406 displayer.show(repo[node])
406 displayer.show(repo[node])
407 action = None
407 action = None
408 while not action:
408 while not action:
409 action = ui.prompt(_('apply changeset? [ynmpcq?]:'))
409 action = ui.prompt(_('apply changeset? [ynmpcq?]:'))
410 if action == '?':
410 if action == '?':
411 browsehelp(ui)
411 browsehelp(ui)
412 action = None
412 action = None
413 elif action == 'p':
413 elif action == 'p':
414 parent = repo.changelog.parents(node)[0]
414 parent = repo.changelog.parents(node)[0]
415 for chunk in patch.diff(repo, parent, node):
415 for chunk in patch.diff(repo, parent, node):
416 repo.ui.write(chunk)
416 repo.ui.write(chunk)
417 action = None
417 action = None
418 elif action not in ('y', 'n', 'm', 'c', 'q'):
418 elif action not in ('y', 'n', 'm', 'c', 'q'):
419 ui.write('no such option\n')
419 ui.write('no such option\n')
420 action = None
420 action = None
421 if action == 'y':
421 if action == 'y':
422 transplants.append(node)
422 transplants.append(node)
423 elif action == 'm':
423 elif action == 'm':
424 merges.append(node)
424 merges.append(node)
425 elif action == 'c':
426 break
427 elif action == 'q':
428 transplants = ()
429 merges = ()
430 break
431 return (transplants, merges)
432
433 def transplant(ui, repo, *revs, **opts):
434 '''transplant changesets from another branch
435
436 Selected changesets will be applied on top of the current working
437 directory with the log of the original changeset. If --log is
438 specified, log messages will have a comment appended of the form:
439
440 (transplanted from CHANGESETHASH)
441
442 You can rewrite the changelog message with the --filter option.
443 Its argument will be invoked with the current changelog message as
444 $1 and the patch as $2.
445
446 If --source/-s is specified, selects changesets from the named
447 repository. If --branch/-b is specified, selects changesets from
448 the branch holding the named revision, up to that revision. If
449 --all/-a is specified, all changesets on the branch will be
450 transplanted, otherwise you will be prompted to select the
451 changesets you want.
452
453 hg transplant --branch REVISION --all will rebase the selected
454 branch (up to the named revision) onto your current working
455 directory.
456
457 You can optionally mark selected transplanted changesets as merge
458 changesets. You will not be prompted to transplant any ancestors
459 of a merged transplant, and you can merge descendants of them
460 normally instead of transplanting them.
461
462 If no merges or revisions are provided, hg transplant will start
463 an interactive changeset browser.
464
465 If a changeset application fails, you can fix the merge by hand
466 and then resume where you left off by calling hg transplant
467 --continue/-c.
468 '''
469 def getremotechanges(repo, url):
470 sourcerepo = ui.expandpath(url)
471 source = hg.repository(ui, sourcerepo)
472 common, incoming, rheads = repo.findcommonincoming(source, force=True)
473 if not incoming:
474 return (source, None, None)
475
476 bundle = None
477 if not source.local():
478 if source.capable('changegroupsubset'):
479 cg = source.changegroupsubset(incoming, rheads, 'incoming')
480 else:
481 cg = source.changegroup(incoming, 'incoming')
482 bundle = changegroup.writebundle(cg, None, 'HG10UN')
483 source = bundlerepo.bundlerepository(ui, repo.root, bundle)
484
485 return (source, incoming, bundle)
486
487 def incwalk(repo, incoming, branches, match=util.always):
488 if not branches:
489 branches=None
490 for node in repo.changelog.nodesbetween(incoming, branches)[0]:
491 if match(node):
492 yield node
493
494 def transplantwalk(repo, root, branches, match=util.always):
495 if not branches:
496 branches = repo.heads()
497 ancestors = []
498 for branch in branches:
499 ancestors.append(repo.changelog.ancestor(root, branch))
500 for node in repo.changelog.nodesbetween(ancestors, branches)[0]:
501 if match(node):
502 yield node
503
504 def checkopts(opts, revs):
505 if opts.get('continue'):
506 if filter(lambda opt: opts.get(opt), ('branch', 'all', 'merge')):
507 raise util.Abort(_('--continue is incompatible with '
508 'branch, all or merge'))
509 return
510 if not (opts.get('source') or revs or
511 opts.get('merge') or opts.get('branch')):
512 raise util.Abort(_('no source URL, branch tag or revision '
513 'list provided'))
514 if opts.get('all'):
515 if not opts.get('branch'):
516 raise util.Abort(_('--all requires a branch revision'))
517 if revs:
518 raise util.Abort(_('--all is incompatible with a '
519 'revision list'))
520
521 checkopts(opts, revs)
522
523 if not opts.get('log'):
524 opts['log'] = ui.config('transplant', 'log')
525 if not opts.get('filter'):
526 opts['filter'] = ui.config('transplant', 'filter')
527
528 tp = transplanter(ui, repo)
529
530 p1, p2 = repo.dirstate.parents()
531 if len(repo) > 0 and p1 == revlog.nullid:
532 raise util.Abort(_('no revision checked out'))
533 if not opts.get('continue'):
534 if p2 != revlog.nullid:
535 raise util.Abort(_('outstanding uncommitted merges'))
536 m, a, r, d = repo.status()[:4]
537 if m or a or r or d:
538 raise util.Abort(_('outstanding local changes'))
539
540 bundle = None
541 source = opts.get('source')
542 if source:
543 (source, incoming, bundle) = getremotechanges(repo, source)
544 else:
545 source = repo
546
547 try:
548 if opts.get('continue'):
549 tp.resume(repo, source, opts)
550 return
551
552 tf=tp.transplantfilter(repo, source, p1)
553 if opts.get('prune'):
554 prune = [source.lookup(r)
555 for r in cmdutil.revrange(source, opts.get('prune'))]
556 matchfn = lambda x: tf(x) and x not in prune
557 else:
558 matchfn = tf
559 branches = map(source.lookup, opts.get('branch', ()))
560 merges = map(source.lookup, opts.get('merge', ()))
561 revmap = {}
562 if revs:
563 for r in cmdutil.revrange(source, revs):
564 revmap[int(r)] = source.lookup(r)
565 elif opts.get('all') or not merges:
566 if source != repo:
567 alltransplants = incwalk(source, incoming, branches,
568 match=matchfn)
569 else:
570 alltransplants = transplantwalk(source, p1, branches,
571 match=matchfn)
572 if opts.get('all'):
573 revs = alltransplants
574 else:
575 revs, newmerges = browserevs(ui, source, alltransplants, opts)
576 merges.extend(newmerges)
577 for r in revs:
578 revmap[source.changelog.rev(r)] = r
579 for r in merges:
580 revmap[source.changelog.rev(r)] = r
581
582 tp.apply(repo, source, revmap, merges, opts)
583 finally:
584 if bundle:
585 source.close()
586 os.unlink(bundle)
587
588 cmdtable = {
589 "transplant":
590 (transplant,
591 [('s', 'source', '', _('pull patches from REPOSITORY')),
592 ('b', 'branch', [], _('pull patches from branch BRANCH')),
593 ('a', 'all', None, _('pull all changesets up to BRANCH')),
594 ('p', 'prune', [], _('skip over REV')),
595 ('m', 'merge', [], _('merge at REV')),
596 ('', 'log', None, _('append transplant info to log message')),
597 ('c', 'continue', None, _('continue last transplant session '
598 'after repair')),
599 ('', 'filter', '', _('filter changesets through FILTER'))],
600 _('hg transplant [-s REPOSITORY] [-b BRANCH [-a]] [-p REV] '
601 '[-m REV] [REV]...'))
602 }
@@ -1,221 +1,221 b''
1 # changelog.py - changelog class for mercurial
2 #
3 # Copyright 2005-2007 Matt Mackall <mpm@selenic.com>
4 #
5 # This software may be used and distributed according to the terms
6 # of the GNU General Public License, incorporated herein by reference.
7
8 from node import bin, hex, nullid
9 from i18n import _
10 import util, error, revlog, encoding
11
12 def _string_escape(text):
13 """
14 >>> d = {'nl': chr(10), 'bs': chr(92), 'cr': chr(13), 'nul': chr(0)}
15 >>> s = "ab%(nl)scd%(bs)s%(bs)sn%(nul)sab%(cr)scd%(bs)s%(nl)s" % d
16 >>> s
17 'ab\\ncd\\\\\\\\n\\x00ab\\rcd\\\\\\n'
18 >>> res = _string_escape(s)
19 >>> s == res.decode('string_escape')
20 True
21 """
22 # subset of the string_escape codec
23 text = text.replace('\\', '\\\\').replace('\n', '\\n').replace('\r', '\\r')
24 return text.replace('\0', '\\0')
25
26 class appender:
27 '''the changelog index must be updated last on disk, so we use this class
28 to delay writes to it'''
29 def __init__(self, fp, buf):
30 self.data = buf
31 self.fp = fp
32 self.offset = fp.tell()
33 self.size = util.fstat(fp).st_size
34
35 def end(self):
36 return self.size + len("".join(self.data))
37 def tell(self):
38 return self.offset
39 def flush(self):
40 pass
41 def close(self):
42 self.fp.close()
43
44 def seek(self, offset, whence=0):
45 '''virtual file offset spans real file and data'''
46 if whence == 0:
47 self.offset = offset
48 elif whence == 1:
49 self.offset += offset
50 elif whence == 2:
51 self.offset = self.end() + offset
52 if self.offset < self.size:
53 self.fp.seek(self.offset)
54
55 def read(self, count=-1):
56 '''only trick here is reads that span real file and data'''
57 ret = ""
58 if self.offset < self.size:
59 s = self.fp.read(count)
60 ret = s
61 self.offset += len(s)
62 if count > 0:
63 count -= len(s)
64 if count != 0:
65 doff = self.offset - self.size
66 self.data.insert(0, "".join(self.data))
67 del self.data[1:]
68 s = self.data[0][doff:doff+count]
69 self.offset += len(s)
70 ret += s
71 return ret
72
73 def write(self, s):
74 self.data.append(str(s))
75 self.offset += len(s)
76
77 class changelog(revlog.revlog):
78 def __init__(self, opener):
79 revlog.revlog.__init__(self, opener, "00changelog.i")
80
81 def delayupdate(self):
82 "delay visibility of index updates to other readers"
83 self._realopener = self.opener
84 self.opener = self._delayopener
85 self._delaycount = len(self)
86 self._delaybuf = []
87 self._delayname = None
88
89 def finalize(self, tr):
90 "finalize index updates"
91 self.opener = self._realopener
92 # move redirected index data back into place
93 if self._delayname:
94 util.rename(self._delayname + ".a", self._delayname)
95 elif self._delaybuf:
96 fp = self.opener(self.indexfile, 'a')
97 fp.write("".join(self._delaybuf))
98 fp.close()
99 self._delaybuf = []
100 # split when we're done
101 self.checkinlinesize(tr)
102
103 def _delayopener(self, name, mode='r'):
104 fp = self._realopener(name, mode)
105 # only divert the index
106 if not name == self.indexfile:
107 return fp
108 # if we're doing an initial clone, divert to another file
109 if self._delaycount == 0:
110 self._delayname = fp.name
111 if not len(self):
112 # make sure to truncate the file
113 mode = mode.replace('a', 'w')
114 return self._realopener(name + ".a", mode)
115 # otherwise, divert to memory
116 return appender(fp, self._delaybuf)
117
118 def readpending(self, file):
119 r = revlog.revlog(self.opener, file)
120 self.index = r.index
121 self.nodemap = r.nodemap
122 self._chunkcache = r._chunkcache
123
124 def writepending(self):
125 "create a file containing the unfinalized state for pretxnchangegroup"
126 if self._delaybuf:
127 # make a temporary copy of the index
128 fp1 = self._realopener(self.indexfile)
129 fp2 = self._realopener(self.indexfile + ".a", "w")
130 fp2.write(fp1.read())
131 # add pending data
132 fp2.write("".join(self._delaybuf))
133 fp2.close()
134 # switch modes so finalize can simply rename
135 self._delaybuf = []
136 self._delayname = fp1.name
137
138 if self._delayname:
139 return True
140
141 return False
142
143 def checkinlinesize(self, tr, fp=None):
144 if self.opener == self._delayopener:
145 return
146 return revlog.revlog.checkinlinesize(self, tr, fp)
147
148 def decode_extra(self, text):
149 extra = {}
150 for l in text.split('\0'):
151 if l:
152 k, v = l.decode('string_escape').split(':', 1)
153 extra[k] = v
154 return extra
155
156 def encode_extra(self, d):
157 # keys must be sorted to produce a deterministic changelog entry
- 158 items = [_string_escape('%s:%s' % (k, d[k])) for k in util.sort(d)]
+ 158 items = [_string_escape('%s:%s' % (k, d[k])) for k in sorted(d)]
159 return "\0".join(items)
160
161 def read(self, node):
162 """
163 format used:
164 nodeid\n : manifest node in ascii
165 user\n : user, no \n or \r allowed
166 time tz extra\n : date (time is int or float, timezone is int)
167 : extra is metadatas, encoded and separated by '\0'
168 : older versions ignore it
169 files\n\n : files modified by the cset, no \n or \r allowed
170 (.*) : comment (free text, ideally utf-8)
171
172 changelog v0 doesn't use extra
173 """
174 text = self.revision(node)
175 if not text:
176 return (nullid, "", (0, 0), [], "", {'branch': 'default'})
177 last = text.index("\n\n")
178 desc = encoding.tolocal(text[last + 2:])
179 l = text[:last].split('\n')
180 manifest = bin(l[0])
181 user = encoding.tolocal(l[1])
182
183 extra_data = l[2].split(' ', 2)
184 if len(extra_data) != 3:
185 time = float(extra_data.pop(0))
186 try:
187 # various tools did silly things with the time zone field.
188 timezone = int(extra_data[0])
189 except:
190 timezone = 0
191 extra = {}
192 else:
193 time, timezone, extra = extra_data
194 time, timezone = float(time), int(timezone)
195 extra = self.decode_extra(extra)
196 if not extra.get('branch'):
197 extra['branch'] = 'default'
198 files = l[3:]
199 return (manifest, user, (time, timezone), files, desc, extra)
200
201 def add(self, manifest, files, desc, transaction, p1=None, p2=None,
202 user=None, date=None, extra={}):
203
204 user = user.strip()
205 if "\n" in user:
206 raise error.RevlogError(_("username %s contains a newline")
207 % repr(user))
208 user, desc = encoding.fromlocal(user), encoding.fromlocal(desc)
209
210 if date:
211 parseddate = "%d %d" % util.parsedate(date)
212 else:
213 parseddate = "%d %d" % util.makedate()
214 if extra and extra.get("branch") in ("default", ""):
215 del extra["branch"]
216 if extra:
217 extra = self.encode_extra(extra)
218 parseddate = "%s %s" % (parseddate, extra)
- 219 l = [hex(manifest), user, parseddate] + util.sort(files) + ["", desc]
+ 219 l = [hex(manifest), user, parseddate] + sorted(files) + ["", desc]
220 text = "\n".join(l)
221 return self.addrevision(text, transaction, len(self), p1, p2)
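The two changed lines in changelog.py (158 and 219) are the point of this commit: the private util.sort helper is replaced by Python's sorted() built-in. A minimal sketch of why this is a drop-in replacement at these call sites follows; `legacy_sort` is a rough stand-in for the removed helper (an assumption about its behavior, not the actual Mercurial code):

```python
def legacy_sort(iterable):
    """Rough stand-in for the removed util.sort helper (assumption):
    copy the iterable into a list and sort it in place."""
    items = list(iterable)
    items.sort()
    return items

# Iterating a dict yields its keys, so sorted(d) returns the sorted key
# list -- exactly what encode_extra needs for a deterministic entry.
extra = {'branch': 'default', 'close': '1'}
assert sorted(extra) == legacy_sort(extra) == ['branch', 'close']

# sorted() returns a new list and leaves its input untouched, so the
# files list passed to changelog.add is not mutated.
files = ['b.txt', 'a.txt']
assert sorted(files) == ['a.txt', 'b.txt']
assert files == ['b.txt', 'a.txt']
```

Since sorted() accepts any iterable and always returns a fresh list, every `util.sort(x)` call can be rewritten mechanically as `sorted(x)`.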
@@ -1,1226 +1,1226 b''
1 # cmdutil.py - help for command processing in mercurial
1 # cmdutil.py - help for command processing in mercurial
2 #
2 #
3 # Copyright 2005-2007 Matt Mackall <mpm@selenic.com>
3 # Copyright 2005-2007 Matt Mackall <mpm@selenic.com>
4 #
4 #
5 # This software may be used and distributed according to the terms
5 # This software may be used and distributed according to the terms
6 # of the GNU General Public License, incorporated herein by reference.
6 # of the GNU General Public License, incorporated herein by reference.
7
7
8 from node import hex, nullid, nullrev, short
8 from node import hex, nullid, nullrev, short
9 from i18n import _
9 from i18n import _
10 import os, sys, bisect, stat, encoding
10 import os, sys, bisect, stat, encoding
11 import mdiff, bdiff, util, templater, templatefilters, patch, errno, error
11 import mdiff, bdiff, util, templater, templatefilters, patch, errno, error
12 import match as _match
12 import match as _match
13
13
14 revrangesep = ':'
14 revrangesep = ':'
15
15
16 def findpossible(cmd, table, strict=False):
16 def findpossible(cmd, table, strict=False):
17 """
17 """
18 Return cmd -> (aliases, command table entry)
18 Return cmd -> (aliases, command table entry)
19 for each matching command.
19 for each matching command.
20 Return debug commands (or their aliases) only if no normal command matches.
20 Return debug commands (or their aliases) only if no normal command matches.
21 """
21 """
22 choice = {}
22 choice = {}
23 debugchoice = {}
23 debugchoice = {}
24 for e in table.keys():
24 for e in table.keys():
25 aliases = e.lstrip("^").split("|")
25 aliases = e.lstrip("^").split("|")
26 found = None
26 found = None
27 if cmd in aliases:
27 if cmd in aliases:
28 found = cmd
28 found = cmd
29 elif not strict:
29 elif not strict:
30 for a in aliases:
30 for a in aliases:
31 if a.startswith(cmd):
31 if a.startswith(cmd):
32 found = a
32 found = a
33 break
33 break
34 if found is not None:
34 if found is not None:
35 if aliases[0].startswith("debug") or found.startswith("debug"):
35 if aliases[0].startswith("debug") or found.startswith("debug"):
36 debugchoice[found] = (aliases, table[e])
36 debugchoice[found] = (aliases, table[e])
37 else:
37 else:
38 choice[found] = (aliases, table[e])
38 choice[found] = (aliases, table[e])
39
39
40 if not choice and debugchoice:
40 if not choice and debugchoice:
41 choice = debugchoice
41 choice = debugchoice
42
42
43 return choice
43 return choice
44
44
45 def findcmd(cmd, table, strict=True):
45 def findcmd(cmd, table, strict=True):
46 """Return (aliases, command table entry) for command string."""
46 """Return (aliases, command table entry) for command string."""
47 choice = findpossible(cmd, table, strict)
47 choice = findpossible(cmd, table, strict)
48
48
49 if cmd in choice:
49 if cmd in choice:
50 return choice[cmd]
50 return choice[cmd]
51
51
52 if len(choice) > 1:
52 if len(choice) > 1:
53 clist = choice.keys()
53 clist = choice.keys()
54 clist.sort()
54 clist.sort()
55 raise error.AmbiguousCommand(cmd, clist)
55 raise error.AmbiguousCommand(cmd, clist)
56
56
57 if choice:
57 if choice:
58 return choice.values()[0]
58 return choice.values()[0]
59
59
60 raise error.UnknownCommand(cmd)
60 raise error.UnknownCommand(cmd)
61
61
62 def bail_if_changed(repo):
62 def bail_if_changed(repo):
63 if repo.dirstate.parents()[1] != nullid:
63 if repo.dirstate.parents()[1] != nullid:
64 raise util.Abort(_('outstanding uncommitted merge'))
64 raise util.Abort(_('outstanding uncommitted merge'))
65 modified, added, removed, deleted = repo.status()[:4]
65 modified, added, removed, deleted = repo.status()[:4]
66 if modified or added or removed or deleted:
66 if modified or added or removed or deleted:
67 raise util.Abort(_("outstanding uncommitted changes"))
67 raise util.Abort(_("outstanding uncommitted changes"))
68
68
69 def logmessage(opts):
69 def logmessage(opts):
70 """ get the log message according to -m and -l option """
70 """ get the log message according to -m and -l option """
71 message = opts.get('message')
71 message = opts.get('message')
72 logfile = opts.get('logfile')
72 logfile = opts.get('logfile')
73
73
74 if message and logfile:
74 if message and logfile:
75 raise util.Abort(_('options --message and --logfile are mutually '
75 raise util.Abort(_('options --message and --logfile are mutually '
76 'exclusive'))
76 'exclusive'))
77 if not message and logfile:
77 if not message and logfile:
78 try:
78 try:
79 if logfile == '-':
79 if logfile == '-':
80 message = sys.stdin.read()
80 message = sys.stdin.read()
81 else:
81 else:
82 message = open(logfile).read()
82 message = open(logfile).read()
83 except IOError, inst:
83 except IOError, inst:
84 raise util.Abort(_("can't read commit message '%s': %s") %
84 raise util.Abort(_("can't read commit message '%s': %s") %
85 (logfile, inst.strerror))
85 (logfile, inst.strerror))
86 return message
86 return message
87
87
88 def loglimit(opts):
88 def loglimit(opts):
89 """get the log limit according to option -l/--limit"""
89 """get the log limit according to option -l/--limit"""
90 limit = opts.get('limit')
90 limit = opts.get('limit')
91 if limit:
91 if limit:
92 try:
92 try:
93 limit = int(limit)
93 limit = int(limit)
94 except ValueError:
94 except ValueError:
95 raise util.Abort(_('limit must be a positive integer'))
95 raise util.Abort(_('limit must be a positive integer'))
96 if limit <= 0: raise util.Abort(_('limit must be positive'))
96 if limit <= 0: raise util.Abort(_('limit must be positive'))
97 else:
97 else:
98 limit = sys.maxint
98 limit = sys.maxint
99 return limit
99 return limit
100
100
101 def remoteui(src, opts):
101 def remoteui(src, opts):
102 'build a remote ui from ui or repo and opts'
102 'build a remote ui from ui or repo and opts'
103 if hasattr(src, 'baseui'): # looks like a repository
103 if hasattr(src, 'baseui'): # looks like a repository
104 dst = src.baseui # drop repo-specific config
104 dst = src.baseui # drop repo-specific config
105 src = src.ui # copy target options from repo
105 src = src.ui # copy target options from repo
106 else: # assume it's a global ui object
106 else: # assume it's a global ui object
107 dst = src # keep all global options
107 dst = src # keep all global options
108
108
109 # copy ssh-specific options
109 # copy ssh-specific options
110 for o in 'ssh', 'remotecmd':
110 for o in 'ssh', 'remotecmd':
111 v = opts.get(o) or src.config('ui', o)
111 v = opts.get(o) or src.config('ui', o)
112 if v:
112 if v:
113 dst.setconfig("ui", o, v)
113 dst.setconfig("ui", o, v)
114 # copy bundle-specific options
114 # copy bundle-specific options
115 r = src.config('bundle', 'mainreporoot')
115 r = src.config('bundle', 'mainreporoot')
116 if r:
116 if r:
117 dst.setconfig('bundle', 'mainreporoot', r)
117 dst.setconfig('bundle', 'mainreporoot', r)
118
118
119 return dst
119 return dst
120
120
def revpair(repo, revs):
    '''return pair of nodes, given list of revisions. second item can
    be None, meaning use working dir.'''

    def revfix(repo, val, defval):
        if not val and val != 0 and defval is not None:
            val = defval
        return repo.lookup(val)

    if not revs:
        return repo.dirstate.parents()[0], None
    end = None
    if len(revs) == 1:
        if revrangesep in revs[0]:
            start, end = revs[0].split(revrangesep, 1)
            start = revfix(repo, start, 0)
            end = revfix(repo, end, len(repo) - 1)
        else:
            start = revfix(repo, revs[0], None)
    elif len(revs) == 2:
        if revrangesep in revs[0] or revrangesep in revs[1]:
            raise util.Abort(_('too many revisions specified'))
        start = revfix(repo, revs[0], None)
        end = revfix(repo, revs[1], None)
    else:
        raise util.Abort(_('too many revisions specified'))
    return start, end

def revrange(repo, revs):
    """Yield revision as strings from a list of revision specifications."""

    def revfix(repo, val, defval):
        if not val and val != 0 and defval is not None:
            return defval
        return repo.changelog.rev(repo.lookup(val))

    seen, l = {}, []
    for spec in revs:
        if revrangesep in spec:
            start, end = spec.split(revrangesep, 1)
            start = revfix(repo, start, 0)
            end = revfix(repo, end, len(repo) - 1)
            step = start > end and -1 or 1
            for rev in xrange(start, end+step, step):
                if rev in seen:
                    continue
                seen[rev] = 1
                l.append(rev)
        else:
            rev = revfix(repo, spec, None)
            if rev in seen:
                continue
            seen[rev] = 1
            l.append(rev)

    return l

def make_filename(repo, pat, node,
                  total=None, seqno=None, revwidth=None, pathname=None):
    node_expander = {
        'H': lambda: hex(node),
        'R': lambda: str(repo.changelog.rev(node)),
        'h': lambda: short(node),
        }
    expander = {
        '%': lambda: '%',
        'b': lambda: os.path.basename(repo.root),
        }

    try:
        if node:
            expander.update(node_expander)
        if node:
            expander['r'] = (lambda:
                    str(repo.changelog.rev(node)).zfill(revwidth or 0))
        if total is not None:
            expander['N'] = lambda: str(total)
        if seqno is not None:
            expander['n'] = lambda: str(seqno)
        if total is not None and seqno is not None:
            expander['n'] = lambda: str(seqno).zfill(len(str(total)))
        if pathname is not None:
            expander['s'] = lambda: os.path.basename(pathname)
            expander['d'] = lambda: os.path.dirname(pathname) or '.'
            expander['p'] = lambda: pathname

        newname = []
        patlen = len(pat)
        i = 0
        while i < patlen:
            c = pat[i]
            if c == '%':
                i += 1
                c = pat[i]
                c = expander[c]()
            newname.append(c)
            i += 1
        return ''.join(newname)
    except KeyError, inst:
        raise util.Abort(_("invalid format spec '%%%s' in output file name") %
                         inst.args[0])

def make_file(repo, pat, node=None,
              total=None, seqno=None, revwidth=None, mode='wb', pathname=None):

    writable = 'w' in mode or 'a' in mode

    if not pat or pat == '-':
        return writable and sys.stdout or sys.stdin
    if hasattr(pat, 'write') and writable:
        return pat
    if hasattr(pat, 'read') and 'r' in mode:
        return pat
    return open(make_filename(repo, pat, node, total, seqno, revwidth,
                              pathname),
                mode)

def match(repo, pats=[], opts={}, globbed=False, default='relpath'):
    if not globbed and default == 'relpath':
        pats = util.expand_glob(pats or [])
    m = _match.match(repo.root, repo.getcwd(), pats,
                     opts.get('include'), opts.get('exclude'), default)
    def badfn(f, msg):
        repo.ui.warn("%s: %s\n" % (m.rel(f), msg))
        return False
    m.bad = badfn
    return m

def matchall(repo):
    return _match.always(repo.root, repo.getcwd())

def matchfiles(repo, files):
    return _match.exact(repo.root, repo.getcwd(), files)

def findrenames(repo, added=None, removed=None, threshold=0.5):
    '''find renamed files -- yields (before, after, score) tuples'''
    if added is None or removed is None:
        added, removed = repo.status()[1:3]
    ctx = repo['.']
    for a in added:
        aa = repo.wread(a)
        bestname, bestscore = None, threshold
        for r in removed:
            rr = ctx.filectx(r).data()

            # bdiff.blocks() returns blocks of matching lines
            # count the number of bytes in each
            equal = 0
            alines = mdiff.splitnewlines(aa)
            matches = bdiff.blocks(aa, rr)
            for x1,x2,y1,y2 in matches:
                for line in alines[x1:x2]:
                    equal += len(line)

            lengths = len(aa) + len(rr)
            if lengths:
                myscore = equal*2.0 / lengths
                if myscore >= bestscore:
                    bestname, bestscore = r, myscore
        if bestname:
            yield bestname, a, bestscore

def addremove(repo, pats=[], opts={}, dry_run=None, similarity=None):
    if dry_run is None:
        dry_run = opts.get('dry_run')
    if similarity is None:
        similarity = float(opts.get('similarity') or 0)
    add, remove = [], []
    mapping = {}
    audit_path = util.path_auditor(repo.root)
    m = match(repo, pats, opts)
    for abs in repo.walk(m):
        target = repo.wjoin(abs)
        good = True
        try:
            audit_path(abs)
        except:
            good = False
        rel = m.rel(abs)
        exact = m.exact(abs)
        if good and abs not in repo.dirstate:
            add.append(abs)
            mapping[abs] = rel, m.exact(abs)
            if repo.ui.verbose or not exact:
                repo.ui.status(_('adding %s\n') % ((pats and rel) or abs))
        if repo.dirstate[abs] != 'r' and (not good or not util.lexists(target)
            or (os.path.isdir(target) and not os.path.islink(target))):
            remove.append(abs)
            mapping[abs] = rel, exact
            if repo.ui.verbose or not exact:
                repo.ui.status(_('removing %s\n') % ((pats and rel) or abs))
    if not dry_run:
        repo.remove(remove)
        repo.add(add)
    if similarity > 0:
        for old, new, score in findrenames(repo, add, remove, similarity):
            oldrel, oldexact = mapping[old]
            newrel, newexact = mapping[new]
            if repo.ui.verbose or not oldexact or not newexact:
                repo.ui.status(_('recording removal of %s as rename to %s '
                                 '(%d%% similar)\n') %
                               (oldrel, newrel, score * 100))
            if not dry_run:
                repo.copy(old, new)

def copy(ui, repo, pats, opts, rename=False):
    # called with the repo lock held
    #
    # hgsep => pathname that uses "/" to separate directories
    # ossep => pathname that uses os.sep to separate directories
    cwd = repo.getcwd()
    targets = {}
    after = opts.get("after")
    dryrun = opts.get("dry_run")

    def walkpat(pat):
        srcs = []
        m = match(repo, [pat], opts, globbed=True)
        for abs in repo.walk(m):
            state = repo.dirstate[abs]
            rel = m.rel(abs)
            exact = m.exact(abs)
            if state in '?r':
                if exact and state == '?':
                    ui.warn(_('%s: not copying - file is not managed\n') % rel)
                if exact and state == 'r':
                    ui.warn(_('%s: not copying - file has been marked for'
                              ' remove\n') % rel)
                continue
            # abs: hgsep
            # rel: ossep
            srcs.append((abs, rel, exact))
        return srcs

    # abssrc: hgsep
    # relsrc: ossep
    # otarget: ossep
    def copyfile(abssrc, relsrc, otarget, exact):
        abstarget = util.canonpath(repo.root, cwd, otarget)
        reltarget = repo.pathto(abstarget, cwd)
        target = repo.wjoin(abstarget)
        src = repo.wjoin(abssrc)
        state = repo.dirstate[abstarget]

        # check for collisions
        prevsrc = targets.get(abstarget)
        if prevsrc is not None:
            ui.warn(_('%s: not overwriting - %s collides with %s\n') %
                    (reltarget, repo.pathto(abssrc, cwd),
                     repo.pathto(prevsrc, cwd)))
            return

        # check for overwrites
        exists = os.path.exists(target)
        if not after and exists or after and state in 'mn':
            if not opts['force']:
                ui.warn(_('%s: not overwriting - file exists\n') %
                        reltarget)
                return

        if after:
            if not exists:
                return
        elif not dryrun:
            try:
                if exists:
                    os.unlink(target)
                targetdir = os.path.dirname(target) or '.'
                if not os.path.isdir(targetdir):
                    os.makedirs(targetdir)
                util.copyfile(src, target)
            except IOError, inst:
                if inst.errno == errno.ENOENT:
                    ui.warn(_('%s: deleted in working copy\n') % relsrc)
                else:
                    ui.warn(_('%s: cannot copy - %s\n') %
                            (relsrc, inst.strerror))
                    return True # report a failure

        if ui.verbose or not exact:
            if rename:
                ui.status(_('moving %s to %s\n') % (relsrc, reltarget))
            else:
                ui.status(_('copying %s to %s\n') % (relsrc, reltarget))

        targets[abstarget] = abssrc

        # fix up dirstate
        origsrc = repo.dirstate.copied(abssrc) or abssrc
        if abstarget == origsrc: # copying back a copy?
            if state not in 'mn' and not dryrun:
                repo.dirstate.normallookup(abstarget)
        else:
            if repo.dirstate[origsrc] == 'a' and origsrc == abssrc:
                if not ui.quiet:
                    ui.warn(_("%s has not been committed yet, so no copy "
                              "data will be stored for %s.\n")
                            % (repo.pathto(origsrc, cwd), reltarget))
                if repo.dirstate[abstarget] in '?r' and not dryrun:
                    repo.add([abstarget])
            elif not dryrun:
                repo.copy(origsrc, abstarget)

        if rename and not dryrun:
            repo.remove([abssrc], not after)

    # pat: ossep
    # dest ossep
    # srcs: list of (hgsep, hgsep, ossep, bool)
    # return: function that takes hgsep and returns ossep
    def targetpathfn(pat, dest, srcs):
        if os.path.isdir(pat):
            abspfx = util.canonpath(repo.root, cwd, pat)
            abspfx = util.localpath(abspfx)
            if destdirexists:
                striplen = len(os.path.split(abspfx)[0])
            else:
                striplen = len(abspfx)
            if striplen:
                striplen += len(os.sep)
            res = lambda p: os.path.join(dest, util.localpath(p)[striplen:])
        elif destdirexists:
            res = lambda p: os.path.join(dest,
                                         os.path.basename(util.localpath(p)))
        else:
            res = lambda p: dest
        return res

    # pat: ossep
    # dest ossep
    # srcs: list of (hgsep, hgsep, ossep, bool)
    # return: function that takes hgsep and returns ossep
    def targetpathafterfn(pat, dest, srcs):
        if util.patkind(pat, None)[0]:
            # a mercurial pattern
            res = lambda p: os.path.join(dest,
                                         os.path.basename(util.localpath(p)))
        else:
            abspfx = util.canonpath(repo.root, cwd, pat)
            if len(abspfx) < len(srcs[0][0]):
                # A directory. Either the target path contains the last
                # component of the source path or it does not.
                def evalpath(striplen):
                    score = 0
                    for s in srcs:
                        t = os.path.join(dest, util.localpath(s[0])[striplen:])
                        if os.path.exists(t):
                            score += 1
                    return score

                abspfx = util.localpath(abspfx)
                striplen = len(abspfx)
                if striplen:
                    striplen += len(os.sep)
                if os.path.isdir(os.path.join(dest, os.path.split(abspfx)[1])):
                    score = evalpath(striplen)
                    striplen1 = len(os.path.split(abspfx)[0])
                    if striplen1:
                        striplen1 += len(os.sep)
                    if evalpath(striplen1) > score:
                        striplen = striplen1
                res = lambda p: os.path.join(dest,
                                             util.localpath(p)[striplen:])
            else:
                # a file
                if destdirexists:
                    res = lambda p: os.path.join(dest,
                                        os.path.basename(util.localpath(p)))
                else:
                    res = lambda p: dest
        return res


    pats = util.expand_glob(pats)
    if not pats:
        raise util.Abort(_('no source or destination specified'))
    if len(pats) == 1:
        raise util.Abort(_('no destination specified'))
    dest = pats.pop()
    destdirexists = os.path.isdir(dest) and not os.path.islink(dest)
    if not destdirexists:
        if len(pats) > 1 or util.patkind(pats[0], None)[0]:
            raise util.Abort(_('with multiple sources, destination must be an '
                               'existing directory'))
        if util.endswithsep(dest):
            raise util.Abort(_('destination %s is not a directory') % dest)

    tfn = targetpathfn
    if after:
        tfn = targetpathafterfn
    copylist = []
    for pat in pats:
        srcs = walkpat(pat)
        if not srcs:
            continue
        copylist.append((tfn(pat, dest, srcs), srcs))
    if not copylist:
        raise util.Abort(_('no files to copy'))

    errors = 0
    for targetpath, srcs in copylist:
        for abssrc, relsrc, exact in srcs:
            if copyfile(abssrc, relsrc, targetpath(abssrc), exact):
                errors += 1

    if errors:
        ui.warn(_('(consider using --after)\n'))

    return errors

def service(opts, parentfn=None, initfn=None, runfn=None):
    '''Run a command as a service.'''

    if opts['daemon'] and not opts['daemon_pipefds']:
        rfd, wfd = os.pipe()
        args = sys.argv[:]
        args.append('--daemon-pipefds=%d,%d' % (rfd, wfd))
        # Don't pass --cwd to the child process, because we've already
        # changed directory.
        for i in xrange(1,len(args)):
            if args[i].startswith('--cwd='):
                del args[i]
                break
            elif args[i].startswith('--cwd'):
                del args[i:i+2]
                break
        pid = os.spawnvp(os.P_NOWAIT | getattr(os, 'P_DETACH', 0),
                         args[0], args)
        os.close(wfd)
        os.read(rfd, 1)
        if parentfn:
            return parentfn(pid)
        else:
            os._exit(0)

    if initfn:
        initfn()

    if opts['pid_file']:
        fp = open(opts['pid_file'], 'w')
        fp.write(str(os.getpid()) + '\n')
        fp.close()

    if opts['daemon_pipefds']:
        rfd, wfd = [int(x) for x in opts['daemon_pipefds'].split(',')]
        os.close(rfd)
        try:
            os.setsid()
        except AttributeError:
            pass
        os.write(wfd, 'y')
        os.close(wfd)
        sys.stdout.flush()
        sys.stderr.flush()
        fd = os.open(util.nulldev, os.O_RDWR)
        if fd != 0: os.dup2(fd, 0)
        if fd != 1: os.dup2(fd, 1)
        if fd != 2: os.dup2(fd, 2)
        if fd not in (0, 1, 2): os.close(fd)

    if runfn:
        return runfn()

class changeset_printer(object):
    '''show changeset information when templating not requested.'''

    def __init__(self, ui, repo, patch, diffopts, buffered):
        self.ui = ui
        self.repo = repo
        self.buffered = buffered
        self.patch = patch
        self.diffopts = diffopts
        self.header = {}
        self.hunk = {}
        self.lastheader = None

    def flush(self, rev):
        if rev in self.header:
            h = self.header[rev]
            if h != self.lastheader:
                self.lastheader = h
                self.ui.write(h)
            del self.header[rev]
        if rev in self.hunk:
            self.ui.write(self.hunk[rev])
            del self.hunk[rev]
            return 1
        return 0

    def show(self, ctx, copies=(), **props):
        if self.buffered:
            self.ui.pushbuffer()
            self._show(ctx, copies, props)
            self.hunk[ctx.rev()] = self.ui.popbuffer()
        else:
            self._show(ctx, copies, props)

    def _show(self, ctx, copies, props):
        '''show a single changeset or file revision'''
        changenode = ctx.node()
        rev = ctx.rev()

        if self.ui.quiet:
            self.ui.write("%d:%s\n" % (rev, short(changenode)))
            return

        log = self.repo.changelog
        changes = log.read(changenode)
        date = util.datestr(changes[2])
        extra = changes[5]
        branch = extra.get("branch")

        hexfunc = self.ui.debugflag and hex or short

        parents = [(p, hexfunc(log.node(p)))
                   for p in self._meaningful_parentrevs(log, rev)]

        self.ui.write(_("changeset:   %d:%s\n") % (rev, hexfunc(changenode)))

        # don't show the default branch name
        if branch != 'default':
            branch = encoding.tolocal(branch)
            self.ui.write(_("branch:      %s\n") % branch)
        for tag in self.repo.nodetags(changenode):
            self.ui.write(_("tag:         %s\n") % tag)
        for parent in parents:
            self.ui.write(_("parent:      %d:%s\n") % parent)

        if self.ui.debugflag:
            self.ui.write(_("manifest:    %d:%s\n") %
                          (self.repo.manifest.rev(changes[0]), hex(changes[0])))
        self.ui.write(_("user:        %s\n") % changes[1])
        self.ui.write(_("date:        %s\n") % date)

        if self.ui.debugflag:
            files = self.repo.status(log.parents(changenode)[0], changenode)[:3]
            for key, value in zip([_("files:"), _("files+:"), _("files-:")],
                                  files):
                if value:
                    self.ui.write("%-12s %s\n" % (key, " ".join(value)))
        elif changes[3] and self.ui.verbose:
            self.ui.write(_("files:       %s\n") % " ".join(changes[3]))
        if copies and self.ui.verbose:
            copies = ['%s (%s)' % c for c in copies]
            self.ui.write(_("copies:      %s\n") % ' '.join(copies))

        if extra and self.ui.debugflag:
            for key, value in sorted(extra.items()):
                self.ui.write(_("extra:       %s=%s\n")
                              % (key, value.encode('string_escape')))

        description = changes[4].strip()
        if description:
            if self.ui.verbose:
                self.ui.write(_("description:\n"))
                self.ui.write(description)
                self.ui.write("\n\n")
            else:
                self.ui.write(_("summary:     %s\n") %
                              description.splitlines()[0])
            self.ui.write("\n")

        self.showpatch(changenode)

    def showpatch(self, node):
        if self.patch:
            prev = self.repo.changelog.parents(node)[0]
            chunks = patch.diff(self.repo, prev, node, match=self.patch,
                                opts=patch.diffopts(self.ui, self.diffopts))
            for chunk in chunks:
                self.ui.write(chunk)
            self.ui.write("\n")

694 def _meaningful_parentrevs(self, log, rev):
694 def _meaningful_parentrevs(self, log, rev):
695 """Return list of meaningful (or all if debug) parentrevs for rev.
695 """Return list of meaningful (or all if debug) parentrevs for rev.
696
696
697 For merges (two non-nullrev revisions) both parents are meaningful.
697 For merges (two non-nullrev revisions) both parents are meaningful.
698 Otherwise the first parent revision is considered meaningful if it
698 Otherwise the first parent revision is considered meaningful if it
699 is not the preceding revision.
699 is not the preceding revision.
700 """
700 """
701 parents = log.parentrevs(rev)
701 parents = log.parentrevs(rev)
702 if not self.ui.debugflag and parents[1] == nullrev:
702 if not self.ui.debugflag and parents[1] == nullrev:
703 if parents[0] >= rev - 1:
703 if parents[0] >= rev - 1:
704 parents = []
704 parents = []
705 else:
705 else:
706 parents = [parents[0]]
706 parents = [parents[0]]
707 return parents
707 return parents
708
708
709
709
class changeset_templater(changeset_printer):
    '''format changeset information.'''

    def __init__(self, ui, repo, patch, diffopts, mapfile, buffered):
        changeset_printer.__init__(self, ui, repo, patch, diffopts, buffered)
        filters = templatefilters.filters.copy()
        filters['formatnode'] = (ui.debugflag and (lambda x: x)
                                 or (lambda x: x[:12]))
        self.t = templater.templater(mapfile, filters,
                                     cache={
                                         'parent': '{rev}:{node|formatnode} ',
                                         'manifest': '{rev}:{node|formatnode}',
                                         'filecopy': '{name} ({source})'})

    def use_template(self, t):
        '''set template string to use'''
        self.t.cache['changeset'] = t

    def _meaningful_parentrevs(self, ctx):
        """Return list of meaningful (or all if debug) parentrevs for rev.
        """
        parents = ctx.parents()
        if len(parents) > 1:
            return parents
        if self.ui.debugflag:
            return [parents[0], self.repo['null']]
        if parents[0].rev() >= ctx.rev() - 1:
            return []
        return parents

    def _show(self, ctx, copies, props):
        '''show a single changeset or file revision'''

        def showlist(name, values, plural=None, **args):
            '''expand set of values.
            name is name of key in template map.
            values is list of strings or dicts.
            plural is plural of name, if not simply name + 's'.

            expansion works like this, given name 'foo'.

            if values is empty, expand 'no_foos'.

            if 'foo' not in template map, return values as a string,
            joined by space.

            expand 'start_foos'.

            for each value, expand 'foo'. if 'last_foo' in template
            map, expand it instead of 'foo' for last key.

            expand 'end_foos'.
            '''
            if plural: names = plural
            else: names = name + 's'
            if not values:
                noname = 'no_' + names
                if noname in self.t:
                    yield self.t(noname, **args)
                return
            if name not in self.t:
                if isinstance(values[0], str):
                    yield ' '.join(values)
                else:
                    for v in values:
                        yield dict(v, **args)
                return
            startname = 'start_' + names
            if startname in self.t:
                yield self.t(startname, **args)
            vargs = args.copy()
            def one(v, tag=name):
                try:
                    vargs.update(v)
                except (AttributeError, ValueError):
                    try:
                        for a, b in v:
                            vargs[a] = b
                    except ValueError:
                        vargs[name] = v
                return self.t(tag, **vargs)
            lastname = 'last_' + name
            if lastname in self.t:
                last = values.pop()
            else:
                last = None
            for v in values:
                yield one(v)
            if last is not None:
                yield one(last, tag=lastname)
            endname = 'end_' + names
            if endname in self.t:
                yield self.t(endname, **args)

        def showbranches(**args):
            branch = ctx.branch()
            if branch != 'default':
                branch = encoding.tolocal(branch)
                return showlist('branch', [branch], plural='branches', **args)

        def showparents(**args):
            parents = [[('rev', p.rev()), ('node', p.hex())]
                       for p in self._meaningful_parentrevs(ctx)]
            return showlist('parent', parents, **args)

        def showtags(**args):
            return showlist('tag', ctx.tags(), **args)

        def showextras(**args):
-            for key, value in util.sort(ctx.extra().items()):
+            for key, value in sorted(ctx.extra().items()):
                args = args.copy()
                args.update(dict(key=key, value=value))
                yield self.t('extra', **args)

        def showcopies(**args):
            c = [{'name': x[0], 'source': x[1]} for x in copies]
            return showlist('file_copy', c, plural='file_copies', **args)

        files = []
        def getfiles():
            if not files:
                files[:] = self.repo.status(ctx.parents()[0].node(),
                                            ctx.node())[:3]
            return files
        def showfiles(**args):
            return showlist('file', ctx.files(), **args)
        def showmods(**args):
            return showlist('file_mod', getfiles()[0], **args)
        def showadds(**args):
            return showlist('file_add', getfiles()[1], **args)
        def showdels(**args):
            return showlist('file_del', getfiles()[2], **args)
        def showmanifest(**args):
            args = args.copy()
            args.update(dict(rev=self.repo.manifest.rev(ctx.changeset()[0]),
                             node=hex(ctx.changeset()[0])))
            return self.t('manifest', **args)

        def showdiffstat(**args):
            diff = patch.diff(self.repo, ctx.parents()[0].node(), ctx.node())
            files, adds, removes = 0, 0, 0
            for i in patch.diffstatdata(util.iterlines(diff)):
                files += 1
                adds += i[1]
                removes += i[2]
            return '%s: +%s/-%s' % (files, adds, removes)

        defprops = {
            'author': ctx.user(),
            'branches': showbranches,
            'date': ctx.date(),
            'desc': ctx.description().strip(),
            'file_adds': showadds,
            'file_dels': showdels,
            'file_mods': showmods,
            'files': showfiles,
            'file_copies': showcopies,
            'manifest': showmanifest,
            'node': ctx.hex(),
            'parents': showparents,
            'rev': ctx.rev(),
            'tags': showtags,
            'extras': showextras,
            'diffstat': showdiffstat,
            }
        props = props.copy()
        props.update(defprops)

        # find correct templates for current mode

        tmplmodes = [
            (True, None),
            (self.ui.verbose, 'verbose'),
            (self.ui.quiet, 'quiet'),
            (self.ui.debugflag, 'debug'),
            ]

        types = {'header': '', 'changeset': 'changeset'}
        for mode, postfix in tmplmodes:
            for type in types:
                cur = postfix and ('%s_%s' % (type, postfix)) or type
                if mode and cur in self.t:
                    types[type] = cur

        try:

            # write header
            if types['header']:
                h = templater.stringify(self.t(types['header'], **props))
                if self.buffered:
                    self.header[ctx.rev()] = h
                else:
                    self.ui.write(h)

            # write changeset metadata, then patch if requested
            key = types['changeset']
            self.ui.write(templater.stringify(self.t(key, **props)))
            self.showpatch(ctx.node())

        except KeyError, inst:
            msg = _("%s: no key named '%s'")
            raise util.Abort(msg % (self.t.mapfile, inst.args[0]))
        except SyntaxError, inst:
            raise util.Abort(_('%s: %s') % (self.t.mapfile, inst.args[0]))

def show_changeset(ui, repo, opts, buffered=False, matchfn=False):
    """show one changeset using template or regular display.

    Display format will be the first non-empty hit of:
    1. option 'template'
    2. option 'style'
    3. [ui] setting 'logtemplate'
    4. [ui] setting 'style'
    If all of these values are either unset or the empty string,
    regular display via changeset_printer() is done.
    """
    # options
    patch = False
    if opts.get('patch'):
        patch = matchfn or matchall(repo)

    tmpl = opts.get('template')
    style = None
    if tmpl:
        tmpl = templater.parsestring(tmpl, quoted=False)
    else:
        style = opts.get('style')

    # ui settings
    if not (tmpl or style):
        tmpl = ui.config('ui', 'logtemplate')
        if tmpl:
            tmpl = templater.parsestring(tmpl)
        else:
            style = ui.config('ui', 'style')

    if not (tmpl or style):
        return changeset_printer(ui, repo, patch, opts, buffered)

    mapfile = None
    if style and not tmpl:
        mapfile = style
        if not os.path.split(mapfile)[0]:
            mapname = (templater.templatepath('map-cmdline.' + mapfile)
                       or templater.templatepath(mapfile))
            if mapname: mapfile = mapname

    try:
        t = changeset_templater(ui, repo, patch, opts, mapfile, buffered)
    except SyntaxError, inst:
        raise util.Abort(inst.args[0])
    if tmpl: t.use_template(tmpl)
    return t

def finddate(ui, repo, date):
    """Find the tipmost changeset that matches the given date spec"""
    df = util.matchdate(date)
    get = util.cachefunc(lambda r: repo[r].changeset())
    changeiter, matchfn = walkchangerevs(ui, repo, [], get, {'rev':None})
    results = {}
    for st, rev, fns in changeiter:
        if st == 'add':
            d = get(rev)[2]
            if df(d[0]):
                results[rev] = d
        elif st == 'iter':
            if rev in results:
                ui.status(_("Found revision %s from %s\n") %
                          (rev, util.datestr(results[rev])))
                return str(rev)

    raise util.Abort(_("revision matching date not found"))

def walkchangerevs(ui, repo, pats, change, opts):
    '''Iterate over files and the revs in which they changed.

    Callers most commonly need to iterate backwards over the history
    in which they are interested. Doing so has awful (quadratic-looking)
    performance, so we use iterators in a "windowed" way.

    We walk a window of revisions in the desired order. Within the
    window, we first walk forwards to gather data, then in the desired
    order (usually backwards) to display it.

    This function returns an (iterator, matchfn) tuple. The iterator
    yields 3-tuples. They will be of one of the following forms:

    "window", incrementing, lastrev: stepping through a window,
    positive if walking forwards through revs, last rev in the
    sequence iterated over - use to reset state for the current window

    "add", rev, fns: out-of-order traversal of the given file names
    fns, which changed during revision rev - use to gather data for
    possible display

    "iter", rev, None: in-order traversal of the revs earlier iterated
    over with "add" - use to display data'''

    def increasing_windows(start, end, windowsize=8, sizelimit=512):
        if start < end:
            while start < end:
                yield start, min(windowsize, end-start)
                start += windowsize
                if windowsize < sizelimit:
                    windowsize *= 2
        else:
            while start > end:
                yield start, min(windowsize, start-end-1)
                start -= windowsize
                if windowsize < sizelimit:
                    windowsize *= 2

    m = match(repo, pats, opts)
    follow = opts.get('follow') or opts.get('follow_first')

    if not len(repo):
        return [], m

    if follow:
        defrange = '%s:0' % repo['.'].rev()
    else:
        defrange = '-1:0'
    revs = revrange(repo, opts['rev'] or [defrange])
    wanted = set()
    slowpath = m.anypats() or (m.files() and opts.get('removed'))
    fncache = {}

    if not slowpath and not m.files():
        # No files, no patterns.  Display all revs.
        wanted = set(revs)
    copies = []
    if not slowpath:
        # Only files, no patterns.  Check the history of each file.
        def filerevgen(filelog, node):
            cl_count = len(repo)
            if node is None:
                last = len(filelog) - 1
            else:
                last = filelog.rev(node)
            for i, window in increasing_windows(last, nullrev):
                revs = []
                for j in xrange(i - window, i + 1):
                    n = filelog.node(j)
                    revs.append((filelog.linkrev(j),
                                 follow and filelog.renamed(n)))
                revs.reverse()
                for rev in revs:
                    # only yield rev for which we have the changelog, it can
                    # happen while doing "hg log" during a pull or commit
                    if rev[0] < cl_count:
                        yield rev
        def iterfiles():
            for filename in m.files():
                yield filename, None
            for filename_node in copies:
                yield filename_node
        minrev, maxrev = min(revs), max(revs)
        for file_, node in iterfiles():
            filelog = repo.file(file_)
            if not len(filelog):
                if node is None:
                    # A zero count may be a directory or deleted file, so
                    # try to find matching entries on the slow path.
                    if follow:
                        raise util.Abort(_('cannot follow nonexistent file: "%s"') % file_)
                    slowpath = True
                    break
                else:
                    ui.warn(_('%s:%s copy source revision cannot be found!\n')
                            % (file_, short(node)))
                    continue
            for rev, copied in filerevgen(filelog, node):
                if rev <= maxrev:
                    if rev < minrev:
                        break
                    fncache.setdefault(rev, [])
                    fncache[rev].append(file_)
                    wanted.add(rev)
                    if follow and copied:
                        copies.append(copied)
    if slowpath:
        if follow:
            raise util.Abort(_('can only follow copies/renames for explicit '
                               'file names'))

        # The slow path checks files modified in every changeset.
        def changerevgen():
            for i, window in increasing_windows(len(repo) - 1, nullrev):
                for j in xrange(i - window, i + 1):
                    yield j, change(j)[3]

        for rev, changefiles in changerevgen():
            matches = filter(m, changefiles)
            if matches:
                fncache[rev] = matches
                wanted.add(rev)

    class followfilter:
        def __init__(self, onlyfirst=False):
            self.startrev = nullrev
            self.roots = []
            self.onlyfirst = onlyfirst

        def match(self, rev):
            def realparents(rev):
                if self.onlyfirst:
                    return repo.changelog.parentrevs(rev)[0:1]
                else:
                    return filter(lambda x: x != nullrev,
                                  repo.changelog.parentrevs(rev))

            if self.startrev == nullrev:
                self.startrev = rev
                return True

            if rev > self.startrev:
                # forward: all descendants
                if not self.roots:
                    self.roots.append(self.startrev)
                for parent in realparents(rev):
                    if parent in self.roots:
                        self.roots.append(rev)
                        return True
            else:
                # backwards: all parents
                if not self.roots:
                    self.roots.extend(realparents(self.startrev))
                if rev in self.roots:
                    self.roots.remove(rev)
                    self.roots.extend(realparents(rev))
                    return True

            return False

    # it might be worthwhile to do this in the iterator if the rev range
    # is descending and the prune args are all within that range
    for rev in opts.get('prune', ()):
        rev = repo.changelog.rev(repo.lookup(rev))
        ff = followfilter()
        stop = min(revs[0], revs[-1])
        for x in xrange(rev, stop-1, -1):
            if ff.match(x):
                wanted.discard(x)

    def iterate():
        if follow and not m.files():
            ff = followfilter(onlyfirst=opts.get('follow_first'))
            def want(rev):
                return ff.match(rev) and rev in wanted
        else:
            def want(rev):
                return rev in wanted

        for i, window in increasing_windows(0, len(revs)):
            yield 'window', revs[0] < revs[-1], revs[-1]
            nrevs = [rev for rev in revs[i:i+window] if want(rev)]
-            for rev in util.sort(list(nrevs)):
+            for rev in sorted(nrevs):
                fns = fncache.get(rev)
                if not fns:
                    def fns_generator():
                        for f in change(rev)[3]:
                            if m(f):
                                yield f
                    fns = fns_generator()
                yield 'add', rev, fns
            for rev in nrevs:
                yield 'iter', rev, None
    return iterate(), m

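The doubling-window helper used throughout `walkchangerevs` is self-contained and easy to exercise on its own. The sketch below copies its logic verbatim from the diff (names and defaults unchanged) to show the `(start, size)` pairs it yields when walking forward and backward:

```python
def increasing_windows(start, end, windowsize=8, sizelimit=512):
    # Walk from start toward end, doubling the window size each step
    # until it reaches sizelimit (logic copied from the diff above).
    if start < end:
        while start < end:
            yield start, min(windowsize, end - start)
            start += windowsize
            if windowsize < sizelimit:
                windowsize *= 2
    else:
        while start > end:
            yield start, min(windowsize, start - end - 1)
            start -= windowsize
            if windowsize < sizelimit:
                windowsize *= 2

# Forward over revs 0..99: windows grow 8, 16, 32, then are clamped.
print(list(increasing_windows(0, 100)))   # [(0, 8), (8, 16), (24, 32), (56, 44)]
# Backward from rev 10 down to nullrev (-1), as filerevgen uses it.
print(list(increasing_windows(10, -1)))   # [(10, 8), (2, 2)]
```

The doubling keeps early windows cheap while amortizing the cost of walking long histories, which is why the docstring describes the traversal as "windowed".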
1179 def commit(ui, repo, commitfunc, pats, opts):
1179 def commit(ui, repo, commitfunc, pats, opts):
1180 '''commit the specified files or all outstanding changes'''
1180 '''commit the specified files or all outstanding changes'''
1181 date = opts.get('date')
1181 date = opts.get('date')
1182 if date:
1182 if date:
1183 opts['date'] = util.parsedate(date)
1183 opts['date'] = util.parsedate(date)
1184 message = logmessage(opts)
1184 message = logmessage(opts)
1185
1185
1186 # extract addremove carefully -- this function can be called from a command
1186 # extract addremove carefully -- this function can be called from a command
1187 # that doesn't support addremove
1187 # that doesn't support addremove
1188 if opts.get('addremove'):
1188 if opts.get('addremove'):
1189 addremove(repo, pats, opts)
1189 addremove(repo, pats, opts)
1190
1190
1191 m = match(repo, pats, opts)
1191 m = match(repo, pats, opts)
1192 if pats:
1192 if pats:
1193 modified, added, removed = repo.status(match=m)[:3]
1193 modified, added, removed = repo.status(match=m)[:3]
1194 files = util.sort(modified + added + removed)
1194 files = sorted(modified + added + removed)
1195
1195
1196 def is_dir(f):
1196 def is_dir(f):
1197 name = f + '/'
1197 name = f + '/'
1198 i = bisect.bisect(files, name)
1198 i = bisect.bisect(files, name)
1199 return i < len(files) and files[i].startswith(name)
1199 return i < len(files) and files[i].startswith(name)

    for f in m.files():
        if f == '.':
            continue
        if f not in files:
            rf = repo.wjoin(f)
            rel = repo.pathto(f)
            try:
                mode = os.lstat(rf)[stat.ST_MODE]
            except OSError:
                if is_dir(f): # deleted directory?
                    continue
                raise util.Abort(_("file %s not found!") % rel)
            if stat.S_ISDIR(mode):
                if not is_dir(f):
                    raise util.Abort(_("no match under directory %s!")
                                     % rel)
            elif not (stat.S_ISREG(mode) or stat.S_ISLNK(mode)):
                raise util.Abort(_("can't commit %s: "
                                   "unsupported file type!") % rel)
        elif f not in repo.dirstate:
            raise util.Abort(_("file %s not tracked!") % rel)
    m = matchfiles(repo, files)
try:
    return commitfunc(ui, repo, message, m, opts)
except ValueError, inst:
    raise util.Abort(str(inst))
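The changed line is the whole point of this commit: Mercurial's own `util.sort` helper (which, to the best of my understanding, copied its input to a list and sorted it) is replaced by the `sorted()` builtin, available since Python 2.4, which does the same thing: it accepts any iterable and returns a new sorted list without mutating the input. A minimal illustration with made-up file names:

```python
# sorted() takes any iterable and returns a new, sorted list,
# leaving the input sequences untouched.
modified, added, removed = ['b.txt'], ['a.txt'], ['c.txt']
files = sorted(modified + added + removed)
print(files)  # ['a.txt', 'b.txt', 'c.txt']
```

Using the builtin drops a redundant utility function and lets later code (such as the `bisect` lookup in `is_dir`) rely on a standard, well-understood guarantee of sortedness.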
1 NO CONTENT: modified file
NO CONTENT: modified file
The requested commit or file is too big and content was truncated. Show full diff