hgext: mark all first-party extensions as such
Augie Fackler
r16743:38caf405 default
@@ -1,252 +1,254 @@
1 # acl.py - changeset access control for mercurial
1 # acl.py - changeset access control for mercurial
2 #
2 #
3 # Copyright 2006 Vadim Gelfer <vadim.gelfer@gmail.com>
3 # Copyright 2006 Vadim Gelfer <vadim.gelfer@gmail.com>
4 #
4 #
5 # This software may be used and distributed according to the terms of the
5 # This software may be used and distributed according to the terms of the
6 # GNU General Public License version 2 or any later version.
6 # GNU General Public License version 2 or any later version.
7
7
8 '''hooks for controlling repository access
8 '''hooks for controlling repository access
9
9
10 This hook makes it possible to allow or deny write access to given
10 This hook makes it possible to allow or deny write access to given
11 branches and paths of a repository when receiving incoming changesets
11 branches and paths of a repository when receiving incoming changesets
12 via pretxnchangegroup and pretxncommit.
12 via pretxnchangegroup and pretxncommit.
13
13
14 The authorization is matched based on the local user name on the
14 The authorization is matched based on the local user name on the
15 system where the hook runs, and not the committer of the original
15 system where the hook runs, and not the committer of the original
16 changeset (since the latter is merely informative).
16 changeset (since the latter is merely informative).
17
17
18 The acl hook is best used along with a restricted shell like hgsh,
18 The acl hook is best used along with a restricted shell like hgsh,
19 preventing authenticating users from doing anything other than pushing
19 preventing authenticating users from doing anything other than pushing
20 or pulling. The hook is not safe to use if users have interactive
20 or pulling. The hook is not safe to use if users have interactive
21 shell access, as they can then disable the hook. Nor is it safe if
21 shell access, as they can then disable the hook. Nor is it safe if
22 remote users share an account, because then there is no way to
22 remote users share an account, because then there is no way to
23 distinguish them.
23 distinguish them.
24
24
25 The order in which access checks are performed is:
25 The order in which access checks are performed is:
26
26
27 1) Deny list for branches (section ``acl.deny.branches``)
27 1) Deny list for branches (section ``acl.deny.branches``)
28 2) Allow list for branches (section ``acl.allow.branches``)
28 2) Allow list for branches (section ``acl.allow.branches``)
29 3) Deny list for paths (section ``acl.deny``)
29 3) Deny list for paths (section ``acl.deny``)
30 4) Allow list for paths (section ``acl.allow``)
30 4) Allow list for paths (section ``acl.allow``)
31
31
32 The allow and deny sections take key-value pairs.
32 The allow and deny sections take key-value pairs.
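For example, because the deny lists are consulted before the allow lists, a
user who appears in both is still rejected (illustrative branch and user
names)::

  [acl.deny.branches]
  default = contractor-1

  [acl.allow.branches]
  default = contractor-1, employee-1

  # contractor-1 is rejected on the default branch even though it is also
  # listed in the allow section; employee-1 is allowed.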
33
33
34 Branch-based Access Control
34 Branch-based Access Control
35 ...........................
35 ...........................
36
36
37 Use the ``acl.deny.branches`` and ``acl.allow.branches`` sections to
37 Use the ``acl.deny.branches`` and ``acl.allow.branches`` sections to
38 have branch-based access control. Keys in these sections can be
38 have branch-based access control. Keys in these sections can be
39 either:
39 either:
40
40
41 - a branch name, or
41 - a branch name, or
42 - an asterisk, to match any branch;
42 - an asterisk, to match any branch;
43
43
44 The corresponding values can be either:
44 The corresponding values can be either:
45
45
46 - a comma-separated list containing users and groups, or
46 - a comma-separated list containing users and groups, or
47 - an asterisk, to match anyone;
47 - an asterisk, to match anyone;
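Putting the key and value forms together, a minimal branch section could look
like this (illustrative branch, user and group names)::

  [acl.allow.branches]
  # key: a branch name or ``*``; value: users, ``@``-groups, or ``*``
  stable  = maintainer-1, @release-team
  default = *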
48
48
49 Path-based Access Control
49 Path-based Access Control
50 .........................
50 .........................
51
51
52 Use the ``acl.deny`` and ``acl.allow`` sections to have path-based
52 Use the ``acl.deny`` and ``acl.allow`` sections to have path-based
53 access control. Keys in these sections accept a subtree pattern (with
53 access control. Keys in these sections accept a subtree pattern (with
54 a glob syntax by default). The corresponding values follow the same
54 a glob syntax by default). The corresponding values follow the same
55 syntax as the other sections above.
55 syntax as the other sections above.
56
56
57 Groups
57 Groups
58 ......
58 ......
59
59
60 Group names must be prefixed with an ``@`` symbol. Specifying a group
60 Group names must be prefixed with an ``@`` symbol. Specifying a group
61 name has the same effect as specifying all the users in that group.
61 name has the same effect as specifying all the users in that group.
62
62
63 You can define group members in the ``acl.groups`` section.
63 You can define group members in the ``acl.groups`` section.
64 If a group name is not defined there, and Mercurial is running under
64 If a group name is not defined there, and Mercurial is running under
65 a Unix-like system, the list of users will be taken from the OS.
65 a Unix-like system, the list of users will be taken from the OS.
66 Otherwise, an exception will be raised.
66 Otherwise, an exception will be raised.
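For example, a group referenced from the allow and deny sections can be
defined directly in the configuration (illustrative member names)::

  [acl.groups]
  # members of ``@designers``, as used in the acl.allow example below
  designers = jack, jill, ted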
67
67
68 Example Configuration
68 Example Configuration
69 .....................
69 .....................
70
70
71 ::
71 ::
72
72
73 [hooks]
73 [hooks]
74
74
75 # Use this if you want to check access restrictions at commit time
75 # Use this if you want to check access restrictions at commit time
76 pretxncommit.acl = python:hgext.acl.hook
76 pretxncommit.acl = python:hgext.acl.hook
77
77
78 # Use this if you want to check access restrictions for pull, push,
78 # Use this if you want to check access restrictions for pull, push,
79 # bundle and serve.
79 # bundle and serve.
80 pretxnchangegroup.acl = python:hgext.acl.hook
80 pretxnchangegroup.acl = python:hgext.acl.hook
81
81
82 [acl]
82 [acl]
83 # Allow or deny access for incoming changes only if their source is
83 # Allow or deny access for incoming changes only if their source is
84 # listed here, let them pass otherwise. Source is "serve" for all
84 # listed here, let them pass otherwise. Source is "serve" for all
85 # remote access (http or ssh), "push", "pull" or "bundle" when the
85 # remote access (http or ssh), "push", "pull" or "bundle" when the
86 # related commands are run locally.
86 # related commands are run locally.
87 # Default: serve
87 # Default: serve
88 sources = serve
88 sources = serve
89
89
90 [acl.deny.branches]
90 [acl.deny.branches]
91
91
92 # Everyone is denied on the frozen branch:
92 # Everyone is denied on the frozen branch:
93 frozen-branch = *
93 frozen-branch = *
94
94
95 # A bad user is denied on all branches:
95 # A bad user is denied on all branches:
96 * = bad-user
96 * = bad-user
97
97
98 [acl.allow.branches]
98 [acl.allow.branches]
99
99
100 # A few users are allowed on branch-a:
100 # A few users are allowed on branch-a:
101 branch-a = user-1, user-2, user-3
101 branch-a = user-1, user-2, user-3
102
102
103 # Only one user is allowed on branch-b:
103 # Only one user is allowed on branch-b:
104 branch-b = user-1
104 branch-b = user-1
105
105
106 # The super user is allowed on any branch:
106 # The super user is allowed on any branch:
107 * = super-user
107 * = super-user
108
108
109 # Everyone is allowed on branch-for-tests:
109 # Everyone is allowed on branch-for-tests:
110 branch-for-tests = *
110 branch-for-tests = *
111
111
112 [acl.deny]
112 [acl.deny]
113 # This list is checked first. If a match is found, acl.allow is not
113 # This list is checked first. If a match is found, acl.allow is not
114 # checked. All users are granted access if acl.deny is not present.
114 # checked. All users are granted access if acl.deny is not present.
115 # Format for both lists: glob pattern = user, ..., @group, ...
115 # Format for both lists: glob pattern = user, ..., @group, ...
116
116
117 # To match everyone, use an asterisk for the user:
117 # To match everyone, use an asterisk for the user:
118 # my/glob/pattern = *
118 # my/glob/pattern = *
119
119
120 # user6 will not have write access to any file:
120 # user6 will not have write access to any file:
121 ** = user6
121 ** = user6
122
122
123 # Group "hg-denied" will not have write access to any file:
123 # Group "hg-denied" will not have write access to any file:
124 ** = @hg-denied
124 ** = @hg-denied
125
125
126 # Nobody will be able to change "DONT-TOUCH-THIS.txt", despite
126 # Nobody will be able to change "DONT-TOUCH-THIS.txt", despite
127 # everyone being able to change all other files. See below.
127 # everyone being able to change all other files. See below.
128 src/main/resources/DONT-TOUCH-THIS.txt = *
128 src/main/resources/DONT-TOUCH-THIS.txt = *
129
129
130 [acl.allow]
130 [acl.allow]
131 # if acl.allow is not present, all users are allowed by default
131 # if acl.allow is not present, all users are allowed by default
132 # empty acl.allow = no users allowed
132 # empty acl.allow = no users allowed
133
133
134 # User "doc_writer" has write access to any file under the "docs"
134 # User "doc_writer" has write access to any file under the "docs"
135 # folder:
135 # folder:
136 docs/** = doc_writer
136 docs/** = doc_writer
137
137
138 # User "jack" and group "designers" have write access to any file
138 # User "jack" and group "designers" have write access to any file
139 # under the "images" folder:
139 # under the "images" folder:
140 images/** = jack, @designers
140 images/** = jack, @designers
141
141
142 # Everyone (except for "user6" and "@hg-denied" - see acl.deny above)
142 # Everyone (except for "user6" and "@hg-denied" - see acl.deny above)
143 # will have write access to any file under the "resources" folder
143 # will have write access to any file under the "resources" folder
144 # (except for 1 file. See acl.deny):
144 # (except for 1 file. See acl.deny):
145 src/main/resources/** = *
145 src/main/resources/** = *
146
146
147 .hgtags = release_engineer
147 .hgtags = release_engineer
148
148
149 '''
149 '''
150
150
151 from mercurial.i18n import _
151 from mercurial.i18n import _
152 from mercurial import util, match
152 from mercurial import util, match
153 import getpass, urllib
153 import getpass, urllib
154
154
155 testedwith = 'internal'
156
155 def _getusers(ui, group):
157 def _getusers(ui, group):
156
158
157 # First, try to use group definition from section [acl.groups]
159 # First, try to use group definition from section [acl.groups]
158 hgrcusers = ui.configlist('acl.groups', group)
160 hgrcusers = ui.configlist('acl.groups', group)
159 if hgrcusers:
161 if hgrcusers:
160 return hgrcusers
162 return hgrcusers
161
163
162 ui.debug('acl: "%s" not defined in [acl.groups]\n' % group)
164 ui.debug('acl: "%s" not defined in [acl.groups]\n' % group)
163 # If no users found in group definition, get users from OS-level group
165 # If no users found in group definition, get users from OS-level group
164 try:
166 try:
165 return util.groupmembers(group)
167 return util.groupmembers(group)
166 except KeyError:
168 except KeyError:
167 raise util.Abort(_("group '%s' is undefined") % group)
169 raise util.Abort(_("group '%s' is undefined") % group)
168
170
169 def _usermatch(ui, user, usersorgroups):
171 def _usermatch(ui, user, usersorgroups):
170
172
171 if usersorgroups == '*':
173 if usersorgroups == '*':
172 return True
174 return True
173
175
174 for ug in usersorgroups.replace(',', ' ').split():
176 for ug in usersorgroups.replace(',', ' ').split():
175 if user == ug or ug.find('@') == 0 and user in _getusers(ui, ug[1:]):
177 if user == ug or ug.find('@') == 0 and user in _getusers(ui, ug[1:]):
176 return True
178 return True
177
179
178 return False
180 return False
179
181
180 def buildmatch(ui, repo, user, key):
182 def buildmatch(ui, repo, user, key):
181 '''return tuple of (match function, list enabled).'''
183 '''return tuple of (match function, list enabled).'''
182 if not ui.has_section(key):
184 if not ui.has_section(key):
183 ui.debug('acl: %s not enabled\n' % key)
185 ui.debug('acl: %s not enabled\n' % key)
184 return None
186 return None
185
187
186 pats = [pat for pat, users in ui.configitems(key)
188 pats = [pat for pat, users in ui.configitems(key)
187 if _usermatch(ui, user, users)]
189 if _usermatch(ui, user, users)]
188 ui.debug('acl: %s enabled, %d entries for user %s\n' %
190 ui.debug('acl: %s enabled, %d entries for user %s\n' %
189 (key, len(pats), user))
191 (key, len(pats), user))
190
192
191 if not repo:
193 if not repo:
192 if pats:
194 if pats:
193 return lambda b: '*' in pats or b in pats
195 return lambda b: '*' in pats or b in pats
194 return lambda b: False
196 return lambda b: False
195
197
196 if pats:
198 if pats:
197 return match.match(repo.root, '', pats)
199 return match.match(repo.root, '', pats)
198 return match.exact(repo.root, '', [])
200 return match.exact(repo.root, '', [])
199
201
200
202
201 def hook(ui, repo, hooktype, node=None, source=None, **kwargs):
203 def hook(ui, repo, hooktype, node=None, source=None, **kwargs):
202 if hooktype not in ['pretxnchangegroup', 'pretxncommit']:
204 if hooktype not in ['pretxnchangegroup', 'pretxncommit']:
203 raise util.Abort(_('config error - hook type "%s" cannot stop '
205 raise util.Abort(_('config error - hook type "%s" cannot stop '
204 'incoming changesets nor commits') % hooktype)
206 'incoming changesets nor commits') % hooktype)
205 if (hooktype == 'pretxnchangegroup' and
207 if (hooktype == 'pretxnchangegroup' and
206 source not in ui.config('acl', 'sources', 'serve').split()):
208 source not in ui.config('acl', 'sources', 'serve').split()):
207 ui.debug('acl: changes have source "%s" - skipping\n' % source)
209 ui.debug('acl: changes have source "%s" - skipping\n' % source)
208 return
210 return
209
211
210 user = None
212 user = None
211 if source == 'serve' and 'url' in kwargs:
213 if source == 'serve' and 'url' in kwargs:
212 url = kwargs['url'].split(':')
214 url = kwargs['url'].split(':')
213 if url[0] == 'remote' and url[1].startswith('http'):
215 if url[0] == 'remote' and url[1].startswith('http'):
214 user = urllib.unquote(url[3])
216 user = urllib.unquote(url[3])
215
217
216 if user is None:
218 if user is None:
217 user = getpass.getuser()
219 user = getpass.getuser()
218
220
219 ui.debug('acl: checking access for user "%s"\n' % user)
221 ui.debug('acl: checking access for user "%s"\n' % user)
220
222
221 cfg = ui.config('acl', 'config')
223 cfg = ui.config('acl', 'config')
222 if cfg:
224 if cfg:
223 ui.readconfig(cfg, sections = ['acl.groups', 'acl.allow.branches',
225 ui.readconfig(cfg, sections = ['acl.groups', 'acl.allow.branches',
224 'acl.deny.branches', 'acl.allow', 'acl.deny'])
226 'acl.deny.branches', 'acl.allow', 'acl.deny'])
225
227
226 allowbranches = buildmatch(ui, None, user, 'acl.allow.branches')
228 allowbranches = buildmatch(ui, None, user, 'acl.allow.branches')
227 denybranches = buildmatch(ui, None, user, 'acl.deny.branches')
229 denybranches = buildmatch(ui, None, user, 'acl.deny.branches')
228 allow = buildmatch(ui, repo, user, 'acl.allow')
230 allow = buildmatch(ui, repo, user, 'acl.allow')
229 deny = buildmatch(ui, repo, user, 'acl.deny')
231 deny = buildmatch(ui, repo, user, 'acl.deny')
230
232
231 for rev in xrange(repo[node], len(repo)):
233 for rev in xrange(repo[node], len(repo)):
232 ctx = repo[rev]
234 ctx = repo[rev]
233 branch = ctx.branch()
235 branch = ctx.branch()
234 if denybranches and denybranches(branch):
236 if denybranches and denybranches(branch):
235 raise util.Abort(_('acl: user "%s" denied on branch "%s"'
237 raise util.Abort(_('acl: user "%s" denied on branch "%s"'
236 ' (changeset "%s")')
238 ' (changeset "%s")')
237 % (user, branch, ctx))
239 % (user, branch, ctx))
238 if allowbranches and not allowbranches(branch):
240 if allowbranches and not allowbranches(branch):
239 raise util.Abort(_('acl: user "%s" not allowed on branch "%s"'
241 raise util.Abort(_('acl: user "%s" not allowed on branch "%s"'
240 ' (changeset "%s")')
242 ' (changeset "%s")')
241 % (user, branch, ctx))
243 % (user, branch, ctx))
242 ui.debug('acl: branch access granted: "%s" on branch "%s"\n'
244 ui.debug('acl: branch access granted: "%s" on branch "%s"\n'
243 % (ctx, branch))
245 % (ctx, branch))
244
246
245 for f in ctx.files():
247 for f in ctx.files():
246 if deny and deny(f):
248 if deny and deny(f):
247 raise util.Abort(_('acl: user "%s" denied on "%s"'
249 raise util.Abort(_('acl: user "%s" denied on "%s"'
248 ' (changeset "%s")') % (user, f, ctx))
250 ' (changeset "%s")') % (user, f, ctx))
249 if allow and not allow(f):
251 if allow and not allow(f):
250 raise util.Abort(_('acl: user "%s" not allowed on "%s"'
252 raise util.Abort(_('acl: user "%s" not allowed on "%s"'
251 ' (changeset "%s")') % (user, f, ctx))
253 ' (changeset "%s")') % (user, f, ctx))
252 ui.debug('acl: path access granted: "%s"\n' % ctx)
254 ui.debug('acl: path access granted: "%s"\n' % ctx)
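In outline, the hook above applies the four access lists in the documented
order to every incoming changeset. A simplified, self-contained sketch of that
control flow (not the extension's actual code; the real hook builds each
predicate with ``buildmatch`` and aborts with ``util.Abort``)::

  def checkaccess(user, branch, files,
                  denybranches, allowbranches, deny, allow):
      # each predicate is None when its section is absent from the config
      if denybranches and denybranches(branch):
          raise Exception('user %s denied on branch %s' % (user, branch))
      if allowbranches and not allowbranches(branch):
          raise Exception('user %s not allowed on branch %s' % (user, branch))
      for f in files:
          if deny and deny(f):
              raise Exception('user %s denied on %s' % (user, f))
          if allow and not allow(f):
              raise Exception('user %s not allowed on %s' % (user, f))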
@@ -1,913 +1,914 @@
1 # bugzilla.py - bugzilla integration for mercurial
1 # bugzilla.py - bugzilla integration for mercurial
2 #
2 #
3 # Copyright 2006 Vadim Gelfer <vadim.gelfer@gmail.com>
3 # Copyright 2006 Vadim Gelfer <vadim.gelfer@gmail.com>
4 # Copyright 2011-2 Jim Hague <jim.hague@acm.org>
4 # Copyright 2011-2 Jim Hague <jim.hague@acm.org>
5 #
5 #
6 # This software may be used and distributed according to the terms of the
6 # This software may be used and distributed according to the terms of the
7 # GNU General Public License version 2 or any later version.
7 # GNU General Public License version 2 or any later version.
8
8
9 '''hooks for integrating with the Bugzilla bug tracker
9 '''hooks for integrating with the Bugzilla bug tracker
10
10
11 This hook extension adds comments on bugs in Bugzilla when changesets
11 This hook extension adds comments on bugs in Bugzilla when changesets
12 that refer to bugs by Bugzilla ID are seen. The comment is formatted using
12 that refer to bugs by Bugzilla ID are seen. The comment is formatted using
13 the Mercurial template mechanism.
13 the Mercurial template mechanism.
14
14
15 The bug references can optionally include an update for Bugzilla of the
15 The bug references can optionally include an update for Bugzilla of the
16 hours spent working on the bug. Bugs can also be marked fixed.
16 hours spent working on the bug. Bugs can also be marked fixed.
17
17
18 Three basic modes of access to Bugzilla are provided:
18 Three basic modes of access to Bugzilla are provided:
19
19
20 1. Access via the Bugzilla XMLRPC interface. Requires Bugzilla 3.4 or later.
20 1. Access via the Bugzilla XMLRPC interface. Requires Bugzilla 3.4 or later.
21
21
22 2. Check data via the Bugzilla XMLRPC interface and submit bug change
22 2. Check data via the Bugzilla XMLRPC interface and submit bug change
23 via email to Bugzilla email interface. Requires Bugzilla 3.4 or later.
23 via email to Bugzilla email interface. Requires Bugzilla 3.4 or later.
24
24
25 3. Writing directly to the Bugzilla database. Only Bugzilla installations
25 3. Writing directly to the Bugzilla database. Only Bugzilla installations
26 using MySQL are supported. Requires Python MySQLdb.
26 using MySQL are supported. Requires Python MySQLdb.
27
27
28 Writing directly to the database is susceptible to schema changes, and
28 Writing directly to the database is susceptible to schema changes, and
29 relies on a Bugzilla contrib script to send out bug change
29 relies on a Bugzilla contrib script to send out bug change
30 notification emails. This script runs as the user running Mercurial,
30 notification emails. This script runs as the user running Mercurial,
31 must be run on the host with the Bugzilla install, and requires
31 must be run on the host with the Bugzilla install, and requires
32 permission to read Bugzilla configuration details and the necessary
32 permission to read Bugzilla configuration details and the necessary
33 MySQL user and password to have full access rights to the Bugzilla
33 MySQL user and password to have full access rights to the Bugzilla
34 database. For these reasons this access mode is now considered
34 database. For these reasons this access mode is now considered
35 deprecated, and will not be updated for new Bugzilla versions going
35 deprecated, and will not be updated for new Bugzilla versions going
36 forward. Only adding comments is supported in this access mode.
36 forward. Only adding comments is supported in this access mode.
37
37
38 Access via XMLRPC needs a Bugzilla username and password to be specified
38 Access via XMLRPC needs a Bugzilla username and password to be specified
39 in the configuration. Comments are added under that username. Since the
39 in the configuration. Comments are added under that username. Since the
40 configuration must be readable by all Mercurial users, it is recommended
40 configuration must be readable by all Mercurial users, it is recommended
41 that the rights of that user are restricted in Bugzilla to the minimum
41 that the rights of that user are restricted in Bugzilla to the minimum
42 necessary to add comments. Marking bugs fixed requires Bugzilla 4.0 and later.
42 necessary to add comments. Marking bugs fixed requires Bugzilla 4.0 and later.
43
43
44 Access via XMLRPC/email uses XMLRPC to query Bugzilla, but sends
44 Access via XMLRPC/email uses XMLRPC to query Bugzilla, but sends
45 email to the Bugzilla email interface to submit comments to bugs.
45 email to the Bugzilla email interface to submit comments to bugs.
46 The From: address in the email is set to the email address of the Mercurial
46 The From: address in the email is set to the email address of the Mercurial
47 user, so the comment appears to come from the Mercurial user. In the event
47 user, so the comment appears to come from the Mercurial user. In the event
48 that the Mercurial user email is not recognised by Bugzilla as a Bugzilla
48 that the Mercurial user email is not recognised by Bugzilla as a Bugzilla
49 user, the email associated with the Bugzilla username used to log into
49 user, the email associated with the Bugzilla username used to log into
50 Bugzilla is used instead as the source of the comment. Marking bugs fixed
50 Bugzilla is used instead as the source of the comment. Marking bugs fixed
51 works on all supported Bugzilla versions.
51 works on all supported Bugzilla versions.
52
52
53 Configuration items common to all access modes:
53 Configuration items common to all access modes:
54
54
55 bugzilla.version
55 bugzilla.version
56 The access type to use. Values recognised are:
56 The access type to use. Values recognised are:
57
57
58 :``xmlrpc``: Bugzilla XMLRPC interface.
58 :``xmlrpc``: Bugzilla XMLRPC interface.
59 :``xmlrpc+email``: Bugzilla XMLRPC and email interfaces.
59 :``xmlrpc+email``: Bugzilla XMLRPC and email interfaces.
60 :``3.0``: MySQL access, Bugzilla 3.0 and later.
60 :``3.0``: MySQL access, Bugzilla 3.0 and later.
61 :``2.18``: MySQL access, Bugzilla 2.18 and up to but not
61 :``2.18``: MySQL access, Bugzilla 2.18 and up to but not
62 including 3.0.
62 including 3.0.
63 :``2.16``: MySQL access, Bugzilla 2.16 and up to but not
63 :``2.16``: MySQL access, Bugzilla 2.16 and up to but not
64 including 2.18.
64 including 2.18.
65
65
66 bugzilla.regexp
66 bugzilla.regexp
67 Regular expression to match bug IDs for update in changeset commit message.
67 Regular expression to match bug IDs for update in changeset commit message.
68 It must contain one "()" named group ``<ids>`` containing the bug
68 It must contain one "()" named group ``<ids>`` containing the bug
69 IDs separated by non-digit characters. It may also contain
69 IDs separated by non-digit characters. It may also contain
70 a named group ``<hours>`` with a floating-point number giving the
70 a named group ``<hours>`` with a floating-point number giving the
71 hours worked on the bug. If no named groups are present, the first
71 hours worked on the bug. If no named groups are present, the first
72 "()" group is assumed to contain the bug IDs, and work time is not
72 "()" group is assumed to contain the bug IDs, and work time is not
73 updated. The default expression matches ``Bug 1234``, ``Bug no. 1234``,
73 updated. The default expression matches ``Bug 1234``, ``Bug no. 1234``,
74 ``Bug number 1234``, ``Bugs 1234,5678``, ``Bug 1234 and 5678`` and
74 ``Bug number 1234``, ``Bugs 1234,5678``, ``Bug 1234 and 5678`` and
75 variations thereof, followed by an hours number prefixed by ``h`` or
75 variations thereof, followed by an hours number prefixed by ``h`` or
76 ``hours``, e.g. ``hours 1.5``. Matching is case insensitive.
76 ``hours``, e.g. ``hours 1.5``. Matching is case insensitive.
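As a concrete illustration of the named groups, here is a hypothetical
simplified pattern in the same spirit as the default (it is not the
extension's built-in expression), exercised with Python's ``re`` module::

  import re

  pat = re.compile(r'bugs?\s+(?P<ids>(?:\d+[\s,and]*)+)'
                   r'(?:h(?:ours)?\s*(?P<hours>\d*\.?\d+))?',
                   re.IGNORECASE)

  m = pat.search('Frobnicate the baz widget (bug 1234 and 5678, hours 1.5)')
  m.group('ids')     # '1234 and 5678, '  (IDs separated by non-digit text)
  m.group('hours')   # '1.5'

A custom expression of this shape would be installed via the
``bugzilla.regexp`` setting described above.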
77
77
78 bugzilla.fixregexp
78 bugzilla.fixregexp
79 Regular expression to match bug IDs for marking fixed in changeset
79 Regular expression to match bug IDs for marking fixed in changeset
80 commit message. This must contain a "()" named group ``<ids>`` containing
80 commit message. This must contain a "()" named group ``<ids>`` containing
81 the bug IDs separated by non-digit characters. It may also contain
81 the bug IDs separated by non-digit characters. It may also contain
82 a named group ``<hours>`` with a floating-point number giving the
82 a named group ``<hours>`` with a floating-point number giving the
83 hours worked on the bug. If no named groups are present, the first
83 hours worked on the bug. If no named groups are present, the first
84 "()" group is assumed to contain the bug IDs, and work time is not
84 "()" group is assumed to contain the bug IDs, and work time is not
85 updated. The default expression matches ``Fixes 1234``, ``Fixes bug 1234``,
85 updated. The default expression matches ``Fixes 1234``, ``Fixes bug 1234``,
86 ``Fixes bugs 1234,5678``, ``Fixes 1234 and 5678`` and
86 ``Fixes bugs 1234,5678``, ``Fixes 1234 and 5678`` and
87 variations thereof, followed by an hours number prefixed by ``h`` or
87 variations thereof, followed by an hours number prefixed by ``h`` or
88 ``hours``, e.g. ``hours 1.5``. Matching is case insensitive.
88 ``hours``, e.g. ``hours 1.5``. Matching is case insensitive.
89
89
90 bugzilla.fixstatus
90 bugzilla.fixstatus
91 The status to set a bug to when marking fixed. Default ``RESOLVED``.
91 The status to set a bug to when marking fixed. Default ``RESOLVED``.
92
92
93 bugzilla.fixresolution
93 bugzilla.fixresolution
94 The resolution to set a bug to when marking fixed. Default ``FIXED``.
94 The resolution to set a bug to when marking fixed. Default ``FIXED``.
95
95
96 bugzilla.style
96 bugzilla.style
97 The style file to use when formatting comments.
97 The style file to use when formatting comments.
98
98
99 bugzilla.template
99 bugzilla.template
100 Template to use when formatting comments. Overrides style if
100 Template to use when formatting comments. Overrides style if
101 specified. In addition to the usual Mercurial keywords, the
101 specified. In addition to the usual Mercurial keywords, the
102 extension specifies:
102 extension specifies:
103
103
104 :``{bug}``: The Bugzilla bug ID.
104 :``{bug}``: The Bugzilla bug ID.
105 :``{root}``: The full pathname of the Mercurial repository.
105 :``{root}``: The full pathname of the Mercurial repository.
106 :``{webroot}``: Stripped pathname of the Mercurial repository.
106 :``{webroot}``: Stripped pathname of the Mercurial repository.
107 :``{hgweb}``: Base URL for browsing Mercurial repositories.
107 :``{hgweb}``: Base URL for browsing Mercurial repositories.
108
108
109 Default ``changeset {node|short} in repo {root} refers to bug
109 Default ``changeset {node|short} in repo {root} refers to bug
110 {bug}.\\ndetails:\\n\\t{desc|tabindent}``
110 {bug}.\\ndetails:\\n\\t{desc|tabindent}``
111
111
112 bugzilla.strip
112 bugzilla.strip
113 The number of path separator characters to strip from the front of
113 The number of path separator characters to strip from the front of
114 the Mercurial repository path (``{root}`` in templates) to produce
114 the Mercurial repository path (``{root}`` in templates) to produce
115 ``{webroot}``. For example, a repository with ``{root}``
115 ``{webroot}``. For example, a repository with ``{root}``
116 ``/var/local/my-project`` with a strip of 2 gives a value for
116 ``/var/local/my-project`` with a strip of 2 gives a value for
117 ``{webroot}`` of ``my-project``. Default 0.
117 ``{webroot}`` of ``my-project``. Default 0.
118
118
119 web.baseurl
119 web.baseurl
120 Base URL for browsing Mercurial repositories. Referenced from
120 Base URL for browsing Mercurial repositories. Referenced from
121 templates as ``{hgweb}``.
121 templates as ``{hgweb}``.
122
122
123 Configuration items common to XMLRPC+email and MySQL access modes:
123 Configuration items common to XMLRPC+email and MySQL access modes:
124
124
125 bugzilla.usermap
125 bugzilla.usermap
126 Path of file containing Mercurial committer email to Bugzilla user email
126 Path of file containing Mercurial committer email to Bugzilla user email
127 mappings. If specified, the file should contain one mapping per
127 mappings. If specified, the file should contain one mapping per
128 line::
128 line::
129
129
130 committer = Bugzilla user
130 committer = Bugzilla user
131
131
132 See also the ``[usermap]`` section.
132 See also the ``[usermap]`` section.
133
133
134 The ``[usermap]`` section is used to specify mappings of Mercurial
134 The ``[usermap]`` section is used to specify mappings of Mercurial
135 committer email to Bugzilla user email. See also ``bugzilla.usermap``.
135 committer email to Bugzilla user email. See also ``bugzilla.usermap``.
136 Contains entries of the form ``committer = Bugzilla user``.
136 Contains entries of the form ``committer = Bugzilla user``.
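For instance, an external map file referenced from the main configuration
might look like this (hypothetical path and addresses; because the hook loads
the file with ``readconfig(..., sections=['usermap'])``, the entries live in a
``[usermap]`` section)::

  [bugzilla]
  usermap = /etc/mercurial/bugzilla-usermap.conf

  # contents of /etc/mercurial/bugzilla-usermap.conf
  [usermap]
  dev@example.com = dev.name@bugzilla.example.com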
137
137
138 XMLRPC access mode configuration:
138 XMLRPC access mode configuration:
139
139
140 bugzilla.bzurl
140 bugzilla.bzurl
141 The base URL for the Bugzilla installation.
141 The base URL for the Bugzilla installation.
142 Default ``http://localhost/bugzilla``.
142 Default ``http://localhost/bugzilla``.
143
143
144 bugzilla.user
144 bugzilla.user
145 The username to use to log into Bugzilla via XMLRPC. Default
145 The username to use to log into Bugzilla via XMLRPC. Default
146 ``bugs``.
146 ``bugs``.
147
147
148 bugzilla.password
148 bugzilla.password
149 The password for Bugzilla login.
149 The password for Bugzilla login.
150
150
151 XMLRPC+email access mode uses the XMLRPC access mode configuration items,
151 XMLRPC+email access mode uses the XMLRPC access mode configuration items,
152 and also:
152 and also:
153
153
154 bugzilla.bzemail
154 bugzilla.bzemail
155 The Bugzilla email address.
155 The Bugzilla email address.
156
156
157 In addition, the Mercurial email settings must be configured. See the
157 In addition, the Mercurial email settings must be configured. See the
158 documentation in hgrc(5), sections ``[email]`` and ``[smtp]``.
158 documentation in hgrc(5), sections ``[email]`` and ``[smtp]``.
159
159
160 MySQL access mode configuration:
160 MySQL access mode configuration:
161
161
162 bugzilla.host
162 bugzilla.host
163 Hostname of the MySQL server holding the Bugzilla database.
163 Hostname of the MySQL server holding the Bugzilla database.
164 Default ``localhost``.
164 Default ``localhost``.
165
165
166 bugzilla.db
166 bugzilla.db
167 Name of the Bugzilla database in MySQL. Default ``bugs``.
167 Name of the Bugzilla database in MySQL. Default ``bugs``.
168
168
169 bugzilla.user
169 bugzilla.user
170 Username to use to access MySQL server. Default ``bugs``.
170 Username to use to access MySQL server. Default ``bugs``.
171
171
172 bugzilla.password
172 bugzilla.password
173 Password to use to access MySQL server.
173 Password to use to access MySQL server.
174
174
175 bugzilla.timeout
175 bugzilla.timeout
176 Database connection timeout (seconds). Default 5.
176 Database connection timeout (seconds). Default 5.
177
177
178 bugzilla.bzuser
178 bugzilla.bzuser
179 Fallback Bugzilla user name to record comments with, if changeset
179 Fallback Bugzilla user name to record comments with, if changeset
180 committer cannot be found as a Bugzilla user.
180 committer cannot be found as a Bugzilla user.
181
181
182 bugzilla.bzdir
182 bugzilla.bzdir
183 Bugzilla install directory. Used by default notify. Default
183 Bugzilla install directory. Used by default notify. Default
184 ``/var/www/html/bugzilla``.
184 ``/var/www/html/bugzilla``.
185
185
186 bugzilla.notify
186 bugzilla.notify
187 The command to run to get Bugzilla to send bug change notification
187 The command to run to get Bugzilla to send bug change notification
188 emails. Substitutes from a map with 3 keys, ``bzdir``, ``id`` (bug
188 emails. Substitutes from a map with 3 keys, ``bzdir``, ``id`` (bug
189 id) and ``user`` (committer bugzilla email). Default depends on
189 id) and ``user`` (committer bugzilla email). Default depends on
190 version; from 2.18 it is "cd %(bzdir)s && perl -T
190 version; from 2.18 it is "cd %(bzdir)s && perl -T
191 contrib/sendbugmail.pl %(id)s %(user)s".
191 contrib/sendbugmail.pl %(id)s %(user)s".
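An explicit setting equivalent to that 2.18+ default, shown only to illustrate
the three substitution keys (adjust the command for your own installation)::

  [bugzilla]
  notify = cd %(bzdir)s && perl -T contrib/sendbugmail.pl %(id)s %(user)s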
192
192
193 Activating the extension::
193 Activating the extension::
194
194
195 [extensions]
195 [extensions]
196 bugzilla =
196 bugzilla =
197
197
198 [hooks]
198 [hooks]
199 # run bugzilla hook on every change pulled or pushed in here
199 # run bugzilla hook on every change pulled or pushed in here
200 incoming.bugzilla = python:hgext.bugzilla.hook
200 incoming.bugzilla = python:hgext.bugzilla.hook
201
201
202 Example configurations:
202 Example configurations:
203
203
204 XMLRPC example configuration. This uses the Bugzilla at
204 XMLRPC example configuration. This uses the Bugzilla at
205 ``http://my-project.org/bugzilla``, logging in as user
205 ``http://my-project.org/bugzilla``, logging in as user
206 ``bugmail@my-project.org`` with password ``plugh``. It is used with a
206 ``bugmail@my-project.org`` with password ``plugh``. It is used with a
207 collection of Mercurial repositories in ``/var/local/hg/repos/``,
207 collection of Mercurial repositories in ``/var/local/hg/repos/``,
208 with a web interface at ``http://my-project.org/hg``. ::
208 with a web interface at ``http://my-project.org/hg``. ::
209
209
210 [bugzilla]
210 [bugzilla]
211 bzurl=http://my-project.org/bugzilla
211 bzurl=http://my-project.org/bugzilla
212 user=bugmail@my-project.org
212 user=bugmail@my-project.org
213 password=plugh
213 password=plugh
214 version=xmlrpc
214 version=xmlrpc
215 template=Changeset {node|short} in {root|basename}.
215 template=Changeset {node|short} in {root|basename}.
216 {hgweb}/{webroot}/rev/{node|short}\\n
216 {hgweb}/{webroot}/rev/{node|short}\\n
217 {desc}\\n
217 {desc}\\n
218 strip=5
218 strip=5
219
219
220 [web]
220 [web]
221 baseurl=http://my-project.org/hg
221 baseurl=http://my-project.org/hg
222
222
223 XMLRPC+email example configuration. This uses the Bugzilla at
223 XMLRPC+email example configuration. This uses the Bugzilla at
224 ``http://my-project.org/bugzilla``, logging in as user
224 ``http://my-project.org/bugzilla``, logging in as user
225 ``bugmail@my-project.org`` with password ``plugh``. It is used with a
225 ``bugmail@my-project.org`` with password ``plugh``. It is used with a
226 collection of Mercurial repositories in ``/var/local/hg/repos/``,
226 collection of Mercurial repositories in ``/var/local/hg/repos/``,
227 with a web interface at ``http://my-project.org/hg``. Bug comments
227 with a web interface at ``http://my-project.org/hg``. Bug comments
228 are sent to the Bugzilla email address
228 are sent to the Bugzilla email address
229 ``bugzilla@my-project.org``. ::
229 ``bugzilla@my-project.org``. ::
230
230
231 [bugzilla]
231 [bugzilla]
232 bzurl=http://my-project.org/bugzilla
232 bzurl=http://my-project.org/bugzilla
233 user=bugmail@my-project.org
233 user=bugmail@my-project.org
234 password=plugh
234 password=plugh
235 version=xmlrpc
235 version=xmlrpc
236 bzemail=bugzilla@my-project.org
236 bzemail=bugzilla@my-project.org
237 template=Changeset {node|short} in {root|basename}.
237 template=Changeset {node|short} in {root|basename}.
238 {hgweb}/{webroot}/rev/{node|short}\\n
238 {hgweb}/{webroot}/rev/{node|short}\\n
239 {desc}\\n
239 {desc}\\n
240 strip=5
240 strip=5
241
241
242 [web]
242 [web]
243 baseurl=http://my-project.org/hg
243 baseurl=http://my-project.org/hg
244
244
245 [usermap]
245 [usermap]
246 user@emaildomain.com=user.name@bugzilladomain.com
246 user@emaildomain.com=user.name@bugzilladomain.com
247
247
248 MySQL example configuration. This has a local Bugzilla 3.2 installation
248 MySQL example configuration. This has a local Bugzilla 3.2 installation
249 in ``/opt/bugzilla-3.2``. The MySQL database is on ``localhost``,
249 in ``/opt/bugzilla-3.2``. The MySQL database is on ``localhost``,
250 the Bugzilla database name is ``bugs`` and MySQL is
250 the Bugzilla database name is ``bugs`` and MySQL is
251 accessed with MySQL username ``bugs`` password ``XYZZY``. It is used
251 accessed with MySQL username ``bugs`` password ``XYZZY``. It is used
252 with a collection of Mercurial repositories in ``/var/local/hg/repos/``,
252 with a collection of Mercurial repositories in ``/var/local/hg/repos/``,
253 with a web interface at ``http://my-project.org/hg``. ::
253 with a web interface at ``http://my-project.org/hg``. ::
254
254
255 [bugzilla]
255 [bugzilla]
256 host=localhost
256 host=localhost
257 password=XYZZY
257 password=XYZZY
258 version=3.0
258 version=3.0
259 bzuser=unknown@domain.com
259 bzuser=unknown@domain.com
260 bzdir=/opt/bugzilla-3.2
260 bzdir=/opt/bugzilla-3.2
261 template=Changeset {node|short} in {root|basename}.
261 template=Changeset {node|short} in {root|basename}.
262 {hgweb}/{webroot}/rev/{node|short}\\n
262 {hgweb}/{webroot}/rev/{node|short}\\n
263 {desc}\\n
263 {desc}\\n
264 strip=5
264 strip=5
265
265
266 [web]
266 [web]
267 baseurl=http://my-project.org/hg
267 baseurl=http://my-project.org/hg
268
268
269 [usermap]
269 [usermap]
270 user@emaildomain.com=user.name@bugzilladomain.com
270 user@emaildomain.com=user.name@bugzilladomain.com
271
271
272 All the above add a comment to the Bugzilla bug record of the form::
272 All the above add a comment to the Bugzilla bug record of the form::
273
273
274 Changeset 3b16791d6642 in repository-name.
274 Changeset 3b16791d6642 in repository-name.
275 http://my-project.org/hg/repository-name/rev/3b16791d6642
275 http://my-project.org/hg/repository-name/rev/3b16791d6642
276
276
277 Changeset commit comment. Bug 1234.
277 Changeset commit comment. Bug 1234.
278 '''
278 '''
279
279
280 from mercurial.i18n import _
280 from mercurial.i18n import _
281 from mercurial.node import short
281 from mercurial.node import short
282 from mercurial import cmdutil, mail, templater, util
282 from mercurial import cmdutil, mail, templater, util
283 import re, time, urlparse, xmlrpclib
283 import re, time, urlparse, xmlrpclib
284
284
285 testedwith = 'internal'
286
285 class bzaccess(object):
287 class bzaccess(object):
286 '''Base class for access to Bugzilla.'''
288 '''Base class for access to Bugzilla.'''
287
289
288 def __init__(self, ui):
290 def __init__(self, ui):
289 self.ui = ui
291 self.ui = ui
290 usermap = self.ui.config('bugzilla', 'usermap')
292 usermap = self.ui.config('bugzilla', 'usermap')
291 if usermap:
293 if usermap:
292 self.ui.readconfig(usermap, sections=['usermap'])
294 self.ui.readconfig(usermap, sections=['usermap'])
293
295
294 def map_committer(self, user):
296 def map_committer(self, user):
295 '''map name of committer to Bugzilla user name.'''
297 '''map name of committer to Bugzilla user name.'''
296 for committer, bzuser in self.ui.configitems('usermap'):
298 for committer, bzuser in self.ui.configitems('usermap'):
297 if committer.lower() == user.lower():
299 if committer.lower() == user.lower():
298 return bzuser
300 return bzuser
299 return user
301 return user
300
302
301 # Methods to be implemented by access classes.
303 # Methods to be implemented by access classes.
302 #
304 #
303 # 'bugs' is a dict keyed on bug id, where values are a dict holding
305 # 'bugs' is a dict keyed on bug id, where values are a dict holding
304 # updates to bug state. Recognised dict keys are:
306 # updates to bug state. Recognised dict keys are:
305 #
307 #
306 # 'hours': Value, float containing work hours to be updated.
308 # 'hours': Value, float containing work hours to be updated.
307 # 'fix': If key present, bug is to be marked fixed. Value ignored.
309 # 'fix': If key present, bug is to be marked fixed. Value ignored.
308
310
309 def filter_real_bug_ids(self, bugs):
311 def filter_real_bug_ids(self, bugs):
310 '''remove bug IDs that do not exist in Bugzilla from bugs.'''
312 '''remove bug IDs that do not exist in Bugzilla from bugs.'''
311 pass
313 pass
312
314
313 def filter_cset_known_bug_ids(self, node, bugs):
315 def filter_cset_known_bug_ids(self, node, bugs):
314 '''remove bug IDs where node occurs in comment text from bugs.'''
316 '''remove bug IDs where node occurs in comment text from bugs.'''
315 pass
317 pass
316
318
317 def updatebug(self, bugid, newstate, text, committer):
319 def updatebug(self, bugid, newstate, text, committer):
318 '''update the specified bug. Add comment text and set new states.
320 '''update the specified bug. Add comment text and set new states.
319
321
320 If possible add the comment as being from the committer of
322 If possible add the comment as being from the committer of
321 the changeset. Otherwise use the default Bugzilla user.
323 the changeset. Otherwise use the default Bugzilla user.
322 '''
324 '''
323 pass
325 pass
324
326
325 def notify(self, bugs, committer):
327 def notify(self, bugs, committer):
326 '''Force sending of Bugzilla notification emails.
328 '''Force sending of Bugzilla notification emails.
327
329
328 Only required if the access method does not trigger notification
330 Only required if the access method does not trigger notification
329 emails automatically.
331 emails automatically.
330 '''
332 '''
331 pass
333 pass
332
334
333 # Bugzilla via direct access to MySQL database.
335 # Bugzilla via direct access to MySQL database.
334 class bzmysql(bzaccess):
336 class bzmysql(bzaccess):
335 '''Support for direct MySQL access to Bugzilla.
337 '''Support for direct MySQL access to Bugzilla.
336
338
337 The earliest Bugzilla version this is tested with is version 2.16.
339 The earliest Bugzilla version this is tested with is version 2.16.
338
340
339 If your Bugzilla is version 3.4 or above, you are strongly
341 If your Bugzilla is version 3.4 or above, you are strongly
340 recommended to use the XMLRPC access method instead.
342 recommended to use the XMLRPC access method instead.
341 '''
343 '''
342
344
343 @staticmethod
345 @staticmethod
344 def sql_buglist(ids):
346 def sql_buglist(ids):
345 '''return SQL-friendly list of bug ids'''
347 '''return SQL-friendly list of bug ids'''
346 return '(' + ','.join(map(str, ids)) + ')'
348 return '(' + ','.join(map(str, ids)) + ')'
347
349
348 _MySQLdb = None
350 _MySQLdb = None
349
351
350 def __init__(self, ui):
352 def __init__(self, ui):
351 try:
353 try:
352 import MySQLdb as mysql
354 import MySQLdb as mysql
353 bzmysql._MySQLdb = mysql
355 bzmysql._MySQLdb = mysql
354 except ImportError, err:
356 except ImportError, err:
355 raise util.Abort(_('python mysql support not available: %s') % err)
357 raise util.Abort(_('python mysql support not available: %s') % err)
356
358
357 bzaccess.__init__(self, ui)
359 bzaccess.__init__(self, ui)
358
360
359 host = self.ui.config('bugzilla', 'host', 'localhost')
361 host = self.ui.config('bugzilla', 'host', 'localhost')
360 user = self.ui.config('bugzilla', 'user', 'bugs')
362 user = self.ui.config('bugzilla', 'user', 'bugs')
361 passwd = self.ui.config('bugzilla', 'password')
363 passwd = self.ui.config('bugzilla', 'password')
362 db = self.ui.config('bugzilla', 'db', 'bugs')
364 db = self.ui.config('bugzilla', 'db', 'bugs')
363 timeout = int(self.ui.config('bugzilla', 'timeout', 5))
365 timeout = int(self.ui.config('bugzilla', 'timeout', 5))
364 self.ui.note(_('connecting to %s:%s as %s, password %s\n') %
366 self.ui.note(_('connecting to %s:%s as %s, password %s\n') %
365 (host, db, user, '*' * len(passwd)))
367 (host, db, user, '*' * len(passwd)))
366 self.conn = bzmysql._MySQLdb.connect(host=host,
368 self.conn = bzmysql._MySQLdb.connect(host=host,
367 user=user, passwd=passwd,
369 user=user, passwd=passwd,
368 db=db,
370 db=db,
369 connect_timeout=timeout)
371 connect_timeout=timeout)
370 self.cursor = self.conn.cursor()
372 self.cursor = self.conn.cursor()
371 self.longdesc_id = self.get_longdesc_id()
373 self.longdesc_id = self.get_longdesc_id()
372 self.user_ids = {}
374 self.user_ids = {}
373 self.default_notify = "cd %(bzdir)s && ./processmail %(id)s %(user)s"
375 self.default_notify = "cd %(bzdir)s && ./processmail %(id)s %(user)s"
374
376
375 def run(self, *args, **kwargs):
377 def run(self, *args, **kwargs):
376 '''run a query.'''
378 '''run a query.'''
377 self.ui.note(_('query: %s %s\n') % (args, kwargs))
379 self.ui.note(_('query: %s %s\n') % (args, kwargs))
378 try:
380 try:
379 self.cursor.execute(*args, **kwargs)
381 self.cursor.execute(*args, **kwargs)
380 except bzmysql._MySQLdb.MySQLError:
382 except bzmysql._MySQLdb.MySQLError:
381 self.ui.note(_('failed query: %s %s\n') % (args, kwargs))
383 self.ui.note(_('failed query: %s %s\n') % (args, kwargs))
382 raise
384 raise
383
385
384 def get_longdesc_id(self):
386 def get_longdesc_id(self):
385 '''get identity of longdesc field'''
387 '''get identity of longdesc field'''
386 self.run('select fieldid from fielddefs where name = "longdesc"')
388 self.run('select fieldid from fielddefs where name = "longdesc"')
387 ids = self.cursor.fetchall()
389 ids = self.cursor.fetchall()
388 if len(ids) != 1:
390 if len(ids) != 1:
389 raise util.Abort(_('unknown database schema'))
391 raise util.Abort(_('unknown database schema'))
390 return ids[0][0]
392 return ids[0][0]
391
393
392 def filter_real_bug_ids(self, bugs):
394 def filter_real_bug_ids(self, bugs):
393 '''filter not-existing bugs from set.'''
395 '''filter not-existing bugs from set.'''
394 self.run('select bug_id from bugs where bug_id in %s' %
396 self.run('select bug_id from bugs where bug_id in %s' %
395 bzmysql.sql_buglist(bugs.keys()))
397 bzmysql.sql_buglist(bugs.keys()))
396 existing = [id for (id,) in self.cursor.fetchall()]
398 existing = [id for (id,) in self.cursor.fetchall()]
397 for id in bugs.keys():
399 for id in bugs.keys():
398 if id not in existing:
400 if id not in existing:
399 self.ui.status(_('bug %d does not exist\n') % id)
401 self.ui.status(_('bug %d does not exist\n') % id)
400 del bugs[id]
402 del bugs[id]
401
403
402 def filter_cset_known_bug_ids(self, node, bugs):
404 def filter_cset_known_bug_ids(self, node, bugs):
403 '''filter bug ids that already refer to this changeset from set.'''
405 '''filter bug ids that already refer to this changeset from set.'''
404 self.run('''select bug_id from longdescs where
406 self.run('''select bug_id from longdescs where
405 bug_id in %s and thetext like "%%%s%%"''' %
407 bug_id in %s and thetext like "%%%s%%"''' %
406 (bzmysql.sql_buglist(bugs.keys()), short(node)))
408 (bzmysql.sql_buglist(bugs.keys()), short(node)))
407 for (id,) in self.cursor.fetchall():
409 for (id,) in self.cursor.fetchall():
408 self.ui.status(_('bug %d already knows about changeset %s\n') %
410 self.ui.status(_('bug %d already knows about changeset %s\n') %
409 (id, short(node)))
411 (id, short(node)))
410 del bugs[id]
412 del bugs[id]
411
413
412 def notify(self, bugs, committer):
414 def notify(self, bugs, committer):
413 '''tell bugzilla to send mail.'''
415 '''tell bugzilla to send mail.'''
414 self.ui.status(_('telling bugzilla to send mail:\n'))
416 self.ui.status(_('telling bugzilla to send mail:\n'))
415 (user, userid) = self.get_bugzilla_user(committer)
417 (user, userid) = self.get_bugzilla_user(committer)
416 for id in bugs.keys():
418 for id in bugs.keys():
417 self.ui.status(_(' bug %s\n') % id)
419 self.ui.status(_(' bug %s\n') % id)
418 cmdfmt = self.ui.config('bugzilla', 'notify', self.default_notify)
420 cmdfmt = self.ui.config('bugzilla', 'notify', self.default_notify)
419 bzdir = self.ui.config('bugzilla', 'bzdir',
421 bzdir = self.ui.config('bugzilla', 'bzdir',
420 '/var/www/html/bugzilla')
422 '/var/www/html/bugzilla')
421 try:
423 try:
422 # Backwards-compatible with old notify string, which
424 # Backwards-compatible with old notify string, which
423 # took one string. This will throw with a new format
425 # took one string. This will throw with a new format
424 # string.
426 # string.
425 cmd = cmdfmt % id
427 cmd = cmdfmt % id
426 except TypeError:
428 except TypeError:
427 cmd = cmdfmt % {'bzdir': bzdir, 'id': id, 'user': user}
429 cmd = cmdfmt % {'bzdir': bzdir, 'id': id, 'user': user}
428 self.ui.note(_('running notify command %s\n') % cmd)
430 self.ui.note(_('running notify command %s\n') % cmd)
429 fp = util.popen('(%s) 2>&1' % cmd)
431 fp = util.popen('(%s) 2>&1' % cmd)
430 out = fp.read()
432 out = fp.read()
431 ret = fp.close()
433 ret = fp.close()
432 if ret:
434 if ret:
433 self.ui.warn(out)
435 self.ui.warn(out)
434 raise util.Abort(_('bugzilla notify command %s') %
436 raise util.Abort(_('bugzilla notify command %s') %
435 util.explainexit(ret)[0])
437 util.explainexit(ret)[0])
436 self.ui.status(_('done\n'))
438 self.ui.status(_('done\n'))
437
439
438 def get_user_id(self, user):
440 def get_user_id(self, user):
439 '''look up numeric bugzilla user id.'''
441 '''look up numeric bugzilla user id.'''
440 try:
442 try:
441 return self.user_ids[user]
443 return self.user_ids[user]
442 except KeyError:
444 except KeyError:
443 try:
445 try:
444 userid = int(user)
446 userid = int(user)
445 except ValueError:
447 except ValueError:
446 self.ui.note(_('looking up user %s\n') % user)
448 self.ui.note(_('looking up user %s\n') % user)
447 self.run('''select userid from profiles
449 self.run('''select userid from profiles
448 where login_name like %s''', user)
450 where login_name like %s''', user)
449 all = self.cursor.fetchall()
451 all = self.cursor.fetchall()
450 if len(all) != 1:
452 if len(all) != 1:
451 raise KeyError(user)
453 raise KeyError(user)
452 userid = int(all[0][0])
454 userid = int(all[0][0])
453 self.user_ids[user] = userid
455 self.user_ids[user] = userid
454 return userid
456 return userid
455
457
456 def get_bugzilla_user(self, committer):
458 def get_bugzilla_user(self, committer):
457 '''See if committer is a registered bugzilla user. Return
459 '''See if committer is a registered bugzilla user. Return
458 bugzilla username and userid if so. If not, return default
460 bugzilla username and userid if so. If not, return default
459 bugzilla username and userid.'''
461 bugzilla username and userid.'''
460 user = self.map_committer(committer)
462 user = self.map_committer(committer)
461 try:
463 try:
462 userid = self.get_user_id(user)
464 userid = self.get_user_id(user)
463 except KeyError:
465 except KeyError:
464 try:
466 try:
465 defaultuser = self.ui.config('bugzilla', 'bzuser')
467 defaultuser = self.ui.config('bugzilla', 'bzuser')
466 if not defaultuser:
468 if not defaultuser:
467 raise util.Abort(_('cannot find bugzilla user id for %s') %
469 raise util.Abort(_('cannot find bugzilla user id for %s') %
468 user)
470 user)
469 userid = self.get_user_id(defaultuser)
471 userid = self.get_user_id(defaultuser)
470 user = defaultuser
472 user = defaultuser
471 except KeyError:
473 except KeyError:
472 raise util.Abort(_('cannot find bugzilla user id for %s or %s')
474 raise util.Abort(_('cannot find bugzilla user id for %s or %s')
473 % (user, defaultuser))
475 % (user, defaultuser))
474 return (user, userid)
476 return (user, userid)
475
477
476 def updatebug(self, bugid, newstate, text, committer):
478 def updatebug(self, bugid, newstate, text, committer):
477 '''update bug state with comment text.
479 '''update bug state with comment text.
478
480
479 Try adding comment as committer of changeset, otherwise as
481 Try adding comment as committer of changeset, otherwise as
480 default bugzilla user.'''
482 default bugzilla user.'''
481 if len(newstate) > 0:
483 if len(newstate) > 0:
482 self.ui.warn(_("Bugzilla/MySQL cannot update bug state\n"))
484 self.ui.warn(_("Bugzilla/MySQL cannot update bug state\n"))
483
485
484 (user, userid) = self.get_bugzilla_user(committer)
486 (user, userid) = self.get_bugzilla_user(committer)
485 now = time.strftime('%Y-%m-%d %H:%M:%S')
487 now = time.strftime('%Y-%m-%d %H:%M:%S')
486 self.run('''insert into longdescs
488 self.run('''insert into longdescs
487 (bug_id, who, bug_when, thetext)
489 (bug_id, who, bug_when, thetext)
488 values (%s, %s, %s, %s)''',
490 values (%s, %s, %s, %s)''',
489 (bugid, userid, now, text))
491 (bugid, userid, now, text))
490 self.run('''insert into bugs_activity (bug_id, who, bug_when, fieldid)
492 self.run('''insert into bugs_activity (bug_id, who, bug_when, fieldid)
491 values (%s, %s, %s, %s)''',
493 values (%s, %s, %s, %s)''',
492 (bugid, userid, now, self.longdesc_id))
494 (bugid, userid, now, self.longdesc_id))
493 self.conn.commit()
495 self.conn.commit()
494
496
495 class bzmysql_2_18(bzmysql):
497 class bzmysql_2_18(bzmysql):
496 '''support for bugzilla 2.18 series.'''
498 '''support for bugzilla 2.18 series.'''
497
499
498 def __init__(self, ui):
500 def __init__(self, ui):
499 bzmysql.__init__(self, ui)
501 bzmysql.__init__(self, ui)
500 self.default_notify = \
502 self.default_notify = \
501 "cd %(bzdir)s && perl -T contrib/sendbugmail.pl %(id)s %(user)s"
503 "cd %(bzdir)s && perl -T contrib/sendbugmail.pl %(id)s %(user)s"
502
504
503 class bzmysql_3_0(bzmysql_2_18):
505 class bzmysql_3_0(bzmysql_2_18):
504 '''support for bugzilla 3.0 series.'''
506 '''support for bugzilla 3.0 series.'''
505
507
506 def __init__(self, ui):
508 def __init__(self, ui):
507 bzmysql_2_18.__init__(self, ui)
509 bzmysql_2_18.__init__(self, ui)
508
510
509 def get_longdesc_id(self):
511 def get_longdesc_id(self):
510 '''get identity of longdesc field'''
512 '''get identity of longdesc field'''
511 self.run('select id from fielddefs where name = "longdesc"')
513 self.run('select id from fielddefs where name = "longdesc"')
512 ids = self.cursor.fetchall()
514 ids = self.cursor.fetchall()
513 if len(ids) != 1:
515 if len(ids) != 1:
514 raise util.Abort(_('unknown database schema'))
516 raise util.Abort(_('unknown database schema'))
515 return ids[0][0]
517 return ids[0][0]
516
518
517 # Bugzilla via XMLRPC interface.
519 # Bugzilla via XMLRPC interface.
518
520
519 class cookietransportrequest(object):
521 class cookietransportrequest(object):
520 """A Transport request method that retains cookies over its lifetime.
522 """A Transport request method that retains cookies over its lifetime.
521
523
522 The regular xmlrpclib transports ignore cookies, which causes
524 The regular xmlrpclib transports ignore cookies, which causes
523 a bit of a problem when you need a cookie-based login, as with
525 a bit of a problem when you need a cookie-based login, as with
524 the Bugzilla XMLRPC interface.
526 the Bugzilla XMLRPC interface.
525
527
526 So this is a helper for defining a Transport which looks for
528 So this is a helper for defining a Transport which looks for
527 cookies being set in responses and saves them to add to all future
529 cookies being set in responses and saves them to add to all future
528 requests.
530 requests.
529 """
531 """
530
532
531 # Inspiration drawn from
533 # Inspiration drawn from
532 # http://blog.godson.in/2010/09/how-to-make-python-xmlrpclib-client.html
534 # http://blog.godson.in/2010/09/how-to-make-python-xmlrpclib-client.html
533 # http://www.itkovian.net/base/transport-class-for-pythons-xml-rpc-lib/
535 # http://www.itkovian.net/base/transport-class-for-pythons-xml-rpc-lib/
534
536
535 cookies = []
537 cookies = []
536 def send_cookies(self, connection):
538 def send_cookies(self, connection):
537 if self.cookies:
539 if self.cookies:
538 for cookie in self.cookies:
540 for cookie in self.cookies:
539 connection.putheader("Cookie", cookie)
541 connection.putheader("Cookie", cookie)
540
542
541 def request(self, host, handler, request_body, verbose=0):
543 def request(self, host, handler, request_body, verbose=0):
542 self.verbose = verbose
544 self.verbose = verbose
543 self.accept_gzip_encoding = False
545 self.accept_gzip_encoding = False
544
546
545 # issue XML-RPC request
547 # issue XML-RPC request
546 h = self.make_connection(host)
548 h = self.make_connection(host)
547 if verbose:
549 if verbose:
548 h.set_debuglevel(1)
550 h.set_debuglevel(1)
549
551
550 self.send_request(h, handler, request_body)
552 self.send_request(h, handler, request_body)
551 self.send_host(h, host)
553 self.send_host(h, host)
552 self.send_cookies(h)
554 self.send_cookies(h)
553 self.send_user_agent(h)
555 self.send_user_agent(h)
554 self.send_content(h, request_body)
556 self.send_content(h, request_body)
555
557
556 # Deal with differences between Python 2.4-2.6 and 2.7.
558 # Deal with differences between Python 2.4-2.6 and 2.7.
557 # In the former h is a HTTP(S). In the latter it's a
559 # In the former h is a HTTP(S). In the latter it's a
558 # HTTP(S)Connection. Luckily, the 2.4-2.6 implementation of
560 # HTTP(S)Connection. Luckily, the 2.4-2.6 implementation of
559 # HTTP(S) has an underlying HTTP(S)Connection, so extract
561 # HTTP(S) has an underlying HTTP(S)Connection, so extract
560 # that and use it.
562 # that and use it.
561 try:
563 try:
562 response = h.getresponse()
564 response = h.getresponse()
563 except AttributeError:
565 except AttributeError:
564 response = h._conn.getresponse()
566 response = h._conn.getresponse()
565
567
566 # Add any cookie definitions to our list.
568 # Add any cookie definitions to our list.
567 for header in response.msg.getallmatchingheaders("Set-Cookie"):
569 for header in response.msg.getallmatchingheaders("Set-Cookie"):
568 val = header.split(": ", 1)[1]
570 val = header.split(": ", 1)[1]
569 cookie = val.split(";", 1)[0]
571 cookie = val.split(";", 1)[0]
570 self.cookies.append(cookie)
572 self.cookies.append(cookie)
571
573
572 if response.status != 200:
574 if response.status != 200:
573 raise xmlrpclib.ProtocolError(host + handler, response.status,
575 raise xmlrpclib.ProtocolError(host + handler, response.status,
574 response.reason, response.msg.headers)
576 response.reason, response.msg.headers)
575
577
576 payload = response.read()
578 payload = response.read()
577 parser, unmarshaller = self.getparser()
579 parser, unmarshaller = self.getparser()
578 parser.feed(payload)
580 parser.feed(payload)
579 parser.close()
581 parser.close()
580
582
581 return unmarshaller.close()
583 return unmarshaller.close()
582
584
583 # The explicit calls to the underlying xmlrpclib __init__() methods are
585 # The explicit calls to the underlying xmlrpclib __init__() methods are
584 # necessary. The xmlrpclib.Transport classes are old-style classes, and
586 # necessary. The xmlrpclib.Transport classes are old-style classes, and
585 # it turns out their __init__() doesn't get called when doing multiple
587 # it turns out their __init__() doesn't get called when doing multiple
586 # inheritance with a new-style class.
588 # inheritance with a new-style class.
587 class cookietransport(cookietransportrequest, xmlrpclib.Transport):
589 class cookietransport(cookietransportrequest, xmlrpclib.Transport):
588 def __init__(self, use_datetime=0):
590 def __init__(self, use_datetime=0):
589 if util.safehasattr(xmlrpclib.Transport, "__init__"):
591 if util.safehasattr(xmlrpclib.Transport, "__init__"):
590 xmlrpclib.Transport.__init__(self, use_datetime)
592 xmlrpclib.Transport.__init__(self, use_datetime)
591
593
592 class cookiesafetransport(cookietransportrequest, xmlrpclib.SafeTransport):
594 class cookiesafetransport(cookietransportrequest, xmlrpclib.SafeTransport):
593 def __init__(self, use_datetime=0):
595 def __init__(self, use_datetime=0):
594 if util.safehasattr(xmlrpclib.Transport, "__init__"):
596 if util.safehasattr(xmlrpclib.Transport, "__init__"):
595 xmlrpclib.SafeTransport.__init__(self, use_datetime)
597 xmlrpclib.SafeTransport.__init__(self, use_datetime)
596
598
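As a quick illustration of how the two transport classes above are meant to be used (a minimal sketch, not part of the change; the URL and credentials are placeholders), the cookie-aware transport is simply passed to xmlrpclib.ServerProxy:

    import xmlrpclib

    # The login cookie Bugzilla sets on User.login is remembered by the
    # transport and replayed on every later call through this proxy.
    proxy = xmlrpclib.ServerProxy("https://bugzilla.example.com/xmlrpc.cgi",
                                  cookiesafetransport())
    proxy.User.login(dict(login="bugs@example.com", password="secret"))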
597 class bzxmlrpc(bzaccess):
599 class bzxmlrpc(bzaccess):
598 """Support for access to Bugzilla via the Bugzilla XMLRPC API.
600 """Support for access to Bugzilla via the Bugzilla XMLRPC API.
599
601
600 Requires Bugzilla version 3.4 or later.
602 Requires Bugzilla version 3.4 or later.
601 """
603 """
602
604
603 def __init__(self, ui):
605 def __init__(self, ui):
604 bzaccess.__init__(self, ui)
606 bzaccess.__init__(self, ui)
605
607
606 bzweb = self.ui.config('bugzilla', 'bzurl',
608 bzweb = self.ui.config('bugzilla', 'bzurl',
607 'http://localhost/bugzilla/')
609 'http://localhost/bugzilla/')
608 bzweb = bzweb.rstrip("/") + "/xmlrpc.cgi"
610 bzweb = bzweb.rstrip("/") + "/xmlrpc.cgi"
609
611
610 user = self.ui.config('bugzilla', 'user', 'bugs')
612 user = self.ui.config('bugzilla', 'user', 'bugs')
611 passwd = self.ui.config('bugzilla', 'password')
613 passwd = self.ui.config('bugzilla', 'password')
612
614
613 self.fixstatus = self.ui.config('bugzilla', 'fixstatus', 'RESOLVED')
615 self.fixstatus = self.ui.config('bugzilla', 'fixstatus', 'RESOLVED')
614 self.fixresolution = self.ui.config('bugzilla', 'fixresolution',
616 self.fixresolution = self.ui.config('bugzilla', 'fixresolution',
615 'FIXED')
617 'FIXED')
616
618
617 self.bzproxy = xmlrpclib.ServerProxy(bzweb, self.transport(bzweb))
619 self.bzproxy = xmlrpclib.ServerProxy(bzweb, self.transport(bzweb))
618 ver = self.bzproxy.Bugzilla.version()['version'].split('.')
620 ver = self.bzproxy.Bugzilla.version()['version'].split('.')
619 self.bzvermajor = int(ver[0])
621 self.bzvermajor = int(ver[0])
620 self.bzverminor = int(ver[1])
622 self.bzverminor = int(ver[1])
621 self.bzproxy.User.login(dict(login=user, password=passwd))
623 self.bzproxy.User.login(dict(login=user, password=passwd))
622
624
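The settings read above live in the [bugzilla] section of the repository's hgrc. A hedged example that would select this XMLRPC access class (all values are placeholders):

    [bugzilla]
    version = xmlrpc
    bzurl = https://bugzilla.example.com/bugzilla/
    user = bugs@example.com
    password = secret
    fixstatus = RESOLVED
    fixresolution = FIXED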
623 def transport(self, uri):
625 def transport(self, uri):
624 if urlparse.urlparse(uri, "http")[0] == "https":
626 if urlparse.urlparse(uri, "http")[0] == "https":
625 return cookiesafetransport()
627 return cookiesafetransport()
626 else:
628 else:
627 return cookietransport()
629 return cookietransport()
628
630
629 def get_bug_comments(self, id):
631 def get_bug_comments(self, id):
630 """Return a string with all comment text for a bug."""
632 """Return a string with all comment text for a bug."""
631 c = self.bzproxy.Bug.comments(dict(ids=[id], include_fields=['text']))
633 c = self.bzproxy.Bug.comments(dict(ids=[id], include_fields=['text']))
632 return ''.join([t['text'] for t in c['bugs'][str(id)]['comments']])
634 return ''.join([t['text'] for t in c['bugs'][str(id)]['comments']])
633
635
634 def filter_real_bug_ids(self, bugs):
636 def filter_real_bug_ids(self, bugs):
635 probe = self.bzproxy.Bug.get(dict(ids=sorted(bugs.keys()),
637 probe = self.bzproxy.Bug.get(dict(ids=sorted(bugs.keys()),
636 include_fields=[],
638 include_fields=[],
637 permissive=True))
639 permissive=True))
638 for badbug in probe['faults']:
640 for badbug in probe['faults']:
639 id = badbug['id']
641 id = badbug['id']
640 self.ui.status(_('bug %d does not exist\n') % id)
642 self.ui.status(_('bug %d does not exist\n') % id)
641 del bugs[id]
643 del bugs[id]
642
644
643 def filter_cset_known_bug_ids(self, node, bugs):
645 def filter_cset_known_bug_ids(self, node, bugs):
644 for id in sorted(bugs.keys()):
646 for id in sorted(bugs.keys()):
645 if self.get_bug_comments(id).find(short(node)) != -1:
647 if self.get_bug_comments(id).find(short(node)) != -1:
646 self.ui.status(_('bug %d already knows about changeset %s\n') %
648 self.ui.status(_('bug %d already knows about changeset %s\n') %
647 (id, short(node)))
649 (id, short(node)))
648 del bugs[id]
650 del bugs[id]
649
651
650 def updatebug(self, bugid, newstate, text, committer):
652 def updatebug(self, bugid, newstate, text, committer):
651 args = {}
653 args = {}
652 if 'hours' in newstate:
654 if 'hours' in newstate:
653 args['work_time'] = newstate['hours']
655 args['work_time'] = newstate['hours']
654
656
655 if self.bzvermajor >= 4:
657 if self.bzvermajor >= 4:
656 args['ids'] = [bugid]
658 args['ids'] = [bugid]
657 args['comment'] = {'body' : text}
659 args['comment'] = {'body' : text}
658 args['status'] = self.fixstatus
660 args['status'] = self.fixstatus
659 args['resolution'] = self.fixresolution
661 args['resolution'] = self.fixresolution
660 self.bzproxy.Bug.update(args)
662 self.bzproxy.Bug.update(args)
661 else:
663 else:
662 if 'fix' in newstate:
664 if 'fix' in newstate:
663 self.ui.warn(_("Bugzilla/XMLRPC needs Bugzilla 4.0 or later "
665 self.ui.warn(_("Bugzilla/XMLRPC needs Bugzilla 4.0 or later "
664 "to mark bugs fixed\n"))
666 "to mark bugs fixed\n"))
665 args['id'] = bugid
667 args['id'] = bugid
666 args['comment'] = text
668 args['comment'] = text
667 self.bzproxy.Bug.add_comment(args)
669 self.bzproxy.Bug.add_comment(args)
668
670
669 class bzxmlrpcemail(bzxmlrpc):
671 class bzxmlrpcemail(bzxmlrpc):
670 """Read data from Bugzilla via XMLRPC, send updates via email.
672 """Read data from Bugzilla via XMLRPC, send updates via email.
671
673
672 Advantages of sending updates via email:
674 Advantages of sending updates via email:
673 1. Comments can be added as any user, not just the logged-in user.
675 1. Comments can be added as any user, not just the logged-in user.
674 2. Bug statuses or other fields not accessible via XMLRPC can
676 2. Bug statuses or other fields not accessible via XMLRPC can
675 potentially be updated.
677 potentially be updated.
676
678
677 There is no XMLRPC function to change bug status before Bugzilla
679 There is no XMLRPC function to change bug status before Bugzilla
678 4.0, so bugs cannot be marked fixed via XMLRPC before Bugzilla 4.0.
680 4.0, so bugs cannot be marked fixed via XMLRPC before Bugzilla 4.0.
679 But bugs can be marked fixed via email from 3.4 onwards.
681 But bugs can be marked fixed via email from 3.4 onwards.
680 """
682 """
681
683
682 # The email interface changes subtly between 3.4 and 3.6. In 3.4,
684 # The email interface changes subtly between 3.4 and 3.6. In 3.4,
683 # in-email fields are specified as '@<fieldname> = <value>'. In
685 # in-email fields are specified as '@<fieldname> = <value>'. In
684 # 3.6 this becomes '@<fieldname> <value>'. And fieldname @bug_id
686 # 3.6 this becomes '@<fieldname> <value>'. And fieldname @bug_id
685 # in 3.4 becomes @id in 3.6. 3.6 and 4.0 both maintain backwards
687 # in 3.4 becomes @id in 3.6. 3.6 and 4.0 both maintain backwards
686 # compatibility, but rather than rely on this use the new format for
688 # compatibility, but rather than rely on this use the new format for
687 # 4.0 onwards.
689 # 4.0 onwards.
688
690
689 def __init__(self, ui):
691 def __init__(self, ui):
690 bzxmlrpc.__init__(self, ui)
692 bzxmlrpc.__init__(self, ui)
691
693
692 self.bzemail = self.ui.config('bugzilla', 'bzemail')
694 self.bzemail = self.ui.config('bugzilla', 'bzemail')
693 if not self.bzemail:
695 if not self.bzemail:
694 raise util.Abort(_("configuration 'bzemail' missing"))
696 raise util.Abort(_("configuration 'bzemail' missing"))
695 mail.validateconfig(self.ui)
697 mail.validateconfig(self.ui)
696
698
697 def makecommandline(self, fieldname, value):
699 def makecommandline(self, fieldname, value):
698 if self.bzvermajor >= 4:
700 if self.bzvermajor >= 4:
699 return "@%s %s" % (fieldname, str(value))
701 return "@%s %s" % (fieldname, str(value))
700 else:
702 else:
701 if fieldname == "id":
703 if fieldname == "id":
702 fieldname = "bug_id"
704 fieldname = "bug_id"
703 return "@%s = %s" % (fieldname, str(value))
705 return "@%s = %s" % (fieldname, str(value))
704
706
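Concretely, the helper above renders one command line per field in the form the Bugzilla email interface expects for the server version in use (values invented for illustration):

    # Bugzilla >= 4.0: bare '@<field> <value>' form
    #   makecommandline("work_time", 2.5)  ->  "@work_time 2.5"
    #   makecommandline("id", 1234)        ->  "@id 1234"
    # Bugzilla 3.4/3.6: '@<field> = <value>' form, with 'id' spelled 'bug_id'
    #   makecommandline("id", 1234)        ->  "@bug_id = 1234"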
705 def send_bug_modify_email(self, bugid, commands, comment, committer):
707 def send_bug_modify_email(self, bugid, commands, comment, committer):
706 '''send modification message to Bugzilla bug via email.
708 '''send modification message to Bugzilla bug via email.
707
709
708 The message format is documented in the Bugzilla email_in.pl
710 The message format is documented in the Bugzilla email_in.pl
709 specification. commands is a list of command lines, comment is the
711 specification. commands is a list of command lines, comment is the
710 comment text.
712 comment text.
711
713
712 To stop users from crafting commit comments with
714 To stop users from crafting commit comments with
713 Bugzilla commands, specify the bug ID via the message body, rather
715 Bugzilla commands, specify the bug ID via the message body, rather
714 than the subject line, and leave a blank line after it.
716 than the subject line, and leave a blank line after it.
715 '''
717 '''
716 user = self.map_committer(committer)
718 user = self.map_committer(committer)
717 matches = self.bzproxy.User.get(dict(match=[user]))
719 matches = self.bzproxy.User.get(dict(match=[user]))
718 if not matches['users']:
720 if not matches['users']:
719 user = self.ui.config('bugzilla', 'user', 'bugs')
721 user = self.ui.config('bugzilla', 'user', 'bugs')
720 matches = self.bzproxy.User.get(dict(match=[user]))
722 matches = self.bzproxy.User.get(dict(match=[user]))
721 if not matches['users']:
723 if not matches['users']:
722 raise util.Abort(_("default bugzilla user %s email not found") %
724 raise util.Abort(_("default bugzilla user %s email not found") %
723 user)
725 user)
724 user = matches['users'][0]['email']
726 user = matches['users'][0]['email']
725 commands.append(self.makecommandline("id", bugid))
727 commands.append(self.makecommandline("id", bugid))
726
728
727 text = "\n".join(commands) + "\n\n" + comment
729 text = "\n".join(commands) + "\n\n" + comment
728
730
729 _charsets = mail._charsets(self.ui)
731 _charsets = mail._charsets(self.ui)
730 user = mail.addressencode(self.ui, user, _charsets)
732 user = mail.addressencode(self.ui, user, _charsets)
731 bzemail = mail.addressencode(self.ui, self.bzemail, _charsets)
733 bzemail = mail.addressencode(self.ui, self.bzemail, _charsets)
732 msg = mail.mimeencode(self.ui, text, _charsets)
734 msg = mail.mimeencode(self.ui, text, _charsets)
733 msg['From'] = user
735 msg['From'] = user
734 msg['To'] = bzemail
736 msg['To'] = bzemail
735 msg['Subject'] = mail.headencode(self.ui, "Bug modification", _charsets)
737 msg['Subject'] = mail.headencode(self.ui, "Bug modification", _charsets)
736 sendmail = mail.connect(self.ui)
738 sendmail = mail.connect(self.ui)
737 sendmail(user, bzemail, msg.as_string())
739 sendmail(user, bzemail, msg.as_string())
738
740
739 def updatebug(self, bugid, newstate, text, committer):
741 def updatebug(self, bugid, newstate, text, committer):
740 cmds = []
742 cmds = []
741 if 'hours' in newstate:
743 if 'hours' in newstate:
742 cmds.append(self.makecommandline("work_time", newstate['hours']))
744 cmds.append(self.makecommandline("work_time", newstate['hours']))
743 if 'fix' in newstate:
745 if 'fix' in newstate:
744 cmds.append(self.makecommandline("bug_status", self.fixstatus))
746 cmds.append(self.makecommandline("bug_status", self.fixstatus))
745 cmds.append(self.makecommandline("resolution", self.fixresolution))
747 cmds.append(self.makecommandline("resolution", self.fixresolution))
746 self.send_bug_modify_email(bugid, cmds, text, committer)
748 self.send_bug_modify_email(bugid, cmds, text, committer)
747
749
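Putting bzxmlrpcemail together: if updatebug() above is called with a newstate containing both 'fix' and 'hours': 2.0 against Bugzilla 4.0 or later, the body of the generated email looks roughly like the following (bug number, changeset hash, repository path and description are invented):

    @work_time 2.0
    @bug_status RESOLVED
    @resolution FIXED
    @id 1234

    changeset 3a2b1c4d5e6f in repo /srv/hg/project refers to bug 1234.
    details:
            Fix the frobnicator crash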
748 class bugzilla(object):
750 class bugzilla(object):
749 # supported versions of bugzilla. different versions have
751 # supported versions of bugzilla. different versions have
750 # different schemas.
752 # different schemas.
751 _versions = {
753 _versions = {
752 '2.16': bzmysql,
754 '2.16': bzmysql,
753 '2.18': bzmysql_2_18,
755 '2.18': bzmysql_2_18,
754 '3.0': bzmysql_3_0,
756 '3.0': bzmysql_3_0,
755 'xmlrpc': bzxmlrpc,
757 'xmlrpc': bzxmlrpc,
756 'xmlrpc+email': bzxmlrpcemail
758 'xmlrpc+email': bzxmlrpcemail
757 }
759 }
758
760
759 _default_bug_re = (r'bugs?\s*,?\s*(?:#|nos?\.?|num(?:ber)?s?)?\s*'
761 _default_bug_re = (r'bugs?\s*,?\s*(?:#|nos?\.?|num(?:ber)?s?)?\s*'
760 r'(?P<ids>(?:\d+\s*(?:,?\s*(?:and)?)?\s*)+)'
762 r'(?P<ids>(?:\d+\s*(?:,?\s*(?:and)?)?\s*)+)'
761 r'\.?\s*(?:h(?:ours?)?\s*(?P<hours>\d*(?:\.\d+)?))?')
763 r'\.?\s*(?:h(?:ours?)?\s*(?P<hours>\d*(?:\.\d+)?))?')
762
764
763 _default_fix_re = (r'fix(?:es)?\s*(?:bugs?\s*)?,?\s*'
765 _default_fix_re = (r'fix(?:es)?\s*(?:bugs?\s*)?,?\s*'
764 r'(?:nos?\.?|num(?:ber)?s?)?\s*'
766 r'(?:nos?\.?|num(?:ber)?s?)?\s*'
765 r'(?P<ids>(?:#?\d+\s*(?:,?\s*(?:and)?)?\s*)+)'
767 r'(?P<ids>(?:#?\d+\s*(?:,?\s*(?:and)?)?\s*)+)'
766 r'\.?\s*(?:h(?:ours?)?\s*(?P<hours>\d*(?:\.\d+)?))?')
768 r'\.?\s*(?:h(?:ours?)?\s*(?P<hours>\d*(?:\.\d+)?))?')
767
769
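A quick, self-contained check of what the default pattern above picks out of a commit message (a sketch; the message text is invented, and the pattern is copied inline so the snippet runs on its own):

    import re

    # Same text as _default_bug_re, compiled the way find_bugs() compiles it.
    bug_re = re.compile(r'bugs?\s*,?\s*(?:#|nos?\.?|num(?:ber)?s?)?\s*'
                        r'(?P<ids>(?:\d+\s*(?:,?\s*(?:and)?)?\s*)+)'
                        r'\.?\s*(?:h(?:ours?)?\s*(?P<hours>\d*(?:\.\d+)?))?',
                        re.IGNORECASE)

    m = bug_re.search("Silence the warning, see bugs 1234 and 5678 h 1.5")
    ids = [int(i) for i in re.split(r'\D+', m.group('ids')) if i]  # [1234, 5678]
    hours = float(m.group('hours'))                                # 1.5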
768 _bz = None
770 _bz = None
769
771
770 def __init__(self, ui, repo):
772 def __init__(self, ui, repo):
771 self.ui = ui
773 self.ui = ui
772 self.repo = repo
774 self.repo = repo
773
775
774 def bz(self):
776 def bz(self):
775 '''return object that knows how to talk to bugzilla version in
777 '''return object that knows how to talk to bugzilla version in
776 use.'''
778 use.'''
777
779
778 if bugzilla._bz is None:
780 if bugzilla._bz is None:
779 bzversion = self.ui.config('bugzilla', 'version')
781 bzversion = self.ui.config('bugzilla', 'version')
780 try:
782 try:
781 bzclass = bugzilla._versions[bzversion]
783 bzclass = bugzilla._versions[bzversion]
782 except KeyError:
784 except KeyError:
783 raise util.Abort(_('bugzilla version %s not supported') %
785 raise util.Abort(_('bugzilla version %s not supported') %
784 bzversion)
786 bzversion)
785 bugzilla._bz = bzclass(self.ui)
787 bugzilla._bz = bzclass(self.ui)
786 return bugzilla._bz
788 return bugzilla._bz
787
789
788 def __getattr__(self, key):
790 def __getattr__(self, key):
789 return getattr(self.bz(), key)
791 return getattr(self.bz(), key)
790
792
791 _bug_re = None
793 _bug_re = None
792 _fix_re = None
794 _fix_re = None
793 _split_re = None
795 _split_re = None
794
796
795 def find_bugs(self, ctx):
797 def find_bugs(self, ctx):
796 '''return bugs dictionary created from commit comment.
798 '''return bugs dictionary created from commit comment.
797
799
798 Extract bug info from changeset comments. Filter out any that are
800 Extract bug info from changeset comments. Filter out any that are
799 not known to Bugzilla, and any that already have a reference to
801 not known to Bugzilla, and any that already have a reference to
800 the given changeset in their comments.
802 the given changeset in their comments.
801 '''
803 '''
802 if bugzilla._bug_re is None:
804 if bugzilla._bug_re is None:
803 bugzilla._bug_re = re.compile(
805 bugzilla._bug_re = re.compile(
804 self.ui.config('bugzilla', 'regexp',
806 self.ui.config('bugzilla', 'regexp',
805 bugzilla._default_bug_re), re.IGNORECASE)
807 bugzilla._default_bug_re), re.IGNORECASE)
806 bugzilla._fix_re = re.compile(
808 bugzilla._fix_re = re.compile(
807 self.ui.config('bugzilla', 'fixregexp',
809 self.ui.config('bugzilla', 'fixregexp',
808 bugzilla._default_fix_re), re.IGNORECASE)
810 bugzilla._default_fix_re), re.IGNORECASE)
809 bugzilla._split_re = re.compile(r'\D+')
811 bugzilla._split_re = re.compile(r'\D+')
810 start = 0
812 start = 0
811 hours = 0.0
813 hours = 0.0
812 bugs = {}
814 bugs = {}
813 bugmatch = bugzilla._bug_re.search(ctx.description(), start)
815 bugmatch = bugzilla._bug_re.search(ctx.description(), start)
814 fixmatch = bugzilla._fix_re.search(ctx.description(), start)
816 fixmatch = bugzilla._fix_re.search(ctx.description(), start)
815 while True:
817 while True:
816 bugattribs = {}
818 bugattribs = {}
817 if not bugmatch and not fixmatch:
819 if not bugmatch and not fixmatch:
818 break
820 break
819 if not bugmatch:
821 if not bugmatch:
820 m = fixmatch
822 m = fixmatch
821 elif not fixmatch:
823 elif not fixmatch:
822 m = bugmatch
824 m = bugmatch
823 else:
825 else:
824 if bugmatch.start() < fixmatch.start():
826 if bugmatch.start() < fixmatch.start():
825 m = bugmatch
827 m = bugmatch
826 else:
828 else:
827 m = fixmatch
829 m = fixmatch
828 start = m.end()
830 start = m.end()
829 if m is bugmatch:
831 if m is bugmatch:
830 bugmatch = bugzilla._bug_re.search(ctx.description(), start)
832 bugmatch = bugzilla._bug_re.search(ctx.description(), start)
831 if 'fix' in bugattribs:
833 if 'fix' in bugattribs:
832 del bugattribs['fix']
834 del bugattribs['fix']
833 else:
835 else:
834 fixmatch = bugzilla._fix_re.search(ctx.description(), start)
836 fixmatch = bugzilla._fix_re.search(ctx.description(), start)
835 bugattribs['fix'] = None
837 bugattribs['fix'] = None
836
838
837 try:
839 try:
838 ids = m.group('ids')
840 ids = m.group('ids')
839 except IndexError:
841 except IndexError:
840 ids = m.group(1)
842 ids = m.group(1)
841 try:
843 try:
842 hours = float(m.group('hours'))
844 hours = float(m.group('hours'))
843 bugattribs['hours'] = hours
845 bugattribs['hours'] = hours
844 except IndexError:
846 except IndexError:
845 pass
847 pass
846 except TypeError:
848 except TypeError:
847 pass
849 pass
848 except ValueError:
850 except ValueError:
849 self.ui.status(_("%s: invalid hours\n") % m.group('hours'))
851 self.ui.status(_("%s: invalid hours\n") % m.group('hours'))
850
852
851 for id in bugzilla._split_re.split(ids):
853 for id in bugzilla._split_re.split(ids):
852 if not id:
854 if not id:
853 continue
855 continue
854 bugs[int(id)] = bugattribs
856 bugs[int(id)] = bugattribs
855 if bugs:
857 if bugs:
856 self.filter_real_bug_ids(bugs)
858 self.filter_real_bug_ids(bugs)
857 if bugs:
859 if bugs:
858 self.filter_cset_known_bug_ids(ctx.node(), bugs)
860 self.filter_cset_known_bug_ids(ctx.node(), bugs)
859 return bugs
861 return bugs
860
862
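For example, a changeset whose description reads "see bug 1234 h 1.5" would, assuming bug 1234 exists in Bugzilla and does not already mention the changeset, come back from find_bugs() as:

    {1234: {'hours': 1.5}}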
861 def update(self, bugid, newstate, ctx):
863 def update(self, bugid, newstate, ctx):
862 '''update bugzilla bug with reference to changeset.'''
864 '''update bugzilla bug with reference to changeset.'''
863
865
864 def webroot(root):
866 def webroot(root):
865 '''strip leading prefix of repo root and turn into
867 '''strip leading prefix of repo root and turn into
866 url-safe path.'''
868 url-safe path.'''
867 count = int(self.ui.config('bugzilla', 'strip', 0))
869 count = int(self.ui.config('bugzilla', 'strip', 0))
868 root = util.pconvert(root)
870 root = util.pconvert(root)
869 while count > 0:
871 while count > 0:
870 c = root.find('/')
872 c = root.find('/')
871 if c == -1:
873 if c == -1:
872 break
874 break
873 root = root[c + 1:]
875 root = root[c + 1:]
874 count -= 1
876 count -= 1
875 return root
877 return root
876
878
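The effect of the strip option on webroot() above, with an invented repository path:

    # with [bugzilla] strip=3 configured:
    #   webroot('/usr/local/hg/repos/project')  ->  'hg/repos/project'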
877 mapfile = self.ui.config('bugzilla', 'style')
879 mapfile = self.ui.config('bugzilla', 'style')
878 tmpl = self.ui.config('bugzilla', 'template')
880 tmpl = self.ui.config('bugzilla', 'template')
879 t = cmdutil.changeset_templater(self.ui, self.repo,
881 t = cmdutil.changeset_templater(self.ui, self.repo,
880 False, None, mapfile, False)
882 False, None, mapfile, False)
881 if not mapfile and not tmpl:
883 if not mapfile and not tmpl:
882 tmpl = _('changeset {node|short} in repo {root} refers '
884 tmpl = _('changeset {node|short} in repo {root} refers '
883 'to bug {bug}.\ndetails:\n\t{desc|tabindent}')
885 'to bug {bug}.\ndetails:\n\t{desc|tabindent}')
884 if tmpl:
886 if tmpl:
885 tmpl = templater.parsestring(tmpl, quoted=False)
887 tmpl = templater.parsestring(tmpl, quoted=False)
886 t.use_template(tmpl)
888 t.use_template(tmpl)
887 self.ui.pushbuffer()
889 self.ui.pushbuffer()
888 t.show(ctx, changes=ctx.changeset(),
890 t.show(ctx, changes=ctx.changeset(),
889 bug=str(bugid),
891 bug=str(bugid),
890 hgweb=self.ui.config('web', 'baseurl'),
892 hgweb=self.ui.config('web', 'baseurl'),
891 root=self.repo.root,
893 root=self.repo.root,
892 webroot=webroot(self.repo.root))
894 webroot=webroot(self.repo.root))
893 data = self.ui.popbuffer()
895 data = self.ui.popbuffer()
894 self.updatebug(bugid, newstate, data, util.email(ctx.user()))
896 self.updatebug(bugid, newstate, data, util.email(ctx.user()))
895
897
896 def hook(ui, repo, hooktype, node=None, **kwargs):
898 def hook(ui, repo, hooktype, node=None, **kwargs):
897 '''add comment to bugzilla for each changeset that refers to a
899 '''add comment to bugzilla for each changeset that refers to a
898 bugzilla bug id. only add a comment once per bug, so the same change
900 bugzilla bug id. only add a comment once per bug, so the same change
899 seen multiple times does not fill the bug with duplicate data.'''
901 seen multiple times does not fill the bug with duplicate data.'''
900 if node is None:
902 if node is None:
901 raise util.Abort(_('hook type %s does not pass a changeset id') %
903 raise util.Abort(_('hook type %s does not pass a changeset id') %
902 hooktype)
904 hooktype)
903 try:
905 try:
904 bz = bugzilla(ui, repo)
906 bz = bugzilla(ui, repo)
905 ctx = repo[node]
907 ctx = repo[node]
906 bugs = bz.find_bugs(ctx)
908 bugs = bz.find_bugs(ctx)
907 if bugs:
909 if bugs:
908 for bug in bugs:
910 for bug in bugs:
909 bz.update(bug, bugs[bug], ctx)
911 bz.update(bug, bugs[bug], ctx)
910 bz.notify(bugs, util.email(ctx.user()))
912 bz.notify(bugs, util.email(ctx.user()))
911 except Exception, e:
913 except Exception, e:
912 raise util.Abort(_('Bugzilla error: %s') % e)
914 raise util.Abort(_('Bugzilla error: %s') % e)
913
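To have this hook run on incoming changesets, the extension's usual configuration enables both the extension and the hook from the hgrc:

    [extensions]
    bugzilla =

    [hooks]
    # run the Bugzilla hook on every changeset pulled or pushed in here
    incoming.bugzilla = python:hgext.bugzilla.hook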
@@ -1,49 +1,50 b''
1 # Mercurial extension to provide the 'hg children' command
1 # Mercurial extension to provide the 'hg children' command
2 #
2 #
3 # Copyright 2007 by Intevation GmbH <intevation@intevation.de>
3 # Copyright 2007 by Intevation GmbH <intevation@intevation.de>
4 #
4 #
5 # Author(s):
5 # Author(s):
6 # Thomas Arendsen Hein <thomas@intevation.de>
6 # Thomas Arendsen Hein <thomas@intevation.de>
7 #
7 #
8 # This software may be used and distributed according to the terms of the
8 # This software may be used and distributed according to the terms of the
9 # GNU General Public License version 2 or any later version.
9 # GNU General Public License version 2 or any later version.
10
10
11 '''command to display child changesets (DEPRECATED)
11 '''command to display child changesets (DEPRECATED)
12
12
13 This extension is deprecated. You should use :hg:`log -r
13 This extension is deprecated. You should use :hg:`log -r
14 "children(REV)"` instead.
14 "children(REV)"` instead.
15 '''
15 '''
16
16
17 from mercurial import cmdutil
17 from mercurial import cmdutil
18 from mercurial.commands import templateopts
18 from mercurial.commands import templateopts
19 from mercurial.i18n import _
19 from mercurial.i18n import _
20
20
21 testedwith = 'internal'
21
22
22 def children(ui, repo, file_=None, **opts):
23 def children(ui, repo, file_=None, **opts):
23 """show the children of the given or working directory revision
24 """show the children of the given or working directory revision
24
25
25 Print the children of the working directory's revisions. If a
26 Print the children of the working directory's revisions. If a
26 revision is given via -r/--rev, the children of that revision will
27 revision is given via -r/--rev, the children of that revision will
27 be printed. If a file argument is given, the revision in which the
28 be printed. If a file argument is given, the revision in which the
28 file was last changed (after the working directory revision or the
29 file was last changed (after the working directory revision or the
29 argument to --rev if given) is printed.
30 argument to --rev if given) is printed.
30 """
31 """
31 rev = opts.get('rev')
32 rev = opts.get('rev')
32 if file_:
33 if file_:
33 ctx = repo.filectx(file_, changeid=rev)
34 ctx = repo.filectx(file_, changeid=rev)
34 else:
35 else:
35 ctx = repo[rev]
36 ctx = repo[rev]
36
37
37 displayer = cmdutil.show_changeset(ui, repo, opts)
38 displayer = cmdutil.show_changeset(ui, repo, opts)
38 for cctx in ctx.children():
39 for cctx in ctx.children():
39 displayer.show(cctx)
40 displayer.show(cctx)
40 displayer.close()
41 displayer.close()
41
42
42 cmdtable = {
43 cmdtable = {
43 "children":
44 "children":
44 (children,
45 (children,
45 [('r', 'rev', '',
46 [('r', 'rev', '',
46 _('show children of the specified revision'), _('REV')),
47 _('show children of the specified revision'), _('REV')),
47 ] + templateopts,
48 ] + templateopts,
48 _('hg children [-r REV] [FILE]')),
49 _('hg children [-r REV] [FILE]')),
49 }
50 }
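For example, the deprecated command and the revset-based replacement recommended above produce the same listing (revision number invented):

    hg children -r 42
    hg log -r "children(42)"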
@@ -1,197 +1,199 b''
1 # churn.py - create a graph of revisions count grouped by template
1 # churn.py - create a graph of revisions count grouped by template
2 #
2 #
3 # Copyright 2006 Josef "Jeff" Sipek <jeffpc@josefsipek.net>
3 # Copyright 2006 Josef "Jeff" Sipek <jeffpc@josefsipek.net>
4 # Copyright 2008 Alexander Solovyov <piranha@piranha.org.ua>
4 # Copyright 2008 Alexander Solovyov <piranha@piranha.org.ua>
5 #
5 #
6 # This software may be used and distributed according to the terms of the
6 # This software may be used and distributed according to the terms of the
7 # GNU General Public License version 2 or any later version.
7 # GNU General Public License version 2 or any later version.
8
8
9 '''command to display statistics about repository history'''
9 '''command to display statistics about repository history'''
10
10
11 from mercurial.i18n import _
11 from mercurial.i18n import _
12 from mercurial import patch, cmdutil, scmutil, util, templater, commands
12 from mercurial import patch, cmdutil, scmutil, util, templater, commands
13 import os
13 import os
14 import time, datetime
14 import time, datetime
15
15
16 testedwith = 'internal'
17
16 def maketemplater(ui, repo, tmpl):
18 def maketemplater(ui, repo, tmpl):
17 tmpl = templater.parsestring(tmpl, quoted=False)
19 tmpl = templater.parsestring(tmpl, quoted=False)
18 try:
20 try:
19 t = cmdutil.changeset_templater(ui, repo, False, None, None, False)
21 t = cmdutil.changeset_templater(ui, repo, False, None, None, False)
20 except SyntaxError, inst:
22 except SyntaxError, inst:
21 raise util.Abort(inst.args[0])
23 raise util.Abort(inst.args[0])
22 t.use_template(tmpl)
24 t.use_template(tmpl)
23 return t
25 return t
24
26
25 def changedlines(ui, repo, ctx1, ctx2, fns):
27 def changedlines(ui, repo, ctx1, ctx2, fns):
26 added, removed = 0, 0
28 added, removed = 0, 0
27 fmatch = scmutil.matchfiles(repo, fns)
29 fmatch = scmutil.matchfiles(repo, fns)
28 diff = ''.join(patch.diff(repo, ctx1.node(), ctx2.node(), fmatch))
30 diff = ''.join(patch.diff(repo, ctx1.node(), ctx2.node(), fmatch))
29 for l in diff.split('\n'):
31 for l in diff.split('\n'):
30 if l.startswith("+") and not l.startswith("+++ "):
32 if l.startswith("+") and not l.startswith("+++ "):
31 added += 1
33 added += 1
32 elif l.startswith("-") and not l.startswith("--- "):
34 elif l.startswith("-") and not l.startswith("--- "):
33 removed += 1
35 removed += 1
34 return (added, removed)
36 return (added, removed)
35
37
36 def countrate(ui, repo, amap, *pats, **opts):
38 def countrate(ui, repo, amap, *pats, **opts):
37 """Calculate stats"""
39 """Calculate stats"""
38 if opts.get('dateformat'):
40 if opts.get('dateformat'):
39 def getkey(ctx):
41 def getkey(ctx):
40 t, tz = ctx.date()
42 t, tz = ctx.date()
41 date = datetime.datetime(*time.gmtime(float(t) - tz)[:6])
43 date = datetime.datetime(*time.gmtime(float(t) - tz)[:6])
42 return date.strftime(opts['dateformat'])
44 return date.strftime(opts['dateformat'])
43 else:
45 else:
44 tmpl = opts.get('template', '{author|email}')
46 tmpl = opts.get('template', '{author|email}')
45 tmpl = maketemplater(ui, repo, tmpl)
47 tmpl = maketemplater(ui, repo, tmpl)
46 def getkey(ctx):
48 def getkey(ctx):
47 ui.pushbuffer()
49 ui.pushbuffer()
48 tmpl.show(ctx)
50 tmpl.show(ctx)
49 return ui.popbuffer()
51 return ui.popbuffer()
50
52
51 state = {'count': 0}
53 state = {'count': 0}
52 rate = {}
54 rate = {}
53 df = False
55 df = False
54 if opts.get('date'):
56 if opts.get('date'):
55 df = util.matchdate(opts['date'])
57 df = util.matchdate(opts['date'])
56
58
57 m = scmutil.match(repo[None], pats, opts)
59 m = scmutil.match(repo[None], pats, opts)
58 def prep(ctx, fns):
60 def prep(ctx, fns):
59 rev = ctx.rev()
61 rev = ctx.rev()
60 if df and not df(ctx.date()[0]): # doesn't match date format
62 if df and not df(ctx.date()[0]): # doesn't match date format
61 return
63 return
62
64
63 key = getkey(ctx).strip()
65 key = getkey(ctx).strip()
64 key = amap.get(key, key) # alias remap
66 key = amap.get(key, key) # alias remap
65 if opts.get('changesets'):
67 if opts.get('changesets'):
66 rate[key] = (rate.get(key, (0,))[0] + 1, 0)
68 rate[key] = (rate.get(key, (0,))[0] + 1, 0)
67 else:
69 else:
68 parents = ctx.parents()
70 parents = ctx.parents()
69 if len(parents) > 1:
71 if len(parents) > 1:
70 ui.note(_('Revision %d is a merge, ignoring...\n') % (rev,))
72 ui.note(_('Revision %d is a merge, ignoring...\n') % (rev,))
71 return
73 return
72
74
73 ctx1 = parents[0]
75 ctx1 = parents[0]
74 lines = changedlines(ui, repo, ctx1, ctx, fns)
76 lines = changedlines(ui, repo, ctx1, ctx, fns)
75 rate[key] = [r + l for r, l in zip(rate.get(key, (0, 0)), lines)]
77 rate[key] = [r + l for r, l in zip(rate.get(key, (0, 0)), lines)]
76
78
77 state['count'] += 1
79 state['count'] += 1
78 ui.progress(_('analyzing'), state['count'], total=len(repo))
80 ui.progress(_('analyzing'), state['count'], total=len(repo))
79
81
80 for ctx in cmdutil.walkchangerevs(repo, m, opts, prep):
82 for ctx in cmdutil.walkchangerevs(repo, m, opts, prep):
81 continue
83 continue
82
84
83 ui.progress(_('analyzing'), None)
85 ui.progress(_('analyzing'), None)
84
86
85 return rate
87 return rate
86
88
87
89
88 def churn(ui, repo, *pats, **opts):
90 def churn(ui, repo, *pats, **opts):
89 '''histogram of changes to the repository
91 '''histogram of changes to the repository
90
92
91 This command will display a histogram representing the number
93 This command will display a histogram representing the number
92 of changed lines or revisions, grouped according to the given
94 of changed lines or revisions, grouped according to the given
93 template. The default template will group changes by author.
95 template. The default template will group changes by author.
94 The --dateformat option may be used to group the results by
96 The --dateformat option may be used to group the results by
95 date instead.
97 date instead.
96
98
97 Statistics are based on the number of changed lines, or
99 Statistics are based on the number of changed lines, or
98 alternatively the number of matching revisions if the
100 alternatively the number of matching revisions if the
99 --changesets option is specified.
101 --changesets option is specified.
100
102
101 Examples::
103 Examples::
102
104
103 # display count of changed lines for every committer
105 # display count of changed lines for every committer
104 hg churn -t '{author|email}'
106 hg churn -t '{author|email}'
105
107
106 # display daily activity graph
108 # display daily activity graph
107 hg churn -f '%H' -s -c
109 hg churn -f '%H' -s -c
108
110
109 # display activity of developers by month
111 # display activity of developers by month
110 hg churn -f '%Y-%m' -s -c
112 hg churn -f '%Y-%m' -s -c
111
113
112 # display count of lines changed in every year
114 # display count of lines changed in every year
113 hg churn -f '%Y' -s
115 hg churn -f '%Y' -s
114
116
115 It is possible to map alternate email addresses to a main address
117 It is possible to map alternate email addresses to a main address
116 by providing a file using the following format::
118 by providing a file using the following format::
117
119
118 <alias email> = <actual email>
120 <alias email> = <actual email>
119
121
120 Such a file may be specified with the --aliases option, otherwise
122 Such a file may be specified with the --aliases option, otherwise
121 a .hgchurn file will be looked for in the working directory root.
123 a .hgchurn file will be looked for in the working directory root.
122 '''
124 '''
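A .hgchurn aliases file in the format described above might look like this (addresses invented); each line folds an alternate address into the address that should be credited:

    john@home.example.org = jdoe@example.com
    john.doe@oldjob.example.net = jdoe@example.com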
123 def pad(s, l):
125 def pad(s, l):
124 return (s + " " * l)[:l]
126 return (s + " " * l)[:l]
125
127
126 amap = {}
128 amap = {}
127 aliases = opts.get('aliases')
129 aliases = opts.get('aliases')
128 if not aliases and os.path.exists(repo.wjoin('.hgchurn')):
130 if not aliases and os.path.exists(repo.wjoin('.hgchurn')):
129 aliases = repo.wjoin('.hgchurn')
131 aliases = repo.wjoin('.hgchurn')
130 if aliases:
132 if aliases:
131 for l in open(aliases, "r"):
133 for l in open(aliases, "r"):
132 try:
134 try:
133 alias, actual = l.split('=' in l and '=' or None, 1)
135 alias, actual = l.split('=' in l and '=' or None, 1)
134 amap[alias.strip()] = actual.strip()
136 amap[alias.strip()] = actual.strip()
135 except ValueError:
137 except ValueError:
136 l = l.strip()
138 l = l.strip()
137 if l:
139 if l:
138 ui.warn(_("skipping malformed alias: %s\n") % l)
140 ui.warn(_("skipping malformed alias: %s\n") % l)
139 continue
141 continue
140
142
141 rate = countrate(ui, repo, amap, *pats, **opts).items()
143 rate = countrate(ui, repo, amap, *pats, **opts).items()
142 if not rate:
144 if not rate:
143 return
145 return
144
146
145 sortkey = ((not opts.get('sort')) and (lambda x: -sum(x[1])) or None)
147 sortkey = ((not opts.get('sort')) and (lambda x: -sum(x[1])) or None)
146 rate.sort(key=sortkey)
148 rate.sort(key=sortkey)
147
149
148 # Be careful not to have a zero maxcount (issue833)
150 # Be careful not to have a zero maxcount (issue833)
149 maxcount = float(max(sum(v) for k, v in rate)) or 1.0
151 maxcount = float(max(sum(v) for k, v in rate)) or 1.0
150 maxname = max(len(k) for k, v in rate)
152 maxname = max(len(k) for k, v in rate)
151
153
152 ttywidth = ui.termwidth()
154 ttywidth = ui.termwidth()
153 ui.debug("assuming %i character terminal\n" % ttywidth)
155 ui.debug("assuming %i character terminal\n" % ttywidth)
154 width = ttywidth - maxname - 2 - 2 - 2
156 width = ttywidth - maxname - 2 - 2 - 2
155
157
156 if opts.get('diffstat'):
158 if opts.get('diffstat'):
157 width -= 15
159 width -= 15
158 def format(name, diffstat):
160 def format(name, diffstat):
159 added, removed = diffstat
161 added, removed = diffstat
160 return "%s %15s %s%s\n" % (pad(name, maxname),
162 return "%s %15s %s%s\n" % (pad(name, maxname),
161 '+%d/-%d' % (added, removed),
163 '+%d/-%d' % (added, removed),
162 ui.label('+' * charnum(added),
164 ui.label('+' * charnum(added),
163 'diffstat.inserted'),
165 'diffstat.inserted'),
164 ui.label('-' * charnum(removed),
166 ui.label('-' * charnum(removed),
165 'diffstat.deleted'))
167 'diffstat.deleted'))
166 else:
168 else:
167 width -= 6
169 width -= 6
168 def format(name, count):
170 def format(name, count):
169 return "%s %6d %s\n" % (pad(name, maxname), sum(count),
171 return "%s %6d %s\n" % (pad(name, maxname), sum(count),
170 '*' * charnum(sum(count)))
172 '*' * charnum(sum(count)))
171
173
172 def charnum(count):
174 def charnum(count):
173 return int(round(count * width / maxcount))
175 return int(round(count * width / maxcount))
174
176
175 for name, count in rate:
177 for name, count in rate:
176 ui.write(format(name, count))
178 ui.write(format(name, count))
177
179
178
180
179 cmdtable = {
181 cmdtable = {
180 "churn":
182 "churn":
181 (churn,
183 (churn,
182 [('r', 'rev', [],
184 [('r', 'rev', [],
183 _('count rate for the specified revision or range'), _('REV')),
185 _('count rate for the specified revision or range'), _('REV')),
184 ('d', 'date', '',
186 ('d', 'date', '',
185 _('count rate for revisions matching date spec'), _('DATE')),
187 _('count rate for revisions matching date spec'), _('DATE')),
186 ('t', 'template', '{author|email}',
188 ('t', 'template', '{author|email}',
187 _('template to group changesets'), _('TEMPLATE')),
189 _('template to group changesets'), _('TEMPLATE')),
188 ('f', 'dateformat', '',
190 ('f', 'dateformat', '',
189 _('strftime-compatible format for grouping by date'), _('FORMAT')),
191 _('strftime-compatible format for grouping by date'), _('FORMAT')),
190 ('c', 'changesets', False, _('count rate by number of changesets')),
192 ('c', 'changesets', False, _('count rate by number of changesets')),
191 ('s', 'sort', False, _('sort by key (default: sort by count)')),
193 ('s', 'sort', False, _('sort by key (default: sort by count)')),
192 ('', 'diffstat', False, _('display added/removed lines separately')),
194 ('', 'diffstat', False, _('display added/removed lines separately')),
193 ('', 'aliases', '',
195 ('', 'aliases', '',
194 _('file with email aliases'), _('FILE')),
196 _('file with email aliases'), _('FILE')),
195 ] + commands.walkopts,
197 ] + commands.walkopts,
196 _("hg churn [-d DATE] [-r REV] [--aliases FILE] [FILE]")),
198 _("hg churn [-d DATE] [-r REV] [--aliases FILE] [FILE]")),
197 }
199 }
@@ -1,501 +1,503 b''
1 # color.py color output for the status and qseries commands
1 # color.py color output for the status and qseries commands
2 #
2 #
3 # Copyright (C) 2007 Kevin Christen <kevin.christen@gmail.com>
3 # Copyright (C) 2007 Kevin Christen <kevin.christen@gmail.com>
4 #
4 #
5 # This software may be used and distributed according to the terms of the
5 # This software may be used and distributed according to the terms of the
6 # GNU General Public License version 2 or any later version.
6 # GNU General Public License version 2 or any later version.
7
7
8 '''colorize output from some commands
8 '''colorize output from some commands
9
9
10 This extension modifies the status and resolve commands to add color
10 This extension modifies the status and resolve commands to add color
11 to their output to reflect file status, the qseries command to add
11 to their output to reflect file status, the qseries command to add
12 color to reflect patch status (applied, unapplied, missing), and to
12 color to reflect patch status (applied, unapplied, missing), and to
13 diff-related commands to highlight additions, removals, diff headers,
13 diff-related commands to highlight additions, removals, diff headers,
14 and trailing whitespace.
14 and trailing whitespace.
15
15
16 Other effects in addition to color, like bold and underlined text, are
16 Other effects in addition to color, like bold and underlined text, are
17 also available. By default, the terminfo database is used to find the
17 also available. By default, the terminfo database is used to find the
18 terminal codes used to change color and effect. If terminfo is not
18 terminal codes used to change color and effect. If terminfo is not
19 available, then effects are rendered with the ECMA-48 SGR control
19 available, then effects are rendered with the ECMA-48 SGR control
20 function (aka ANSI escape codes).
20 function (aka ANSI escape codes).
21
21
22 Default effects may be overridden from your configuration file::
22 Default effects may be overridden from your configuration file::
23
23
24 [color]
24 [color]
25 status.modified = blue bold underline red_background
25 status.modified = blue bold underline red_background
26 status.added = green bold
26 status.added = green bold
27 status.removed = red bold blue_background
27 status.removed = red bold blue_background
28 status.deleted = cyan bold underline
28 status.deleted = cyan bold underline
29 status.unknown = magenta bold underline
29 status.unknown = magenta bold underline
30 status.ignored = black bold
30 status.ignored = black bold
31
31
32 # 'none' turns off all effects
32 # 'none' turns off all effects
33 status.clean = none
33 status.clean = none
34 status.copied = none
34 status.copied = none
35
35
36 qseries.applied = blue bold underline
36 qseries.applied = blue bold underline
37 qseries.unapplied = black bold
37 qseries.unapplied = black bold
38 qseries.missing = red bold
38 qseries.missing = red bold
39
39
40 diff.diffline = bold
40 diff.diffline = bold
41 diff.extended = cyan bold
41 diff.extended = cyan bold
42 diff.file_a = red bold
42 diff.file_a = red bold
43 diff.file_b = green bold
43 diff.file_b = green bold
44 diff.hunk = magenta
44 diff.hunk = magenta
45 diff.deleted = red
45 diff.deleted = red
46 diff.inserted = green
46 diff.inserted = green
47 diff.changed = white
47 diff.changed = white
48 diff.trailingwhitespace = bold red_background
48 diff.trailingwhitespace = bold red_background
49
49
50 resolve.unresolved = red bold
50 resolve.unresolved = red bold
51 resolve.resolved = green bold
51 resolve.resolved = green bold
52
52
53 bookmarks.current = green
53 bookmarks.current = green
54
54
55 branches.active = none
55 branches.active = none
56 branches.closed = black bold
56 branches.closed = black bold
57 branches.current = green
57 branches.current = green
58 branches.inactive = none
58 branches.inactive = none
59
59
60 tags.normal = green
60 tags.normal = green
61 tags.local = black bold
61 tags.local = black bold
62
62
63 The available effects in terminfo mode are 'blink', 'bold', 'dim',
63 The available effects in terminfo mode are 'blink', 'bold', 'dim',
64 'inverse', 'invisible', 'italic', 'standout', and 'underline'; in
64 'inverse', 'invisible', 'italic', 'standout', and 'underline'; in
65 ECMA-48 mode, the options are 'bold', 'inverse', 'italic', and
65 ECMA-48 mode, the options are 'bold', 'inverse', 'italic', and
66 'underline'. How each is rendered depends on the terminal emulator.
66 'underline'. How each is rendered depends on the terminal emulator.
67 Some may not be available for a given terminal type, and will be
67 Some may not be available for a given terminal type, and will be
68 silently ignored.
68 silently ignored.
69
69
70 Note that on some systems, terminfo mode may cause problems when using
70 Note that on some systems, terminfo mode may cause problems when using
71 color with the pager extension and less -R. less with the -R option
71 color with the pager extension and less -R. less with the -R option
72 will only display ECMA-48 color codes, and terminfo mode may sometimes
72 will only display ECMA-48 color codes, and terminfo mode may sometimes
73 emit codes that less doesn't understand. You can work around this by
73 emit codes that less doesn't understand. You can work around this by
74 either using ansi mode (or auto mode), or by using less -r (which will
74 either using ansi mode (or auto mode), or by using less -r (which will
75 pass through all terminal control codes, not just color control
75 pass through all terminal control codes, not just color control
76 codes).
76 codes).
77
77
78 Because there are only eight standard colors, this module allows you
78 Because there are only eight standard colors, this module allows you
79 to define color names for other color slots which might be available
79 to define color names for other color slots which might be available
80 for your terminal type, assuming terminfo mode. For instance::
80 for your terminal type, assuming terminfo mode. For instance::
81
81
82 color.brightblue = 12
82 color.brightblue = 12
83 color.pink = 207
83 color.pink = 207
84 color.orange = 202
84 color.orange = 202
85
85
86 to set 'brightblue' to color slot 12 (useful for 16 color terminals
86 to set 'brightblue' to color slot 12 (useful for 16 color terminals
87 that have brighter colors defined in the upper eight) and 'pink' and
88 that have brighter colors defined in the upper eight) and 'pink' and
88 'orange' to colors in 256-color xterm's default color cube. These
88 'orange' to colors in 256-color xterm's default color cube. These
89 defined colors may then be used as any of the pre-defined eight,
89 defined colors may then be used as any of the pre-defined eight,
90 including appending '_background' to set the background to that color.
90 including appending '_background' to set the background to that color.
91
91
92 By default, the color extension will use ANSI mode (or win32 mode on
92 By default, the color extension will use ANSI mode (or win32 mode on
93 Windows) if it detects a terminal. To override auto mode (to enable
93 Windows) if it detects a terminal. To override auto mode (to enable
94 terminfo mode, for example), set the following configuration option::
94 terminfo mode, for example), set the following configuration option::
95
95
96 [color]
96 [color]
97 mode = terminfo
97 mode = terminfo
98
98
99 Any value other than 'ansi', 'win32', 'terminfo', or 'auto' will
99 Any value other than 'ansi', 'win32', 'terminfo', or 'auto' will
100 disable color.
100 disable color.
101 '''
101 '''
102
102
103 import os
103 import os
104
104
105 from mercurial import commands, dispatch, extensions, ui as uimod, util
105 from mercurial import commands, dispatch, extensions, ui as uimod, util
106 from mercurial.i18n import _
106 from mercurial.i18n import _
107
107
108 testedwith = 'internal'
109
108 # start and stop parameters for effects
110 # start and stop parameters for effects
109 _effects = {'none': 0, 'black': 30, 'red': 31, 'green': 32, 'yellow': 33,
111 _effects = {'none': 0, 'black': 30, 'red': 31, 'green': 32, 'yellow': 33,
110 'blue': 34, 'magenta': 35, 'cyan': 36, 'white': 37, 'bold': 1,
112 'blue': 34, 'magenta': 35, 'cyan': 36, 'white': 37, 'bold': 1,
111 'italic': 3, 'underline': 4, 'inverse': 7,
113 'italic': 3, 'underline': 4, 'inverse': 7,
112 'black_background': 40, 'red_background': 41,
114 'black_background': 40, 'red_background': 41,
113 'green_background': 42, 'yellow_background': 43,
115 'green_background': 42, 'yellow_background': 43,
114 'blue_background': 44, 'purple_background': 45,
116 'blue_background': 44, 'purple_background': 45,
115 'cyan_background': 46, 'white_background': 47}
117 'cyan_background': 46, 'white_background': 47}
116
118
117 def _terminfosetup(ui, mode):
119 def _terminfosetup(ui, mode):
118 '''Initialize terminfo data and the terminal if we're in terminfo mode.'''
120 '''Initialize terminfo data and the terminal if we're in terminfo mode.'''
119
121
120 global _terminfo_params
122 global _terminfo_params
121 # If we failed to load curses, we go ahead and return.
123 # If we failed to load curses, we go ahead and return.
122 if not _terminfo_params:
124 if not _terminfo_params:
123 return
125 return
124 # Otherwise, see what the config file says.
126 # Otherwise, see what the config file says.
125 if mode not in ('auto', 'terminfo'):
127 if mode not in ('auto', 'terminfo'):
126 return
128 return
127
129
128 _terminfo_params.update((key[6:], (False, int(val)))
130 _terminfo_params.update((key[6:], (False, int(val)))
129 for key, val in ui.configitems('color')
131 for key, val in ui.configitems('color')
130 if key.startswith('color.'))
132 if key.startswith('color.'))
131
133
132 try:
134 try:
133 curses.setupterm()
135 curses.setupterm()
134 except curses.error, e:
136 except curses.error, e:
135 _terminfo_params = {}
137 _terminfo_params = {}
136 return
138 return
137
139
138 for key, (b, e) in _terminfo_params.items():
140 for key, (b, e) in _terminfo_params.items():
139 if not b:
141 if not b:
140 continue
142 continue
141 if not curses.tigetstr(e):
143 if not curses.tigetstr(e):
142 # Most terminals don't support dim, invis, etc, so don't be
144 # Most terminals don't support dim, invis, etc, so don't be
143 # noisy and use ui.debug().
145 # noisy and use ui.debug().
144 ui.debug("no terminfo entry for %s\n" % e)
146 ui.debug("no terminfo entry for %s\n" % e)
145 del _terminfo_params[key]
147 del _terminfo_params[key]
146 if not curses.tigetstr('setaf') or not curses.tigetstr('setab'):
148 if not curses.tigetstr('setaf') or not curses.tigetstr('setab'):
147 # Only warn about missing terminfo entries if we explicitly asked for
149 # Only warn about missing terminfo entries if we explicitly asked for
148 # terminfo mode.
150 # terminfo mode.
149 if mode == "terminfo":
151 if mode == "terminfo":
150 ui.warn(_("no terminfo entry for setab/setaf: reverting to "
152 ui.warn(_("no terminfo entry for setab/setaf: reverting to "
151 "ECMA-48 color\n"))
153 "ECMA-48 color\n"))
152 _terminfo_params = {}
154 _terminfo_params = {}
153
155
154 def _modesetup(ui, opts):
156 def _modesetup(ui, opts):
155 global _terminfo_params
157 global _terminfo_params
156
158
157 coloropt = opts['color']
159 coloropt = opts['color']
158 auto = coloropt == 'auto'
160 auto = coloropt == 'auto'
159 always = not auto and util.parsebool(coloropt)
161 always = not auto and util.parsebool(coloropt)
160 if not always and not auto:
162 if not always and not auto:
161 return None
163 return None
162
164
163 formatted = always or (os.environ.get('TERM') != 'dumb' and ui.formatted())
165 formatted = always or (os.environ.get('TERM') != 'dumb' and ui.formatted())
164
166
165 mode = ui.config('color', 'mode', 'auto')
167 mode = ui.config('color', 'mode', 'auto')
166 realmode = mode
168 realmode = mode
167 if mode == 'auto':
169 if mode == 'auto':
168 if os.name == 'nt' and 'TERM' not in os.environ:
170 if os.name == 'nt' and 'TERM' not in os.environ:
169 # looks like a cmd.exe console, use win32 API or nothing
171 # looks like a cmd.exe console, use win32 API or nothing
170 realmode = 'win32'
172 realmode = 'win32'
171 else:
173 else:
172 realmode = 'ansi'
174 realmode = 'ansi'
173
175
174 if realmode == 'win32':
176 if realmode == 'win32':
175 _terminfo_params = {}
177 _terminfo_params = {}
176 if not w32effects:
178 if not w32effects:
177 if mode == 'win32':
179 if mode == 'win32':
178 # only warn if color.mode is explicitly set to win32
180 # only warn if color.mode is explicitly set to win32
179 ui.warn(_('warning: failed to set color mode to %s\n') % mode)
181 ui.warn(_('warning: failed to set color mode to %s\n') % mode)
180 return None
182 return None
181 _effects.update(w32effects)
183 _effects.update(w32effects)
182 elif realmode == 'ansi':
184 elif realmode == 'ansi':
183 _terminfo_params = {}
185 _terminfo_params = {}
184 elif realmode == 'terminfo':
186 elif realmode == 'terminfo':
185 _terminfosetup(ui, mode)
187 _terminfosetup(ui, mode)
186 if not _terminfo_params:
188 if not _terminfo_params:
187 if mode == 'terminfo':
189 if mode == 'terminfo':
188 ## FIXME Shouldn't we return None in this case too?
190 ## FIXME Shouldn't we return None in this case too?
189 # only warn if color.mode is explicitly set to terminfo
191 # only warn if color.mode is explicitly set to terminfo
190 ui.warn(_('warning: failed to set color mode to %s\n') % mode)
192 ui.warn(_('warning: failed to set color mode to %s\n') % mode)
191 realmode = 'ansi'
193 realmode = 'ansi'
192 else:
194 else:
193 return None
195 return None
194
196
195 if always or (auto and formatted):
197 if always or (auto and formatted):
196 return realmode
198 return realmode
197 return None
199 return None
198
200
199 try:
201 try:
200 import curses
202 import curses
201 # Mapping from effect name to terminfo attribute name or color number.
203 # Mapping from effect name to terminfo attribute name or color number.
202 # This will also force-load the curses module.
204 # This will also force-load the curses module.
203 _terminfo_params = {'none': (True, 'sgr0'),
205 _terminfo_params = {'none': (True, 'sgr0'),
204 'standout': (True, 'smso'),
206 'standout': (True, 'smso'),
205 'underline': (True, 'smul'),
207 'underline': (True, 'smul'),
206 'reverse': (True, 'rev'),
208 'reverse': (True, 'rev'),
207 'inverse': (True, 'rev'),
209 'inverse': (True, 'rev'),
208 'blink': (True, 'blink'),
210 'blink': (True, 'blink'),
209 'dim': (True, 'dim'),
211 'dim': (True, 'dim'),
210 'bold': (True, 'bold'),
212 'bold': (True, 'bold'),
211 'invisible': (True, 'invis'),
213 'invisible': (True, 'invis'),
212 'italic': (True, 'sitm'),
214 'italic': (True, 'sitm'),
213 'black': (False, curses.COLOR_BLACK),
215 'black': (False, curses.COLOR_BLACK),
214 'red': (False, curses.COLOR_RED),
216 'red': (False, curses.COLOR_RED),
215 'green': (False, curses.COLOR_GREEN),
217 'green': (False, curses.COLOR_GREEN),
216 'yellow': (False, curses.COLOR_YELLOW),
218 'yellow': (False, curses.COLOR_YELLOW),
217 'blue': (False, curses.COLOR_BLUE),
219 'blue': (False, curses.COLOR_BLUE),
218 'magenta': (False, curses.COLOR_MAGENTA),
220 'magenta': (False, curses.COLOR_MAGENTA),
219 'cyan': (False, curses.COLOR_CYAN),
221 'cyan': (False, curses.COLOR_CYAN),
220 'white': (False, curses.COLOR_WHITE)}
222 'white': (False, curses.COLOR_WHITE)}
221 except ImportError:
223 except ImportError:
222 _terminfo_params = False
224 _terminfo_params = False
223
225
224 _styles = {'grep.match': 'red bold',
226 _styles = {'grep.match': 'red bold',
225 'bookmarks.current': 'green',
227 'bookmarks.current': 'green',
226 'branches.active': 'none',
228 'branches.active': 'none',
227 'branches.closed': 'black bold',
229 'branches.closed': 'black bold',
228 'branches.current': 'green',
230 'branches.current': 'green',
229 'branches.inactive': 'none',
231 'branches.inactive': 'none',
230 'diff.changed': 'white',
232 'diff.changed': 'white',
231 'diff.deleted': 'red',
233 'diff.deleted': 'red',
232 'diff.diffline': 'bold',
234 'diff.diffline': 'bold',
233 'diff.extended': 'cyan bold',
235 'diff.extended': 'cyan bold',
234 'diff.file_a': 'red bold',
236 'diff.file_a': 'red bold',
235 'diff.file_b': 'green bold',
237 'diff.file_b': 'green bold',
236 'diff.hunk': 'magenta',
238 'diff.hunk': 'magenta',
237 'diff.inserted': 'green',
239 'diff.inserted': 'green',
238 'diff.trailingwhitespace': 'bold red_background',
240 'diff.trailingwhitespace': 'bold red_background',
239 'diffstat.deleted': 'red',
241 'diffstat.deleted': 'red',
240 'diffstat.inserted': 'green',
242 'diffstat.inserted': 'green',
241 'ui.prompt': 'yellow',
243 'ui.prompt': 'yellow',
242 'log.changeset': 'yellow',
244 'log.changeset': 'yellow',
243 'resolve.resolved': 'green bold',
245 'resolve.resolved': 'green bold',
244 'resolve.unresolved': 'red bold',
246 'resolve.unresolved': 'red bold',
245 'status.added': 'green bold',
247 'status.added': 'green bold',
246 'status.clean': 'none',
248 'status.clean': 'none',
247 'status.copied': 'none',
249 'status.copied': 'none',
248 'status.deleted': 'cyan bold underline',
250 'status.deleted': 'cyan bold underline',
249 'status.ignored': 'black bold',
251 'status.ignored': 'black bold',
250 'status.modified': 'blue bold',
252 'status.modified': 'blue bold',
251 'status.removed': 'red bold',
253 'status.removed': 'red bold',
252 'status.unknown': 'magenta bold underline',
254 'status.unknown': 'magenta bold underline',
253 'tags.normal': 'green',
255 'tags.normal': 'green',
254 'tags.local': 'black bold'}
256 'tags.local': 'black bold'}
255
257
256
258
257 def _effect_str(effect):
259 def _effect_str(effect):
258 '''Helper function for render_effects().'''
260 '''Helper function for render_effects().'''
259
261
260 bg = False
262 bg = False
261 if effect.endswith('_background'):
263 if effect.endswith('_background'):
262 bg = True
264 bg = True
263 effect = effect[:-11]
265 effect = effect[:-11]
264 attr, val = _terminfo_params[effect]
266 attr, val = _terminfo_params[effect]
265 if attr:
267 if attr:
266 return curses.tigetstr(val)
268 return curses.tigetstr(val)
267 elif bg:
269 elif bg:
268 return curses.tparm(curses.tigetstr('setab'), val)
270 return curses.tparm(curses.tigetstr('setab'), val)
269 else:
271 else:
270 return curses.tparm(curses.tigetstr('setaf'), val)
272 return curses.tparm(curses.tigetstr('setaf'), val)
271
273
272 def render_effects(text, effects):
274 def render_effects(text, effects):
273 'Wrap text in commands to turn on each effect.'
275 'Wrap text in commands to turn on each effect.'
274 if not text:
276 if not text:
275 return text
277 return text
276 if not _terminfo_params:
278 if not _terminfo_params:
277 start = [str(_effects[e]) for e in ['none'] + effects.split()]
279 start = [str(_effects[e]) for e in ['none'] + effects.split()]
278 start = '\033[' + ';'.join(start) + 'm'
280 start = '\033[' + ';'.join(start) + 'm'
279 stop = '\033[' + str(_effects['none']) + 'm'
281 stop = '\033[' + str(_effects['none']) + 'm'
280 else:
282 else:
281 start = ''.join(_effect_str(effect)
283 start = ''.join(_effect_str(effect)
282 for effect in ['none'] + effects.split())
284 for effect in ['none'] + effects.split())
283 stop = _effect_str('none')
285 stop = _effect_str('none')
284 return ''.join([start, text, stop])
286 return ''.join([start, text, stop])
285
287
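# --- illustrative sketch (not part of color.py) ---------------------------
# The ANSI branch of render_effects() above amounts to wrapping the text in
# SGR escape sequences. The tiny table below is an assumed subset of the
# module's _effects mapping; 0 (reset), 1 (bold) and 31 (red) are the
# standard SGR numbers.
_demo_effects = {'none': 0, 'bold': 1, 'red': 31}

def _demo_render(text, effects):
    start = '\033[' + ';'.join(str(_demo_effects[e])
                               for e in ['none'] + effects.split()) + 'm'
    stop = '\033[' + str(_demo_effects['none']) + 'm'
    return start + text + stop

# _demo_render('grep hit', 'red bold') == '\x1b[0;31;1mgrep hit\x1b[0m'
# ---------------------------------------------------------------------------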
286 def extstyles():
288 def extstyles():
287 for name, ext in extensions.extensions():
289 for name, ext in extensions.extensions():
288 _styles.update(getattr(ext, 'colortable', {}))
290 _styles.update(getattr(ext, 'colortable', {}))
289
291
290 def configstyles(ui):
292 def configstyles(ui):
291 for status, cfgeffects in ui.configitems('color'):
293 for status, cfgeffects in ui.configitems('color'):
292 if '.' not in status or status.startswith('color.'):
294 if '.' not in status or status.startswith('color.'):
293 continue
295 continue
294 cfgeffects = ui.configlist('color', status)
296 cfgeffects = ui.configlist('color', status)
295 if cfgeffects:
297 if cfgeffects:
296 good = []
298 good = []
297 for e in cfgeffects:
299 for e in cfgeffects:
298 if not _terminfo_params and e in _effects:
300 if not _terminfo_params and e in _effects:
299 good.append(e)
301 good.append(e)
300 elif e in _terminfo_params or e[:-11] in _terminfo_params:
302 elif e in _terminfo_params or e[:-11] in _terminfo_params:
301 good.append(e)
303 good.append(e)
302 else:
304 else:
303 ui.warn(_("ignoring unknown color/effect %r "
305 ui.warn(_("ignoring unknown color/effect %r "
304 "(configured in color.%s)\n")
306 "(configured in color.%s)\n")
305 % (e, status))
307 % (e, status))
306 _styles[status] = ' '.join(good)
308 _styles[status] = ' '.join(good)
307
309
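# --- for reference (not part of color.py) ---------------------------------
# configstyles() above picks up user overrides from the [color] section of
# an hgrc file; the labels below exist in _styles, while the effects chosen
# are purely illustrative:
#
#     [color]
#     status.modified = yellow bold
#     diff.hunk = cyan
#
# Effects unknown to the active mode are warned about and dropped.
# ---------------------------------------------------------------------------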
308 class colorui(uimod.ui):
310 class colorui(uimod.ui):
309 def popbuffer(self, labeled=False):
311 def popbuffer(self, labeled=False):
310 if labeled:
312 if labeled:
311 return ''.join(self.label(a, label) for a, label
313 return ''.join(self.label(a, label) for a, label
312 in self._buffers.pop())
314 in self._buffers.pop())
313 return ''.join(a for a, label in self._buffers.pop())
315 return ''.join(a for a, label in self._buffers.pop())
314
316
315 _colormode = 'ansi'
317 _colormode = 'ansi'
316 def write(self, *args, **opts):
318 def write(self, *args, **opts):
317 label = opts.get('label', '')
319 label = opts.get('label', '')
318 if self._buffers:
320 if self._buffers:
319 self._buffers[-1].extend([(str(a), label) for a in args])
321 self._buffers[-1].extend([(str(a), label) for a in args])
320 elif self._colormode == 'win32':
322 elif self._colormode == 'win32':
321 for a in args:
323 for a in args:
322 win32print(a, super(colorui, self).write, **opts)
324 win32print(a, super(colorui, self).write, **opts)
323 else:
325 else:
324 return super(colorui, self).write(
326 return super(colorui, self).write(
325 *[self.label(str(a), label) for a in args], **opts)
327 *[self.label(str(a), label) for a in args], **opts)
326
328
327 def write_err(self, *args, **opts):
329 def write_err(self, *args, **opts):
328 label = opts.get('label', '')
330 label = opts.get('label', '')
329 if self._colormode == 'win32':
331 if self._colormode == 'win32':
330 for a in args:
332 for a in args:
331 win32print(a, super(colorui, self).write_err, **opts)
333 win32print(a, super(colorui, self).write_err, **opts)
332 else:
334 else:
333 return super(colorui, self).write_err(
335 return super(colorui, self).write_err(
334 *[self.label(str(a), label) for a in args], **opts)
336 *[self.label(str(a), label) for a in args], **opts)
335
337
336 def label(self, msg, label):
338 def label(self, msg, label):
337 effects = []
339 effects = []
338 for l in label.split():
340 for l in label.split():
339 s = _styles.get(l, '')
341 s = _styles.get(l, '')
340 if s:
342 if s:
341 effects.append(s)
343 effects.append(s)
342 effects = ' '.join(effects)
344 effects = ' '.join(effects)
343 if effects:
345 if effects:
344 return '\n'.join([render_effects(s, effects)
346 return '\n'.join([render_effects(s, effects)
345 for s in msg.split('\n')])
347 for s in msg.split('\n')])
346 return msg
348 return msg
347
349
348
350
349 def uisetup(ui):
351 def uisetup(ui):
350 global _terminfo_params
352 global _terminfo_params
351 if ui.plain():
353 if ui.plain():
352 return
354 return
353 def colorcmd(orig, ui_, opts, cmd, cmdfunc):
355 def colorcmd(orig, ui_, opts, cmd, cmdfunc):
354 mode = _modesetup(ui_, opts)
356 mode = _modesetup(ui_, opts)
355 if mode:
357 if mode:
356 colorui._colormode = mode
358 colorui._colormode = mode
357 if not issubclass(ui_.__class__, colorui):
359 if not issubclass(ui_.__class__, colorui):
358 colorui.__bases__ = (ui_.__class__,)
360 colorui.__bases__ = (ui_.__class__,)
359 ui_.__class__ = colorui
361 ui_.__class__ = colorui
360 extstyles()
362 extstyles()
361 configstyles(ui_)
363 configstyles(ui_)
362 return orig(ui_, opts, cmd, cmdfunc)
364 return orig(ui_, opts, cmd, cmdfunc)
363 extensions.wrapfunction(dispatch, '_runcommand', colorcmd)
365 extensions.wrapfunction(dispatch, '_runcommand', colorcmd)
364
366
365 def extsetup(ui):
367 def extsetup(ui):
366 commands.globalopts.append(
368 commands.globalopts.append(
367 ('', 'color', 'auto',
369 ('', 'color', 'auto',
368 # i18n: 'always', 'auto', and 'never' are keywords and should
370 # i18n: 'always', 'auto', and 'never' are keywords and should
369 # not be translated
371 # not be translated
370 _("when to colorize (boolean, always, auto, or never)"),
372 _("when to colorize (boolean, always, auto, or never)"),
371 _('TYPE')))
373 _('TYPE')))
372
374
373 if os.name != 'nt':
375 if os.name != 'nt':
374 w32effects = None
376 w32effects = None
375 else:
377 else:
376 import re, ctypes
378 import re, ctypes
377
379
378 _kernel32 = ctypes.windll.kernel32
380 _kernel32 = ctypes.windll.kernel32
379
381
380 _WORD = ctypes.c_ushort
382 _WORD = ctypes.c_ushort
381
383
382 _INVALID_HANDLE_VALUE = -1
384 _INVALID_HANDLE_VALUE = -1
383
385
384 class _COORD(ctypes.Structure):
386 class _COORD(ctypes.Structure):
385 _fields_ = [('X', ctypes.c_short),
387 _fields_ = [('X', ctypes.c_short),
386 ('Y', ctypes.c_short)]
388 ('Y', ctypes.c_short)]
387
389
388 class _SMALL_RECT(ctypes.Structure):
390 class _SMALL_RECT(ctypes.Structure):
389 _fields_ = [('Left', ctypes.c_short),
391 _fields_ = [('Left', ctypes.c_short),
390 ('Top', ctypes.c_short),
392 ('Top', ctypes.c_short),
391 ('Right', ctypes.c_short),
393 ('Right', ctypes.c_short),
392 ('Bottom', ctypes.c_short)]
394 ('Bottom', ctypes.c_short)]
393
395
394 class _CONSOLE_SCREEN_BUFFER_INFO(ctypes.Structure):
396 class _CONSOLE_SCREEN_BUFFER_INFO(ctypes.Structure):
395 _fields_ = [('dwSize', _COORD),
397 _fields_ = [('dwSize', _COORD),
396 ('dwCursorPosition', _COORD),
398 ('dwCursorPosition', _COORD),
397 ('wAttributes', _WORD),
399 ('wAttributes', _WORD),
398 ('srWindow', _SMALL_RECT),
400 ('srWindow', _SMALL_RECT),
399 ('dwMaximumWindowSize', _COORD)]
401 ('dwMaximumWindowSize', _COORD)]
400
402
401 _STD_OUTPUT_HANDLE = 0xfffffff5L # (DWORD)-11
403 _STD_OUTPUT_HANDLE = 0xfffffff5L # (DWORD)-11
402 _STD_ERROR_HANDLE = 0xfffffff4L # (DWORD)-12
404 _STD_ERROR_HANDLE = 0xfffffff4L # (DWORD)-12
403
405
404 _FOREGROUND_BLUE = 0x0001
406 _FOREGROUND_BLUE = 0x0001
405 _FOREGROUND_GREEN = 0x0002
407 _FOREGROUND_GREEN = 0x0002
406 _FOREGROUND_RED = 0x0004
408 _FOREGROUND_RED = 0x0004
407 _FOREGROUND_INTENSITY = 0x0008
409 _FOREGROUND_INTENSITY = 0x0008
408
410
409 _BACKGROUND_BLUE = 0x0010
411 _BACKGROUND_BLUE = 0x0010
410 _BACKGROUND_GREEN = 0x0020
412 _BACKGROUND_GREEN = 0x0020
411 _BACKGROUND_RED = 0x0040
413 _BACKGROUND_RED = 0x0040
412 _BACKGROUND_INTENSITY = 0x0080
414 _BACKGROUND_INTENSITY = 0x0080
413
415
414 _COMMON_LVB_REVERSE_VIDEO = 0x4000
416 _COMMON_LVB_REVERSE_VIDEO = 0x4000
415 _COMMON_LVB_UNDERSCORE = 0x8000
417 _COMMON_LVB_UNDERSCORE = 0x8000
416
418
417 # http://msdn.microsoft.com/en-us/library/ms682088%28VS.85%29.aspx
419 # http://msdn.microsoft.com/en-us/library/ms682088%28VS.85%29.aspx
418 w32effects = {
420 w32effects = {
419 'none': -1,
421 'none': -1,
420 'black': 0,
422 'black': 0,
421 'red': _FOREGROUND_RED,
423 'red': _FOREGROUND_RED,
422 'green': _FOREGROUND_GREEN,
424 'green': _FOREGROUND_GREEN,
423 'yellow': _FOREGROUND_RED | _FOREGROUND_GREEN,
425 'yellow': _FOREGROUND_RED | _FOREGROUND_GREEN,
424 'blue': _FOREGROUND_BLUE,
426 'blue': _FOREGROUND_BLUE,
425 'magenta': _FOREGROUND_BLUE | _FOREGROUND_RED,
427 'magenta': _FOREGROUND_BLUE | _FOREGROUND_RED,
426 'cyan': _FOREGROUND_BLUE | _FOREGROUND_GREEN,
428 'cyan': _FOREGROUND_BLUE | _FOREGROUND_GREEN,
427 'white': _FOREGROUND_RED | _FOREGROUND_GREEN | _FOREGROUND_BLUE,
429 'white': _FOREGROUND_RED | _FOREGROUND_GREEN | _FOREGROUND_BLUE,
428 'bold': _FOREGROUND_INTENSITY,
430 'bold': _FOREGROUND_INTENSITY,
429 'black_background': 0x100, # unused value > 0x0f
431 'black_background': 0x100, # unused value > 0x0f
430 'red_background': _BACKGROUND_RED,
432 'red_background': _BACKGROUND_RED,
431 'green_background': _BACKGROUND_GREEN,
433 'green_background': _BACKGROUND_GREEN,
432 'yellow_background': _BACKGROUND_RED | _BACKGROUND_GREEN,
434 'yellow_background': _BACKGROUND_RED | _BACKGROUND_GREEN,
433 'blue_background': _BACKGROUND_BLUE,
435 'blue_background': _BACKGROUND_BLUE,
434 'purple_background': _BACKGROUND_BLUE | _BACKGROUND_RED,
436 'purple_background': _BACKGROUND_BLUE | _BACKGROUND_RED,
435 'cyan_background': _BACKGROUND_BLUE | _BACKGROUND_GREEN,
437 'cyan_background': _BACKGROUND_BLUE | _BACKGROUND_GREEN,
436 'white_background': (_BACKGROUND_RED | _BACKGROUND_GREEN |
438 'white_background': (_BACKGROUND_RED | _BACKGROUND_GREEN |
437 _BACKGROUND_BLUE),
439 _BACKGROUND_BLUE),
438 'bold_background': _BACKGROUND_INTENSITY,
440 'bold_background': _BACKGROUND_INTENSITY,
439 'underline': _COMMON_LVB_UNDERSCORE, # double-byte charsets only
441 'underline': _COMMON_LVB_UNDERSCORE, # double-byte charsets only
440 'inverse': _COMMON_LVB_REVERSE_VIDEO, # double-byte charsets only
442 'inverse': _COMMON_LVB_REVERSE_VIDEO, # double-byte charsets only
441 }
443 }
442
444
443 passthrough = set([_FOREGROUND_INTENSITY,
445 passthrough = set([_FOREGROUND_INTENSITY,
444 _BACKGROUND_INTENSITY,
446 _BACKGROUND_INTENSITY,
445 _COMMON_LVB_UNDERSCORE,
447 _COMMON_LVB_UNDERSCORE,
446 _COMMON_LVB_REVERSE_VIDEO])
448 _COMMON_LVB_REVERSE_VIDEO])
447
449
448 stdout = _kernel32.GetStdHandle(
450 stdout = _kernel32.GetStdHandle(
449 _STD_OUTPUT_HANDLE) # don't close the handle returned
451 _STD_OUTPUT_HANDLE) # don't close the handle returned
450 if stdout is None or stdout == _INVALID_HANDLE_VALUE:
452 if stdout is None or stdout == _INVALID_HANDLE_VALUE:
451 w32effects = None
453 w32effects = None
452 else:
454 else:
453 csbi = _CONSOLE_SCREEN_BUFFER_INFO()
455 csbi = _CONSOLE_SCREEN_BUFFER_INFO()
454 if not _kernel32.GetConsoleScreenBufferInfo(
456 if not _kernel32.GetConsoleScreenBufferInfo(
455 stdout, ctypes.byref(csbi)):
457 stdout, ctypes.byref(csbi)):
456 # stdout may not support GetConsoleScreenBufferInfo()
458 # stdout may not support GetConsoleScreenBufferInfo()
457 # when called from subprocess or redirected
459 # when called from subprocess or redirected
458 w32effects = None
460 w32effects = None
459 else:
461 else:
460 origattr = csbi.wAttributes
462 origattr = csbi.wAttributes
461 ansire = re.compile('\033\[([^m]*)m([^\033]*)(.*)',
463 ansire = re.compile('\033\[([^m]*)m([^\033]*)(.*)',
462 re.MULTILINE | re.DOTALL)
464 re.MULTILINE | re.DOTALL)
463
465
464 def win32print(text, orig, **opts):
466 def win32print(text, orig, **opts):
465 label = opts.get('label', '')
467 label = opts.get('label', '')
466 attr = origattr
468 attr = origattr
467
469
468 def mapcolor(val, attr):
470 def mapcolor(val, attr):
469 if val == -1:
471 if val == -1:
470 return origattr
472 return origattr
471 elif val in passthrough:
473 elif val in passthrough:
472 return attr | val
474 return attr | val
473 elif val > 0x0f:
475 elif val > 0x0f:
474 return (val & 0x70) | (attr & 0x8f)
476 return (val & 0x70) | (attr & 0x8f)
475 else:
477 else:
476 return (val & 0x07) | (attr & 0xf8)
478 return (val & 0x07) | (attr & 0xf8)
477
479
478 # determine console attributes based on labels
480 # determine console attributes based on labels
479 for l in label.split():
481 for l in label.split():
480 style = _styles.get(l, '')
482 style = _styles.get(l, '')
481 for effect in style.split():
483 for effect in style.split():
482 attr = mapcolor(w32effects[effect], attr)
484 attr = mapcolor(w32effects[effect], attr)
483
485
484 # hack to ensure regexp finds data
486 # hack to ensure regexp finds data
485 if not text.startswith('\033['):
487 if not text.startswith('\033['):
486 text = '\033[m' + text
488 text = '\033[m' + text
487
489
488 # Look for ANSI-like codes embedded in text
490 # Look for ANSI-like codes embedded in text
489 m = re.match(ansire, text)
491 m = re.match(ansire, text)
490
492
491 try:
493 try:
492 while m:
494 while m:
493 for sattr in m.group(1).split(';'):
495 for sattr in m.group(1).split(';'):
494 if sattr:
496 if sattr:
495 attr = mapcolor(int(sattr), attr)
497 attr = mapcolor(int(sattr), attr)
496 _kernel32.SetConsoleTextAttribute(stdout, attr)
498 _kernel32.SetConsoleTextAttribute(stdout, attr)
497 orig(m.group(2), **opts)
499 orig(m.group(2), **opts)
498 m = re.match(ansire, m.group(3))
500 m = re.match(ansire, m.group(3))
499 finally:
501 finally:
500 # Explicitly reset original attributes
502 # Explicitly reset original attributes
501 _kernel32.SetConsoleTextAttribute(stdout, origattr)
503 _kernel32.SetConsoleTextAttribute(stdout, origattr)
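# --- illustrative sketch (not part of color.py) ---------------------------
# mapcolor() above relies on the Windows console convention that bits 0-3 of
# the attribute word hold the foreground colour and bits 4-7 the background,
# so each update preserves the other half.
def _demo_mapcolor(val, attr):
    if val > 0x0f:                            # background colour value
        return (val & 0x70) | (attr & 0x8f)
    return (val & 0x07) | (attr & 0xf8)       # foreground colour value

_attr = 0x07                                  # default: white on black
_attr = _demo_mapcolor(0x0004, _attr)         # _FOREGROUND_RED
_attr = _demo_mapcolor(0x0020, _attr)         # _BACKGROUND_GREEN
# _attr is now 0x24: red text on a green background
# ---------------------------------------------------------------------------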
@@ -1,368 +1,370 b''
1 # convert.py Foreign SCM converter
1 # convert.py Foreign SCM converter
2 #
2 #
3 # Copyright 2005-2007 Matt Mackall <mpm@selenic.com>
3 # Copyright 2005-2007 Matt Mackall <mpm@selenic.com>
4 #
4 #
5 # This software may be used and distributed according to the terms of the
5 # This software may be used and distributed according to the terms of the
6 # GNU General Public License version 2 or any later version.
6 # GNU General Public License version 2 or any later version.
7
7
8 '''import revisions from foreign VCS repositories into Mercurial'''
8 '''import revisions from foreign VCS repositories into Mercurial'''
9
9
10 import convcmd
10 import convcmd
11 import cvsps
11 import cvsps
12 import subversion
12 import subversion
13 from mercurial import commands, templatekw
13 from mercurial import commands, templatekw
14 from mercurial.i18n import _
14 from mercurial.i18n import _
15
15
16 testedwith = 'internal'
17
16 # Commands definition was moved elsewhere to ease demandload job.
18 # Commands definition was moved elsewhere to ease demandload job.
17
19
18 def convert(ui, src, dest=None, revmapfile=None, **opts):
20 def convert(ui, src, dest=None, revmapfile=None, **opts):
19 """convert a foreign SCM repository to a Mercurial one.
21 """convert a foreign SCM repository to a Mercurial one.
20
22
21 Accepted source formats [identifiers]:
23 Accepted source formats [identifiers]:
22
24
23 - Mercurial [hg]
25 - Mercurial [hg]
24 - CVS [cvs]
26 - CVS [cvs]
25 - Darcs [darcs]
27 - Darcs [darcs]
26 - git [git]
28 - git [git]
27 - Subversion [svn]
29 - Subversion [svn]
28 - Monotone [mtn]
30 - Monotone [mtn]
29 - GNU Arch [gnuarch]
31 - GNU Arch [gnuarch]
30 - Bazaar [bzr]
32 - Bazaar [bzr]
31 - Perforce [p4]
33 - Perforce [p4]
32
34
33 Accepted destination formats [identifiers]:
35 Accepted destination formats [identifiers]:
34
36
35 - Mercurial [hg]
37 - Mercurial [hg]
36 - Subversion [svn] (history on branches is not preserved)
38 - Subversion [svn] (history on branches is not preserved)
37
39
38 If no revision is given, all revisions will be converted.
40 If no revision is given, all revisions will be converted.
39 Otherwise, convert will only import up to the named revision
41 Otherwise, convert will only import up to the named revision
40 (given in a format understood by the source).
42 (given in a format understood by the source).
41
43
42 If no destination directory name is specified, it defaults to the
44 If no destination directory name is specified, it defaults to the
43 basename of the source with ``-hg`` appended. If the destination
45 basename of the source with ``-hg`` appended. If the destination
44 repository doesn't exist, it will be created.
46 repository doesn't exist, it will be created.
45
47
46 By default, all sources except Mercurial will use --branchsort.
48 By default, all sources except Mercurial will use --branchsort.
47 Mercurial uses --sourcesort to preserve original revision numbers
49 Mercurial uses --sourcesort to preserve original revision numbers
48 order. Sort modes have the following effects:
50 order. Sort modes have the following effects:
49
51
50 --branchsort convert from parent to child revision when possible,
52 --branchsort convert from parent to child revision when possible,
51 which means branches are usually converted one after
53 which means branches are usually converted one after
52 the other. It generates more compact repositories.
54 the other. It generates more compact repositories.
53
55
54 --datesort sort revisions by date. Converted repositories have
56 --datesort sort revisions by date. Converted repositories have
55 good-looking changelogs but are often an order of
57 good-looking changelogs but are often an order of
56 magnitude larger than the same ones generated by
58 magnitude larger than the same ones generated by
57 --branchsort.
59 --branchsort.
58
60
59 --sourcesort try to preserve source revisions order, only
61 --sourcesort try to preserve source revisions order, only
60 supported by Mercurial sources.
62 supported by Mercurial sources.
61
63
62 If ``REVMAP`` isn't given, it will be put in a default location
64 If ``REVMAP`` isn't given, it will be put in a default location
63 (``<dest>/.hg/shamap`` by default). The ``REVMAP`` is a simple
65 (``<dest>/.hg/shamap`` by default). The ``REVMAP`` is a simple
64 text file that maps each source commit ID to the destination ID
66 text file that maps each source commit ID to the destination ID
65 for that revision, like so::
67 for that revision, like so::
66
68
67 <source ID> <destination ID>
69 <source ID> <destination ID>
68
70
69 If the file doesn't exist, it's automatically created. It's
71 If the file doesn't exist, it's automatically created. It's
70 updated on each commit copied, so :hg:`convert` can be interrupted
72 updated on each commit copied, so :hg:`convert` can be interrupted
71 and can be run repeatedly to copy new commits.
73 and can be run repeatedly to copy new commits.
72
74
73 The authormap is a simple text file that maps each source commit
75 The authormap is a simple text file that maps each source commit
74 author to a destination commit author. It is handy for source SCMs
76 author to a destination commit author. It is handy for source SCMs
75 that use unix logins to identify authors (e.g. CVS). Use one line
77 that use unix logins to identify authors (e.g. CVS). Use one line
76 per author mapping; the line format is::
78 per author mapping; the line format is::
77
79
78 source author = destination author
80 source author = destination author
79
81
80 Empty lines and lines starting with a ``#`` are ignored.
82 Empty lines and lines starting with a ``#`` are ignored.
81
83
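For illustration, an authormap for two hypothetical CVS logins could read::

  jdoe = John Doe <john.doe@example.com>
  msmith = Mary Smith <mary.smith@example.com>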
82 The filemap is a file that allows filtering and remapping of files
84 The filemap is a file that allows filtering and remapping of files
83 and directories. Each line can contain one of the following
85 and directories. Each line can contain one of the following
84 directives::
86 directives::
85
87
86 include path/to/file-or-dir
88 include path/to/file-or-dir
87
89
88 exclude path/to/file-or-dir
90 exclude path/to/file-or-dir
89
91
90 rename path/to/source path/to/destination
92 rename path/to/source path/to/destination
91
93
92 Comment lines start with ``#``. A specified path matches if it
94 Comment lines start with ``#``. A specified path matches if it
93 equals the full relative name of a file or one of its parent
95 equals the full relative name of a file or one of its parent
94 directories. The ``include`` or ``exclude`` directive with the
96 directories. The ``include`` or ``exclude`` directive with the
95 longest matching path applies, so line order does not matter.
97 longest matching path applies, so line order does not matter.
96
98
97 The ``include`` directive causes a file, or all files under a
99 The ``include`` directive causes a file, or all files under a
98 directory, to be included in the destination repository, and the
100 directory, to be included in the destination repository, and the
99 exclusion of all other files and directories not explicitly
101 exclusion of all other files and directories not explicitly
100 included. The ``exclude`` directive causes files or directories to
102 included. The ``exclude`` directive causes files or directories to
101 be omitted. The ``rename`` directive renames a file or directory if
103 be omitted. The ``rename`` directive renames a file or directory if
102 it is converted. To rename from a subdirectory into the root of
104 it is converted. To rename from a subdirectory into the root of
103 the repository, use ``.`` as the path to rename to.
105 the repository, use ``.`` as the path to rename to.
104
106
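As an illustration, a filemap that keeps only a library directory, drops its
tests, and hoists it to the repository root might read (the paths are
hypothetical)::

  include lib
  exclude lib/tests
  rename lib .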
105 The splicemap is a file that allows insertion of synthetic
107 The splicemap is a file that allows insertion of synthetic
106 history, letting you specify the parents of a revision. This is
108 history, letting you specify the parents of a revision. This is
107 useful if you want to e.g. give a Subversion merge two parents, or
109 useful if you want to e.g. give a Subversion merge two parents, or
108 graft two disconnected series of history together. Each entry
110 graft two disconnected series of history together. Each entry
109 contains a key, followed by a space, followed by one or two
111 contains a key, followed by a space, followed by one or two
110 comma-separated values::
112 comma-separated values::
111
113
112 key parent1, parent2
114 key parent1, parent2
113
115
114 The key is the revision ID in the source
116 The key is the revision ID in the source
115 revision control system whose parents should be modified (same
117 revision control system whose parents should be modified (same
116 format as a key in .hg/shamap). The values are the revision IDs
118 format as a key in .hg/shamap). The values are the revision IDs
117 (in either the source or destination revision control system) that
119 (in either the source or destination revision control system) that
118 should be used as the new parents for that node. For example, if
120 should be used as the new parents for that node. For example, if
119 you have merged "release-1.0" into "trunk", then you should
121 you have merged "release-1.0" into "trunk", then you should
120 specify the revision on "trunk" as the first parent and the one on
122 specify the revision on "trunk" as the first parent and the one on
121 the "release-1.0" branch as the second.
123 the "release-1.0" branch as the second.
122
124
123 The branchmap is a file that allows you to rename a branch when it is
125 The branchmap is a file that allows you to rename a branch when it is
124 being brought in from whatever external repository. When used in
126 being brought in from whatever external repository. When used in
125 conjunction with a splicemap, it allows for a powerful combination
127 conjunction with a splicemap, it allows for a powerful combination
126 to help fix even the most badly mismanaged repositories and turn them
128 to help fix even the most badly mismanaged repositories and turn them
127 into nicely structured Mercurial repositories. The branchmap contains
129 into nicely structured Mercurial repositories. The branchmap contains
128 lines of the form::
130 lines of the form::
129
131
130 original_branch_name new_branch_name
132 original_branch_name new_branch_name
131
133
132 where "original_branch_name" is the name of the branch in the
134 where "original_branch_name" is the name of the branch in the
133 source repository, and "new_branch_name" is the name of the branch
135 source repository, and "new_branch_name" is the name of the branch
134 in the destination repository. No whitespace is allowed in the
136 in the destination repository. No whitespace is allowed in the
135 branch names. This can be used to (for instance) move code in one
137 branch names. This can be used to (for instance) move code in one
136 repository from "default" to a named branch.
138 repository from "default" to a named branch.
137
139
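For example, the rename mentioned above would be a one-line branchmap (the
target branch name is only illustrative)::

  default stable-imports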
138 Mercurial Source
140 Mercurial Source
139 ''''''''''''''''
141 ''''''''''''''''
140
142
141 The Mercurial source recognizes the following configuration
143 The Mercurial source recognizes the following configuration
142 options, which you can set on the command line with ``--config``:
144 options, which you can set on the command line with ``--config``:
143
145
144 :convert.hg.ignoreerrors: ignore integrity errors when reading.
146 :convert.hg.ignoreerrors: ignore integrity errors when reading.
145 Use it to fix Mercurial repositories with missing revlogs, by
147 Use it to fix Mercurial repositories with missing revlogs, by
146 converting from and to Mercurial. Default is False.
148 converting from and to Mercurial. Default is False.
147
149
148 :convert.hg.saverev: store original revision ID in changeset
150 :convert.hg.saverev: store original revision ID in changeset
149 (forces target IDs to change). It takes a boolean argument and
151 (forces target IDs to change). It takes a boolean argument and
150 defaults to False.
152 defaults to False.
151
153
152 :convert.hg.startrev: convert start revision and its descendants.
154 :convert.hg.startrev: convert start revision and its descendants.
153 It takes a hg revision identifier and defaults to 0.
155 It takes a hg revision identifier and defaults to 0.
154
156
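These are ordinary configuration keys, so they can be supplied inline; for
instance (the repository names are hypothetical)::

  hg convert --config convert.hg.ignoreerrors=True damaged-repo repaired-repo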
155 CVS Source
157 CVS Source
156 ''''''''''
158 ''''''''''
157
159
158 CVS source will use a sandbox (i.e. a checked-out copy) from CVS
160 CVS source will use a sandbox (i.e. a checked-out copy) from CVS
159 to indicate the starting point of what will be converted. Direct
161 to indicate the starting point of what will be converted. Direct
160 access to the repository files is not needed, unless of course the
162 access to the repository files is not needed, unless of course the
161 repository is ``:local:``. The conversion uses the top level
163 repository is ``:local:``. The conversion uses the top level
162 directory in the sandbox to find the CVS repository, and then uses
164 directory in the sandbox to find the CVS repository, and then uses
163 CVS rlog commands to find files to convert. This means that unless
165 CVS rlog commands to find files to convert. This means that unless
164 a filemap is given, all files under the starting directory will be
166 a filemap is given, all files under the starting directory will be
165 converted, and that any directory reorganization in the CVS
167 converted, and that any directory reorganization in the CVS
166 sandbox is ignored.
168 sandbox is ignored.
167
169
168 The following options can be used with ``--config``:
170 The following options can be used with ``--config``:
169
171
170 :convert.cvsps.cache: Set to False to disable remote log caching,
172 :convert.cvsps.cache: Set to False to disable remote log caching,
171 for testing and debugging purposes. Default is True.
173 for testing and debugging purposes. Default is True.
172
174
173 :convert.cvsps.fuzz: Specify the maximum time (in seconds) that is
175 :convert.cvsps.fuzz: Specify the maximum time (in seconds) that is
174 allowed between commits with identical user and log message in
176 allowed between commits with identical user and log message in
175 a single changeset. When very large files were checked in as
177 a single changeset. When very large files were checked in as
176 part of a changeset then the default may not be long enough.
178 part of a changeset then the default may not be long enough.
177 The default is 60.
179 The default is 60.
178
180
179 :convert.cvsps.mergeto: Specify a regular expression against which
181 :convert.cvsps.mergeto: Specify a regular expression against which
180 commit log messages are matched. If a match occurs, then the
182 commit log messages are matched. If a match occurs, then the
181 conversion process will insert a dummy revision merging the
183 conversion process will insert a dummy revision merging the
182 branch on which this log message occurs to the branch
184 branch on which this log message occurs to the branch
183 indicated in the regex. Default is ``{{mergetobranch
185 indicated in the regex. Default is ``{{mergetobranch
184 ([-\\w]+)}}``
186 ([-\\w]+)}}``
185
187
186 :convert.cvsps.mergefrom: Specify a regular expression against which
188 :convert.cvsps.mergefrom: Specify a regular expression against which
187 commit log messages are matched. If a match occurs, then the
189 commit log messages are matched. If a match occurs, then the
188 conversion process will add the most recent revision on the
190 conversion process will add the most recent revision on the
189 branch indicated in the regex as the second parent of the
191 branch indicated in the regex as the second parent of the
190 changeset. Default is ``{{mergefrombranch ([-\\w]+)}}``
192 changeset. Default is ``{{mergefrombranch ([-\\w]+)}}``
191
193
192 :hook.cvslog: Specify a Python function to be called at the end of
194 :hook.cvslog: Specify a Python function to be called at the end of
193 gathering the CVS log. The function is passed a list with the
195 gathering the CVS log. The function is passed a list with the
194 log entries, and can modify the entries in-place, or add or
196 log entries, and can modify the entries in-place, or add or
195 delete them.
197 delete them.
196
198
197 :hook.cvschangesets: Specify a Python function to be called after
199 :hook.cvschangesets: Specify a Python function to be called after
198 the changesets are calculated from the CVS log. The
200 the changesets are calculated from the CVS log. The
199 function is passed a list with the changeset entries, and can
201 function is passed a list with the changeset entries, and can
200 modify the changesets in-place, or add or delete them.
202 modify the changesets in-place, or add or delete them.
201
203
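A minimal sketch of such a hook, assuming only what is stated above (the
function receives the list of gathered entries and may edit it in place; the
single-argument signature and the entry attributes are assumptions not
documented here)::

  def cvslog_hook(entries):
      # illustration only: cap the number of entries, editing in place
      del entries[1000:]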
202 An additional "debugcvsps" Mercurial command allows the builtin
204 An additional "debugcvsps" Mercurial command allows the builtin
203 changeset merging code to be run without doing a conversion. Its
205 changeset merging code to be run without doing a conversion. Its
204 parameters and output are similar to that of cvsps 2.1. Please see
206 parameters and output are similar to that of cvsps 2.1. Please see
205 the command help for more details.
207 the command help for more details.
206
208
207 Subversion Source
209 Subversion Source
208 '''''''''''''''''
210 '''''''''''''''''
209
211
210 Subversion source detects classical trunk/branches/tags layouts.
212 Subversion source detects classical trunk/branches/tags layouts.
211 By default, the supplied ``svn://repo/path/`` source URL is
213 By default, the supplied ``svn://repo/path/`` source URL is
212 converted as a single branch. If ``svn://repo/path/trunk`` exists
214 converted as a single branch. If ``svn://repo/path/trunk`` exists
213 it replaces the default branch. If ``svn://repo/path/branches``
215 it replaces the default branch. If ``svn://repo/path/branches``
214 exists, its subdirectories are listed as possible branches. If
216 exists, its subdirectories are listed as possible branches. If
215 ``svn://repo/path/tags`` exists, it is searched for tags referencing
217 ``svn://repo/path/tags`` exists, it is searched for tags referencing
216 converted branches. Default ``trunk``, ``branches`` and ``tags``
218 converted branches. Default ``trunk``, ``branches`` and ``tags``
217 values can be overridden with following options. Set them to paths
219 values can be overridden with following options. Set them to paths
218 relative to the source URL, or leave them blank to disable auto
220 relative to the source URL, or leave them blank to disable auto
219 detection.
221 detection.
220
222
221 The following options can be set with ``--config``:
223 The following options can be set with ``--config``:
222
224
223 :convert.svn.branches: specify the directory containing branches.
225 :convert.svn.branches: specify the directory containing branches.
224 The default is ``branches``.
226 The default is ``branches``.
225
227
226 :convert.svn.tags: specify the directory containing tags. The
228 :convert.svn.tags: specify the directory containing tags. The
227 default is ``tags``.
229 default is ``tags``.
228
230
229 :convert.svn.trunk: specify the name of the trunk branch. The
231 :convert.svn.trunk: specify the name of the trunk branch. The
230 default is ``trunk``.
232 default is ``trunk``.
231
233
232 Source history can be retrieved starting at a specific revision,
234 Source history can be retrieved starting at a specific revision,
233 instead of being integrally converted. Only single branch
235 instead of being integrally converted. Only single branch
234 conversions are supported.
236 conversions are supported.
235
237
236 :convert.svn.startrev: specify start Subversion revision number.
238 :convert.svn.startrev: specify start Subversion revision number.
237 The default is 0.
239 The default is 0.
238
240
239 Perforce Source
241 Perforce Source
240 '''''''''''''''
242 '''''''''''''''
241
243
242 The Perforce (P4) importer can be given a p4 depot path or a
244 The Perforce (P4) importer can be given a p4 depot path or a
243 client specification as source. It will convert all files in the
245 client specification as source. It will convert all files in the
244 source to a flat Mercurial repository, ignoring labels, branches
246 source to a flat Mercurial repository, ignoring labels, branches
245 and integrations. Note that when a depot path is given you should
247 and integrations. Note that when a depot path is given you should
246 usually specify a target directory, because otherwise the
248 usually specify a target directory, because otherwise the
247 target may be named ``...-hg``.
249 target may be named ``...-hg``.
248
250
249 It is possible to limit the amount of source history to be
251 It is possible to limit the amount of source history to be
250 converted by specifying an initial Perforce revision:
252 converted by specifying an initial Perforce revision:
251
253
252 :convert.p4.startrev: specify initial Perforce revision (a
254 :convert.p4.startrev: specify initial Perforce revision (a
253 Perforce changelist number).
255 Perforce changelist number).
254
256
255 Mercurial Destination
257 Mercurial Destination
256 '''''''''''''''''''''
258 '''''''''''''''''''''
257
259
258 The following options are supported:
260 The following options are supported:
259
261
260 :convert.hg.clonebranches: dispatch source branches in separate
262 :convert.hg.clonebranches: dispatch source branches in separate
261 clones. The default is False.
263 clones. The default is False.
262
264
263 :convert.hg.tagsbranch: branch name for tag revisions, defaults to
265 :convert.hg.tagsbranch: branch name for tag revisions, defaults to
264 ``default``.
266 ``default``.
265
267
266 :convert.hg.usebranchnames: preserve branch names. The default is
268 :convert.hg.usebranchnames: preserve branch names. The default is
267 True.
269 True.
268 """
270 """
269 return convcmd.convert(ui, src, dest, revmapfile, **opts)
271 return convcmd.convert(ui, src, dest, revmapfile, **opts)
270
272
271 def debugsvnlog(ui, **opts):
273 def debugsvnlog(ui, **opts):
272 return subversion.debugsvnlog(ui, **opts)
274 return subversion.debugsvnlog(ui, **opts)
273
275
274 def debugcvsps(ui, *args, **opts):
276 def debugcvsps(ui, *args, **opts):
275 '''create changeset information from CVS
277 '''create changeset information from CVS
276
278
277 This command is intended as a debugging tool for the CVS to
279 This command is intended as a debugging tool for the CVS to
278 Mercurial converter, and can be used as a direct replacement for
280 Mercurial converter, and can be used as a direct replacement for
279 cvsps.
281 cvsps.
280
282
281 Hg debugcvsps reads the CVS rlog for the current directory (or any
283 Hg debugcvsps reads the CVS rlog for the current directory (or any
282 named directory) in the CVS repository, and converts the log to a
284 named directory) in the CVS repository, and converts the log to a
283 series of changesets based on matching commit log entries and
285 series of changesets based on matching commit log entries and
284 dates.'''
286 dates.'''
285 return cvsps.debugcvsps(ui, *args, **opts)
287 return cvsps.debugcvsps(ui, *args, **opts)
286
288
287 commands.norepo += " convert debugsvnlog debugcvsps"
289 commands.norepo += " convert debugsvnlog debugcvsps"
288
290
289 cmdtable = {
291 cmdtable = {
290 "convert":
292 "convert":
291 (convert,
293 (convert,
292 [('', 'authors', '',
294 [('', 'authors', '',
293 _('username mapping filename (DEPRECATED, use --authormap instead)'),
295 _('username mapping filename (DEPRECATED, use --authormap instead)'),
294 _('FILE')),
296 _('FILE')),
295 ('s', 'source-type', '',
297 ('s', 'source-type', '',
296 _('source repository type'), _('TYPE')),
298 _('source repository type'), _('TYPE')),
297 ('d', 'dest-type', '',
299 ('d', 'dest-type', '',
298 _('destination repository type'), _('TYPE')),
300 _('destination repository type'), _('TYPE')),
299 ('r', 'rev', '',
301 ('r', 'rev', '',
300 _('import up to target revision REV'), _('REV')),
302 _('import up to target revision REV'), _('REV')),
301 ('A', 'authormap', '',
303 ('A', 'authormap', '',
302 _('remap usernames using this file'), _('FILE')),
304 _('remap usernames using this file'), _('FILE')),
303 ('', 'filemap', '',
305 ('', 'filemap', '',
304 _('remap file names using contents of file'), _('FILE')),
306 _('remap file names using contents of file'), _('FILE')),
305 ('', 'splicemap', '',
307 ('', 'splicemap', '',
306 _('splice synthesized history into place'), _('FILE')),
308 _('splice synthesized history into place'), _('FILE')),
307 ('', 'branchmap', '',
309 ('', 'branchmap', '',
308 _('change branch names while converting'), _('FILE')),
310 _('change branch names while converting'), _('FILE')),
309 ('', 'branchsort', None, _('try to sort changesets by branches')),
311 ('', 'branchsort', None, _('try to sort changesets by branches')),
310 ('', 'datesort', None, _('try to sort changesets by date')),
312 ('', 'datesort', None, _('try to sort changesets by date')),
311 ('', 'sourcesort', None, _('preserve source changesets order'))],
313 ('', 'sourcesort', None, _('preserve source changesets order'))],
312 _('hg convert [OPTION]... SOURCE [DEST [REVMAP]]')),
314 _('hg convert [OPTION]... SOURCE [DEST [REVMAP]]')),
313 "debugsvnlog":
315 "debugsvnlog":
314 (debugsvnlog,
316 (debugsvnlog,
315 [],
317 [],
316 'hg debugsvnlog'),
318 'hg debugsvnlog'),
317 "debugcvsps":
319 "debugcvsps":
318 (debugcvsps,
320 (debugcvsps,
319 [
321 [
320 # Main options shared with cvsps-2.1
322 # Main options shared with cvsps-2.1
321 ('b', 'branches', [], _('only return changes on specified branches')),
323 ('b', 'branches', [], _('only return changes on specified branches')),
322 ('p', 'prefix', '', _('prefix to remove from file names')),
324 ('p', 'prefix', '', _('prefix to remove from file names')),
323 ('r', 'revisions', [],
325 ('r', 'revisions', [],
324 _('only return changes after or between specified tags')),
326 _('only return changes after or between specified tags')),
325 ('u', 'update-cache', None, _("update cvs log cache")),
327 ('u', 'update-cache', None, _("update cvs log cache")),
326 ('x', 'new-cache', None, _("create new cvs log cache")),
328 ('x', 'new-cache', None, _("create new cvs log cache")),
327 ('z', 'fuzz', 60, _('set commit time fuzz in seconds')),
329 ('z', 'fuzz', 60, _('set commit time fuzz in seconds')),
328 ('', 'root', '', _('specify cvsroot')),
330 ('', 'root', '', _('specify cvsroot')),
329 # Options specific to builtin cvsps
331 # Options specific to builtin cvsps
330 ('', 'parents', '', _('show parent changesets')),
332 ('', 'parents', '', _('show parent changesets')),
331 ('', 'ancestors', '',
333 ('', 'ancestors', '',
332 _('show current changeset in ancestor branches')),
334 _('show current changeset in ancestor branches')),
333 # Options that are ignored for compatibility with cvsps-2.1
335 # Options that are ignored for compatibility with cvsps-2.1
334 ('A', 'cvs-direct', None, _('ignored for compatibility')),
336 ('A', 'cvs-direct', None, _('ignored for compatibility')),
335 ],
337 ],
336 _('hg debugcvsps [OPTION]... [PATH]...')),
338 _('hg debugcvsps [OPTION]... [PATH]...')),
337 }
339 }
338
340
339 def kwconverted(ctx, name):
341 def kwconverted(ctx, name):
340 rev = ctx.extra().get('convert_revision', '')
342 rev = ctx.extra().get('convert_revision', '')
341 if rev.startswith('svn:'):
343 if rev.startswith('svn:'):
342 if name == 'svnrev':
344 if name == 'svnrev':
343 return str(subversion.revsplit(rev)[2])
345 return str(subversion.revsplit(rev)[2])
344 elif name == 'svnpath':
346 elif name == 'svnpath':
345 return subversion.revsplit(rev)[1]
347 return subversion.revsplit(rev)[1]
346 elif name == 'svnuuid':
348 elif name == 'svnuuid':
347 return subversion.revsplit(rev)[0]
349 return subversion.revsplit(rev)[0]
348 return rev
350 return rev
349
351
350 def kwsvnrev(repo, ctx, **args):
352 def kwsvnrev(repo, ctx, **args):
351 """:svnrev: String. Converted subversion revision number."""
353 """:svnrev: String. Converted subversion revision number."""
352 return kwconverted(ctx, 'svnrev')
354 return kwconverted(ctx, 'svnrev')
353
355
354 def kwsvnpath(repo, ctx, **args):
356 def kwsvnpath(repo, ctx, **args):
355 """:svnpath: String. Converted subversion revision project path."""
357 """:svnpath: String. Converted subversion revision project path."""
356 return kwconverted(ctx, 'svnpath')
358 return kwconverted(ctx, 'svnpath')
357
359
358 def kwsvnuuid(repo, ctx, **args):
360 def kwsvnuuid(repo, ctx, **args):
359 """:svnuuid: String. Converted subversion revision repository identifier."""
361 """:svnuuid: String. Converted subversion revision repository identifier."""
360 return kwconverted(ctx, 'svnuuid')
362 return kwconverted(ctx, 'svnuuid')
361
363
362 def extsetup(ui):
364 def extsetup(ui):
363 templatekw.keywords['svnrev'] = kwsvnrev
365 templatekw.keywords['svnrev'] = kwsvnrev
364 templatekw.keywords['svnpath'] = kwsvnpath
366 templatekw.keywords['svnpath'] = kwsvnpath
365 templatekw.keywords['svnuuid'] = kwsvnuuid
367 templatekw.keywords['svnuuid'] = kwsvnuuid
366
368
367 # tell hggettext to extract docstrings from these functions:
369 # tell hggettext to extract docstrings from these functions:
368 i18nfunctions = [kwsvnrev, kwsvnpath, kwsvnuuid]
370 i18nfunctions = [kwsvnrev, kwsvnpath, kwsvnuuid]
@@ -1,347 +1,349 b''
1 """automatically manage newlines in repository files
1 """automatically manage newlines in repository files
2
2
3 This extension allows you to manage the type of line endings (CRLF or
3 This extension allows you to manage the type of line endings (CRLF or
4 LF) that are used in the repository and in the local working
4 LF) that are used in the repository and in the local working
5 directory. That way you can get CRLF line endings on Windows and LF on
5 directory. That way you can get CRLF line endings on Windows and LF on
6 Unix/Mac, thereby letting everybody use their OS native line endings.
6 Unix/Mac, thereby letting everybody use their OS native line endings.
7
7
8 The extension reads its configuration from a versioned ``.hgeol``
8 The extension reads its configuration from a versioned ``.hgeol``
9 configuration file found in the root of the working copy. The
9 configuration file found in the root of the working copy. The
10 ``.hgeol`` file uses the same syntax as all other Mercurial
10 ``.hgeol`` file uses the same syntax as all other Mercurial
11 configuration files. It uses two sections, ``[patterns]`` and
11 configuration files. It uses two sections, ``[patterns]`` and
12 ``[repository]``.
12 ``[repository]``.
13
13
14 The ``[patterns]`` section specifies how line endings should be
14 The ``[patterns]`` section specifies how line endings should be
15 converted between the working copy and the repository. The format is
15 converted between the working copy and the repository. The format is
16 specified by a file pattern. The first match is used, so put more
16 specified by a file pattern. The first match is used, so put more
17 specific patterns first. The available line endings are ``LF``,
17 specific patterns first. The available line endings are ``LF``,
18 ``CRLF``, and ``BIN``.
18 ``CRLF``, and ``BIN``.
19
19
20 Files with the declared format of ``CRLF`` or ``LF`` are always
20 Files with the declared format of ``CRLF`` or ``LF`` are always
21 checked out and stored in the repository in that format and files
21 checked out and stored in the repository in that format and files
22 declared to be binary (``BIN``) are left unchanged. Additionally,
22 declared to be binary (``BIN``) are left unchanged. Additionally,
23 ``native`` is an alias for checking out in the platform's default line
23 ``native`` is an alias for checking out in the platform's default line
24 ending: ``LF`` on Unix (including Mac OS X) and ``CRLF`` on
24 ending: ``LF`` on Unix (including Mac OS X) and ``CRLF`` on
25 Windows. Note that ``BIN`` (do nothing to line endings) is Mercurial's
25 Windows. Note that ``BIN`` (do nothing to line endings) is Mercurial's
26 default behaviour; it is only needed if you need to override a later,
26 default behaviour; it is only needed if you need to override a later,
27 more general pattern.
27 more general pattern.
28
28
29 The optional ``[repository]`` section specifies the line endings to
29 The optional ``[repository]`` section specifies the line endings to
30 use for files stored in the repository. It has a single setting,
30 use for files stored in the repository. It has a single setting,
31 ``native``, which determines the storage line endings for files
31 ``native``, which determines the storage line endings for files
32 declared as ``native`` in the ``[patterns]`` section. It can be set to
32 declared as ``native`` in the ``[patterns]`` section. It can be set to
33 ``LF`` or ``CRLF``. The default is ``LF``. For example, this means
33 ``LF`` or ``CRLF``. The default is ``LF``. For example, this means
34 that on Windows, files configured as ``native`` (``CRLF`` by default)
34 that on Windows, files configured as ``native`` (``CRLF`` by default)
35 will be converted to ``LF`` when stored in the repository. Files
35 will be converted to ``LF`` when stored in the repository. Files
36 declared as ``LF``, ``CRLF``, or ``BIN`` in the ``[patterns]`` section
36 declared as ``LF``, ``CRLF``, or ``BIN`` in the ``[patterns]`` section
37 are always stored as-is in the repository.
37 are always stored as-is in the repository.
38
38
39 Example versioned ``.hgeol`` file::
39 Example versioned ``.hgeol`` file::
40
40
41 [patterns]
41 [patterns]
42 **.py = native
42 **.py = native
43 **.vcproj = CRLF
43 **.vcproj = CRLF
44 **.txt = native
44 **.txt = native
45 Makefile = LF
45 Makefile = LF
46 **.jpg = BIN
46 **.jpg = BIN
47
47
48 [repository]
48 [repository]
49 native = LF
49 native = LF
50
50
51 .. note::
51 .. note::
52 The rules will first apply when files are touched in the working
52 The rules will first apply when files are touched in the working
53 copy, e.g. by updating to null and back to tip to touch all files.
53 copy, e.g. by updating to null and back to tip to touch all files.
54
54
55 The extension uses an optional ``[eol]`` section read from both the
55 The extension uses an optional ``[eol]`` section read from both the
56 normal Mercurial configuration files and the ``.hgeol`` file, with the
56 normal Mercurial configuration files and the ``.hgeol`` file, with the
57 latter overriding the former. You can use that section to control the
57 latter overriding the former. You can use that section to control the
58 overall behavior. There are three settings:
58 overall behavior. There are three settings:
59
59
60 - ``eol.native`` (default ``os.linesep``) can be set to ``LF`` or
60 - ``eol.native`` (default ``os.linesep``) can be set to ``LF`` or
61 ``CRLF`` to override the default interpretation of ``native`` for
61 ``CRLF`` to override the default interpretation of ``native`` for
62 checkout. This can be used with :hg:`archive` on Unix, say, to
62 checkout. This can be used with :hg:`archive` on Unix, say, to
63 generate an archive where files have line endings for Windows.
63 generate an archive where files have line endings for Windows.
64
64
65 - ``eol.only-consistent`` (default True) can be set to False to make
65 - ``eol.only-consistent`` (default True) can be set to False to make
66 the extension convert files with inconsistent EOLs. Inconsistent
66 the extension convert files with inconsistent EOLs. Inconsistent
67 means that both ``CRLF`` and ``LF`` are present in the file.
67 means that both ``CRLF`` and ``LF`` are present in the file.
68 Such files are normally not touched under the assumption that they
68 Such files are normally not touched under the assumption that they
69 have mixed EOLs on purpose.
69 have mixed EOLs on purpose.
70
70
71 - ``eol.fix-trailing-newline`` (default False) can be set to True to
71 - ``eol.fix-trailing-newline`` (default False) can be set to True to
72 ensure that converted files end with an EOL character (either ``\\n``
72 ensure that converted files end with an EOL character (either ``\\n``
73 or ``\\r\\n`` as per the configured patterns).
73 or ``\\r\\n`` as per the configured patterns).
74
74
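A minimal hgrc snippet exercising the three settings above (the values are
chosen only for illustration)::

  [eol]
  native = CRLF
  only-consistent = False
  fix-trailing-newline = True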
75 The extension provides ``cleverencode:`` and ``cleverdecode:`` filters
75 The extension provides ``cleverencode:`` and ``cleverdecode:`` filters
76 like the deprecated win32text extension does. This means that you can
76 like the deprecated win32text extension does. This means that you can
77 disable win32text and enable eol and your filters will still work. You
77 disable win32text and enable eol and your filters will still work. You
78 only need these filters until you have prepared a ``.hgeol`` file.
78 only need these filters until you have prepared a ``.hgeol`` file.
79
79
80 The ``win32text.forbid*`` hooks provided by the win32text extension
80 The ``win32text.forbid*`` hooks provided by the win32text extension
81 have been unified into a single hook named ``eol.checkheadshook``. The
81 have been unified into a single hook named ``eol.checkheadshook``. The
82 hook will look up the expected line endings from the ``.hgeol`` file,
82 hook will look up the expected line endings from the ``.hgeol`` file,
83 which means you must migrate to a ``.hgeol`` file first before using
83 which means you must migrate to a ``.hgeol`` file first before using
84 the hook. ``eol.checkheadshook`` only checks heads; intermediate
84 the hook. ``eol.checkheadshook`` only checks heads; intermediate
85 invalid revisions will be pushed. To forbid them completely, use the
85 invalid revisions will be pushed. To forbid them completely, use the
86 ``eol.checkallhook`` hook. These hooks are best used as
86 ``eol.checkallhook`` hook. These hooks are best used as
87 ``pretxnchangegroup`` hooks.
87 ``pretxnchangegroup`` hooks.
88
88
89 See :hg:`help patterns` for more information about the glob patterns
89 See :hg:`help patterns` for more information about the glob patterns
90 used.
90 used.
91 """
91 """
92
92
93 from mercurial.i18n import _
93 from mercurial.i18n import _
94 from mercurial import util, config, extensions, match, error
94 from mercurial import util, config, extensions, match, error
95 import re, os
95 import re, os
96
96
97 testedwith = 'internal'
98
97 # Matches a lone LF, i.e., one that is not part of CRLF.
99 # Matches a lone LF, i.e., one that is not part of CRLF.
98 singlelf = re.compile('(^|[^\r])\n')
100 singlelf = re.compile('(^|[^\r])\n')
99 # Matches a single EOL which can either be a CRLF where repeated CR
101 # Matches a single EOL which can either be a CRLF where repeated CR
100 # are removed or a LF. We do not care about old Macintosh files, so a
102 # are removed or a LF. We do not care about old Macintosh files, so a
101 # stray CR is an error.
103 # stray CR is an error.
102 eolre = re.compile('\r*\n')
104 eolre = re.compile('\r*\n')
103
105
104
106
105 def inconsistenteol(data):
107 def inconsistenteol(data):
106 return '\r\n' in data and singlelf.search(data)
108 return '\r\n' in data and singlelf.search(data)
107
109
108 def tolf(s, params, ui, **kwargs):
110 def tolf(s, params, ui, **kwargs):
109 """Filter to convert to LF EOLs."""
111 """Filter to convert to LF EOLs."""
110 if util.binary(s):
112 if util.binary(s):
111 return s
113 return s
112 if ui.configbool('eol', 'only-consistent', True) and inconsistenteol(s):
114 if ui.configbool('eol', 'only-consistent', True) and inconsistenteol(s):
113 return s
115 return s
114 if (ui.configbool('eol', 'fix-trailing-newline', False)
116 if (ui.configbool('eol', 'fix-trailing-newline', False)
115 and s and s[-1] != '\n'):
117 and s and s[-1] != '\n'):
116 s = s + '\n'
118 s = s + '\n'
117 return eolre.sub('\n', s)
119 return eolre.sub('\n', s)
118
120
119 def tocrlf(s, params, ui, **kwargs):
121 def tocrlf(s, params, ui, **kwargs):
120 """Filter to convert to CRLF EOLs."""
122 """Filter to convert to CRLF EOLs."""
121 if util.binary(s):
123 if util.binary(s):
122 return s
124 return s
123 if ui.configbool('eol', 'only-consistent', True) and inconsistenteol(s):
125 if ui.configbool('eol', 'only-consistent', True) and inconsistenteol(s):
124 return s
126 return s
125 if (ui.configbool('eol', 'fix-trailing-newline', False)
127 if (ui.configbool('eol', 'fix-trailing-newline', False)
126 and s and s[-1] != '\n'):
128 and s and s[-1] != '\n'):
127 s = s + '\n'
129 s = s + '\n'
128 return eolre.sub('\r\n', s)
130 return eolre.sub('\r\n', s)
129
131
130 def isbinary(s, params):
132 def isbinary(s, params):
131 """Filter to do nothing with the file."""
133 """Filter to do nothing with the file."""
132 return s
134 return s
133
135
134 filters = {
136 filters = {
135 'to-lf': tolf,
137 'to-lf': tolf,
136 'to-crlf': tocrlf,
138 'to-crlf': tocrlf,
137 'is-binary': isbinary,
139 'is-binary': isbinary,
138 # The following provide backwards compatibility with win32text
140 # The following provide backwards compatibility with win32text
139 'cleverencode:': tolf,
141 'cleverencode:': tolf,
140 'cleverdecode:': tocrlf
142 'cleverdecode:': tocrlf
141 }
143 }
142
144
143 class eolfile(object):
145 class eolfile(object):
144 def __init__(self, ui, root, data):
146 def __init__(self, ui, root, data):
145 self._decode = {'LF': 'to-lf', 'CRLF': 'to-crlf', 'BIN': 'is-binary'}
147 self._decode = {'LF': 'to-lf', 'CRLF': 'to-crlf', 'BIN': 'is-binary'}
146 self._encode = {'LF': 'to-lf', 'CRLF': 'to-crlf', 'BIN': 'is-binary'}
148 self._encode = {'LF': 'to-lf', 'CRLF': 'to-crlf', 'BIN': 'is-binary'}
147
149
148 self.cfg = config.config()
150 self.cfg = config.config()
149 # Our files should not be touched. The pattern must be
151 # Our files should not be touched. The pattern must be
150 # inserted first to override a '** = native' pattern.
152 # inserted first to override a '** = native' pattern.
151 self.cfg.set('patterns', '.hg*', 'BIN')
153 self.cfg.set('patterns', '.hg*', 'BIN')
152 # We can then parse the user's patterns.
154 # We can then parse the user's patterns.
153 self.cfg.parse('.hgeol', data)
155 self.cfg.parse('.hgeol', data)
154
156
155 isrepolf = self.cfg.get('repository', 'native') != 'CRLF'
157 isrepolf = self.cfg.get('repository', 'native') != 'CRLF'
156 self._encode['NATIVE'] = isrepolf and 'to-lf' or 'to-crlf'
158 self._encode['NATIVE'] = isrepolf and 'to-lf' or 'to-crlf'
157 iswdlf = ui.config('eol', 'native', os.linesep) in ('LF', '\n')
159 iswdlf = ui.config('eol', 'native', os.linesep) in ('LF', '\n')
158 self._decode['NATIVE'] = iswdlf and 'to-lf' or 'to-crlf'
160 self._decode['NATIVE'] = iswdlf and 'to-lf' or 'to-crlf'
159
161
160 include = []
162 include = []
161 exclude = []
163 exclude = []
162 for pattern, style in self.cfg.items('patterns'):
164 for pattern, style in self.cfg.items('patterns'):
163 key = style.upper()
165 key = style.upper()
164 if key == 'BIN':
166 if key == 'BIN':
165 exclude.append(pattern)
167 exclude.append(pattern)
166 else:
168 else:
167 include.append(pattern)
169 include.append(pattern)
168 # This will match the files for which we need to care
170 # This will match the files for which we need to care
169 # about inconsistent newlines.
171 # about inconsistent newlines.
170 self.match = match.match(root, '', [], include, exclude)
172 self.match = match.match(root, '', [], include, exclude)
171
173
172 def copytoui(self, ui):
174 def copytoui(self, ui):
173 for pattern, style in self.cfg.items('patterns'):
175 for pattern, style in self.cfg.items('patterns'):
174 key = style.upper()
176 key = style.upper()
175 try:
177 try:
176 ui.setconfig('decode', pattern, self._decode[key])
178 ui.setconfig('decode', pattern, self._decode[key])
177 ui.setconfig('encode', pattern, self._encode[key])
179 ui.setconfig('encode', pattern, self._encode[key])
178 except KeyError:
180 except KeyError:
179 ui.warn(_("ignoring unknown EOL style '%s' from %s\n")
181 ui.warn(_("ignoring unknown EOL style '%s' from %s\n")
180 % (style, self.cfg.source('patterns', pattern)))
182 % (style, self.cfg.source('patterns', pattern)))
181 # eol.only-consistent can be specified in ~/.hgrc or .hgeol
183 # eol.only-consistent can be specified in ~/.hgrc or .hgeol
182 for k, v in self.cfg.items('eol'):
184 for k, v in self.cfg.items('eol'):
183 ui.setconfig('eol', k, v)
185 ui.setconfig('eol', k, v)
184
186
185 def checkrev(self, repo, ctx, files):
187 def checkrev(self, repo, ctx, files):
186 failed = []
188 failed = []
187 for f in (files or ctx.files()):
189 for f in (files or ctx.files()):
188 if f not in ctx:
190 if f not in ctx:
189 continue
191 continue
190 for pattern, style in self.cfg.items('patterns'):
192 for pattern, style in self.cfg.items('patterns'):
191 if not match.match(repo.root, '', [pattern])(f):
193 if not match.match(repo.root, '', [pattern])(f):
192 continue
194 continue
193 target = self._encode[style.upper()]
195 target = self._encode[style.upper()]
194 data = ctx[f].data()
196 data = ctx[f].data()
195 if (target == "to-lf" and "\r\n" in data
197 if (target == "to-lf" and "\r\n" in data
196 or target == "to-crlf" and singlelf.search(data)):
198 or target == "to-crlf" and singlelf.search(data)):
197 failed.append((str(ctx), target, f))
199 failed.append((str(ctx), target, f))
198 break
200 break
199 return failed
201 return failed
200
202
201 def parseeol(ui, repo, nodes):
203 def parseeol(ui, repo, nodes):
202 try:
204 try:
203 for node in nodes:
205 for node in nodes:
204 try:
206 try:
205 if node is None:
207 if node is None:
206 # Cannot use workingctx.data() since it would load
208 # Cannot use workingctx.data() since it would load
207 # and cache the filters before we configure them.
209 # and cache the filters before we configure them.
208 data = repo.wfile('.hgeol').read()
210 data = repo.wfile('.hgeol').read()
209 else:
211 else:
210 data = repo[node]['.hgeol'].data()
212 data = repo[node]['.hgeol'].data()
211 return eolfile(ui, repo.root, data)
213 return eolfile(ui, repo.root, data)
212 except (IOError, LookupError):
214 except (IOError, LookupError):
213 pass
215 pass
214 except error.ParseError, inst:
216 except error.ParseError, inst:
215 ui.warn(_("warning: ignoring .hgeol file due to parse error "
217 ui.warn(_("warning: ignoring .hgeol file due to parse error "
216 "at %s: %s\n") % (inst.args[1], inst.args[0]))
218 "at %s: %s\n") % (inst.args[1], inst.args[0]))
217 return None
219 return None
218
220
219 def _checkhook(ui, repo, node, headsonly):
221 def _checkhook(ui, repo, node, headsonly):
220 # Get revisions to check and touched files at the same time
222 # Get revisions to check and touched files at the same time
221 files = set()
223 files = set()
222 revs = set()
224 revs = set()
223 for rev in xrange(repo[node].rev(), len(repo)):
225 for rev in xrange(repo[node].rev(), len(repo)):
224 revs.add(rev)
226 revs.add(rev)
225 if headsonly:
227 if headsonly:
226 ctx = repo[rev]
228 ctx = repo[rev]
227 files.update(ctx.files())
229 files.update(ctx.files())
228 for pctx in ctx.parents():
230 for pctx in ctx.parents():
229 revs.discard(pctx.rev())
231 revs.discard(pctx.rev())
230 failed = []
232 failed = []
231 for rev in revs:
233 for rev in revs:
232 ctx = repo[rev]
234 ctx = repo[rev]
233 eol = parseeol(ui, repo, [ctx.node()])
235 eol = parseeol(ui, repo, [ctx.node()])
234 if eol:
236 if eol:
235 failed.extend(eol.checkrev(repo, ctx, files))
237 failed.extend(eol.checkrev(repo, ctx, files))
236
238
237 if failed:
239 if failed:
238 eols = {'to-lf': 'CRLF', 'to-crlf': 'LF'}
240 eols = {'to-lf': 'CRLF', 'to-crlf': 'LF'}
239 msgs = []
241 msgs = []
240 for node, target, f in failed:
242 for node, target, f in failed:
241 msgs.append(_(" %s in %s should not have %s line endings") %
243 msgs.append(_(" %s in %s should not have %s line endings") %
242 (f, node, eols[target]))
244 (f, node, eols[target]))
243 raise util.Abort(_("end-of-line check failed:\n") + "\n".join(msgs))
245 raise util.Abort(_("end-of-line check failed:\n") + "\n".join(msgs))
244
246
245 def checkallhook(ui, repo, node, hooktype, **kwargs):
247 def checkallhook(ui, repo, node, hooktype, **kwargs):
246 """verify that files have expected EOLs"""
248 """verify that files have expected EOLs"""
247 _checkhook(ui, repo, node, False)
249 _checkhook(ui, repo, node, False)
248
250
249 def checkheadshook(ui, repo, node, hooktype, **kwargs):
251 def checkheadshook(ui, repo, node, hooktype, **kwargs):
250 """verify that files have expected EOLs"""
252 """verify that files have expected EOLs"""
251 _checkhook(ui, repo, node, True)
253 _checkhook(ui, repo, node, True)
252
254
253 # "checkheadshook" used to be called "hook"
255 # "checkheadshook" used to be called "hook"
254 hook = checkheadshook
256 hook = checkheadshook
255
257
256 def preupdate(ui, repo, hooktype, parent1, parent2):
258 def preupdate(ui, repo, hooktype, parent1, parent2):
257 repo.loadeol([parent1])
259 repo.loadeol([parent1])
258 return False
260 return False
259
261
260 def uisetup(ui):
262 def uisetup(ui):
261 ui.setconfig('hooks', 'preupdate.eol', preupdate)
263 ui.setconfig('hooks', 'preupdate.eol', preupdate)
262
264
263 def extsetup(ui):
265 def extsetup(ui):
264 try:
266 try:
265 extensions.find('win32text')
267 extensions.find('win32text')
266 ui.warn(_("the eol extension is incompatible with the "
268 ui.warn(_("the eol extension is incompatible with the "
267 "win32text extension\n"))
269 "win32text extension\n"))
268 except KeyError:
270 except KeyError:
269 pass
271 pass
270
272
271
273
272 def reposetup(ui, repo):
274 def reposetup(ui, repo):
273 uisetup(repo.ui)
275 uisetup(repo.ui)
274
276
275 if not repo.local():
277 if not repo.local():
276 return
278 return
277 for name, fn in filters.iteritems():
279 for name, fn in filters.iteritems():
278 repo.adddatafilter(name, fn)
280 repo.adddatafilter(name, fn)
279
281
280 ui.setconfig('patch', 'eol', 'auto')
282 ui.setconfig('patch', 'eol', 'auto')
281
283
282 class eolrepo(repo.__class__):
284 class eolrepo(repo.__class__):
283
285
284 def loadeol(self, nodes):
286 def loadeol(self, nodes):
285 eol = parseeol(self.ui, self, nodes)
287 eol = parseeol(self.ui, self, nodes)
286 if eol is None:
288 if eol is None:
287 return None
289 return None
288 eol.copytoui(self.ui)
290 eol.copytoui(self.ui)
289 return eol.match
291 return eol.match
290
292
291 def _hgcleardirstate(self):
293 def _hgcleardirstate(self):
292 self._eolfile = self.loadeol([None, 'tip'])
294 self._eolfile = self.loadeol([None, 'tip'])
293 if not self._eolfile:
295 if not self._eolfile:
294 self._eolfile = util.never
296 self._eolfile = util.never
295 return
297 return
296
298
297 try:
299 try:
298 cachemtime = os.path.getmtime(self.join("eol.cache"))
300 cachemtime = os.path.getmtime(self.join("eol.cache"))
299 except OSError:
301 except OSError:
300 cachemtime = 0
302 cachemtime = 0
301
303
302 try:
304 try:
303 eolmtime = os.path.getmtime(self.wjoin(".hgeol"))
305 eolmtime = os.path.getmtime(self.wjoin(".hgeol"))
304 except OSError:
306 except OSError:
305 eolmtime = 0
307 eolmtime = 0
306
308
307 if eolmtime > cachemtime:
309 if eolmtime > cachemtime:
308 ui.debug("eol: detected change in .hgeol\n")
310 ui.debug("eol: detected change in .hgeol\n")
309 wlock = None
311 wlock = None
310 try:
312 try:
311 wlock = self.wlock()
313 wlock = self.wlock()
312 for f in self.dirstate:
314 for f in self.dirstate:
313 if self.dirstate[f] == 'n':
315 if self.dirstate[f] == 'n':
314 # all normal files need to be looked at
316 # all normal files need to be looked at
315 # again since the new .hgeol file might no
317 # again since the new .hgeol file might no
316 # longer match a file it matched before
318 # longer match a file it matched before
317 self.dirstate.normallookup(f)
319 self.dirstate.normallookup(f)
318 # Create or touch the cache to update mtime
320 # Create or touch the cache to update mtime
319 self.opener("eol.cache", "w").close()
321 self.opener("eol.cache", "w").close()
320 wlock.release()
322 wlock.release()
321 except error.LockUnavailable:
323 except error.LockUnavailable:
322 # If we cannot lock the repository and clear the
324 # If we cannot lock the repository and clear the
323 # dirstate, then a commit might not see all files
325 # dirstate, then a commit might not see all files
324 # as modified. But if we cannot lock the
326 # as modified. But if we cannot lock the
325 # repository, then we can also not make a commit,
327 # repository, then we can also not make a commit,
326 # so ignore the error.
328 # so ignore the error.
327 pass
329 pass
328
330
329 def commitctx(self, ctx, error=False):
331 def commitctx(self, ctx, error=False):
330 for f in sorted(ctx.added() + ctx.modified()):
332 for f in sorted(ctx.added() + ctx.modified()):
331 if not self._eolfile(f):
333 if not self._eolfile(f):
332 continue
334 continue
333 try:
335 try:
334 data = ctx[f].data()
336 data = ctx[f].data()
335 except IOError:
337 except IOError:
336 continue
338 continue
337 if util.binary(data):
339 if util.binary(data):
338 # We should not abort here, since the user should
340 # We should not abort here, since the user should
339 # be able to say "** = native" to automatically
341 # be able to say "** = native" to automatically
340 # have all non-binary files taken care of.
342 # have all non-binary files taken care of.
341 continue
343 continue
342 if inconsistenteol(data):
344 if inconsistenteol(data):
343 raise util.Abort(_("inconsistent newline style "
345 raise util.Abort(_("inconsistent newline style "
344 "in %s\n") % f)
346 "in %s\n") % f)
345 return super(eolrepo, self).commitctx(ctx, error)
347 return super(eolrepo, self).commitctx(ctx, error)
346 repo.__class__ = eolrepo
348 repo.__class__ = eolrepo
347 repo._hgcleardirstate()
349 repo._hgcleardirstate()
@@ -1,329 +1,331 @@
1 # extdiff.py - external diff program support for mercurial
1 # extdiff.py - external diff program support for mercurial
2 #
2 #
3 # Copyright 2006 Vadim Gelfer <vadim.gelfer@gmail.com>
3 # Copyright 2006 Vadim Gelfer <vadim.gelfer@gmail.com>
4 #
4 #
5 # This software may be used and distributed according to the terms of the
5 # This software may be used and distributed according to the terms of the
6 # GNU General Public License version 2 or any later version.
6 # GNU General Public License version 2 or any later version.
7
7
8 '''command to allow external programs to compare revisions
8 '''command to allow external programs to compare revisions
9
9
10 The extdiff Mercurial extension allows you to use external programs
10 The extdiff Mercurial extension allows you to use external programs
11 to compare revisions, or revision with working directory. The external
11 to compare revisions, or revision with working directory. The external
12 diff programs are called with a configurable set of options and two
12 diff programs are called with a configurable set of options and two
13 non-option arguments: paths to directories containing snapshots of
13 non-option arguments: paths to directories containing snapshots of
14 files to compare.
14 files to compare.
15
15
16 The extdiff extension also allows you to configure new diff commands, so
16 The extdiff extension also allows you to configure new diff commands, so
17 you do not need to type :hg:`extdiff -p kdiff3` every time. ::
17 you do not need to type :hg:`extdiff -p kdiff3` every time. ::
18
18
19 [extdiff]
19 [extdiff]
20 # add new command that runs GNU diff(1) in 'context diff' mode
20 # add new command that runs GNU diff(1) in 'context diff' mode
21 cdiff = gdiff -Nprc5
21 cdiff = gdiff -Nprc5
22 ## or the old way:
22 ## or the old way:
23 #cmd.cdiff = gdiff
23 #cmd.cdiff = gdiff
24 #opts.cdiff = -Nprc5
24 #opts.cdiff = -Nprc5
25
25
26 # add new command called vdiff, runs kdiff3
26 # add new command called vdiff, runs kdiff3
27 vdiff = kdiff3
27 vdiff = kdiff3
28
28
29 # add new command called meld, runs meld (no need to name twice)
29 # add new command called meld, runs meld (no need to name twice)
30 meld =
30 meld =
31
31
32 # add new command called vimdiff, runs gvimdiff with DirDiff plugin
32 # add new command called vimdiff, runs gvimdiff with DirDiff plugin
33 # (see http://www.vim.org/scripts/script.php?script_id=102). Non-English
33 # (see http://www.vim.org/scripts/script.php?script_id=102). Non-English
34 # users should put "let g:DirDiffDynamicDiffText = 1" in
34 # users should put "let g:DirDiffDynamicDiffText = 1" in
35 # your .vimrc
35 # your .vimrc
36 vimdiff = gvim -f "+next" \\
36 vimdiff = gvim -f "+next" \\
37 "+execute 'DirDiff' fnameescape(argv(0)) fnameescape(argv(1))"
37 "+execute 'DirDiff' fnameescape(argv(0)) fnameescape(argv(1))"
38
38
39 Tool arguments can include variables that are expanded at runtime::
39 Tool arguments can include variables that are expanded at runtime::
40
40
41 $parent1, $plabel1 - filename, descriptive label of first parent
41 $parent1, $plabel1 - filename, descriptive label of first parent
42 $child, $clabel - filename, descriptive label of child revision
42 $child, $clabel - filename, descriptive label of child revision
43 $parent2, $plabel2 - filename, descriptive label of second parent
43 $parent2, $plabel2 - filename, descriptive label of second parent
44 $root - repository root
44 $root - repository root
45 $parent is an alias for $parent1.
45 $parent is an alias for $parent1.
46
46
47 The extdiff extension will look in your [diff-tools] and [merge-tools]
47 The extdiff extension will look in your [diff-tools] and [merge-tools]
48 sections for diff tool arguments, when none are specified in [extdiff].
48 sections for diff tool arguments, when none are specified in [extdiff].
49
49
50 ::
50 ::
51
51
52 [extdiff]
52 [extdiff]
53 kdiff3 =
53 kdiff3 =
54
54
55 [diff-tools]
55 [diff-tools]
56 kdiff3.diffargs=--L1 '$plabel1' --L2 '$clabel' $parent $child
56 kdiff3.diffargs=--L1 '$plabel1' --L2 '$clabel' $parent $child
57
57
58 You can use -I/-X and a list of file or directory names, as with the
58 You can use -I/-X and a list of file or directory names, as with the
59 normal :hg:`diff` command. The extdiff extension makes snapshots of only
59 normal :hg:`diff` command. The extdiff extension makes snapshots of only
60 needed files, so running the external diff program will actually be
60 needed files, so running the external diff program will actually be
61 pretty fast (at least faster than having to compare the entire tree).
61 pretty fast (at least faster than having to compare the entire tree).
62 '''
62 '''
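As a usage sketch (the revision names are placeholders), an ``[extdiff]`` entry such as ``vdiff = kdiff3`` above defines a new command that accepts the same -r/--rev and -c/--change options as :hg:`extdiff`::

    hg vdiff -r REV1 -r REV2 some/file
    hg vdiff -c REV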
63
63
64 from mercurial.i18n import _
64 from mercurial.i18n import _
65 from mercurial.node import short, nullid
65 from mercurial.node import short, nullid
66 from mercurial import scmutil, util, commands, encoding
66 from mercurial import scmutil, util, commands, encoding
67 import os, shlex, shutil, tempfile, re
67 import os, shlex, shutil, tempfile, re
68
68
69 testedwith = 'internal'
70
69 def snapshot(ui, repo, files, node, tmproot):
71 def snapshot(ui, repo, files, node, tmproot):
70 '''snapshot files as of some revision
72 '''snapshot files as of some revision
71 if not using snapshot, -I/-X does not work and recursive diff
73 if not using snapshot, -I/-X does not work and recursive diff
72 in tools like kdiff3 and meld displays too many files.'''
74 in tools like kdiff3 and meld displays too many files.'''
73 dirname = os.path.basename(repo.root)
75 dirname = os.path.basename(repo.root)
74 if dirname == "":
76 if dirname == "":
75 dirname = "root"
77 dirname = "root"
76 if node is not None:
78 if node is not None:
77 dirname = '%s.%s' % (dirname, short(node))
79 dirname = '%s.%s' % (dirname, short(node))
78 base = os.path.join(tmproot, dirname)
80 base = os.path.join(tmproot, dirname)
79 os.mkdir(base)
81 os.mkdir(base)
80 if node is not None:
82 if node is not None:
81 ui.note(_('making snapshot of %d files from rev %s\n') %
83 ui.note(_('making snapshot of %d files from rev %s\n') %
82 (len(files), short(node)))
84 (len(files), short(node)))
83 else:
85 else:
84 ui.note(_('making snapshot of %d files from working directory\n') %
86 ui.note(_('making snapshot of %d files from working directory\n') %
85 (len(files)))
87 (len(files)))
86 wopener = scmutil.opener(base)
88 wopener = scmutil.opener(base)
87 fns_and_mtime = []
89 fns_and_mtime = []
88 ctx = repo[node]
90 ctx = repo[node]
89 for fn in files:
91 for fn in files:
90 wfn = util.pconvert(fn)
92 wfn = util.pconvert(fn)
91 if wfn not in ctx:
93 if wfn not in ctx:
92 # File doesn't exist; could be a bogus modify
94 # File doesn't exist; could be a bogus modify
93 continue
95 continue
94 ui.note(' %s\n' % wfn)
96 ui.note(' %s\n' % wfn)
95 dest = os.path.join(base, wfn)
97 dest = os.path.join(base, wfn)
96 fctx = ctx[wfn]
98 fctx = ctx[wfn]
97 data = repo.wwritedata(wfn, fctx.data())
99 data = repo.wwritedata(wfn, fctx.data())
98 if 'l' in fctx.flags():
100 if 'l' in fctx.flags():
99 wopener.symlink(data, wfn)
101 wopener.symlink(data, wfn)
100 else:
102 else:
101 wopener.write(wfn, data)
103 wopener.write(wfn, data)
102 if 'x' in fctx.flags():
104 if 'x' in fctx.flags():
103 util.setflags(dest, False, True)
105 util.setflags(dest, False, True)
104 if node is None:
106 if node is None:
105 fns_and_mtime.append((dest, repo.wjoin(fn),
107 fns_and_mtime.append((dest, repo.wjoin(fn),
106 os.lstat(dest).st_mtime))
108 os.lstat(dest).st_mtime))
107 return dirname, fns_and_mtime
109 return dirname, fns_and_mtime
108
110
109 def dodiff(ui, repo, diffcmd, diffopts, pats, opts):
111 def dodiff(ui, repo, diffcmd, diffopts, pats, opts):
110 '''Do the actual diff:
112 '''Do the actual diff:
111
113
112 - copy to a temp structure if diffing 2 internal revisions
114 - copy to a temp structure if diffing 2 internal revisions
113 - copy to a temp structure if diffing working revision with
115 - copy to a temp structure if diffing working revision with
114 another one and more than 1 file is changed
116 another one and more than 1 file is changed
115 - just invoke the diff for a single file in the working dir
117 - just invoke the diff for a single file in the working dir
116 '''
118 '''
117
119
118 revs = opts.get('rev')
120 revs = opts.get('rev')
119 change = opts.get('change')
121 change = opts.get('change')
120 args = ' '.join(diffopts)
122 args = ' '.join(diffopts)
121 do3way = '$parent2' in args
123 do3way = '$parent2' in args
122
124
123 if revs and change:
125 if revs and change:
124 msg = _('cannot specify --rev and --change at the same time')
126 msg = _('cannot specify --rev and --change at the same time')
125 raise util.Abort(msg)
127 raise util.Abort(msg)
126 elif change:
128 elif change:
127 node2 = scmutil.revsingle(repo, change, None).node()
129 node2 = scmutil.revsingle(repo, change, None).node()
128 node1a, node1b = repo.changelog.parents(node2)
130 node1a, node1b = repo.changelog.parents(node2)
129 else:
131 else:
130 node1a, node2 = scmutil.revpair(repo, revs)
132 node1a, node2 = scmutil.revpair(repo, revs)
131 if not revs:
133 if not revs:
132 node1b = repo.dirstate.p2()
134 node1b = repo.dirstate.p2()
133 else:
135 else:
134 node1b = nullid
136 node1b = nullid
135
137
136 # Disable 3-way merge if there is only one parent
138 # Disable 3-way merge if there is only one parent
137 if do3way:
139 if do3way:
138 if node1b == nullid:
140 if node1b == nullid:
139 do3way = False
141 do3way = False
140
142
141 matcher = scmutil.match(repo[node2], pats, opts)
143 matcher = scmutil.match(repo[node2], pats, opts)
142 mod_a, add_a, rem_a = map(set, repo.status(node1a, node2, matcher)[:3])
144 mod_a, add_a, rem_a = map(set, repo.status(node1a, node2, matcher)[:3])
143 if do3way:
145 if do3way:
144 mod_b, add_b, rem_b = map(set, repo.status(node1b, node2, matcher)[:3])
146 mod_b, add_b, rem_b = map(set, repo.status(node1b, node2, matcher)[:3])
145 else:
147 else:
146 mod_b, add_b, rem_b = set(), set(), set()
148 mod_b, add_b, rem_b = set(), set(), set()
147 modadd = mod_a | add_a | mod_b | add_b
149 modadd = mod_a | add_a | mod_b | add_b
148 common = modadd | rem_a | rem_b
150 common = modadd | rem_a | rem_b
149 if not common:
151 if not common:
150 return 0
152 return 0
151
153
152 tmproot = tempfile.mkdtemp(prefix='extdiff.')
154 tmproot = tempfile.mkdtemp(prefix='extdiff.')
153 try:
155 try:
154 # Always make a copy of node1a (and node1b, if applicable)
156 # Always make a copy of node1a (and node1b, if applicable)
155 dir1a_files = mod_a | rem_a | ((mod_b | add_b) - add_a)
157 dir1a_files = mod_a | rem_a | ((mod_b | add_b) - add_a)
156 dir1a = snapshot(ui, repo, dir1a_files, node1a, tmproot)[0]
158 dir1a = snapshot(ui, repo, dir1a_files, node1a, tmproot)[0]
157 rev1a = '@%d' % repo[node1a].rev()
159 rev1a = '@%d' % repo[node1a].rev()
158 if do3way:
160 if do3way:
159 dir1b_files = mod_b | rem_b | ((mod_a | add_a) - add_b)
161 dir1b_files = mod_b | rem_b | ((mod_a | add_a) - add_b)
160 dir1b = snapshot(ui, repo, dir1b_files, node1b, tmproot)[0]
162 dir1b = snapshot(ui, repo, dir1b_files, node1b, tmproot)[0]
161 rev1b = '@%d' % repo[node1b].rev()
163 rev1b = '@%d' % repo[node1b].rev()
162 else:
164 else:
163 dir1b = None
165 dir1b = None
164 rev1b = ''
166 rev1b = ''
165
167
166 fns_and_mtime = []
168 fns_and_mtime = []
167
169
168 # If node2 is not the wc or there is >1 change, copy it
170 # If node2 is not the wc or there is >1 change, copy it
169 dir2root = ''
171 dir2root = ''
170 rev2 = ''
172 rev2 = ''
171 if node2:
173 if node2:
172 dir2 = snapshot(ui, repo, modadd, node2, tmproot)[0]
174 dir2 = snapshot(ui, repo, modadd, node2, tmproot)[0]
173 rev2 = '@%d' % repo[node2].rev()
175 rev2 = '@%d' % repo[node2].rev()
174 elif len(common) > 1:
176 elif len(common) > 1:
175 #we only actually need to get the files to copy back to
177 #we only actually need to get the files to copy back to
176 #the working dir in this case (because the other cases
178 #the working dir in this case (because the other cases
177 #are: diffing 2 revisions or single file -- in which case
179 #are: diffing 2 revisions or single file -- in which case
178 #the file is already directly passed to the diff tool).
180 #the file is already directly passed to the diff tool).
179 dir2, fns_and_mtime = snapshot(ui, repo, modadd, None, tmproot)
181 dir2, fns_and_mtime = snapshot(ui, repo, modadd, None, tmproot)
180 else:
182 else:
181 # This lets the diff tool open the changed file directly
183 # This lets the diff tool open the changed file directly
182 dir2 = ''
184 dir2 = ''
183 dir2root = repo.root
185 dir2root = repo.root
184
186
185 label1a = rev1a
187 label1a = rev1a
186 label1b = rev1b
188 label1b = rev1b
187 label2 = rev2
189 label2 = rev2
188
190
189 # If only one change, diff the files instead of the directories
191 # If only one change, diff the files instead of the directories
190 # Handle bogus modifies correctly by checking if the files exist
192 # Handle bogus modifies correctly by checking if the files exist
191 if len(common) == 1:
193 if len(common) == 1:
192 common_file = util.localpath(common.pop())
194 common_file = util.localpath(common.pop())
193 dir1a = os.path.join(tmproot, dir1a, common_file)
195 dir1a = os.path.join(tmproot, dir1a, common_file)
194 label1a = common_file + rev1a
196 label1a = common_file + rev1a
195 if not os.path.isfile(dir1a):
197 if not os.path.isfile(dir1a):
196 dir1a = os.devnull
198 dir1a = os.devnull
197 if do3way:
199 if do3way:
198 dir1b = os.path.join(tmproot, dir1b, common_file)
200 dir1b = os.path.join(tmproot, dir1b, common_file)
199 label1b = common_file + rev1b
201 label1b = common_file + rev1b
200 if not os.path.isfile(dir1b):
202 if not os.path.isfile(dir1b):
201 dir1b = os.devnull
203 dir1b = os.devnull
202 dir2 = os.path.join(dir2root, dir2, common_file)
204 dir2 = os.path.join(dir2root, dir2, common_file)
203 label2 = common_file + rev2
205 label2 = common_file + rev2
204
206
205 # Function to quote file/dir names in the argument string.
207 # Function to quote file/dir names in the argument string.
206 # When not operating in 3-way mode, an empty string is
208 # When not operating in 3-way mode, an empty string is
207 # returned for parent2
209 # returned for parent2
208 replace = dict(parent=dir1a, parent1=dir1a, parent2=dir1b,
210 replace = dict(parent=dir1a, parent1=dir1a, parent2=dir1b,
209 plabel1=label1a, plabel2=label1b,
211 plabel1=label1a, plabel2=label1b,
210 clabel=label2, child=dir2,
212 clabel=label2, child=dir2,
211 root=repo.root)
213 root=repo.root)
212 def quote(match):
214 def quote(match):
213 key = match.group()[1:]
215 key = match.group()[1:]
214 if not do3way and key == 'parent2':
216 if not do3way and key == 'parent2':
215 return ''
217 return ''
216 return util.shellquote(replace[key])
218 return util.shellquote(replace[key])
217
219
218 # Match parent2 first, so 'parent1?' will match both parent1 and parent
220 # Match parent2 first, so 'parent1?' will match both parent1 and parent
219 regex = '\$(parent2|parent1?|child|plabel1|plabel2|clabel|root)'
221 regex = '\$(parent2|parent1?|child|plabel1|plabel2|clabel|root)'
220 if not do3way and not re.search(regex, args):
222 if not do3way and not re.search(regex, args):
221 args += ' $parent1 $child'
223 args += ' $parent1 $child'
222 args = re.sub(regex, quote, args)
224 args = re.sub(regex, quote, args)
223 cmdline = util.shellquote(diffcmd) + ' ' + args
225 cmdline = util.shellquote(diffcmd) + ' ' + args
224
226
225 ui.debug('running %r in %s\n' % (cmdline, tmproot))
227 ui.debug('running %r in %s\n' % (cmdline, tmproot))
226 util.system(cmdline, cwd=tmproot, out=ui.fout)
228 util.system(cmdline, cwd=tmproot, out=ui.fout)
227
229
228 for copy_fn, working_fn, mtime in fns_and_mtime:
230 for copy_fn, working_fn, mtime in fns_and_mtime:
229 if os.lstat(copy_fn).st_mtime != mtime:
231 if os.lstat(copy_fn).st_mtime != mtime:
230 ui.debug('file changed while diffing. '
232 ui.debug('file changed while diffing. '
231 'Overwriting: %s (src: %s)\n' % (working_fn, copy_fn))
233 'Overwriting: %s (src: %s)\n' % (working_fn, copy_fn))
232 util.copyfile(copy_fn, working_fn)
234 util.copyfile(copy_fn, working_fn)
233
235
234 return 1
236 return 1
235 finally:
237 finally:
236 ui.note(_('cleaning up temp directory\n'))
238 ui.note(_('cleaning up temp directory\n'))
237 shutil.rmtree(tmproot)
239 shutil.rmtree(tmproot)
238
240
239 def extdiff(ui, repo, *pats, **opts):
241 def extdiff(ui, repo, *pats, **opts):
240 '''use external program to diff repository (or selected files)
242 '''use external program to diff repository (or selected files)
241
243
242 Show differences between revisions for the specified files, using
244 Show differences between revisions for the specified files, using
243 an external program. The default program used is diff, with
245 an external program. The default program used is diff, with
244 default options "-Npru".
246 default options "-Npru".
245
247
246 To select a different program, use the -p/--program option. The
248 To select a different program, use the -p/--program option. The
247 program will be passed the names of two directories to compare. To
249 program will be passed the names of two directories to compare. To
248 pass additional options to the program, use -o/--option. These
250 pass additional options to the program, use -o/--option. These
249 will be passed before the names of the directories to compare.
251 will be passed before the names of the directories to compare.
250
252
251 When two revision arguments are given, then changes are shown
253 When two revision arguments are given, then changes are shown
252 between those revisions. If only one revision is specified then
254 between those revisions. If only one revision is specified then
253 that revision is compared to the working directory, and, when no
255 that revision is compared to the working directory, and, when no
254 revisions are specified, the working directory files are compared
256 revisions are specified, the working directory files are compared
255 to its parent.'''
257 to its parent.'''
256 program = opts.get('program')
258 program = opts.get('program')
257 option = opts.get('option')
259 option = opts.get('option')
258 if not program:
260 if not program:
259 program = 'diff'
261 program = 'diff'
260 option = option or ['-Npru']
262 option = option or ['-Npru']
261 return dodiff(ui, repo, program, option, pats, opts)
263 return dodiff(ui, repo, program, option, pats, opts)
262
264
263 cmdtable = {
265 cmdtable = {
264 "extdiff":
266 "extdiff":
265 (extdiff,
267 (extdiff,
266 [('p', 'program', '',
268 [('p', 'program', '',
267 _('comparison program to run'), _('CMD')),
269 _('comparison program to run'), _('CMD')),
268 ('o', 'option', [],
270 ('o', 'option', [],
269 _('pass option to comparison program'), _('OPT')),
271 _('pass option to comparison program'), _('OPT')),
270 ('r', 'rev', [],
272 ('r', 'rev', [],
271 _('revision'), _('REV')),
273 _('revision'), _('REV')),
272 ('c', 'change', '',
274 ('c', 'change', '',
273 _('change made by revision'), _('REV')),
275 _('change made by revision'), _('REV')),
274 ] + commands.walkopts,
276 ] + commands.walkopts,
275 _('hg extdiff [OPT]... [FILE]...')),
277 _('hg extdiff [OPT]... [FILE]...')),
276 }
278 }
277
279
278 def uisetup(ui):
280 def uisetup(ui):
279 for cmd, path in ui.configitems('extdiff'):
281 for cmd, path in ui.configitems('extdiff'):
280 if cmd.startswith('cmd.'):
282 if cmd.startswith('cmd.'):
281 cmd = cmd[4:]
283 cmd = cmd[4:]
282 if not path:
284 if not path:
283 path = cmd
285 path = cmd
284 diffopts = ui.config('extdiff', 'opts.' + cmd, '')
286 diffopts = ui.config('extdiff', 'opts.' + cmd, '')
285 diffopts = diffopts and [diffopts] or []
287 diffopts = diffopts and [diffopts] or []
286 elif cmd.startswith('opts.'):
288 elif cmd.startswith('opts.'):
287 continue
289 continue
288 else:
290 else:
289 # command = path opts
291 # command = path opts
290 if path:
292 if path:
291 diffopts = shlex.split(path)
293 diffopts = shlex.split(path)
292 path = diffopts.pop(0)
294 path = diffopts.pop(0)
293 else:
295 else:
294 path, diffopts = cmd, []
296 path, diffopts = cmd, []
295 # look for diff arguments in [diff-tools] then [merge-tools]
297 # look for diff arguments in [diff-tools] then [merge-tools]
296 if diffopts == []:
298 if diffopts == []:
297 args = ui.config('diff-tools', cmd+'.diffargs') or \
299 args = ui.config('diff-tools', cmd+'.diffargs') or \
298 ui.config('merge-tools', cmd+'.diffargs')
300 ui.config('merge-tools', cmd+'.diffargs')
299 if args:
301 if args:
300 diffopts = shlex.split(args)
302 diffopts = shlex.split(args)
301 def save(cmd, path, diffopts):
303 def save(cmd, path, diffopts):
302 '''use closure to save diff command to use'''
304 '''use closure to save diff command to use'''
303 def mydiff(ui, repo, *pats, **opts):
305 def mydiff(ui, repo, *pats, **opts):
304 return dodiff(ui, repo, path, diffopts + opts['option'],
306 return dodiff(ui, repo, path, diffopts + opts['option'],
305 pats, opts)
307 pats, opts)
306 doc = _('''\
308 doc = _('''\
307 use %(path)s to diff repository (or selected files)
309 use %(path)s to diff repository (or selected files)
308
310
309 Show differences between revisions for the specified files, using
311 Show differences between revisions for the specified files, using
310 the %(path)s program.
312 the %(path)s program.
311
313
312 When two revision arguments are given, then changes are shown
314 When two revision arguments are given, then changes are shown
313 between those revisions. If only one revision is specified then
315 between those revisions. If only one revision is specified then
314 that revision is compared to the working directory, and, when no
316 that revision is compared to the working directory, and, when no
315 revisions are specified, the working directory files are compared
317 revisions are specified, the working directory files are compared
316 to its parent.\
318 to its parent.\
317 ''') % dict(path=util.uirepr(path))
319 ''') % dict(path=util.uirepr(path))
318
320
319 # We must translate the docstring right away since it is
321 # We must translate the docstring right away since it is
320 # used as a format string. The string will unfortunately
322 # used as a format string. The string will unfortunately
321 # be translated again in commands.helpcmd and this will
323 # be translated again in commands.helpcmd and this will
322 # fail when the docstring contains non-ASCII characters.
324 # fail when the docstring contains non-ASCII characters.
323 # Decoding the string to a Unicode string here (using the
325 # Decoding the string to a Unicode string here (using the
324 # right encoding) prevents that.
326 # right encoding) prevents that.
325 mydiff.__doc__ = doc.decode(encoding.encoding)
327 mydiff.__doc__ = doc.decode(encoding.encoding)
326 return mydiff
328 return mydiff
327 cmdtable[cmd] = (save(cmd, path, diffopts),
329 cmdtable[cmd] = (save(cmd, path, diffopts),
328 cmdtable['extdiff'][1][1:],
330 cmdtable['extdiff'][1][1:],
329 _('hg %s [OPTION]... [FILE]...') % cmd)
331 _('hg %s [OPTION]... [FILE]...') % cmd)
@@ -1,156 +1,158 @@
1 # fetch.py - pull and merge remote changes
1 # fetch.py - pull and merge remote changes
2 #
2 #
3 # Copyright 2006 Vadim Gelfer <vadim.gelfer@gmail.com>
3 # Copyright 2006 Vadim Gelfer <vadim.gelfer@gmail.com>
4 #
4 #
5 # This software may be used and distributed according to the terms of the
5 # This software may be used and distributed according to the terms of the
6 # GNU General Public License version 2 or any later version.
6 # GNU General Public License version 2 or any later version.
7
7
8 '''pull, update and merge in one command (DEPRECATED)'''
8 '''pull, update and merge in one command (DEPRECATED)'''
9
9
10 from mercurial.i18n import _
10 from mercurial.i18n import _
11 from mercurial.node import nullid, short
11 from mercurial.node import nullid, short
12 from mercurial import commands, cmdutil, hg, util, error
12 from mercurial import commands, cmdutil, hg, util, error
13 from mercurial.lock import release
13 from mercurial.lock import release
14
14
15 testedwith = 'internal'
16
15 def fetch(ui, repo, source='default', **opts):
17 def fetch(ui, repo, source='default', **opts):
16 '''pull changes from a remote repository, merge new changes if needed.
18 '''pull changes from a remote repository, merge new changes if needed.
17
19
18 This finds all changes from the repository at the specified path
20 This finds all changes from the repository at the specified path
19 or URL and adds them to the local repository.
21 or URL and adds them to the local repository.
20
22
21 If the pulled changes add a new branch head, the head is
23 If the pulled changes add a new branch head, the head is
22 automatically merged, and the result of the merge is committed.
24 automatically merged, and the result of the merge is committed.
23 Otherwise, the working directory is updated to include the new
25 Otherwise, the working directory is updated to include the new
24 changes.
26 changes.
25
27
26 When a merge is needed, the working directory is first updated to
28 When a merge is needed, the working directory is first updated to
27 the newly pulled changes. Local changes are then merged into the
29 the newly pulled changes. Local changes are then merged into the
28 pulled changes. To switch the merge order, use --switch-parent.
30 pulled changes. To switch the merge order, use --switch-parent.
29
31
30 See :hg:`help dates` for a list of formats valid for -d/--date.
32 See :hg:`help dates` for a list of formats valid for -d/--date.
31
33
32 Returns 0 on success.
34 Returns 0 on success.
33 '''
35 '''
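A brief usage sketch (the source URL is a placeholder)::

    hg fetch https://example.com/repo          # pull, then update or merge and commit
    hg fetch -r REV --switch-parent default    # pull one revision, merging with parents swapped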
34
36
35 date = opts.get('date')
37 date = opts.get('date')
36 if date:
38 if date:
37 opts['date'] = util.parsedate(date)
39 opts['date'] = util.parsedate(date)
38
40
39 parent, p2 = repo.dirstate.parents()
41 parent, p2 = repo.dirstate.parents()
40 branch = repo.dirstate.branch()
42 branch = repo.dirstate.branch()
41 try:
43 try:
42 branchnode = repo.branchtip(branch)
44 branchnode = repo.branchtip(branch)
43 except error.RepoLookupError:
45 except error.RepoLookupError:
44 branchnode = None
46 branchnode = None
45 if parent != branchnode:
47 if parent != branchnode:
46 raise util.Abort(_('working dir not at branch tip '
48 raise util.Abort(_('working dir not at branch tip '
47 '(use "hg update" to check out branch tip)'))
49 '(use "hg update" to check out branch tip)'))
48
50
49 if p2 != nullid:
51 if p2 != nullid:
50 raise util.Abort(_('outstanding uncommitted merge'))
52 raise util.Abort(_('outstanding uncommitted merge'))
51
53
52 wlock = lock = None
54 wlock = lock = None
53 try:
55 try:
54 wlock = repo.wlock()
56 wlock = repo.wlock()
55 lock = repo.lock()
57 lock = repo.lock()
56 mod, add, rem, del_ = repo.status()[:4]
58 mod, add, rem, del_ = repo.status()[:4]
57
59
58 if mod or add or rem:
60 if mod or add or rem:
59 raise util.Abort(_('outstanding uncommitted changes'))
61 raise util.Abort(_('outstanding uncommitted changes'))
60 if del_:
62 if del_:
61 raise util.Abort(_('working directory is missing some files'))
63 raise util.Abort(_('working directory is missing some files'))
62 bheads = repo.branchheads(branch)
64 bheads = repo.branchheads(branch)
63 bheads = [head for head in bheads if len(repo[head].children()) == 0]
65 bheads = [head for head in bheads if len(repo[head].children()) == 0]
64 if len(bheads) > 1:
66 if len(bheads) > 1:
65 raise util.Abort(_('multiple heads in this branch '
67 raise util.Abort(_('multiple heads in this branch '
66 '(use "hg heads ." and "hg merge" to merge)'))
68 '(use "hg heads ." and "hg merge" to merge)'))
67
69
68 other = hg.peer(repo, opts, ui.expandpath(source))
70 other = hg.peer(repo, opts, ui.expandpath(source))
69 ui.status(_('pulling from %s\n') %
71 ui.status(_('pulling from %s\n') %
70 util.hidepassword(ui.expandpath(source)))
72 util.hidepassword(ui.expandpath(source)))
71 revs = None
73 revs = None
72 if opts['rev']:
74 if opts['rev']:
73 try:
75 try:
74 revs = [other.lookup(rev) for rev in opts['rev']]
76 revs = [other.lookup(rev) for rev in opts['rev']]
75 except error.CapabilityError:
77 except error.CapabilityError:
76 err = _("Other repository doesn't support revision lookup, "
78 err = _("Other repository doesn't support revision lookup, "
77 "so a rev cannot be specified.")
79 "so a rev cannot be specified.")
78 raise util.Abort(err)
80 raise util.Abort(err)
79
81
80 # Are there any changes at all?
82 # Are there any changes at all?
81 modheads = repo.pull(other, heads=revs)
83 modheads = repo.pull(other, heads=revs)
82 if modheads == 0:
84 if modheads == 0:
83 return 0
85 return 0
84
86
85 # Is this a simple fast-forward along the current branch?
87 # Is this a simple fast-forward along the current branch?
86 newheads = repo.branchheads(branch)
88 newheads = repo.branchheads(branch)
87 newchildren = repo.changelog.nodesbetween([parent], newheads)[2]
89 newchildren = repo.changelog.nodesbetween([parent], newheads)[2]
88 if len(newheads) == 1 and len(newchildren):
90 if len(newheads) == 1 and len(newchildren):
89 if newchildren[0] != parent:
91 if newchildren[0] != parent:
90 return hg.update(repo, newchildren[0])
92 return hg.update(repo, newchildren[0])
91 else:
93 else:
92 return 0
94 return 0
93
95
94 # Is there more than one additional branch head?
96 # Is there more than one additional branch head?
95 newchildren = [n for n in newchildren if n != parent]
97 newchildren = [n for n in newchildren if n != parent]
96 newparent = parent
98 newparent = parent
97 if newchildren:
99 if newchildren:
98 newparent = newchildren[0]
100 newparent = newchildren[0]
99 hg.clean(repo, newparent)
101 hg.clean(repo, newparent)
100 newheads = [n for n in newheads if n != newparent]
102 newheads = [n for n in newheads if n != newparent]
101 if len(newheads) > 1:
103 if len(newheads) > 1:
102 ui.status(_('not merging with %d other new branch heads '
104 ui.status(_('not merging with %d other new branch heads '
103 '(use "hg heads ." and "hg merge" to merge them)\n') %
105 '(use "hg heads ." and "hg merge" to merge them)\n') %
104 (len(newheads) - 1))
106 (len(newheads) - 1))
105 return 1
107 return 1
106
108
107 if not newheads:
109 if not newheads:
108 return 0
110 return 0
109
111
110 # Otherwise, let's merge.
112 # Otherwise, let's merge.
111 err = False
113 err = False
112 if newheads:
114 if newheads:
113 # By default, we consider the repository we're pulling
115 # By default, we consider the repository we're pulling
114 # *from* as authoritative, so we merge our changes into
116 # *from* as authoritative, so we merge our changes into
115 # theirs.
117 # theirs.
116 if opts['switch_parent']:
118 if opts['switch_parent']:
117 firstparent, secondparent = newparent, newheads[0]
119 firstparent, secondparent = newparent, newheads[0]
118 else:
120 else:
119 firstparent, secondparent = newheads[0], newparent
121 firstparent, secondparent = newheads[0], newparent
120 ui.status(_('updating to %d:%s\n') %
122 ui.status(_('updating to %d:%s\n') %
121 (repo.changelog.rev(firstparent),
123 (repo.changelog.rev(firstparent),
122 short(firstparent)))
124 short(firstparent)))
123 hg.clean(repo, firstparent)
125 hg.clean(repo, firstparent)
124 ui.status(_('merging with %d:%s\n') %
126 ui.status(_('merging with %d:%s\n') %
125 (repo.changelog.rev(secondparent), short(secondparent)))
127 (repo.changelog.rev(secondparent), short(secondparent)))
126 err = hg.merge(repo, secondparent, remind=False)
128 err = hg.merge(repo, secondparent, remind=False)
127
129
128 if not err:
130 if not err:
129 # we don't translate commit messages
131 # we don't translate commit messages
130 message = (cmdutil.logmessage(ui, opts) or
132 message = (cmdutil.logmessage(ui, opts) or
131 ('Automated merge with %s' %
133 ('Automated merge with %s' %
132 util.removeauth(other.url())))
134 util.removeauth(other.url())))
133 editor = cmdutil.commiteditor
135 editor = cmdutil.commiteditor
134 if opts.get('force_editor') or opts.get('edit'):
136 if opts.get('force_editor') or opts.get('edit'):
135 editor = cmdutil.commitforceeditor
137 editor = cmdutil.commitforceeditor
136 n = repo.commit(message, opts['user'], opts['date'], editor=editor)
138 n = repo.commit(message, opts['user'], opts['date'], editor=editor)
137 ui.status(_('new changeset %d:%s merges remote changes '
139 ui.status(_('new changeset %d:%s merges remote changes '
138 'with local\n') % (repo.changelog.rev(n),
140 'with local\n') % (repo.changelog.rev(n),
139 short(n)))
141 short(n)))
140
142
141 return err
143 return err
142
144
143 finally:
145 finally:
144 release(lock, wlock)
146 release(lock, wlock)
145
147
146 cmdtable = {
148 cmdtable = {
147 'fetch':
149 'fetch':
148 (fetch,
150 (fetch,
149 [('r', 'rev', [],
151 [('r', 'rev', [],
150 _('a specific revision you would like to pull'), _('REV')),
152 _('a specific revision you would like to pull'), _('REV')),
151 ('e', 'edit', None, _('edit commit message')),
153 ('e', 'edit', None, _('edit commit message')),
152 ('', 'force-editor', None, _('edit commit message (DEPRECATED)')),
154 ('', 'force-editor', None, _('edit commit message (DEPRECATED)')),
153 ('', 'switch-parent', None, _('switch parents when merging')),
155 ('', 'switch-parent', None, _('switch parents when merging')),
154 ] + commands.commitopts + commands.commitopts2 + commands.remoteopts,
156 ] + commands.commitopts + commands.commitopts2 + commands.remoteopts,
155 _('hg fetch [SOURCE]')),
157 _('hg fetch [SOURCE]')),
156 }
158 }
@@ -1,289 +1,289 @@
1 # Copyright 2005, 2006 Benoit Boissinot <benoit.boissinot@ens-lyon.org>
1 # Copyright 2005, 2006 Benoit Boissinot <benoit.boissinot@ens-lyon.org>
2 #
2 #
3 # This software may be used and distributed according to the terms of the
3 # This software may be used and distributed according to the terms of the
4 # GNU General Public License version 2 or any later version.
4 # GNU General Public License version 2 or any later version.
5
5
6 '''commands to sign and verify changesets'''
6 '''commands to sign and verify changesets'''
7
7
8 import os, tempfile, binascii
8 import os, tempfile, binascii
9 from mercurial import util, commands, match, cmdutil
9 from mercurial import util, commands, match, cmdutil
10 from mercurial import node as hgnode
10 from mercurial import node as hgnode
11 from mercurial.i18n import _
11 from mercurial.i18n import _
12
12
13 cmdtable = {}
13 cmdtable = {}
14 command = cmdutil.command(cmdtable)
14 command = cmdutil.command(cmdtable)
15 testedwith = 'internal'
15
16
16 class gpg(object):
17 class gpg(object):
17 def __init__(self, path, key=None):
18 def __init__(self, path, key=None):
18 self.path = path
19 self.path = path
19 self.key = (key and " --local-user \"%s\"" % key) or ""
20 self.key = (key and " --local-user \"%s\"" % key) or ""
20
21
21 def sign(self, data):
22 def sign(self, data):
22 gpgcmd = "%s --sign --detach-sign%s" % (self.path, self.key)
23 gpgcmd = "%s --sign --detach-sign%s" % (self.path, self.key)
23 return util.filter(data, gpgcmd)
24 return util.filter(data, gpgcmd)
24
25
25 def verify(self, data, sig):
26 def verify(self, data, sig):
26 """ returns the good and bad signatures"""
27 """ returns the good and bad signatures"""
27 sigfile = datafile = None
28 sigfile = datafile = None
28 try:
29 try:
29 # create temporary files
30 # create temporary files
30 fd, sigfile = tempfile.mkstemp(prefix="hg-gpg-", suffix=".sig")
31 fd, sigfile = tempfile.mkstemp(prefix="hg-gpg-", suffix=".sig")
31 fp = os.fdopen(fd, 'wb')
32 fp = os.fdopen(fd, 'wb')
32 fp.write(sig)
33 fp.write(sig)
33 fp.close()
34 fp.close()
34 fd, datafile = tempfile.mkstemp(prefix="hg-gpg-", suffix=".txt")
35 fd, datafile = tempfile.mkstemp(prefix="hg-gpg-", suffix=".txt")
35 fp = os.fdopen(fd, 'wb')
36 fp = os.fdopen(fd, 'wb')
36 fp.write(data)
37 fp.write(data)
37 fp.close()
38 fp.close()
38 gpgcmd = ("%s --logger-fd 1 --status-fd 1 --verify "
39 gpgcmd = ("%s --logger-fd 1 --status-fd 1 --verify "
39 "\"%s\" \"%s\"" % (self.path, sigfile, datafile))
40 "\"%s\" \"%s\"" % (self.path, sigfile, datafile))
40 ret = util.filter("", gpgcmd)
41 ret = util.filter("", gpgcmd)
41 finally:
42 finally:
42 for f in (sigfile, datafile):
43 for f in (sigfile, datafile):
43 try:
44 try:
44 if f:
45 if f:
45 os.unlink(f)
46 os.unlink(f)
46 except OSError:
47 except OSError:
47 pass
48 pass
48 keys = []
49 keys = []
49 key, fingerprint = None, None
50 key, fingerprint = None, None
50 err = ""
51 err = ""
51 for l in ret.splitlines():
52 for l in ret.splitlines():
52 # see DETAILS in the gnupg documentation
53 # see DETAILS in the gnupg documentation
53 # filter the logger output
54 # filter the logger output
54 if not l.startswith("[GNUPG:]"):
55 if not l.startswith("[GNUPG:]"):
55 continue
56 continue
56 l = l[9:]
57 l = l[9:]
57 if l.startswith("ERRSIG"):
58 if l.startswith("ERRSIG"):
58 err = _("error while verifying signature")
59 err = _("error while verifying signature")
59 break
60 break
60 elif l.startswith("VALIDSIG"):
61 elif l.startswith("VALIDSIG"):
61 # fingerprint of the primary key
62 # fingerprint of the primary key
62 fingerprint = l.split()[10]
63 fingerprint = l.split()[10]
63 elif (l.startswith("GOODSIG") or
64 elif (l.startswith("GOODSIG") or
64 l.startswith("EXPSIG") or
65 l.startswith("EXPSIG") or
65 l.startswith("EXPKEYSIG") or
66 l.startswith("EXPKEYSIG") or
66 l.startswith("BADSIG")):
67 l.startswith("BADSIG")):
67 if key is not None:
68 if key is not None:
68 keys.append(key + [fingerprint])
69 keys.append(key + [fingerprint])
69 key = l.split(" ", 2)
70 key = l.split(" ", 2)
70 fingerprint = None
71 fingerprint = None
71 if err:
72 if err:
72 return err, []
73 return err, []
73 if key is not None:
74 if key is not None:
74 keys.append(key + [fingerprint])
75 keys.append(key + [fingerprint])
75 return err, keys
76 return err, keys
76
77
77 def newgpg(ui, **opts):
78 def newgpg(ui, **opts):
78 """create a new gpg instance"""
79 """create a new gpg instance"""
79 gpgpath = ui.config("gpg", "cmd", "gpg")
80 gpgpath = ui.config("gpg", "cmd", "gpg")
80 gpgkey = opts.get('key')
81 gpgkey = opts.get('key')
81 if not gpgkey:
82 if not gpgkey:
82 gpgkey = ui.config("gpg", "key", None)
83 gpgkey = ui.config("gpg", "key", None)
83 return gpg(gpgpath, gpgkey)
84 return gpg(gpgpath, gpgkey)
84
85
85 def sigwalk(repo):
86 def sigwalk(repo):
86 """
87 """
87 walk over every sig, yielding a pair
88 walk over every sig, yielding a pair
88 ((node, version, sig), (filename, linenumber))
89 ((node, version, sig), (filename, linenumber))
89 """
90 """
90 def parsefile(fileiter, context):
91 def parsefile(fileiter, context):
91 ln = 1
92 ln = 1
92 for l in fileiter:
93 for l in fileiter:
93 if not l:
94 if not l:
94 continue
95 continue
95 yield (l.split(" ", 2), (context, ln))
96 yield (l.split(" ", 2), (context, ln))
96 ln += 1
97 ln += 1
97
98
98 # read the heads
99 # read the heads
99 fl = repo.file(".hgsigs")
100 fl = repo.file(".hgsigs")
100 for r in reversed(fl.heads()):
101 for r in reversed(fl.heads()):
101 fn = ".hgsigs|%s" % hgnode.short(r)
102 fn = ".hgsigs|%s" % hgnode.short(r)
102 for item in parsefile(fl.read(r).splitlines(), fn):
103 for item in parsefile(fl.read(r).splitlines(), fn):
103 yield item
104 yield item
104 try:
105 try:
105 # read local signatures
106 # read local signatures
106 fn = "localsigs"
107 fn = "localsigs"
107 for item in parsefile(repo.opener(fn), fn):
108 for item in parsefile(repo.opener(fn), fn):
108 yield item
109 yield item
109 except IOError:
110 except IOError:
110 pass
111 pass
111
112
112 def getkeys(ui, repo, mygpg, sigdata, context):
113 def getkeys(ui, repo, mygpg, sigdata, context):
113 """get the keys that signed the data"""
114 """get the keys that signed the data"""
114 fn, ln = context
115 fn, ln = context
115 node, version, sig = sigdata
116 node, version, sig = sigdata
116 prefix = "%s:%d" % (fn, ln)
117 prefix = "%s:%d" % (fn, ln)
117 node = hgnode.bin(node)
118 node = hgnode.bin(node)
118
119
119 data = node2txt(repo, node, version)
120 data = node2txt(repo, node, version)
120 sig = binascii.a2b_base64(sig)
121 sig = binascii.a2b_base64(sig)
121 err, keys = mygpg.verify(data, sig)
122 err, keys = mygpg.verify(data, sig)
122 if err:
123 if err:
123 ui.warn("%s:%d %s\n" % (fn, ln , err))
124 ui.warn("%s:%d %s\n" % (fn, ln , err))
124 return None
125 return None
125
126
126 validkeys = []
127 validkeys = []
127 # warn for expired key and/or sigs
128 # warn for expired key and/or sigs
128 for key in keys:
129 for key in keys:
129 if key[0] == "BADSIG":
130 if key[0] == "BADSIG":
130 ui.write(_("%s Bad signature from \"%s\"\n") % (prefix, key[2]))
131 ui.write(_("%s Bad signature from \"%s\"\n") % (prefix, key[2]))
131 continue
132 continue
132 if key[0] == "EXPSIG":
133 if key[0] == "EXPSIG":
133 ui.write(_("%s Note: Signature has expired"
134 ui.write(_("%s Note: Signature has expired"
134 " (signed by: \"%s\")\n") % (prefix, key[2]))
135 " (signed by: \"%s\")\n") % (prefix, key[2]))
135 elif key[0] == "EXPKEYSIG":
136 elif key[0] == "EXPKEYSIG":
136 ui.write(_("%s Note: This key has expired"
137 ui.write(_("%s Note: This key has expired"
137 " (signed by: \"%s\")\n") % (prefix, key[2]))
138 " (signed by: \"%s\")\n") % (prefix, key[2]))
138 validkeys.append((key[1], key[2], key[3]))
139 validkeys.append((key[1], key[2], key[3]))
139 return validkeys
140 return validkeys
140
141
141 @command("sigs", [], _('hg sigs'))
142 @command("sigs", [], _('hg sigs'))
142 def sigs(ui, repo):
143 def sigs(ui, repo):
143 """list signed changesets"""
144 """list signed changesets"""
144 mygpg = newgpg(ui)
145 mygpg = newgpg(ui)
145 revs = {}
146 revs = {}
146
147
147 for data, context in sigwalk(repo):
148 for data, context in sigwalk(repo):
148 node, version, sig = data
149 node, version, sig = data
149 fn, ln = context
150 fn, ln = context
150 try:
151 try:
151 n = repo.lookup(node)
152 n = repo.lookup(node)
152 except KeyError:
153 except KeyError:
153 ui.warn(_("%s:%d node does not exist\n") % (fn, ln))
154 ui.warn(_("%s:%d node does not exist\n") % (fn, ln))
154 continue
155 continue
155 r = repo.changelog.rev(n)
156 r = repo.changelog.rev(n)
156 keys = getkeys(ui, repo, mygpg, data, context)
157 keys = getkeys(ui, repo, mygpg, data, context)
157 if not keys:
158 if not keys:
158 continue
159 continue
159 revs.setdefault(r, [])
160 revs.setdefault(r, [])
160 revs[r].extend(keys)
161 revs[r].extend(keys)
161 for rev in sorted(revs, reverse=True):
162 for rev in sorted(revs, reverse=True):
162 for k in revs[rev]:
163 for k in revs[rev]:
163 r = "%5d:%s" % (rev, hgnode.hex(repo.changelog.node(rev)))
164 r = "%5d:%s" % (rev, hgnode.hex(repo.changelog.node(rev)))
164 ui.write("%-30s %s\n" % (keystr(ui, k), r))
165 ui.write("%-30s %s\n" % (keystr(ui, k), r))
165
166
166 @command("sigcheck", [], _('hg sigcheck REVISION'))
167 @command("sigcheck", [], _('hg sigcheck REVISION'))
167 def check(ui, repo, rev):
168 def check(ui, repo, rev):
168 """verify all the signatures there may be for a particular revision"""
169 """verify all the signatures there may be for a particular revision"""
169 mygpg = newgpg(ui)
170 mygpg = newgpg(ui)
170 rev = repo.lookup(rev)
171 rev = repo.lookup(rev)
171 hexrev = hgnode.hex(rev)
172 hexrev = hgnode.hex(rev)
172 keys = []
173 keys = []
173
174
174 for data, context in sigwalk(repo):
175 for data, context in sigwalk(repo):
175 node, version, sig = data
176 node, version, sig = data
176 if node == hexrev:
177 if node == hexrev:
177 k = getkeys(ui, repo, mygpg, data, context)
178 k = getkeys(ui, repo, mygpg, data, context)
178 if k:
179 if k:
179 keys.extend(k)
180 keys.extend(k)
180
181
181 if not keys:
182 if not keys:
182 ui.write(_("No valid signature for %s\n") % hgnode.short(rev))
183 ui.write(_("No valid signature for %s\n") % hgnode.short(rev))
183 return
184 return
184
185
185 # print summary
186 # print summary
186 ui.write("%s is signed by:\n" % hgnode.short(rev))
187 ui.write("%s is signed by:\n" % hgnode.short(rev))
187 for key in keys:
188 for key in keys:
188 ui.write(" %s\n" % keystr(ui, key))
189 ui.write(" %s\n" % keystr(ui, key))
189
190
190 def keystr(ui, key):
191 def keystr(ui, key):
191     """associate a string with a key (username, comment)"""
192     """associate a string with a key (username, comment)"""
192 keyid, user, fingerprint = key
193 keyid, user, fingerprint = key
193 comment = ui.config("gpg", fingerprint, None)
194 comment = ui.config("gpg", fingerprint, None)
194 if comment:
195 if comment:
195 return "%s (%s)" % (user, comment)
196 return "%s (%s)" % (user, comment)
196 else:
197 else:
197 return user
198 return user
198
199
199 @command("sign",
200 @command("sign",
200 [('l', 'local', None, _('make the signature local')),
201 [('l', 'local', None, _('make the signature local')),
201 ('f', 'force', None, _('sign even if the sigfile is modified')),
202 ('f', 'force', None, _('sign even if the sigfile is modified')),
202 ('', 'no-commit', None, _('do not commit the sigfile after signing')),
203 ('', 'no-commit', None, _('do not commit the sigfile after signing')),
203 ('k', 'key', '',
204 ('k', 'key', '',
204 _('the key id to sign with'), _('ID')),
205 _('the key id to sign with'), _('ID')),
205 ('m', 'message', '',
206 ('m', 'message', '',
206 _('commit message'), _('TEXT')),
207 _('commit message'), _('TEXT')),
207 ] + commands.commitopts2,
208 ] + commands.commitopts2,
208 _('hg sign [OPTION]... [REVISION]...'))
209 _('hg sign [OPTION]... [REVISION]...'))
209 def sign(ui, repo, *revs, **opts):
210 def sign(ui, repo, *revs, **opts):
210 """add a signature for the current or given revision
211 """add a signature for the current or given revision
211
212
212 If no revision is given, the parent of the working directory is used,
213 If no revision is given, the parent of the working directory is used,
213 or tip if no revision is checked out.
214 or tip if no revision is checked out.
214
215
215 See :hg:`help dates` for a list of formats valid for -d/--date.
216 See :hg:`help dates` for a list of formats valid for -d/--date.
216 """
217 """
217
218
218 mygpg = newgpg(ui, **opts)
219 mygpg = newgpg(ui, **opts)
219 sigver = "0"
220 sigver = "0"
220 sigmessage = ""
221 sigmessage = ""
221
222
222 date = opts.get('date')
223 date = opts.get('date')
223 if date:
224 if date:
224 opts['date'] = util.parsedate(date)
225 opts['date'] = util.parsedate(date)
225
226
226 if revs:
227 if revs:
227 nodes = [repo.lookup(n) for n in revs]
228 nodes = [repo.lookup(n) for n in revs]
228 else:
229 else:
229 nodes = [node for node in repo.dirstate.parents()
230 nodes = [node for node in repo.dirstate.parents()
230 if node != hgnode.nullid]
231 if node != hgnode.nullid]
231 if len(nodes) > 1:
232 if len(nodes) > 1:
232 raise util.Abort(_('uncommitted merge - please provide a '
233 raise util.Abort(_('uncommitted merge - please provide a '
233 'specific revision'))
234 'specific revision'))
234 if not nodes:
235 if not nodes:
235 nodes = [repo.changelog.tip()]
236 nodes = [repo.changelog.tip()]
236
237
237 for n in nodes:
238 for n in nodes:
238 hexnode = hgnode.hex(n)
239 hexnode = hgnode.hex(n)
239 ui.write(_("Signing %d:%s\n") % (repo.changelog.rev(n),
240 ui.write(_("Signing %d:%s\n") % (repo.changelog.rev(n),
240 hgnode.short(n)))
241 hgnode.short(n)))
241 # build data
242 # build data
242 data = node2txt(repo, n, sigver)
243 data = node2txt(repo, n, sigver)
243 sig = mygpg.sign(data)
244 sig = mygpg.sign(data)
244 if not sig:
245 if not sig:
245 raise util.Abort(_("error while signing"))
246 raise util.Abort(_("error while signing"))
246 sig = binascii.b2a_base64(sig)
247 sig = binascii.b2a_base64(sig)
247 sig = sig.replace("\n", "")
248 sig = sig.replace("\n", "")
248 sigmessage += "%s %s %s\n" % (hexnode, sigver, sig)
249 sigmessage += "%s %s %s\n" % (hexnode, sigver, sig)
249
250
250 # write it
251 # write it
251 if opts['local']:
252 if opts['local']:
252 repo.opener.append("localsigs", sigmessage)
253 repo.opener.append("localsigs", sigmessage)
253 return
254 return
254
255
255 msigs = match.exact(repo.root, '', ['.hgsigs'])
256 msigs = match.exact(repo.root, '', ['.hgsigs'])
256 s = repo.status(match=msigs, unknown=True, ignored=True)[:6]
257 s = repo.status(match=msigs, unknown=True, ignored=True)[:6]
257 if util.any(s) and not opts["force"]:
258 if util.any(s) and not opts["force"]:
258 raise util.Abort(_("working copy of .hgsigs is changed "
259 raise util.Abort(_("working copy of .hgsigs is changed "
259 "(please commit .hgsigs manually "
260 "(please commit .hgsigs manually "
260 "or use --force)"))
261 "or use --force)"))
261
262
262 sigsfile = repo.wfile(".hgsigs", "ab")
263 sigsfile = repo.wfile(".hgsigs", "ab")
263 sigsfile.write(sigmessage)
264 sigsfile.write(sigmessage)
264 sigsfile.close()
265 sigsfile.close()
265
266
266 if '.hgsigs' not in repo.dirstate:
267 if '.hgsigs' not in repo.dirstate:
267 repo[None].add([".hgsigs"])
268 repo[None].add([".hgsigs"])
268
269
269 if opts["no_commit"]:
270 if opts["no_commit"]:
270 return
271 return
271
272
272 message = opts['message']
273 message = opts['message']
273 if not message:
274 if not message:
274 # we don't translate commit messages
275 # we don't translate commit messages
275 message = "\n".join(["Added signature for changeset %s"
276 message = "\n".join(["Added signature for changeset %s"
276 % hgnode.short(n)
277 % hgnode.short(n)
277 for n in nodes])
278 for n in nodes])
278 try:
279 try:
279 repo.commit(message, opts['user'], opts['date'], match=msigs)
280 repo.commit(message, opts['user'], opts['date'], match=msigs)
280 except ValueError, inst:
281 except ValueError, inst:
281 raise util.Abort(str(inst))
282 raise util.Abort(str(inst))
282
283
283 def node2txt(repo, node, ver):
284 def node2txt(repo, node, ver):
284 """map a manifest into some text"""
285 """map a manifest into some text"""
285 if ver == "0":
286 if ver == "0":
286 return "%s\n" % hgnode.hex(node)
287 return "%s\n" % hgnode.hex(node)
287 else:
288 else:
288 raise util.Abort(_("unknown signature version"))
289 raise util.Abort(_("unknown signature version"))
289
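
For reference, with signature version "0" the data handed to GnuPG by ``sign()`` above is just the changeset hash followed by a newline (see ``node2txt()``), and every record appended to ``.hgsigs`` or ``localsigs`` is one line of the form (placeholders, not real values)::

    <40-hex changeset node> 0 <base64-encoded detached signature, newlines stripped>
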
@@ -1,562 +1,563 b''
1 # ASCII graph log extension for Mercurial
1 # ASCII graph log extension for Mercurial
2 #
2 #
3 # Copyright 2007 Joel Rosdahl <joel@rosdahl.net>
3 # Copyright 2007 Joel Rosdahl <joel@rosdahl.net>
4 #
4 #
5 # This software may be used and distributed according to the terms of the
5 # This software may be used and distributed according to the terms of the
6 # GNU General Public License version 2 or any later version.
6 # GNU General Public License version 2 or any later version.
7
7
8 '''command to view revision graphs from a shell
8 '''command to view revision graphs from a shell
9
9
10 This extension adds a --graph option to the incoming, outgoing and log
10 This extension adds a --graph option to the incoming, outgoing and log
11 commands. When this option is given, an ASCII representation of the
11 commands. When this option is given, an ASCII representation of the
12 revision graph is also shown.
12 revision graph is also shown.
13 '''
13 '''
14
14
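
Like the other bundled extensions, graphlog is enabled from the ``[extensions]`` section of an hgrc; a minimal sketch::

    [extensions]
    graphlog =
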
15 from mercurial.cmdutil import show_changeset
15 from mercurial.cmdutil import show_changeset
16 from mercurial.i18n import _
16 from mercurial.i18n import _
17 from mercurial.node import nullrev
17 from mercurial.node import nullrev
18 from mercurial import cmdutil, commands, extensions, scmutil
18 from mercurial import cmdutil, commands, extensions, scmutil
19 from mercurial import hg, util, graphmod, templatekw, revset
19 from mercurial import hg, util, graphmod, templatekw, revset
20
20
21 cmdtable = {}
21 cmdtable = {}
22 command = cmdutil.command(cmdtable)
22 command = cmdutil.command(cmdtable)
23 testedwith = 'internal'
23
24
24 ASCIIDATA = 'ASC'
25 ASCIIDATA = 'ASC'
25
26
26 def asciiedges(type, char, lines, seen, rev, parents):
27 def asciiedges(type, char, lines, seen, rev, parents):
27 """adds edge info to changelog DAG walk suitable for ascii()"""
28 """adds edge info to changelog DAG walk suitable for ascii()"""
28 if rev not in seen:
29 if rev not in seen:
29 seen.append(rev)
30 seen.append(rev)
30 nodeidx = seen.index(rev)
31 nodeidx = seen.index(rev)
31
32
32 knownparents = []
33 knownparents = []
33 newparents = []
34 newparents = []
34 for parent in parents:
35 for parent in parents:
35 if parent in seen:
36 if parent in seen:
36 knownparents.append(parent)
37 knownparents.append(parent)
37 else:
38 else:
38 newparents.append(parent)
39 newparents.append(parent)
39
40
40 ncols = len(seen)
41 ncols = len(seen)
41 nextseen = seen[:]
42 nextseen = seen[:]
42 nextseen[nodeidx:nodeidx + 1] = newparents
43 nextseen[nodeidx:nodeidx + 1] = newparents
43 edges = [(nodeidx, nextseen.index(p)) for p in knownparents]
44 edges = [(nodeidx, nextseen.index(p)) for p in knownparents]
44
45
45 while len(newparents) > 2:
46 while len(newparents) > 2:
46 # ascii() only knows how to add or remove a single column between two
47 # ascii() only knows how to add or remove a single column between two
47 # calls. Nodes with more than two parents break this constraint so we
48 # calls. Nodes with more than two parents break this constraint so we
48 # introduce intermediate expansion lines to grow the active node list
49 # introduce intermediate expansion lines to grow the active node list
49 # slowly.
50 # slowly.
50 edges.append((nodeidx, nodeidx))
51 edges.append((nodeidx, nodeidx))
51 edges.append((nodeidx, nodeidx + 1))
52 edges.append((nodeidx, nodeidx + 1))
52 nmorecols = 1
53 nmorecols = 1
53 yield (type, char, lines, (nodeidx, edges, ncols, nmorecols))
54 yield (type, char, lines, (nodeidx, edges, ncols, nmorecols))
54 char = '\\'
55 char = '\\'
55 lines = []
56 lines = []
56 nodeidx += 1
57 nodeidx += 1
57 ncols += 1
58 ncols += 1
58 edges = []
59 edges = []
59 del newparents[0]
60 del newparents[0]
60
61
61 if len(newparents) > 0:
62 if len(newparents) > 0:
62 edges.append((nodeidx, nodeidx))
63 edges.append((nodeidx, nodeidx))
63 if len(newparents) > 1:
64 if len(newparents) > 1:
64 edges.append((nodeidx, nodeidx + 1))
65 edges.append((nodeidx, nodeidx + 1))
65 nmorecols = len(nextseen) - ncols
66 nmorecols = len(nextseen) - ncols
66 seen[:] = nextseen
67 seen[:] = nextseen
67 yield (type, char, lines, (nodeidx, edges, ncols, nmorecols))
68 yield (type, char, lines, (nodeidx, edges, ncols, nmorecols))
68
69
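
A small sketch of the expansion described in the comments above, with hypothetical revision numbers: a changeset with three previously unseen parents is emitted as two rows so that each row grows the graph by at most one column (the type tag is passed through unchanged)::

    seen = [5]
    rows = list(asciiedges('C', 'o', ['rev 5'], seen, 5, [4, 3, 2]))
    # rows[0] draws the node itself ('o') with edges [(0, 0), (0, 1)] and
    # one added column; rows[1] is the intermediate expansion row ('\\')
    # that adds the remaining column; seen is now [4, 3, 2].
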
69 def fix_long_right_edges(edges):
70 def fix_long_right_edges(edges):
70 for (i, (start, end)) in enumerate(edges):
71 for (i, (start, end)) in enumerate(edges):
71 if end > start:
72 if end > start:
72 edges[i] = (start, end + 1)
73 edges[i] = (start, end + 1)
73
74
74 def get_nodeline_edges_tail(
75 def get_nodeline_edges_tail(
75 node_index, p_node_index, n_columns, n_columns_diff, p_diff, fix_tail):
76 node_index, p_node_index, n_columns, n_columns_diff, p_diff, fix_tail):
76 if fix_tail and n_columns_diff == p_diff and n_columns_diff != 0:
77 if fix_tail and n_columns_diff == p_diff and n_columns_diff != 0:
77 # Still going in the same non-vertical direction.
78 # Still going in the same non-vertical direction.
78 if n_columns_diff == -1:
79 if n_columns_diff == -1:
79 start = max(node_index + 1, p_node_index)
80 start = max(node_index + 1, p_node_index)
80 tail = ["|", " "] * (start - node_index - 1)
81 tail = ["|", " "] * (start - node_index - 1)
81 tail.extend(["/", " "] * (n_columns - start))
82 tail.extend(["/", " "] * (n_columns - start))
82 return tail
83 return tail
83 else:
84 else:
84 return ["\\", " "] * (n_columns - node_index - 1)
85 return ["\\", " "] * (n_columns - node_index - 1)
85 else:
86 else:
86 return ["|", " "] * (n_columns - node_index - 1)
87 return ["|", " "] * (n_columns - node_index - 1)
87
88
88 def draw_edges(edges, nodeline, interline):
89 def draw_edges(edges, nodeline, interline):
89 for (start, end) in edges:
90 for (start, end) in edges:
90 if start == end + 1:
91 if start == end + 1:
91 interline[2 * end + 1] = "/"
92 interline[2 * end + 1] = "/"
92 elif start == end - 1:
93 elif start == end - 1:
93 interline[2 * start + 1] = "\\"
94 interline[2 * start + 1] = "\\"
94 elif start == end:
95 elif start == end:
95 interline[2 * start] = "|"
96 interline[2 * start] = "|"
96 else:
97 else:
97 if 2 * end >= len(nodeline):
98 if 2 * end >= len(nodeline):
98 continue
99 continue
99 nodeline[2 * end] = "+"
100 nodeline[2 * end] = "+"
100 if start > end:
101 if start > end:
101 (start, end) = (end, start)
102 (start, end) = (end, start)
102 for i in range(2 * start + 1, 2 * end):
103 for i in range(2 * start + 1, 2 * end):
103 if nodeline[i] != "+":
104 if nodeline[i] != "+":
104 nodeline[i] = "-"
105 nodeline[i] = "-"
105
106
106 def get_padding_line(ni, n_columns, edges):
107 def get_padding_line(ni, n_columns, edges):
107 line = []
108 line = []
108 line.extend(["|", " "] * ni)
109 line.extend(["|", " "] * ni)
109 if (ni, ni - 1) in edges or (ni, ni) in edges:
110 if (ni, ni - 1) in edges or (ni, ni) in edges:
110 # (ni, ni - 1) (ni, ni)
111 # (ni, ni - 1) (ni, ni)
111 # | | | | | | | |
112 # | | | | | | | |
112 # +---o | | o---+
113 # +---o | | o---+
113 # | | c | | c | |
114 # | | c | | c | |
114 # | |/ / | |/ /
115 # | |/ / | |/ /
115 # | | | | | |
116 # | | | | | |
116 c = "|"
117 c = "|"
117 else:
118 else:
118 c = " "
119 c = " "
119 line.extend([c, " "])
120 line.extend([c, " "])
120 line.extend(["|", " "] * (n_columns - ni - 1))
121 line.extend(["|", " "] * (n_columns - ni - 1))
121 return line
122 return line
122
123
123 def asciistate():
124 def asciistate():
124 """returns the initial value for the "state" argument to ascii()"""
125 """returns the initial value for the "state" argument to ascii()"""
125 return [0, 0]
126 return [0, 0]
126
127
127 def ascii(ui, state, type, char, text, coldata):
128 def ascii(ui, state, type, char, text, coldata):
128 """prints an ASCII graph of the DAG
129 """prints an ASCII graph of the DAG
129
130
130 takes the following arguments (one call per node in the graph):
131 takes the following arguments (one call per node in the graph):
131
132
132 - ui to write to
133 - ui to write to
133 - Somewhere to keep the needed state in (init to asciistate())
134 - Somewhere to keep the needed state in (init to asciistate())
134 - Column of the current node in the set of ongoing edges.
135 - Column of the current node in the set of ongoing edges.
135 - Type indicator of node data == ASCIIDATA.
136 - Type indicator of node data == ASCIIDATA.
136 - Payload: (char, lines):
137 - Payload: (char, lines):
137 - Character to use as node's symbol.
138 - Character to use as node's symbol.
138 - List of lines to display as the node's text.
139 - List of lines to display as the node's text.
139 - Edges; a list of (col, next_col) indicating the edges between
140 - Edges; a list of (col, next_col) indicating the edges between
140 the current node and its parents.
141 the current node and its parents.
141 - Number of columns (ongoing edges) in the current revision.
142 - Number of columns (ongoing edges) in the current revision.
142 - The difference between the number of columns (ongoing edges)
143 - The difference between the number of columns (ongoing edges)
143 in the next revision and the number of columns (ongoing edges)
144 in the next revision and the number of columns (ongoing edges)
144 in the current revision. That is: -1 means one column removed;
145 in the current revision. That is: -1 means one column removed;
145 0 means no columns added or removed; 1 means one column added.
146 0 means no columns added or removed; 1 means one column added.
146 """
147 """
147
148
148 idx, edges, ncols, coldiff = coldata
149 idx, edges, ncols, coldiff = coldata
149 assert -2 < coldiff < 2
150 assert -2 < coldiff < 2
150 if coldiff == -1:
151 if coldiff == -1:
151 # Transform
152 # Transform
152 #
153 #
153 # | | | | | |
154 # | | | | | |
154 # o | | into o---+
155 # o | | into o---+
155 # |X / |/ /
156 # |X / |/ /
156 # | | | |
157 # | | | |
157 fix_long_right_edges(edges)
158 fix_long_right_edges(edges)
158
159
159 # add_padding_line says whether to rewrite
160 # add_padding_line says whether to rewrite
160 #
161 #
161 # | | | | | | | |
162 # | | | | | | | |
162 # | o---+ into | o---+
163 # | o---+ into | o---+
163 # | / / | | | # <--- padding line
164 # | / / | | | # <--- padding line
164 # o | | | / /
165 # o | | | / /
165 # o | |
166 # o | |
166 add_padding_line = (len(text) > 2 and coldiff == -1 and
167 add_padding_line = (len(text) > 2 and coldiff == -1 and
167 [x for (x, y) in edges if x + 1 < y])
168 [x for (x, y) in edges if x + 1 < y])
168
169
169 # fix_nodeline_tail says whether to rewrite
170 # fix_nodeline_tail says whether to rewrite
170 #
171 #
171 # | | o | | | | o | |
172 # | | o | | | | o | |
172 # | | |/ / | | |/ /
173 # | | |/ / | | |/ /
173 # | o | | into | o / / # <--- fixed nodeline tail
174 # | o | | into | o / / # <--- fixed nodeline tail
174 # | |/ / | |/ /
175 # | |/ / | |/ /
175 # o | | o | |
176 # o | | o | |
176 fix_nodeline_tail = len(text) <= 2 and not add_padding_line
177 fix_nodeline_tail = len(text) <= 2 and not add_padding_line
177
178
178 # nodeline is the line containing the node character (typically o)
179 # nodeline is the line containing the node character (typically o)
179 nodeline = ["|", " "] * idx
180 nodeline = ["|", " "] * idx
180 nodeline.extend([char, " "])
181 nodeline.extend([char, " "])
181
182
182 nodeline.extend(
183 nodeline.extend(
183 get_nodeline_edges_tail(idx, state[1], ncols, coldiff,
184 get_nodeline_edges_tail(idx, state[1], ncols, coldiff,
184 state[0], fix_nodeline_tail))
185 state[0], fix_nodeline_tail))
185
186
186 # shift_interline is the line containing the non-vertical
187 # shift_interline is the line containing the non-vertical
187 # edges between this entry and the next
188 # edges between this entry and the next
188 shift_interline = ["|", " "] * idx
189 shift_interline = ["|", " "] * idx
189 if coldiff == -1:
190 if coldiff == -1:
190 n_spaces = 1
191 n_spaces = 1
191 edge_ch = "/"
192 edge_ch = "/"
192 elif coldiff == 0:
193 elif coldiff == 0:
193 n_spaces = 2
194 n_spaces = 2
194 edge_ch = "|"
195 edge_ch = "|"
195 else:
196 else:
196 n_spaces = 3
197 n_spaces = 3
197 edge_ch = "\\"
198 edge_ch = "\\"
198 shift_interline.extend(n_spaces * [" "])
199 shift_interline.extend(n_spaces * [" "])
199 shift_interline.extend([edge_ch, " "] * (ncols - idx - 1))
200 shift_interline.extend([edge_ch, " "] * (ncols - idx - 1))
200
201
201 # draw edges from the current node to its parents
202 # draw edges from the current node to its parents
202 draw_edges(edges, nodeline, shift_interline)
203 draw_edges(edges, nodeline, shift_interline)
203
204
204 # lines is the list of all graph lines to print
205 # lines is the list of all graph lines to print
205 lines = [nodeline]
206 lines = [nodeline]
206 if add_padding_line:
207 if add_padding_line:
207 lines.append(get_padding_line(idx, ncols, edges))
208 lines.append(get_padding_line(idx, ncols, edges))
208 lines.append(shift_interline)
209 lines.append(shift_interline)
209
210
210 # make sure that there are as many graph lines as there are
211 # make sure that there are as many graph lines as there are
211 # log strings
212 # log strings
212 while len(text) < len(lines):
213 while len(text) < len(lines):
213 text.append("")
214 text.append("")
214 if len(lines) < len(text):
215 if len(lines) < len(text):
215 extra_interline = ["|", " "] * (ncols + coldiff)
216 extra_interline = ["|", " "] * (ncols + coldiff)
216 while len(lines) < len(text):
217 while len(lines) < len(text):
217 lines.append(extra_interline)
218 lines.append(extra_interline)
218
219
219 # print lines
220 # print lines
220 indentation_level = max(ncols, ncols + coldiff)
221 indentation_level = max(ncols, ncols + coldiff)
221 for (line, logstr) in zip(lines, text):
222 for (line, logstr) in zip(lines, text):
222 ln = "%-*s %s" % (2 * indentation_level, "".join(line), logstr)
223 ln = "%-*s %s" % (2 * indentation_level, "".join(line), logstr)
223 ui.write(ln.rstrip() + '\n')
224 ui.write(ln.rstrip() + '\n')
224
225
225 # ... and start over
226 # ... and start over
226 state[0] = coldiff
227 state[0] = coldiff
227 state[1] = idx
228 state[1] = idx
228
229
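
A minimal sketch of a single ``ascii()`` call for a linear changeset, using a throwaway ui stub (hypothetical; only a ``write()`` method is needed here)::

    import sys

    class stubui(object):
        def write(self, s):
            sys.stdout.write(s)

    state = asciistate()
    # one node in column 0, one vertical edge to its parent, no column change
    ascii(stubui(), state, ASCIIDATA, 'o', ['rev 2', 'summary'],
          (0, [(0, 0)], 1, 0))
    # prints:
    # o  rev 2
    # |  summary
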
229 def get_revs(repo, rev_opt):
230 def get_revs(repo, rev_opt):
230 if rev_opt:
231 if rev_opt:
231 revs = scmutil.revrange(repo, rev_opt)
232 revs = scmutil.revrange(repo, rev_opt)
232 if len(revs) == 0:
233 if len(revs) == 0:
233 return (nullrev, nullrev)
234 return (nullrev, nullrev)
234 return (max(revs), min(revs))
235 return (max(revs), min(revs))
235 else:
236 else:
236 return (len(repo) - 1, 0)
237 return (len(repo) - 1, 0)
237
238
238 def check_unsupported_flags(pats, opts):
239 def check_unsupported_flags(pats, opts):
239 for op in ["newest_first"]:
240 for op in ["newest_first"]:
240 if op in opts and opts[op]:
241 if op in opts and opts[op]:
241 raise util.Abort(_("-G/--graph option is incompatible with --%s")
242 raise util.Abort(_("-G/--graph option is incompatible with --%s")
242 % op.replace("_", "-"))
243 % op.replace("_", "-"))
243
244
244 def _makefilematcher(repo, pats, followfirst):
245 def _makefilematcher(repo, pats, followfirst):
245 # When displaying a revision with --patch --follow FILE, we have
246 # When displaying a revision with --patch --follow FILE, we have
246 # to know which file of the revision must be diffed. With
247 # to know which file of the revision must be diffed. With
247 # --follow, we want the names of the ancestors of FILE in the
248 # --follow, we want the names of the ancestors of FILE in the
248 # revision, stored in "fcache". "fcache" is populated by
249 # revision, stored in "fcache". "fcache" is populated by
249 # reproducing the graph traversal already done by --follow revset
250 # reproducing the graph traversal already done by --follow revset
250 # and relating linkrevs to file names (which is not "correct" but
251 # and relating linkrevs to file names (which is not "correct" but
251 # good enough).
252 # good enough).
252 fcache = {}
253 fcache = {}
253 fcacheready = [False]
254 fcacheready = [False]
254 pctx = repo['.']
255 pctx = repo['.']
255 wctx = repo[None]
256 wctx = repo[None]
256
257
257 def populate():
258 def populate():
258 for fn in pats:
259 for fn in pats:
259 for i in ((pctx[fn],), pctx[fn].ancestors(followfirst=followfirst)):
260 for i in ((pctx[fn],), pctx[fn].ancestors(followfirst=followfirst)):
260 for c in i:
261 for c in i:
261 fcache.setdefault(c.linkrev(), set()).add(c.path())
262 fcache.setdefault(c.linkrev(), set()).add(c.path())
262
263
263 def filematcher(rev):
264 def filematcher(rev):
264 if not fcacheready[0]:
265 if not fcacheready[0]:
265 # Lazy initialization
266 # Lazy initialization
266 fcacheready[0] = True
267 fcacheready[0] = True
267 populate()
268 populate()
268 return scmutil.match(wctx, fcache.get(rev, []), default='path')
269 return scmutil.match(wctx, fcache.get(rev, []), default='path')
269
270
270 return filematcher
271 return filematcher
271
272
272 def _makelogrevset(repo, pats, opts, revs):
273 def _makelogrevset(repo, pats, opts, revs):
273 """Return (expr, filematcher) where expr is a revset string built
274 """Return (expr, filematcher) where expr is a revset string built
274 from log options and file patterns or None. If --stat or --patch
275 from log options and file patterns or None. If --stat or --patch
275 are not passed filematcher is None. Otherwise it is a callable
276 are not passed filematcher is None. Otherwise it is a callable
276     taking a revision number and returning a match object filtering
277     taking a revision number and returning a match object filtering
277 the files to be detailed when displaying the revision.
278 the files to be detailed when displaying the revision.
278 """
279 """
279 opt2revset = {
280 opt2revset = {
280 'no_merges': ('not merge()', None),
281 'no_merges': ('not merge()', None),
281 'only_merges': ('merge()', None),
282 'only_merges': ('merge()', None),
282 '_ancestors': ('ancestors(%(val)s)', None),
283 '_ancestors': ('ancestors(%(val)s)', None),
283 '_fancestors': ('_firstancestors(%(val)s)', None),
284 '_fancestors': ('_firstancestors(%(val)s)', None),
284 '_descendants': ('descendants(%(val)s)', None),
285 '_descendants': ('descendants(%(val)s)', None),
285 '_fdescendants': ('_firstdescendants(%(val)s)', None),
286 '_fdescendants': ('_firstdescendants(%(val)s)', None),
286 '_matchfiles': ('_matchfiles(%(val)s)', None),
287 '_matchfiles': ('_matchfiles(%(val)s)', None),
287 'date': ('date(%(val)r)', None),
288 'date': ('date(%(val)r)', None),
288 'branch': ('branch(%(val)r)', ' or '),
289 'branch': ('branch(%(val)r)', ' or '),
289 '_patslog': ('filelog(%(val)r)', ' or '),
290 '_patslog': ('filelog(%(val)r)', ' or '),
290 '_patsfollow': ('follow(%(val)r)', ' or '),
291 '_patsfollow': ('follow(%(val)r)', ' or '),
291 '_patsfollowfirst': ('_followfirst(%(val)r)', ' or '),
292 '_patsfollowfirst': ('_followfirst(%(val)r)', ' or '),
292 'keyword': ('keyword(%(val)r)', ' or '),
293 'keyword': ('keyword(%(val)r)', ' or '),
293 'prune': ('not (%(val)r or ancestors(%(val)r))', ' and '),
294 'prune': ('not (%(val)r or ancestors(%(val)r))', ' and '),
294 'user': ('user(%(val)r)', ' or '),
295 'user': ('user(%(val)r)', ' or '),
295 }
296 }
296
297
297 opts = dict(opts)
298 opts = dict(opts)
298 # follow or not follow?
299 # follow or not follow?
299 follow = opts.get('follow') or opts.get('follow_first')
300 follow = opts.get('follow') or opts.get('follow_first')
300 followfirst = opts.get('follow_first') and 1 or 0
301 followfirst = opts.get('follow_first') and 1 or 0
301 # --follow with FILE behaviour depends on revs...
302 # --follow with FILE behaviour depends on revs...
302 startrev = revs[0]
303 startrev = revs[0]
303 followdescendants = (len(revs) > 1 and revs[0] < revs[1]) and 1 or 0
304 followdescendants = (len(revs) > 1 and revs[0] < revs[1]) and 1 or 0
304
305
305 # branch and only_branch are really aliases and must be handled at
306 # branch and only_branch are really aliases and must be handled at
306 # the same time
307 # the same time
307 opts['branch'] = opts.get('branch', []) + opts.get('only_branch', [])
308 opts['branch'] = opts.get('branch', []) + opts.get('only_branch', [])
308 opts['branch'] = [repo.lookupbranch(b) for b in opts['branch']]
309 opts['branch'] = [repo.lookupbranch(b) for b in opts['branch']]
309 # pats/include/exclude are passed to match.match() directly in
310 # pats/include/exclude are passed to match.match() directly in
310 # _matchfile() revset but walkchangerevs() builds its matcher with
311 # _matchfile() revset but walkchangerevs() builds its matcher with
311 # scmutil.match(). The difference is input pats are globbed on
312 # scmutil.match(). The difference is input pats are globbed on
312 # platforms without shell expansion (windows).
313 # platforms without shell expansion (windows).
313 pctx = repo[None]
314 pctx = repo[None]
314 match, pats = scmutil.matchandpats(pctx, pats, opts)
315 match, pats = scmutil.matchandpats(pctx, pats, opts)
315 slowpath = match.anypats() or (match.files() and opts.get('removed'))
316 slowpath = match.anypats() or (match.files() and opts.get('removed'))
316 if not slowpath:
317 if not slowpath:
317 for f in match.files():
318 for f in match.files():
318 if follow and f not in pctx:
319 if follow and f not in pctx:
319 raise util.Abort(_('cannot follow file not in parent '
320 raise util.Abort(_('cannot follow file not in parent '
320 'revision: "%s"') % f)
321 'revision: "%s"') % f)
321 filelog = repo.file(f)
322 filelog = repo.file(f)
322 if not len(filelog):
323 if not len(filelog):
323 # A zero count may be a directory or deleted file, so
324 # A zero count may be a directory or deleted file, so
324 # try to find matching entries on the slow path.
325 # try to find matching entries on the slow path.
325 if follow:
326 if follow:
326 raise util.Abort(
327 raise util.Abort(
327 _('cannot follow nonexistent file: "%s"') % f)
328 _('cannot follow nonexistent file: "%s"') % f)
328 slowpath = True
329 slowpath = True
329 if slowpath:
330 if slowpath:
330 # See cmdutil.walkchangerevs() slow path.
331 # See cmdutil.walkchangerevs() slow path.
331 #
332 #
332 if follow:
333 if follow:
333 raise util.Abort(_('can only follow copies/renames for explicit '
334 raise util.Abort(_('can only follow copies/renames for explicit '
334 'filenames'))
335 'filenames'))
335 # pats/include/exclude cannot be represented as separate
336 # pats/include/exclude cannot be represented as separate
336 # revset expressions as their filtering logic applies at file
337 # revset expressions as their filtering logic applies at file
337 # level. For instance "-I a -X a" matches a revision touching
338 # level. For instance "-I a -X a" matches a revision touching
338 # "a" and "b" while "file(a) and not file(b)" does
339 # "a" and "b" while "file(a) and not file(b)" does
339 # not. Besides, filesets are evaluated against the working
340 # not. Besides, filesets are evaluated against the working
340 # directory.
341 # directory.
341 matchargs = ['r:', 'd:relpath']
342 matchargs = ['r:', 'd:relpath']
342 for p in pats:
343 for p in pats:
343 matchargs.append('p:' + p)
344 matchargs.append('p:' + p)
344 for p in opts.get('include', []):
345 for p in opts.get('include', []):
345 matchargs.append('i:' + p)
346 matchargs.append('i:' + p)
346 for p in opts.get('exclude', []):
347 for p in opts.get('exclude', []):
347 matchargs.append('x:' + p)
348 matchargs.append('x:' + p)
348 matchargs = ','.join(('%r' % p) for p in matchargs)
349 matchargs = ','.join(('%r' % p) for p in matchargs)
349 opts['_matchfiles'] = matchargs
350 opts['_matchfiles'] = matchargs
350 else:
351 else:
351 if follow:
352 if follow:
352 fpats = ('_patsfollow', '_patsfollowfirst')
353 fpats = ('_patsfollow', '_patsfollowfirst')
353 fnopats = (('_ancestors', '_fancestors'),
354 fnopats = (('_ancestors', '_fancestors'),
354 ('_descendants', '_fdescendants'))
355 ('_descendants', '_fdescendants'))
355 if pats:
356 if pats:
356                 # follow() revset interprets its file argument as a
357                 # follow() revset interprets its file argument as a
357 # manifest entry, so use match.files(), not pats.
358 # manifest entry, so use match.files(), not pats.
358 opts[fpats[followfirst]] = list(match.files())
359 opts[fpats[followfirst]] = list(match.files())
359 else:
360 else:
360 opts[fnopats[followdescendants][followfirst]] = str(startrev)
361 opts[fnopats[followdescendants][followfirst]] = str(startrev)
361 else:
362 else:
362 opts['_patslog'] = list(pats)
363 opts['_patslog'] = list(pats)
363
364
364 filematcher = None
365 filematcher = None
365 if opts.get('patch') or opts.get('stat'):
366 if opts.get('patch') or opts.get('stat'):
366 if follow:
367 if follow:
367 filematcher = _makefilematcher(repo, pats, followfirst)
368 filematcher = _makefilematcher(repo, pats, followfirst)
368 else:
369 else:
369 filematcher = lambda rev: match
370 filematcher = lambda rev: match
370
371
371 expr = []
372 expr = []
372 for op, val in opts.iteritems():
373 for op, val in opts.iteritems():
373 if not val:
374 if not val:
374 continue
375 continue
375 if op not in opt2revset:
376 if op not in opt2revset:
376 continue
377 continue
377 revop, andor = opt2revset[op]
378 revop, andor = opt2revset[op]
378 if '%(val)' not in revop:
379 if '%(val)' not in revop:
379 expr.append(revop)
380 expr.append(revop)
380 else:
381 else:
381 if not isinstance(val, list):
382 if not isinstance(val, list):
382 e = revop % {'val': val}
383 e = revop % {'val': val}
383 else:
384 else:
384 e = '(' + andor.join((revop % {'val': v}) for v in val) + ')'
385 e = '(' + andor.join((revop % {'val': v}) for v in val) + ')'
385 expr.append(e)
386 expr.append(e)
386
387
387 if expr:
388 if expr:
388 expr = '(' + ' and '.join(expr) + ')'
389 expr = '(' + ' and '.join(expr) + ')'
389 else:
390 else:
390 expr = None
391 expr = None
391 return expr, filematcher
392 return expr, filematcher
392
393
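
A sketch of the expression-building loop above with a hypothetical pair of options, showing the revset string it composes::

    demo_opts = {'no_merges': True, 'user': ['alice', 'bob']}
    demo_map = {'no_merges': ('not merge()', None),
                'user': ('user(%(val)r)', ' or ')}
    parts = []
    for op, val in sorted(demo_opts.items()):
        revop, andor = demo_map[op]
        if '%(val)' not in revop:
            parts.append(revop)
        elif not isinstance(val, list):
            parts.append(revop % {'val': val})
        else:
            parts.append('(' + andor.join(revop % {'val': v}
                                          for v in val) + ')')
    expr = '(' + ' and '.join(parts) + ')'
    # expr == "(not merge() and (user('alice') or user('bob')))"
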
393 def getlogrevs(repo, pats, opts):
394 def getlogrevs(repo, pats, opts):
394 """Return (revs, expr, filematcher) where revs is a list of
395 """Return (revs, expr, filematcher) where revs is a list of
395 revision numbers, expr is a revset string built from log options
396 revision numbers, expr is a revset string built from log options
396 and file patterns or None, and used to filter 'revs'. If --stat or
397 and file patterns or None, and used to filter 'revs'. If --stat or
397 --patch are not passed filematcher is None. Otherwise it is a
398 --patch are not passed filematcher is None. Otherwise it is a
398     callable taking a revision number and returning a match object
399     callable taking a revision number and returning a match object
399 filtering the files to be detailed when displaying the revision.
400 filtering the files to be detailed when displaying the revision.
400 """
401 """
401 if not len(repo):
402 if not len(repo):
402 return [], None, None
403 return [], None, None
403 # Default --rev value depends on --follow but --follow behaviour
404 # Default --rev value depends on --follow but --follow behaviour
404 # depends on revisions resolved from --rev...
405 # depends on revisions resolved from --rev...
405 follow = opts.get('follow') or opts.get('follow_first')
406 follow = opts.get('follow') or opts.get('follow_first')
406 if opts.get('rev'):
407 if opts.get('rev'):
407 revs = scmutil.revrange(repo, opts['rev'])
408 revs = scmutil.revrange(repo, opts['rev'])
408 else:
409 else:
409 if follow and len(repo) > 0:
410 if follow and len(repo) > 0:
410 revs = scmutil.revrange(repo, ['.:0'])
411 revs = scmutil.revrange(repo, ['.:0'])
411 else:
412 else:
412 revs = range(len(repo) - 1, -1, -1)
413 revs = range(len(repo) - 1, -1, -1)
413 if not revs:
414 if not revs:
414 return [], None, None
415 return [], None, None
415 expr, filematcher = _makelogrevset(repo, pats, opts, revs)
416 expr, filematcher = _makelogrevset(repo, pats, opts, revs)
416 if expr:
417 if expr:
417 # Evaluate revisions in changelog order for performance
418 # Evaluate revisions in changelog order for performance
418 # reasons but preserve the original sequence order in the
419 # reasons but preserve the original sequence order in the
419 # filtered result.
420 # filtered result.
420 matched = set(revset.match(repo.ui, expr)(repo, sorted(revs)))
421 matched = set(revset.match(repo.ui, expr)(repo, sorted(revs)))
421 revs = [r for r in revs if r in matched]
422 revs = [r for r in revs if r in matched]
422 if not opts.get('hidden'):
423 if not opts.get('hidden'):
423 # --hidden is still experimental and not worth a dedicated revset
424 # --hidden is still experimental and not worth a dedicated revset
424 # yet. Fortunately, filtering revision number is fast.
425 # yet. Fortunately, filtering revision number is fast.
425 revs = [r for r in revs if r not in repo.changelog.hiddenrevs]
426 revs = [r for r in revs if r not in repo.changelog.hiddenrevs]
426 return revs, expr, filematcher
427 return revs, expr, filematcher
427
428
428 def generate(ui, dag, displayer, showparents, edgefn, getrenamed=None,
429 def generate(ui, dag, displayer, showparents, edgefn, getrenamed=None,
429 filematcher=None):
430 filematcher=None):
430 seen, state = [], asciistate()
431 seen, state = [], asciistate()
431 for rev, type, ctx, parents in dag:
432 for rev, type, ctx, parents in dag:
432 char = ctx.node() in showparents and '@' or 'o'
433 char = ctx.node() in showparents and '@' or 'o'
433 copies = None
434 copies = None
434 if getrenamed and ctx.rev():
435 if getrenamed and ctx.rev():
435 copies = []
436 copies = []
436 for fn in ctx.files():
437 for fn in ctx.files():
437 rename = getrenamed(fn, ctx.rev())
438 rename = getrenamed(fn, ctx.rev())
438 if rename:
439 if rename:
439 copies.append((fn, rename[0]))
440 copies.append((fn, rename[0]))
440 revmatchfn = None
441 revmatchfn = None
441 if filematcher is not None:
442 if filematcher is not None:
442 revmatchfn = filematcher(ctx.rev())
443 revmatchfn = filematcher(ctx.rev())
443 displayer.show(ctx, copies=copies, matchfn=revmatchfn)
444 displayer.show(ctx, copies=copies, matchfn=revmatchfn)
444 lines = displayer.hunk.pop(rev).split('\n')[:-1]
445 lines = displayer.hunk.pop(rev).split('\n')[:-1]
445 displayer.flush(rev)
446 displayer.flush(rev)
446 edges = edgefn(type, char, lines, seen, rev, parents)
447 edges = edgefn(type, char, lines, seen, rev, parents)
447 for type, char, lines, coldata in edges:
448 for type, char, lines, coldata in edges:
448 ascii(ui, state, type, char, lines, coldata)
449 ascii(ui, state, type, char, lines, coldata)
449 displayer.close()
450 displayer.close()
450
451
451 @command('glog',
452 @command('glog',
452 [('f', 'follow', None,
453 [('f', 'follow', None,
453 _('follow changeset history, or file history across copies and renames')),
454 _('follow changeset history, or file history across copies and renames')),
454 ('', 'follow-first', None,
455 ('', 'follow-first', None,
455 _('only follow the first parent of merge changesets (DEPRECATED)')),
456 _('only follow the first parent of merge changesets (DEPRECATED)')),
456 ('d', 'date', '', _('show revisions matching date spec'), _('DATE')),
457 ('d', 'date', '', _('show revisions matching date spec'), _('DATE')),
457 ('C', 'copies', None, _('show copied files')),
458 ('C', 'copies', None, _('show copied files')),
458 ('k', 'keyword', [],
459 ('k', 'keyword', [],
459 _('do case-insensitive search for a given text'), _('TEXT')),
460 _('do case-insensitive search for a given text'), _('TEXT')),
460 ('r', 'rev', [], _('show the specified revision or range'), _('REV')),
461 ('r', 'rev', [], _('show the specified revision or range'), _('REV')),
461 ('', 'removed', None, _('include revisions where files were removed')),
462 ('', 'removed', None, _('include revisions where files were removed')),
462 ('m', 'only-merges', None, _('show only merges (DEPRECATED)')),
463 ('m', 'only-merges', None, _('show only merges (DEPRECATED)')),
463 ('u', 'user', [], _('revisions committed by user'), _('USER')),
464 ('u', 'user', [], _('revisions committed by user'), _('USER')),
464 ('', 'only-branch', [],
465 ('', 'only-branch', [],
465 _('show only changesets within the given named branch (DEPRECATED)'),
466 _('show only changesets within the given named branch (DEPRECATED)'),
466 _('BRANCH')),
467 _('BRANCH')),
467 ('b', 'branch', [],
468 ('b', 'branch', [],
468 _('show changesets within the given named branch'), _('BRANCH')),
469 _('show changesets within the given named branch'), _('BRANCH')),
469 ('P', 'prune', [],
470 ('P', 'prune', [],
470 _('do not display revision or any of its ancestors'), _('REV')),
471 _('do not display revision or any of its ancestors'), _('REV')),
471 ('', 'hidden', False, _('show hidden changesets (DEPRECATED)')),
472 ('', 'hidden', False, _('show hidden changesets (DEPRECATED)')),
472 ] + commands.logopts + commands.walkopts,
473 ] + commands.logopts + commands.walkopts,
473 _('[OPTION]... [FILE]'))
474 _('[OPTION]... [FILE]'))
474 def graphlog(ui, repo, *pats, **opts):
475 def graphlog(ui, repo, *pats, **opts):
475 """show revision history alongside an ASCII revision graph
476 """show revision history alongside an ASCII revision graph
476
477
477 Print a revision history alongside a revision graph drawn with
478 Print a revision history alongside a revision graph drawn with
478 ASCII characters.
479 ASCII characters.
479
480
480 Nodes printed as an @ character are parents of the working
481 Nodes printed as an @ character are parents of the working
481 directory.
482 directory.
482 """
483 """
483
484
484 revs, expr, filematcher = getlogrevs(repo, pats, opts)
485 revs, expr, filematcher = getlogrevs(repo, pats, opts)
485 revs = sorted(revs, reverse=1)
486 revs = sorted(revs, reverse=1)
486 limit = cmdutil.loglimit(opts)
487 limit = cmdutil.loglimit(opts)
487 if limit is not None:
488 if limit is not None:
488 revs = revs[:limit]
489 revs = revs[:limit]
489 revdag = graphmod.dagwalker(repo, revs)
490 revdag = graphmod.dagwalker(repo, revs)
490
491
491 getrenamed = None
492 getrenamed = None
492 if opts.get('copies'):
493 if opts.get('copies'):
493 endrev = None
494 endrev = None
494 if opts.get('rev'):
495 if opts.get('rev'):
495 endrev = max(scmutil.revrange(repo, opts.get('rev'))) + 1
496 endrev = max(scmutil.revrange(repo, opts.get('rev'))) + 1
496 getrenamed = templatekw.getrenamedfn(repo, endrev=endrev)
497 getrenamed = templatekw.getrenamedfn(repo, endrev=endrev)
497 displayer = show_changeset(ui, repo, opts, buffered=True)
498 displayer = show_changeset(ui, repo, opts, buffered=True)
498 showparents = [ctx.node() for ctx in repo[None].parents()]
499 showparents = [ctx.node() for ctx in repo[None].parents()]
499 generate(ui, revdag, displayer, showparents, asciiedges, getrenamed,
500 generate(ui, revdag, displayer, showparents, asciiedges, getrenamed,
500 filematcher)
501 filematcher)
501
502
502 def graphrevs(repo, nodes, opts):
503 def graphrevs(repo, nodes, opts):
503 limit = cmdutil.loglimit(opts)
504 limit = cmdutil.loglimit(opts)
504 nodes.reverse()
505 nodes.reverse()
505 if limit is not None:
506 if limit is not None:
506 nodes = nodes[:limit]
507 nodes = nodes[:limit]
507 return graphmod.nodes(repo, nodes)
508 return graphmod.nodes(repo, nodes)
508
509
509 def goutgoing(ui, repo, dest=None, **opts):
510 def goutgoing(ui, repo, dest=None, **opts):
510 """show the outgoing changesets alongside an ASCII revision graph
511 """show the outgoing changesets alongside an ASCII revision graph
511
512
512 Print the outgoing changesets alongside a revision graph drawn with
513 Print the outgoing changesets alongside a revision graph drawn with
513 ASCII characters.
514 ASCII characters.
514
515
515 Nodes printed as an @ character are parents of the working
516 Nodes printed as an @ character are parents of the working
516 directory.
517 directory.
517 """
518 """
518
519
519 check_unsupported_flags([], opts)
520 check_unsupported_flags([], opts)
520 o = hg._outgoing(ui, repo, dest, opts)
521 o = hg._outgoing(ui, repo, dest, opts)
521 if o is None:
522 if o is None:
522 return
523 return
523
524
524 revdag = graphrevs(repo, o, opts)
525 revdag = graphrevs(repo, o, opts)
525 displayer = show_changeset(ui, repo, opts, buffered=True)
526 displayer = show_changeset(ui, repo, opts, buffered=True)
526 showparents = [ctx.node() for ctx in repo[None].parents()]
527 showparents = [ctx.node() for ctx in repo[None].parents()]
527 generate(ui, revdag, displayer, showparents, asciiedges)
528 generate(ui, revdag, displayer, showparents, asciiedges)
528
529
529 def gincoming(ui, repo, source="default", **opts):
530 def gincoming(ui, repo, source="default", **opts):
530 """show the incoming changesets alongside an ASCII revision graph
531 """show the incoming changesets alongside an ASCII revision graph
531
532
532 Print the incoming changesets alongside a revision graph drawn with
533 Print the incoming changesets alongside a revision graph drawn with
533 ASCII characters.
534 ASCII characters.
534
535
535 Nodes printed as an @ character are parents of the working
536 Nodes printed as an @ character are parents of the working
536 directory.
537 directory.
537 """
538 """
538 def subreporecurse():
539 def subreporecurse():
539 return 1
540 return 1
540
541
541 check_unsupported_flags([], opts)
542 check_unsupported_flags([], opts)
542 def display(other, chlist, displayer):
543 def display(other, chlist, displayer):
543 revdag = graphrevs(other, chlist, opts)
544 revdag = graphrevs(other, chlist, opts)
544 showparents = [ctx.node() for ctx in repo[None].parents()]
545 showparents = [ctx.node() for ctx in repo[None].parents()]
545 generate(ui, revdag, displayer, showparents, asciiedges)
546 generate(ui, revdag, displayer, showparents, asciiedges)
546
547
547 hg._incoming(display, subreporecurse, ui, repo, source, opts, buffered=True)
548 hg._incoming(display, subreporecurse, ui, repo, source, opts, buffered=True)
548
549
549 def uisetup(ui):
550 def uisetup(ui):
550 '''Initialize the extension.'''
551 '''Initialize the extension.'''
551 _wrapcmd('log', commands.table, graphlog)
552 _wrapcmd('log', commands.table, graphlog)
552 _wrapcmd('incoming', commands.table, gincoming)
553 _wrapcmd('incoming', commands.table, gincoming)
553 _wrapcmd('outgoing', commands.table, goutgoing)
554 _wrapcmd('outgoing', commands.table, goutgoing)
554
555
555 def _wrapcmd(cmd, table, wrapfn):
556 def _wrapcmd(cmd, table, wrapfn):
556 '''wrap the command'''
557 '''wrap the command'''
557 def graph(orig, *args, **kwargs):
558 def graph(orig, *args, **kwargs):
558 if kwargs['graph']:
559 if kwargs['graph']:
559 return wrapfn(*args, **kwargs)
560 return wrapfn(*args, **kwargs)
560 return orig(*args, **kwargs)
561 return orig(*args, **kwargs)
561 entry = extensions.wrapcommand(table, cmd, graph)
562 entry = extensions.wrapcommand(table, cmd, graph)
562 entry[1].append(('G', 'graph', None, _("show the revision DAG")))
563 entry[1].append(('G', 'graph', None, _("show the revision DAG")))
@@ -1,276 +1,277 b''
1 # Copyright (C) 2007-8 Brendan Cully <brendan@kublai.com>
1 # Copyright (C) 2007-8 Brendan Cully <brendan@kublai.com>
2 #
2 #
3 # This software may be used and distributed according to the terms of the
3 # This software may be used and distributed according to the terms of the
4 # GNU General Public License version 2 or any later version.
4 # GNU General Public License version 2 or any later version.
5
5
6 """hooks for integrating with the CIA.vc notification service
6 """hooks for integrating with the CIA.vc notification service
7
7
8 This is meant to be run as a changegroup or incoming hook. To
8 This is meant to be run as a changegroup or incoming hook. To
9 configure it, set the following options in your hgrc::
9 configure it, set the following options in your hgrc::
10
10
11 [cia]
11 [cia]
12 # your registered CIA user name
12 # your registered CIA user name
13 user = foo
13 user = foo
14 # the name of the project in CIA
14 # the name of the project in CIA
15 project = foo
15 project = foo
16 # the module (subproject) (optional)
16 # the module (subproject) (optional)
17 #module = foo
17 #module = foo
18 # Append a diffstat to the log message (optional)
18 # Append a diffstat to the log message (optional)
19 #diffstat = False
19 #diffstat = False
20 # Template to use for log messages (optional)
20 # Template to use for log messages (optional)
21 #template = {desc}\\n{baseurl}{webroot}/rev/{node}-- {diffstat}
21 #template = {desc}\\n{baseurl}{webroot}/rev/{node}-- {diffstat}
22 # Style to use (optional)
22 # Style to use (optional)
23 #style = foo
23 #style = foo
24 # The URL of the CIA notification service (optional)
24 # The URL of the CIA notification service (optional)
25 # You can use mailto: URLs to send by email, eg
25 # You can use mailto: URLs to send by email, eg
26 # mailto:cia@cia.vc
26 # mailto:cia@cia.vc
27 # Make sure to set email.from if you do this.
27 # Make sure to set email.from if you do this.
28 #url = http://cia.vc/
28 #url = http://cia.vc/
29 # print message instead of sending it (optional)
29 # print message instead of sending it (optional)
30 #test = False
30 #test = False
31 # number of slashes to strip for url paths
31 # number of slashes to strip for url paths
32 #strip = 0
32 #strip = 0
33
33
34 [hooks]
34 [hooks]
35 # one of these:
35 # one of these:
36 changegroup.cia = python:hgcia.hook
36 changegroup.cia = python:hgcia.hook
37 #incoming.cia = python:hgcia.hook
37 #incoming.cia = python:hgcia.hook
38
38
39 [web]
39 [web]
40 # If you want hyperlinks (optional)
40 # If you want hyperlinks (optional)
41 baseurl = http://server/path/to/repo
41 baseurl = http://server/path/to/repo
42 """
42 """
43
43
44 from mercurial.i18n import _
44 from mercurial.i18n import _
45 from mercurial.node import bin, short
45 from mercurial.node import bin, short
46 from mercurial import cmdutil, patch, templater, util, mail
46 from mercurial import cmdutil, patch, templater, util, mail
47 import email.Parser
47 import email.Parser
48
48
49 import socket, xmlrpclib
49 import socket, xmlrpclib
50 from xml.sax import saxutils
50 from xml.sax import saxutils
51 testedwith = 'internal'
51
52
52 socket_timeout = 30 # seconds
53 socket_timeout = 30 # seconds
53 if util.safehasattr(socket, 'setdefaulttimeout'):
54 if util.safehasattr(socket, 'setdefaulttimeout'):
54 # set a timeout for the socket so you don't have to wait so looooong
55 # set a timeout for the socket so you don't have to wait so looooong
55 # when cia.vc is having problems. requires python >= 2.3:
56 # when cia.vc is having problems. requires python >= 2.3:
56 socket.setdefaulttimeout(socket_timeout)
57 socket.setdefaulttimeout(socket_timeout)
57
58
58 HGCIA_VERSION = '0.1'
59 HGCIA_VERSION = '0.1'
59 HGCIA_URL = 'http://hg.kublai.com/mercurial/hgcia'
60 HGCIA_URL = 'http://hg.kublai.com/mercurial/hgcia'
60
61
61
62
62 class ciamsg(object):
63 class ciamsg(object):
63 """ A CIA message """
64 """ A CIA message """
64 def __init__(self, cia, ctx):
65 def __init__(self, cia, ctx):
65 self.cia = cia
66 self.cia = cia
66 self.ctx = ctx
67 self.ctx = ctx
67 self.url = self.cia.url
68 self.url = self.cia.url
68 if self.url:
69 if self.url:
69 self.url += self.cia.root
70 self.url += self.cia.root
70
71
71 def fileelem(self, path, uri, action):
72 def fileelem(self, path, uri, action):
72 if uri:
73 if uri:
73 uri = ' uri=%s' % saxutils.quoteattr(uri)
74 uri = ' uri=%s' % saxutils.quoteattr(uri)
74 return '<file%s action=%s>%s</file>' % (
75 return '<file%s action=%s>%s</file>' % (
75 uri, saxutils.quoteattr(action), saxutils.escape(path))
76 uri, saxutils.quoteattr(action), saxutils.escape(path))
76
77
77 def fileelems(self):
78 def fileelems(self):
78 n = self.ctx.node()
79 n = self.ctx.node()
79 f = self.cia.repo.status(self.ctx.p1().node(), n)
80 f = self.cia.repo.status(self.ctx.p1().node(), n)
80 url = self.url or ''
81 url = self.url or ''
81 if url and url[-1] == '/':
82 if url and url[-1] == '/':
82 url = url[:-1]
83 url = url[:-1]
83 elems = []
84 elems = []
84 for path in f[0]:
85 for path in f[0]:
85 uri = '%s/diff/%s/%s' % (url, short(n), path)
86 uri = '%s/diff/%s/%s' % (url, short(n), path)
86 elems.append(self.fileelem(path, url and uri, 'modify'))
87 elems.append(self.fileelem(path, url and uri, 'modify'))
87 for path in f[1]:
88 for path in f[1]:
88 # TODO: copy/rename ?
89 # TODO: copy/rename ?
89 uri = '%s/file/%s/%s' % (url, short(n), path)
90 uri = '%s/file/%s/%s' % (url, short(n), path)
90 elems.append(self.fileelem(path, url and uri, 'add'))
91 elems.append(self.fileelem(path, url and uri, 'add'))
91 for path in f[2]:
92 for path in f[2]:
92 elems.append(self.fileelem(path, '', 'remove'))
93 elems.append(self.fileelem(path, '', 'remove'))
93
94
94 return '\n'.join(elems)
95 return '\n'.join(elems)
95
96
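
``fileelem()`` above produces one XML element per touched file; two sketches with hypothetical paths (``msg`` standing in for a ciamsg instance)::

    msg.fileelem('foo/bar.py', '', 'remove')
    # -> '<file action="remove">foo/bar.py</file>'
    msg.fileelem('foo/bar.py',
                 'http://server/repo/diff/1234abcd5678/foo/bar.py', 'modify')
    # -> '<file uri="http://server/repo/diff/1234abcd5678/foo/bar.py"
    #     action="modify">foo/bar.py</file>'
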
96 def sourceelem(self, project, module=None, branch=None):
97 def sourceelem(self, project, module=None, branch=None):
97 msg = ['<source>', '<project>%s</project>' % saxutils.escape(project)]
98 msg = ['<source>', '<project>%s</project>' % saxutils.escape(project)]
98 if module:
99 if module:
99 msg.append('<module>%s</module>' % saxutils.escape(module))
100 msg.append('<module>%s</module>' % saxutils.escape(module))
100 if branch:
101 if branch:
101 msg.append('<branch>%s</branch>' % saxutils.escape(branch))
102 msg.append('<branch>%s</branch>' % saxutils.escape(branch))
102 msg.append('</source>')
103 msg.append('</source>')
103
104
104 return '\n'.join(msg)
105 return '\n'.join(msg)
105
106
106 def diffstat(self):
107 def diffstat(self):
107 class patchbuf(object):
108 class patchbuf(object):
108 def __init__(self):
109 def __init__(self):
109 self.lines = []
110 self.lines = []
110 # diffstat is stupid
111 # diffstat is stupid
111 self.name = 'cia'
112 self.name = 'cia'
112 def write(self, data):
113 def write(self, data):
113 self.lines += data.splitlines(True)
114 self.lines += data.splitlines(True)
114 def close(self):
115 def close(self):
115 pass
116 pass
116
117
117 n = self.ctx.node()
118 n = self.ctx.node()
118 pbuf = patchbuf()
119 pbuf = patchbuf()
119 cmdutil.export(self.cia.repo, [n], fp=pbuf)
120 cmdutil.export(self.cia.repo, [n], fp=pbuf)
120 return patch.diffstat(pbuf.lines) or ''
121 return patch.diffstat(pbuf.lines) or ''
121
122
122 def logmsg(self):
123 def logmsg(self):
123 diffstat = self.cia.diffstat and self.diffstat() or ''
124 diffstat = self.cia.diffstat and self.diffstat() or ''
124 self.cia.ui.pushbuffer()
125 self.cia.ui.pushbuffer()
125 self.cia.templater.show(self.ctx, changes=self.ctx.changeset(),
126 self.cia.templater.show(self.ctx, changes=self.ctx.changeset(),
126 baseurl=self.cia.ui.config('web', 'baseurl'),
127 baseurl=self.cia.ui.config('web', 'baseurl'),
127 url=self.url, diffstat=diffstat,
128 url=self.url, diffstat=diffstat,
128 webroot=self.cia.root)
129 webroot=self.cia.root)
129 return self.cia.ui.popbuffer()
130 return self.cia.ui.popbuffer()
130
131
131 def xml(self):
132 def xml(self):
132 n = short(self.ctx.node())
133 n = short(self.ctx.node())
133 src = self.sourceelem(self.cia.project, module=self.cia.module,
134 src = self.sourceelem(self.cia.project, module=self.cia.module,
134 branch=self.ctx.branch())
135 branch=self.ctx.branch())
135 # unix timestamp
136 # unix timestamp
136 dt = self.ctx.date()
137 dt = self.ctx.date()
137 timestamp = dt[0]
138 timestamp = dt[0]
138
139
139 author = saxutils.escape(self.ctx.user())
140 author = saxutils.escape(self.ctx.user())
140 rev = '%d:%s' % (self.ctx.rev(), n)
141 rev = '%d:%s' % (self.ctx.rev(), n)
141 log = saxutils.escape(self.logmsg())
142 log = saxutils.escape(self.logmsg())
142
143
143 url = self.url
144 url = self.url
144 if url and url[-1] == '/':
145 if url and url[-1] == '/':
145 url = url[:-1]
146 url = url[:-1]
146 url = url and '<url>%s/rev/%s</url>' % (saxutils.escape(url), n) or ''
147 url = url and '<url>%s/rev/%s</url>' % (saxutils.escape(url), n) or ''
147
148
148 msg = """
149 msg = """
149 <message>
150 <message>
150 <generator>
151 <generator>
151 <name>Mercurial (hgcia)</name>
152 <name>Mercurial (hgcia)</name>
152 <version>%s</version>
153 <version>%s</version>
153 <url>%s</url>
154 <url>%s</url>
154 <user>%s</user>
155 <user>%s</user>
155 </generator>
156 </generator>
156 %s
157 %s
157 <body>
158 <body>
158 <commit>
159 <commit>
159 <author>%s</author>
160 <author>%s</author>
160 <version>%s</version>
161 <version>%s</version>
161 <log>%s</log>
162 <log>%s</log>
162 %s
163 %s
163 <files>%s</files>
164 <files>%s</files>
164 </commit>
165 </commit>
165 </body>
166 </body>
166 <timestamp>%d</timestamp>
167 <timestamp>%d</timestamp>
167 </message>
168 </message>
168 """ % \
169 """ % \
169 (HGCIA_VERSION, saxutils.escape(HGCIA_URL),
170 (HGCIA_VERSION, saxutils.escape(HGCIA_URL),
170 saxutils.escape(self.cia.user), src, author, rev, log, url,
171 saxutils.escape(self.cia.user), src, author, rev, log, url,
171 self.fileelems(), timestamp)
172 self.fileelems(), timestamp)
172
173
173 return msg
174 return msg
174
175
175
176
176 class hgcia(object):
177 class hgcia(object):
177 """ CIA notification class """
178 """ CIA notification class """
178
179
179 deftemplate = '{desc}'
180 deftemplate = '{desc}'
180 dstemplate = '{desc}\n-- \n{diffstat}'
181 dstemplate = '{desc}\n-- \n{diffstat}'
181
182
182 def __init__(self, ui, repo):
183 def __init__(self, ui, repo):
183 self.ui = ui
184 self.ui = ui
184 self.repo = repo
185 self.repo = repo
185
186
186 self.ciaurl = self.ui.config('cia', 'url', 'http://cia.vc')
187 self.ciaurl = self.ui.config('cia', 'url', 'http://cia.vc')
187 self.user = self.ui.config('cia', 'user')
188 self.user = self.ui.config('cia', 'user')
188 self.project = self.ui.config('cia', 'project')
189 self.project = self.ui.config('cia', 'project')
189 self.module = self.ui.config('cia', 'module')
190 self.module = self.ui.config('cia', 'module')
190 self.diffstat = self.ui.configbool('cia', 'diffstat')
191 self.diffstat = self.ui.configbool('cia', 'diffstat')
191 self.emailfrom = self.ui.config('email', 'from')
192 self.emailfrom = self.ui.config('email', 'from')
192 self.dryrun = self.ui.configbool('cia', 'test')
193 self.dryrun = self.ui.configbool('cia', 'test')
193 self.url = self.ui.config('web', 'baseurl')
194 self.url = self.ui.config('web', 'baseurl')
194 # Default to -1 for backward compatibility
195 # Default to -1 for backward compatibility
195 self.stripcount = int(self.ui.config('cia', 'strip', -1))
196 self.stripcount = int(self.ui.config('cia', 'strip', -1))
196 self.root = self.strip(self.repo.root)
197 self.root = self.strip(self.repo.root)
197
198
198 style = self.ui.config('cia', 'style')
199 style = self.ui.config('cia', 'style')
199 template = self.ui.config('cia', 'template')
200 template = self.ui.config('cia', 'template')
200 if not template:
201 if not template:
201 template = self.diffstat and self.dstemplate or self.deftemplate
202 template = self.diffstat and self.dstemplate or self.deftemplate
202 template = templater.parsestring(template, quoted=False)
203 template = templater.parsestring(template, quoted=False)
203 t = cmdutil.changeset_templater(self.ui, self.repo, False, None,
204 t = cmdutil.changeset_templater(self.ui, self.repo, False, None,
204 style, False)
205 style, False)
205 t.use_template(template)
206 t.use_template(template)
206 self.templater = t
207 self.templater = t
207
208
208 def strip(self, path):
209 def strip(self, path):
209 '''strip leading slashes from local path, turn into web-safe path.'''
210 '''strip leading slashes from local path, turn into web-safe path.'''
210
211
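# e.g. strip('/repos/project') with cia.strip=2 yields 'project';
# with the default of -1 an empty string is returned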
211 path = util.pconvert(path)
212 path = util.pconvert(path)
212 count = self.stripcount
213 count = self.stripcount
213 if count < 0:
214 if count < 0:
214 return ''
215 return ''
215 while count > 0:
216 while count > 0:
216 c = path.find('/')
217 c = path.find('/')
217 if c == -1:
218 if c == -1:
218 break
219 break
219 path = path[c + 1:]
220 path = path[c + 1:]
220 count -= 1
221 count -= 1
221 return path
222 return path
222
223
223 def sendrpc(self, msg):
224 def sendrpc(self, msg):
224 srv = xmlrpclib.Server(self.ciaurl)
225 srv = xmlrpclib.Server(self.ciaurl)
225 res = srv.hub.deliver(msg)
226 res = srv.hub.deliver(msg)
226 if res is not True and res != 'queued.':
227 if res is not True and res != 'queued.':
227 raise util.Abort(_('%s returned an error: %s') %
228 raise util.Abort(_('%s returned an error: %s') %
228 (self.ciaurl, res))
229 (self.ciaurl, res))
229
230
230 def sendemail(self, address, data):
231 def sendemail(self, address, data):
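# wrap the XML payload in a mail message; this path is taken when
# cia.url is a mailto: URL (see hook() below)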
231 p = email.Parser.Parser()
232 p = email.Parser.Parser()
232 msg = p.parsestr(data)
233 msg = p.parsestr(data)
233 msg['Date'] = util.datestr(format="%a, %d %b %Y %H:%M:%S %1%2")
234 msg['Date'] = util.datestr(format="%a, %d %b %Y %H:%M:%S %1%2")
234 msg['To'] = address
235 msg['To'] = address
235 msg['From'] = self.emailfrom
236 msg['From'] = self.emailfrom
236 msg['Subject'] = 'DeliverXML'
237 msg['Subject'] = 'DeliverXML'
237 msg['Content-type'] = 'text/xml'
238 msg['Content-type'] = 'text/xml'
238 msgtext = msg.as_string()
239 msgtext = msg.as_string()
239
240
240 self.ui.status(_('hgcia: sending update to %s\n') % address)
241 self.ui.status(_('hgcia: sending update to %s\n') % address)
241 mail.sendmail(self.ui, util.email(self.emailfrom),
242 mail.sendmail(self.ui, util.email(self.emailfrom),
242 [address], msgtext)
243 [address], msgtext)
243
244
244
245
245 def hook(ui, repo, hooktype, node=None, url=None, **kwargs):
246 def hook(ui, repo, hooktype, node=None, url=None, **kwargs):
246 """ send CIA notification """
247 """ send CIA notification """
247 def sendmsg(cia, ctx):
248 def sendmsg(cia, ctx):
248 msg = ciamsg(cia, ctx).xml()
249 msg = ciamsg(cia, ctx).xml()
249 if cia.dryrun:
250 if cia.dryrun:
250 ui.write(msg)
251 ui.write(msg)
251 elif cia.ciaurl.startswith('mailto:'):
252 elif cia.ciaurl.startswith('mailto:'):
252 if not cia.emailfrom:
253 if not cia.emailfrom:
253 raise util.Abort(_('email.from must be defined when '
254 raise util.Abort(_('email.from must be defined when '
254 'sending by email'))
255 'sending by email'))
255 cia.sendemail(cia.ciaurl[7:], msg)
256 cia.sendemail(cia.ciaurl[7:], msg)
256 else:
257 else:
257 cia.sendrpc(msg)
258 cia.sendrpc(msg)
258
259
259 n = bin(node)
260 n = bin(node)
260 cia = hgcia(ui, repo)
261 cia = hgcia(ui, repo)
261 if not cia.user:
262 if not cia.user:
262 ui.debug('cia: no user specified')
263 ui.debug('cia: no user specified')
263 return
264 return
264 if not cia.project:
265 if not cia.project:
265 ui.debug('cia: no project specified')
266 ui.debug('cia: no project specified')
266 return
267 return
267 if hooktype == 'changegroup':
268 if hooktype == 'changegroup':
268 start = repo.changelog.rev(n)
269 start = repo.changelog.rev(n)
269 end = len(repo.changelog)
270 end = len(repo.changelog)
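# a changegroup brings in one or more changesets; node is the first of
# them, so send one notification per revision from there up to tip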
270 for rev in xrange(start, end):
271 for rev in xrange(start, end):
271 n = repo.changelog.node(rev)
272 n = repo.changelog.node(rev)
272 ctx = repo.changectx(n)
273 ctx = repo.changectx(n)
273 sendmsg(cia, ctx)
274 sendmsg(cia, ctx)
274 else:
275 else:
275 ctx = repo.changectx(n)
276 ctx = repo.changectx(n)
276 sendmsg(cia, ctx)
277 sendmsg(cia, ctx)
@@ -1,350 +1,352 @@
1 # Minimal support for git commands on an hg repository
1 # Minimal support for git commands on an hg repository
2 #
2 #
3 # Copyright 2005, 2006 Chris Mason <mason@suse.com>
3 # Copyright 2005, 2006 Chris Mason <mason@suse.com>
4 #
4 #
5 # This software may be used and distributed according to the terms of the
5 # This software may be used and distributed according to the terms of the
6 # GNU General Public License version 2 or any later version.
6 # GNU General Public License version 2 or any later version.
7
7
8 '''browse the repository in a graphical way
8 '''browse the repository in a graphical way
9
9
10 The hgk extension allows browsing the history of a repository in a
10 The hgk extension allows browsing the history of a repository in a
11 graphical way. It requires Tcl/Tk version 8.4 or later. (Tcl/Tk is not
11 graphical way. It requires Tcl/Tk version 8.4 or later. (Tcl/Tk is not
12 distributed with Mercurial.)
12 distributed with Mercurial.)
13
13
14 hgk consists of two parts: a Tcl script that does the displaying and
14 hgk consists of two parts: a Tcl script that does the displaying and
15 querying of information, and an extension to Mercurial named hgk.py,
15 querying of information, and an extension to Mercurial named hgk.py,
16 which provides hooks for hgk to get information. hgk can be found in
16 which provides hooks for hgk to get information. hgk can be found in
17 the contrib directory; the extension itself is shipped in the hgext
17 the contrib directory; the extension itself is shipped in the hgext
18 repository and needs to be enabled.
18 repository and needs to be enabled.
19
19
20 The :hg:`view` command will launch the hgk Tcl script. For this command
20 The :hg:`view` command will launch the hgk Tcl script. For this command
21 to work, hgk must be in your search path. Alternatively, you can specify
21 to work, hgk must be in your search path. Alternatively, you can specify
22 the path to hgk in your configuration file::
22 the path to hgk in your configuration file::
23
23
24 [hgk]
24 [hgk]
25 path=/location/of/hgk
25 path=/location/of/hgk
26
26
27 hgk can make use of the extdiff extension to visualize revisions.
27 hgk can make use of the extdiff extension to visualize revisions.
28 Assuming you have already configured the extdiff vdiff command, just add::
28 Assuming you have already configured the extdiff vdiff command, just add::
29
29
30 [hgk]
30 [hgk]
31 vdiff=vdiff
31 vdiff=vdiff
32
32
33 The revisions context menu will now display additional entries to fire
33 The revisions context menu will now display additional entries to fire
34 vdiff on the hovered and selected revisions.
34 vdiff on the hovered and selected revisions.
35 '''
35 '''
36
36
37 import os
37 import os
38 from mercurial import commands, util, patch, revlog, scmutil
38 from mercurial import commands, util, patch, revlog, scmutil
39 from mercurial.node import nullid, nullrev, short
39 from mercurial.node import nullid, nullrev, short
40 from mercurial.i18n import _
40 from mercurial.i18n import _
41
41
42 testedwith = 'internal'
43
42 def difftree(ui, repo, node1=None, node2=None, *files, **opts):
44 def difftree(ui, repo, node1=None, node2=None, *files, **opts):
43 """diff trees from two commits"""
45 """diff trees from two commits"""
44 def __difftree(repo, node1, node2, files=[]):
46 def __difftree(repo, node1, node2, files=[]):
45 assert node2 is not None
47 assert node2 is not None
46 mmap = repo[node1].manifest()
48 mmap = repo[node1].manifest()
47 mmap2 = repo[node2].manifest()
49 mmap2 = repo[node2].manifest()
48 m = scmutil.match(repo[node1], files)
50 m = scmutil.match(repo[node1], files)
49 modified, added, removed = repo.status(node1, node2, m)[:3]
51 modified, added, removed = repo.status(node1, node2, m)[:3]
50 empty = short(nullid)
52 empty = short(nullid)
51
53
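# the output mimics 'git diff-tree' raw records:
# ':<old mode> <new mode> <old hash> <new hash> <status>\t<file>\t<file>'
# (file modes are hard-coded for now, see the TODO below)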
52 for f in modified:
54 for f in modified:
53 # TODO get file permissions
55 # TODO get file permissions
54 ui.write(":100664 100664 %s %s M\t%s\t%s\n" %
56 ui.write(":100664 100664 %s %s M\t%s\t%s\n" %
55 (short(mmap[f]), short(mmap2[f]), f, f))
57 (short(mmap[f]), short(mmap2[f]), f, f))
56 for f in added:
58 for f in added:
57 ui.write(":000000 100664 %s %s N\t%s\t%s\n" %
59 ui.write(":000000 100664 %s %s N\t%s\t%s\n" %
58 (empty, short(mmap2[f]), f, f))
60 (empty, short(mmap2[f]), f, f))
59 for f in removed:
61 for f in removed:
60 ui.write(":100664 000000 %s %s D\t%s\t%s\n" %
62 ui.write(":100664 000000 %s %s D\t%s\t%s\n" %
61 (short(mmap[f]), empty, f, f))
63 (short(mmap[f]), empty, f, f))
62 ##
64 ##
63
65
64 while True:
66 while True:
65 if opts['stdin']:
67 if opts['stdin']:
66 try:
68 try:
67 line = raw_input().split(' ')
69 line = raw_input().split(' ')
68 node1 = line[0]
70 node1 = line[0]
69 if len(line) > 1:
71 if len(line) > 1:
70 node2 = line[1]
72 node2 = line[1]
71 else:
73 else:
72 node2 = None
74 node2 = None
73 except EOFError:
75 except EOFError:
74 break
76 break
75 node1 = repo.lookup(node1)
77 node1 = repo.lookup(node1)
76 if node2:
78 if node2:
77 node2 = repo.lookup(node2)
79 node2 = repo.lookup(node2)
78 else:
80 else:
79 node2 = node1
81 node2 = node1
80 node1 = repo.changelog.parents(node1)[0]
82 node1 = repo.changelog.parents(node1)[0]
81 if opts['patch']:
83 if opts['patch']:
82 if opts['pretty']:
84 if opts['pretty']:
83 catcommit(ui, repo, node2, "")
85 catcommit(ui, repo, node2, "")
84 m = scmutil.match(repo[node1], files)
86 m = scmutil.match(repo[node1], files)
85 chunks = patch.diff(repo, node1, node2, match=m,
87 chunks = patch.diff(repo, node1, node2, match=m,
86 opts=patch.diffopts(ui, {'git': True}))
88 opts=patch.diffopts(ui, {'git': True}))
87 for chunk in chunks:
89 for chunk in chunks:
88 ui.write(chunk)
90 ui.write(chunk)
89 else:
91 else:
90 __difftree(repo, node1, node2, files=files)
92 __difftree(repo, node1, node2, files=files)
91 if not opts['stdin']:
93 if not opts['stdin']:
92 break
94 break
93
95
94 def catcommit(ui, repo, n, prefix, ctx=None):
96 def catcommit(ui, repo, n, prefix, ctx=None):
95 nlprefix = '\n' + prefix
97 nlprefix = '\n' + prefix
96 if ctx is None:
98 if ctx is None:
97 ctx = repo[n]
99 ctx = repo[n]
98 # use ctx.node() instead ??
100 # use ctx.node() instead ??
99 ui.write("tree %s\n" % short(ctx.changeset()[0]))
101 ui.write("tree %s\n" % short(ctx.changeset()[0]))
100 for p in ctx.parents():
102 for p in ctx.parents():
101 ui.write("parent %s\n" % p)
103 ui.write("parent %s\n" % p)
102
104
103 date = ctx.date()
105 date = ctx.date()
104 description = ctx.description().replace("\0", "")
106 description = ctx.description().replace("\0", "")
105 lines = description.splitlines()
107 lines = description.splitlines()
106 if lines and lines[-1].startswith('committer:'):
108 if lines and lines[-1].startswith('committer:'):
107 committer = lines[-1].split(': ')[1].rstrip()
109 committer = lines[-1].split(': ')[1].rstrip()
108 else:
110 else:
109 committer = ctx.user()
111 committer = ctx.user()
110
112
111 ui.write("author %s %s %s\n" % (ctx.user(), int(date[0]), date[1]))
113 ui.write("author %s %s %s\n" % (ctx.user(), int(date[0]), date[1]))
112 ui.write("committer %s %s %s\n" % (committer, int(date[0]), date[1]))
114 ui.write("committer %s %s %s\n" % (committer, int(date[0]), date[1]))
113 ui.write("revision %d\n" % ctx.rev())
115 ui.write("revision %d\n" % ctx.rev())
114 ui.write("branch %s\n\n" % ctx.branch())
116 ui.write("branch %s\n\n" % ctx.branch())
115
117
116 if prefix != "":
118 if prefix != "":
117 ui.write("%s%s\n" % (prefix,
119 ui.write("%s%s\n" % (prefix,
118 description.replace('\n', nlprefix).strip()))
120 description.replace('\n', nlprefix).strip()))
119 else:
121 else:
120 ui.write(description + "\n")
122 ui.write(description + "\n")
121 if prefix:
123 if prefix:
122 ui.write('\0')
124 ui.write('\0')
123
125
124 def base(ui, repo, node1, node2):
126 def base(ui, repo, node1, node2):
125 """output common ancestor information"""
127 """output common ancestor information"""
126 node1 = repo.lookup(node1)
128 node1 = repo.lookup(node1)
127 node2 = repo.lookup(node2)
129 node2 = repo.lookup(node2)
128 n = repo.changelog.ancestor(node1, node2)
130 n = repo.changelog.ancestor(node1, node2)
129 ui.write(short(n) + "\n")
131 ui.write(short(n) + "\n")
130
132
131 def catfile(ui, repo, type=None, r=None, **opts):
133 def catfile(ui, repo, type=None, r=None, **opts):
132 """cat a specific revision"""
134 """cat a specific revision"""
133 # in stdin mode, every line except the commit is prefixed with two
135 # in stdin mode, every line except the commit is prefixed with two
134 # spaces. This way our caller can find the commit without magic
136 # spaces. This way our caller can find the commit without magic
135 # strings
137 # strings
136 #
138 #
137 prefix = ""
139 prefix = ""
138 if opts['stdin']:
140 if opts['stdin']:
139 try:
141 try:
140 (type, r) = raw_input().split(' ')
142 (type, r) = raw_input().split(' ')
141 prefix = " "
143 prefix = " "
142 except EOFError:
144 except EOFError:
143 return
145 return
144
146
145 else:
147 else:
146 if not type or not r:
148 if not type or not r:
147 ui.warn(_("cat-file: type or revision not supplied\n"))
149 ui.warn(_("cat-file: type or revision not supplied\n"))
148 commands.help_(ui, 'cat-file')
150 commands.help_(ui, 'cat-file')
149
151
150 while r:
152 while r:
151 if type != "commit":
153 if type != "commit":
152 ui.warn(_("aborting: hg cat-file only understands commits\n"))
154 ui.warn(_("aborting: hg cat-file only understands commits\n"))
153 return 1
155 return 1
154 n = repo.lookup(r)
156 n = repo.lookup(r)
155 catcommit(ui, repo, n, prefix)
157 catcommit(ui, repo, n, prefix)
156 if opts['stdin']:
158 if opts['stdin']:
157 try:
159 try:
158 (type, r) = raw_input().split(' ')
160 (type, r) = raw_input().split(' ')
159 except EOFError:
161 except EOFError:
160 break
162 break
161 else:
163 else:
162 break
164 break
163
165
164 # git rev-tree is a confusing thing. You can supply a number of
166 # git rev-tree is a confusing thing. You can supply a number of
165 # commit sha1s on the command line, and it walks the commit history
167 # commit sha1s on the command line, and it walks the commit history
166 # telling you which commits are reachable from the supplied ones via
168 # telling you which commits are reachable from the supplied ones via
167 # a bitmask based on arg position.
169 # a bitmask based on arg position.
168 # you can specify a commit to stop at by starting the sha1 with ^
170 # you can specify a commit to stop at by starting the sha1 with ^
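# For example, given arguments A and B, a commit reachable only from B is
# reported with mask 2 (binary 10), and one reachable from both with mask 3.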
169 def revtree(ui, args, repo, full="tree", maxnr=0, parents=False):
171 def revtree(ui, args, repo, full="tree", maxnr=0, parents=False):
170 def chlogwalk():
172 def chlogwalk():
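# walk the changelog from tip backwards in chunks of up to 100
# revisions, yielding (rev, changectx) pairs; the ctx is None when
# no commit header needs to be printed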
171 count = len(repo)
173 count = len(repo)
172 i = count
174 i = count
173 l = [0] * 100
175 l = [0] * 100
174 chunk = 100
176 chunk = 100
175 while True:
177 while True:
176 if chunk > i:
178 if chunk > i:
177 chunk = i
179 chunk = i
178 i = 0
180 i = 0
179 else:
181 else:
180 i -= chunk
182 i -= chunk
181
183
182 for x in xrange(chunk):
184 for x in xrange(chunk):
183 if i + x >= count:
185 if i + x >= count:
184 l[chunk - x:] = [0] * (chunk - x)
186 l[chunk - x:] = [0] * (chunk - x)
185 break
187 break
186 if full is not None:
188 if full is not None:
187 l[x] = repo[i + x]
189 l[x] = repo[i + x]
188 l[x].changeset() # force reading
190 l[x].changeset() # force reading
189 else:
191 else:
190 l[x] = 1
192 l[x] = 1
191 for x in xrange(chunk - 1, -1, -1):
193 for x in xrange(chunk - 1, -1, -1):
192 if l[x] != 0:
194 if l[x] != 0:
193 yield (i + x, full is not None and l[x] or None)
195 yield (i + x, full is not None and l[x] or None)
194 if i == 0:
196 if i == 0:
195 break
197 break
196
198
197 # calculate and return the reachability bitmask for sha
199 # calculate and return the reachability bitmask for sha
198 def is_reachable(ar, reachable, sha):
200 def is_reachable(ar, reachable, sha):
199 if len(ar) == 0:
201 if len(ar) == 0:
200 return 1
202 return 1
201 mask = 0
203 mask = 0
202 for i in xrange(len(ar)):
204 for i in xrange(len(ar)):
203 if sha in reachable[i]:
205 if sha in reachable[i]:
204 mask |= 1 << i
206 mask |= 1 << i
205
207
206 return mask
208 return mask
207
209
208 reachable = []
210 reachable = []
209 stop_sha1 = []
211 stop_sha1 = []
210 want_sha1 = []
212 want_sha1 = []
211 count = 0
213 count = 0
212
214
213 # figure out which commits they are asking for and which ones they
215 # figure out which commits they are asking for and which ones they
214 # want us to stop on
216 # want us to stop on
215 for i, arg in enumerate(args):
217 for i, arg in enumerate(args):
216 if arg.startswith('^'):
218 if arg.startswith('^'):
217 s = repo.lookup(arg[1:])
219 s = repo.lookup(arg[1:])
218 stop_sha1.append(s)
220 stop_sha1.append(s)
219 want_sha1.append(s)
221 want_sha1.append(s)
220 elif arg != 'HEAD':
222 elif arg != 'HEAD':
221 want_sha1.append(repo.lookup(arg))
223 want_sha1.append(repo.lookup(arg))
222
224
223 # calculate the graph for the supplied commits
225 # calculate the graph for the supplied commits
224 for i, n in enumerate(want_sha1):
226 for i, n in enumerate(want_sha1):
225 reachable.append(set())
227 reachable.append(set())
226 visit = [n]
228 visit = [n]
227 reachable[i].add(n)
229 reachable[i].add(n)
228 while visit:
230 while visit:
229 n = visit.pop(0)
231 n = visit.pop(0)
230 if n in stop_sha1:
232 if n in stop_sha1:
231 continue
233 continue
232 for p in repo.changelog.parents(n):
234 for p in repo.changelog.parents(n):
233 if p not in reachable[i]:
235 if p not in reachable[i]:
234 reachable[i].add(p)
236 reachable[i].add(p)
235 visit.append(p)
237 visit.append(p)
236 if p in stop_sha1:
238 if p in stop_sha1:
237 continue
239 continue
238
240
239 # walk the repository looking for commits that are in our
241 # walk the repository looking for commits that are in our
240 # reachability graph
242 # reachability graph
241 for i, ctx in chlogwalk():
243 for i, ctx in chlogwalk():
242 n = repo.changelog.node(i)
244 n = repo.changelog.node(i)
243 mask = is_reachable(want_sha1, reachable, n)
245 mask = is_reachable(want_sha1, reachable, n)
244 if mask:
246 if mask:
245 parentstr = ""
247 parentstr = ""
246 if parents:
248 if parents:
247 pp = repo.changelog.parents(n)
249 pp = repo.changelog.parents(n)
248 if pp[0] != nullid:
250 if pp[0] != nullid:
249 parentstr += " " + short(pp[0])
251 parentstr += " " + short(pp[0])
250 if pp[1] != nullid:
252 if pp[1] != nullid:
251 parentstr += " " + short(pp[1])
253 parentstr += " " + short(pp[1])
252 if not full:
254 if not full:
253 ui.write("%s%s\n" % (short(n), parentstr))
255 ui.write("%s%s\n" % (short(n), parentstr))
254 elif full == "commit":
256 elif full == "commit":
255 ui.write("%s%s\n" % (short(n), parentstr))
257 ui.write("%s%s\n" % (short(n), parentstr))
256 catcommit(ui, repo, n, ' ', ctx)
258 catcommit(ui, repo, n, ' ', ctx)
257 else:
259 else:
258 (p1, p2) = repo.changelog.parents(n)
260 (p1, p2) = repo.changelog.parents(n)
259 (h, h1, h2) = map(short, (n, p1, p2))
261 (h, h1, h2) = map(short, (n, p1, p2))
260 (i1, i2) = map(repo.changelog.rev, (p1, p2))
262 (i1, i2) = map(repo.changelog.rev, (p1, p2))
261
263
262 date = ctx.date()[0]
264 date = ctx.date()[0]
263 ui.write("%s %s:%s" % (date, h, mask))
265 ui.write("%s %s:%s" % (date, h, mask))
264 mask = is_reachable(want_sha1, reachable, p1)
266 mask = is_reachable(want_sha1, reachable, p1)
265 if i1 != nullrev and mask > 0:
267 if i1 != nullrev and mask > 0:
266 ui.write("%s:%s " % (h1, mask)),
268 ui.write("%s:%s " % (h1, mask)),
267 mask = is_reachable(want_sha1, reachable, p2)
269 mask = is_reachable(want_sha1, reachable, p2)
268 if i2 != nullrev and mask > 0:
270 if i2 != nullrev and mask > 0:
269 ui.write("%s:%s " % (h2, mask))
271 ui.write("%s:%s " % (h2, mask))
270 ui.write("\n")
272 ui.write("\n")
271 if maxnr and count >= maxnr:
273 if maxnr and count >= maxnr:
272 break
274 break
273 count += 1
275 count += 1
274
276
275 def revparse(ui, repo, *revs, **opts):
277 def revparse(ui, repo, *revs, **opts):
276 """parse given revisions"""
278 """parse given revisions"""
277 def revstr(rev):
279 def revstr(rev):
278 if rev == 'HEAD':
280 if rev == 'HEAD':
279 rev = 'tip'
281 rev = 'tip'
280 return revlog.hex(repo.lookup(rev))
282 return revlog.hex(repo.lookup(rev))
281
283
282 for r in revs:
284 for r in revs:
283 revrange = r.split(':', 1)
285 revrange = r.split(':', 1)
284 ui.write('%s\n' % revstr(revrange[0]))
286 ui.write('%s\n' % revstr(revrange[0]))
285 if len(revrange) == 2:
287 if len(revrange) == 2:
286 ui.write('^%s\n' % revstr(revrange[1]))
288 ui.write('^%s\n' % revstr(revrange[1]))
287
289
288 # git rev-list tries to order things by date, and has the ability to stop
290 # git rev-list tries to order things by date, and has the ability to stop
289 # at a given commit without walking the whole repo. TODO add the stop
291 # at a given commit without walking the whole repo. TODO add the stop
290 # parameter
292 # parameter
291 def revlist(ui, repo, *revs, **opts):
293 def revlist(ui, repo, *revs, **opts):
292 """print revisions"""
294 """print revisions"""
293 if opts['header']:
295 if opts['header']:
294 full = "commit"
296 full = "commit"
295 else:
297 else:
296 full = None
298 full = None
297 copy = [x for x in revs]
299 copy = [x for x in revs]
298 revtree(ui, copy, repo, full, opts['max_count'], opts['parents'])
300 revtree(ui, copy, repo, full, opts['max_count'], opts['parents'])
299
301
300 def config(ui, repo, **opts):
302 def config(ui, repo, **opts):
301 """print extension options"""
303 """print extension options"""
302 def writeopt(name, value):
304 def writeopt(name, value):
303 ui.write('k=%s\nv=%s\n' % (name, value))
305 ui.write('k=%s\nv=%s\n' % (name, value))
304
306
305 writeopt('vdiff', ui.config('hgk', 'vdiff', ''))
307 writeopt('vdiff', ui.config('hgk', 'vdiff', ''))
306
308
307
309
308 def view(ui, repo, *etc, **opts):
310 def view(ui, repo, *etc, **opts):
309 "start interactive history viewer"
311 "start interactive history viewer"
310 os.chdir(repo.root)
312 os.chdir(repo.root)
311 optstr = ' '.join(['--%s %s' % (k, v) for k, v in opts.iteritems() if v])
313 optstr = ' '.join(['--%s %s' % (k, v) for k, v in opts.iteritems() if v])
312 cmd = ui.config("hgk", "path", "hgk") + " %s %s" % (optstr, " ".join(etc))
314 cmd = ui.config("hgk", "path", "hgk") + " %s %s" % (optstr, " ".join(etc))
313 ui.debug("running %s\n" % cmd)
315 ui.debug("running %s\n" % cmd)
314 util.system(cmd)
316 util.system(cmd)
315
317
316 cmdtable = {
318 cmdtable = {
317 "^view":
319 "^view":
318 (view,
320 (view,
319 [('l', 'limit', '',
321 [('l', 'limit', '',
320 _('limit number of changes displayed'), _('NUM'))],
322 _('limit number of changes displayed'), _('NUM'))],
321 _('hg view [-l LIMIT] [REVRANGE]')),
323 _('hg view [-l LIMIT] [REVRANGE]')),
322 "debug-diff-tree":
324 "debug-diff-tree":
323 (difftree,
325 (difftree,
324 [('p', 'patch', None, _('generate patch')),
326 [('p', 'patch', None, _('generate patch')),
325 ('r', 'recursive', None, _('recursive')),
327 ('r', 'recursive', None, _('recursive')),
326 ('P', 'pretty', None, _('pretty')),
328 ('P', 'pretty', None, _('pretty')),
327 ('s', 'stdin', None, _('stdin')),
329 ('s', 'stdin', None, _('stdin')),
328 ('C', 'copy', None, _('detect copies')),
330 ('C', 'copy', None, _('detect copies')),
329 ('S', 'search', "", _('search'))],
331 ('S', 'search', "", _('search'))],
330 _('hg git-diff-tree [OPTION]... NODE1 NODE2 [FILE]...')),
332 _('hg git-diff-tree [OPTION]... NODE1 NODE2 [FILE]...')),
331 "debug-cat-file":
333 "debug-cat-file":
332 (catfile,
334 (catfile,
333 [('s', 'stdin', None, _('stdin'))],
335 [('s', 'stdin', None, _('stdin'))],
334 _('hg debug-cat-file [OPTION]... TYPE FILE')),
336 _('hg debug-cat-file [OPTION]... TYPE FILE')),
335 "debug-config":
337 "debug-config":
336 (config, [], _('hg debug-config')),
338 (config, [], _('hg debug-config')),
337 "debug-merge-base":
339 "debug-merge-base":
338 (base, [], _('hg debug-merge-base REV REV')),
340 (base, [], _('hg debug-merge-base REV REV')),
339 "debug-rev-parse":
341 "debug-rev-parse":
340 (revparse,
342 (revparse,
341 [('', 'default', '', _('ignored'))],
343 [('', 'default', '', _('ignored'))],
342 _('hg debug-rev-parse REV')),
344 _('hg debug-rev-parse REV')),
343 "debug-rev-list":
345 "debug-rev-list":
344 (revlist,
346 (revlist,
345 [('H', 'header', None, _('header')),
347 [('H', 'header', None, _('header')),
346 ('t', 'topo-order', None, _('topo-order')),
348 ('t', 'topo-order', None, _('topo-order')),
347 ('p', 'parents', None, _('parents')),
349 ('p', 'parents', None, _('parents')),
348 ('n', 'max-count', 0, _('max-count'))],
350 ('n', 'max-count', 0, _('max-count'))],
349 _('hg debug-rev-list [OPTION]... REV...')),
351 _('hg debug-rev-list [OPTION]... REV...')),
350 }
352 }
@@ -1,63 +1,64 @@
1 # highlight - syntax highlighting in hgweb, based on Pygments
1 # highlight - syntax highlighting in hgweb, based on Pygments
2 #
2 #
3 # Copyright 2008, 2009 Patrick Mezard <pmezard@gmail.com> and others
3 # Copyright 2008, 2009 Patrick Mezard <pmezard@gmail.com> and others
4 #
4 #
5 # This software may be used and distributed according to the terms of the
5 # This software may be used and distributed according to the terms of the
6 # GNU General Public License version 2 or any later version.
6 # GNU General Public License version 2 or any later version.
7 #
7 #
8 # The original module was split into an interface and an implementation
8 # The original module was split into an interface and an implementation
9 # file to defer pygments loading and speed up extension setup.
9 # file to defer pygments loading and speed up extension setup.
10
10
11 """syntax highlighting for hgweb (requires Pygments)
11 """syntax highlighting for hgweb (requires Pygments)
12
12
13 It depends on the Pygments syntax highlighting library:
13 It depends on the Pygments syntax highlighting library:
14 http://pygments.org/
14 http://pygments.org/
15
15
16 There is a single configuration option::
16 There is a single configuration option::
17
17
18 [web]
18 [web]
19 pygments_style = <style>
19 pygments_style = <style>
20
20
21 The default is 'colorful'.
21 The default is 'colorful'.
22 """
22 """
23
23
24 import highlight
24 import highlight
25 from mercurial.hgweb import webcommands, webutil, common
25 from mercurial.hgweb import webcommands, webutil, common
26 from mercurial import extensions, encoding
26 from mercurial import extensions, encoding
27 testedwith = 'internal'
27
28
28 def filerevision_highlight(orig, web, tmpl, fctx):
29 def filerevision_highlight(orig, web, tmpl, fctx):
29 mt = ''.join(tmpl('mimetype', encoding=encoding.encoding))
30 mt = ''.join(tmpl('mimetype', encoding=encoding.encoding))
30 # only pygmentize for mimetype containing 'html' so we both match
31 # only pygmentize for mimetype containing 'html' so we both match
31 # 'text/html' and possibly 'application/xhtml+xml' in the future
32 # 'text/html' and possibly 'application/xhtml+xml' in the future
32 # so that we don't have to touch the extension when the mimetype
33 # so that we don't have to touch the extension when the mimetype
33 # for a template changes; also hgweb optimizes the case that a
34 # for a template changes; also hgweb optimizes the case that a
34 # raw file is sent using rawfile() and doesn't call us, so we
35 # raw file is sent using rawfile() and doesn't call us, so we
35 # can't clash with the file's content-type here in case we
36 # can't clash with the file's content-type here in case we
36 # pygmentize an HTML file
37 # pygmentize an HTML file
37 if 'html' in mt:
38 if 'html' in mt:
38 style = web.config('web', 'pygments_style', 'colorful')
39 style = web.config('web', 'pygments_style', 'colorful')
39 highlight.pygmentize('fileline', fctx, style, tmpl)
40 highlight.pygmentize('fileline', fctx, style, tmpl)
40 return orig(web, tmpl, fctx)
41 return orig(web, tmpl, fctx)
41
42
42 def annotate_highlight(orig, web, req, tmpl):
43 def annotate_highlight(orig, web, req, tmpl):
43 mt = ''.join(tmpl('mimetype', encoding=encoding.encoding))
44 mt = ''.join(tmpl('mimetype', encoding=encoding.encoding))
44 if 'html' in mt:
45 if 'html' in mt:
45 fctx = webutil.filectx(web.repo, req)
46 fctx = webutil.filectx(web.repo, req)
46 style = web.config('web', 'pygments_style', 'colorful')
47 style = web.config('web', 'pygments_style', 'colorful')
47 highlight.pygmentize('annotateline', fctx, style, tmpl)
48 highlight.pygmentize('annotateline', fctx, style, tmpl)
48 return orig(web, req, tmpl)
49 return orig(web, req, tmpl)
49
50
50 def generate_css(web, req, tmpl):
51 def generate_css(web, req, tmpl):
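# registered as the 'highlightcss' web command in extsetup() below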
51 pg_style = web.config('web', 'pygments_style', 'colorful')
52 pg_style = web.config('web', 'pygments_style', 'colorful')
52 fmter = highlight.HtmlFormatter(style = pg_style)
53 fmter = highlight.HtmlFormatter(style = pg_style)
53 req.respond(common.HTTP_OK, 'text/css')
54 req.respond(common.HTTP_OK, 'text/css')
54 return ['/* pygments_style = %s */\n\n' % pg_style,
55 return ['/* pygments_style = %s */\n\n' % pg_style,
55 fmter.get_style_defs('')]
56 fmter.get_style_defs('')]
56
57
57 def extsetup():
58 def extsetup():
58 # monkeypatch in the new version
59 # monkeypatch in the new version
59 extensions.wrapfunction(webcommands, '_filerevision',
60 extensions.wrapfunction(webcommands, '_filerevision',
60 filerevision_highlight)
61 filerevision_highlight)
61 extensions.wrapfunction(webcommands, 'annotate', annotate_highlight)
62 extensions.wrapfunction(webcommands, 'annotate', annotate_highlight)
62 webcommands.highlightcss = generate_css
63 webcommands.highlightcss = generate_css
63 webcommands.__all__.append('highlightcss')
64 webcommands.__all__.append('highlightcss')
@@ -1,91 +1,93 @@
1 # __init__.py - inotify-based status acceleration for Linux
1 # __init__.py - inotify-based status acceleration for Linux
2 #
2 #
3 # Copyright 2006, 2007, 2008 Bryan O'Sullivan <bos@serpentine.com>
3 # Copyright 2006, 2007, 2008 Bryan O'Sullivan <bos@serpentine.com>
4 # Copyright 2007, 2008 Brendan Cully <brendan@kublai.com>
4 # Copyright 2007, 2008 Brendan Cully <brendan@kublai.com>
5 #
5 #
6 # This software may be used and distributed according to the terms of the
6 # This software may be used and distributed according to the terms of the
7 # GNU General Public License version 2 or any later version.
7 # GNU General Public License version 2 or any later version.
8
8
9 '''accelerate status report using Linux's inotify service'''
9 '''accelerate status report using Linux's inotify service'''
10
10
11 # todo: socket permissions
11 # todo: socket permissions
12
12
13 from mercurial.i18n import _
13 from mercurial.i18n import _
14 from mercurial import util
14 from mercurial import util
15 import server
15 import server
16 from client import client, QueryFailed
16 from client import client, QueryFailed
17
17
18 testedwith = 'internal'
19
18 def serve(ui, repo, **opts):
20 def serve(ui, repo, **opts):
19 '''start an inotify server for this repository'''
21 '''start an inotify server for this repository'''
20 server.start(ui, repo.dirstate, repo.root, opts)
22 server.start(ui, repo.dirstate, repo.root, opts)
21
23
22 def debuginotify(ui, repo, **opts):
24 def debuginotify(ui, repo, **opts):
23 '''debugging information for inotify extension
25 '''debugging information for inotify extension
24
26
25 Prints the list of directories being watched by the inotify server.
27 Prints the list of directories being watched by the inotify server.
26 '''
28 '''
27 cli = client(ui, repo)
29 cli = client(ui, repo)
28 response = cli.debugquery()
30 response = cli.debugquery()
29
31
30 ui.write(_('directories being watched:\n'))
32 ui.write(_('directories being watched:\n'))
31 for path in response:
33 for path in response:
32 ui.write((' %s/\n') % path)
34 ui.write((' %s/\n') % path)
33
35
34 def reposetup(ui, repo):
36 def reposetup(ui, repo):
35 if not util.safehasattr(repo, 'dirstate'):
37 if not util.safehasattr(repo, 'dirstate'):
36 return
38 return
37
39
38 class inotifydirstate(repo.dirstate.__class__):
40 class inotifydirstate(repo.dirstate.__class__):
39
41
40 # We'll set this to false after an unsuccessful attempt so that
42 # We'll set this to false after an unsuccessful attempt so that
41 # next calls of status() within the same instance don't try again
43 # next calls of status() within the same instance don't try again
42 # to start an inotify server if it won't start.
44 # to start an inotify server if it won't start.
43 _inotifyon = True
45 _inotifyon = True
44
46
45 def status(self, match, subrepos, ignored, clean, unknown):
47 def status(self, match, subrepos, ignored, clean, unknown):
46 files = match.files()
48 files = match.files()
47 if '.' in files:
49 if '.' in files:
48 files = []
50 files = []
49 if (self._inotifyon and not ignored and not subrepos and
51 if (self._inotifyon and not ignored and not subrepos and
50 not self._dirty):
52 not self._dirty):
51 cli = client(ui, repo)
53 cli = client(ui, repo)
52 try:
54 try:
53 result = cli.statusquery(files, match, False,
55 result = cli.statusquery(files, match, False,
54 clean, unknown)
56 clean, unknown)
55 except QueryFailed, instr:
57 except QueryFailed, instr:
56 ui.debug(str(instr))
58 ui.debug(str(instr))
57 # don't retry within the same hg instance
59 # don't retry within the same hg instance
58 inotifydirstate._inotifyon = False
60 inotifydirstate._inotifyon = False
59 pass
61 pass
60 else:
62 else:
61 if ui.config('inotify', 'debug'):
63 if ui.config('inotify', 'debug'):
62 r2 = super(inotifydirstate, self).status(
64 r2 = super(inotifydirstate, self).status(
63 match, [], False, clean, unknown)
65 match, [], False, clean, unknown)
64 for c, a, b in zip('LMARDUIC', result, r2):
66 for c, a, b in zip('LMARDUIC', result, r2):
65 for f in a:
67 for f in a:
66 if f not in b:
68 if f not in b:
67 ui.warn('*** inotify: %s +%s\n' % (c, f))
69 ui.warn('*** inotify: %s +%s\n' % (c, f))
68 for f in b:
70 for f in b:
69 if f not in a:
71 if f not in a:
70 ui.warn('*** inotify: %s -%s\n' % (c, f))
72 ui.warn('*** inotify: %s -%s\n' % (c, f))
71 result = r2
73 result = r2
72 return result
74 return result
73 return super(inotifydirstate, self).status(
75 return super(inotifydirstate, self).status(
74 match, subrepos, ignored, clean, unknown)
76 match, subrepos, ignored, clean, unknown)
75
77
76 repo.dirstate.__class__ = inotifydirstate
78 repo.dirstate.__class__ = inotifydirstate
77
79
78 cmdtable = {
80 cmdtable = {
79 'debuginotify':
81 'debuginotify':
80 (debuginotify, [], ('hg debuginotify')),
82 (debuginotify, [], ('hg debuginotify')),
81 '^inserve':
83 '^inserve':
82 (serve,
84 (serve,
83 [('d', 'daemon', None, _('run server in background')),
85 [('d', 'daemon', None, _('run server in background')),
84 ('', 'daemon-pipefds', '',
86 ('', 'daemon-pipefds', '',
85 _('used internally by daemon mode'), _('NUM')),
87 _('used internally by daemon mode'), _('NUM')),
86 ('t', 'idle-timeout', '',
88 ('t', 'idle-timeout', '',
87 _('minutes to sit idle before exiting'), _('NUM')),
89 _('minutes to sit idle before exiting'), _('NUM')),
88 ('', 'pid-file', '',
90 ('', 'pid-file', '',
89 _('name of file to write process ID to'), _('FILE'))],
91 _('name of file to write process ID to'), _('FILE'))],
90 _('hg inserve [OPTION]...')),
92 _('hg inserve [OPTION]...')),
91 }
93 }
@@ -1,81 +1,83 @@
1 # interhg.py - interhg
1 # interhg.py - interhg
2 #
2 #
3 # Copyright 2007 OHASHI Hideya <ohachige@gmail.com>
3 # Copyright 2007 OHASHI Hideya <ohachige@gmail.com>
4 #
4 #
5 # Contributor(s):
5 # Contributor(s):
6 # Edward Lee <edward.lee@engineering.uiuc.edu>
6 # Edward Lee <edward.lee@engineering.uiuc.edu>
7 #
7 #
8 # This software may be used and distributed according to the terms of the
8 # This software may be used and distributed according to the terms of the
9 # GNU General Public License version 2 or any later version.
9 # GNU General Public License version 2 or any later version.
10
10
11 '''expand expressions into changelog and summaries
11 '''expand expressions into changelog and summaries
12
12
13 This extension allows the use of a special syntax in summaries, which
13 This extension allows the use of a special syntax in summaries, which
14 will be automatically expanded into links or any other arbitrary
14 will be automatically expanded into links or any other arbitrary
15 expression, much like InterWiki does.
15 expression, much like InterWiki does.
16
16
17 A few example patterns (link to bug tracking, etc.) that may be used
17 A few example patterns (link to bug tracking, etc.) that may be used
18 in your hgrc::
18 in your hgrc::
19
19
20 [interhg]
20 [interhg]
21 issues = s!issue(\\d+)!<a href="http://bts/issue\\1">issue\\1</a>!
21 issues = s!issue(\\d+)!<a href="http://bts/issue\\1">issue\\1</a>!
22 bugzilla = s!((?:bug|b=|(?=#?\\d{4,}))(?:\\s*#?)(\\d+))!<a..=\\2">\\1</a>!i
22 bugzilla = s!((?:bug|b=|(?=#?\\d{4,}))(?:\\s*#?)(\\d+))!<a..=\\2">\\1</a>!i
23 boldify = s!(^|\\s)#(\\d+)\\b! <b>#\\2</b>!
23 boldify = s!(^|\\s)#(\\d+)\\b! <b>#\\2</b>!
24 '''
24 '''
25
25
26 import re
26 import re
27 from mercurial.hgweb import hgweb_mod
27 from mercurial.hgweb import hgweb_mod
28 from mercurial import templatefilters, extensions
28 from mercurial import templatefilters, extensions
29 from mercurial.i18n import _
29 from mercurial.i18n import _
30
30
31 testedwith = 'internal'
32
31 interhg_table = []
33 interhg_table = []
32
34
33 def uisetup(ui):
35 def uisetup(ui):
34 orig_escape = templatefilters.filters["escape"]
36 orig_escape = templatefilters.filters["escape"]
35
37
36 def interhg_escape(x):
38 def interhg_escape(x):
37 escstr = orig_escape(x)
39 escstr = orig_escape(x)
38 for regexp, format in interhg_table:
40 for regexp, format in interhg_table:
39 escstr = regexp.sub(format, escstr)
41 escstr = regexp.sub(format, escstr)
40 return escstr
42 return escstr
41
43
42 templatefilters.filters["escape"] = interhg_escape
44 templatefilters.filters["escape"] = interhg_escape
43
45
44 def interhg_refresh(orig, self, *args, **kwargs):
46 def interhg_refresh(orig, self, *args, **kwargs):
45 interhg_table[:] = []
47 interhg_table[:] = []
46 for key, pattern in self.repo.ui.configitems('interhg'):
48 for key, pattern in self.repo.ui.configitems('interhg'):
47 # grab the delimiter from the character after the "s"
49 # grab the delimiter from the character after the "s"
48 unesc = pattern[1]
50 unesc = pattern[1]
49 delim = re.escape(unesc)
51 delim = re.escape(unesc)
50
52
51 # identify portions of the pattern, taking care to avoid escaped
53 # identify portions of the pattern, taking care to avoid escaped
52 # delimiters. the replace format and flags are optional, but delimiters
54 # delimiters. the replace format and flags are optional, but delimiters
53 # are required.
55 # are required.
54 match = re.match(r'^s%s(.+)(?:(?<=\\\\)|(?<!\\))%s(.*)%s([ilmsux])*$'
56 match = re.match(r'^s%s(.+)(?:(?<=\\\\)|(?<!\\))%s(.*)%s([ilmsux])*$'
55 % (delim, delim, delim), pattern)
57 % (delim, delim, delim), pattern)
56 if not match:
58 if not match:
57 self.repo.ui.warn(_("interhg: invalid pattern for %s: %s\n")
59 self.repo.ui.warn(_("interhg: invalid pattern for %s: %s\n")
58 % (key, pattern))
60 % (key, pattern))
59 continue
61 continue
60
62
61 # we need to unescape the delimiter for regexp and format
63 # we need to unescape the delimiter for regexp and format
62 delim_re = re.compile(r'(?<!\\)\\%s' % delim)
64 delim_re = re.compile(r'(?<!\\)\\%s' % delim)
63 regexp = delim_re.sub(unesc, match.group(1))
65 regexp = delim_re.sub(unesc, match.group(1))
64 format = delim_re.sub(unesc, match.group(2))
66 format = delim_re.sub(unesc, match.group(2))
65
67
66 # the pattern allows for 6 regexp flags, so set them if necessary
68 # the pattern allows for 6 regexp flags, so set them if necessary
67 flagin = match.group(3)
69 flagin = match.group(3)
68 flags = 0
70 flags = 0
69 if flagin:
71 if flagin:
70 for flag in flagin.upper():
72 for flag in flagin.upper():
71 flags |= re.__dict__[flag]
73 flags |= re.__dict__[flag]
72
74
73 try:
75 try:
74 regexp = re.compile(regexp, flags)
76 regexp = re.compile(regexp, flags)
75 interhg_table.append((regexp, format))
77 interhg_table.append((regexp, format))
76 except re.error:
78 except re.error:
77 self.repo.ui.warn(_("interhg: invalid regexp for %s: %s\n")
79 self.repo.ui.warn(_("interhg: invalid regexp for %s: %s\n")
78 % (key, regexp))
80 % (key, regexp))
79 return orig(self, *args, **kwargs)
81 return orig(self, *args, **kwargs)
80
82
81 extensions.wrapfunction(hgweb_mod.hgweb, 'refresh', interhg_refresh)
83 extensions.wrapfunction(hgweb_mod.hgweb, 'refresh', interhg_refresh)
@@ -1,702 +1,703 @@
1 # keyword.py - $Keyword$ expansion for Mercurial
1 # keyword.py - $Keyword$ expansion for Mercurial
2 #
2 #
3 # Copyright 2007-2010 Christian Ebert <blacktrash@gmx.net>
3 # Copyright 2007-2010 Christian Ebert <blacktrash@gmx.net>
4 #
4 #
5 # This software may be used and distributed according to the terms of the
5 # This software may be used and distributed according to the terms of the
6 # GNU General Public License version 2 or any later version.
6 # GNU General Public License version 2 or any later version.
7 #
7 #
8 # $Id$
8 # $Id$
9 #
9 #
10 # Keyword expansion hack against the grain of a DSCM
10 # Keyword expansion hack against the grain of a DSCM
11 #
11 #
12 # There are many good reasons why this is not needed in a distributed
12 # There are many good reasons why this is not needed in a distributed
13 # SCM; still, it may be useful in very small projects based on single
13 # SCM; still, it may be useful in very small projects based on single
14 # files (like LaTeX packages), that are mostly addressed to an
14 # files (like LaTeX packages), that are mostly addressed to an
15 # audience not running a version control system.
15 # audience not running a version control system.
16 #
16 #
17 # For in-depth discussion refer to
17 # For in-depth discussion refer to
18 # <http://mercurial.selenic.com/wiki/KeywordPlan>.
18 # <http://mercurial.selenic.com/wiki/KeywordPlan>.
19 #
19 #
20 # Keyword expansion is based on Mercurial's changeset template mappings.
20 # Keyword expansion is based on Mercurial's changeset template mappings.
21 #
21 #
22 # Binary files are not touched.
22 # Binary files are not touched.
23 #
23 #
24 # Files to act upon/ignore are specified in the [keyword] section.
24 # Files to act upon/ignore are specified in the [keyword] section.
25 # Customized keyword template mappings in the [keywordmaps] section.
25 # Customized keyword template mappings in the [keywordmaps] section.
26 #
26 #
27 # Run "hg help keyword" and "hg kwdemo" to get info on configuration.
27 # Run "hg help keyword" and "hg kwdemo" to get info on configuration.
28
28
29 '''expand keywords in tracked files
29 '''expand keywords in tracked files
30
30
31 This extension expands RCS/CVS-like or self-customized $Keywords$ in
31 This extension expands RCS/CVS-like or self-customized $Keywords$ in
32 tracked text files selected by your configuration.
32 tracked text files selected by your configuration.
33
33
34 Keywords are only expanded in local repositories and not stored in the
34 Keywords are only expanded in local repositories and not stored in the
35 change history. The mechanism can be regarded as a convenience for the
35 change history. The mechanism can be regarded as a convenience for the
36 current user or for archive distribution.
36 current user or for archive distribution.
37
37
38 Keywords expand to the changeset data pertaining to the latest change
38 Keywords expand to the changeset data pertaining to the latest change
39 relative to the working directory parent of each file.
39 relative to the working directory parent of each file.
40
40
41 Configuration is done in the [keyword], [keywordset] and [keywordmaps]
41 Configuration is done in the [keyword], [keywordset] and [keywordmaps]
42 sections of hgrc files.
42 sections of hgrc files.
43
43
44 Example::
44 Example::
45
45
46 [keyword]
46 [keyword]
47 # expand keywords in every python file except those matching "x*"
47 # expand keywords in every python file except those matching "x*"
48 **.py =
48 **.py =
49 x* = ignore
49 x* = ignore
50
50
51 [keywordset]
51 [keywordset]
52 # prefer svn- over cvs-like default keywordmaps
52 # prefer svn- over cvs-like default keywordmaps
53 svn = True
53 svn = True
54
54
55 .. note::
55 .. note::
56 The more specific your filename patterns are, the less speed you
56 The more specific your filename patterns are, the less speed you
57 lose in huge repositories.
57 lose in huge repositories.
58
58
59 For [keywordmaps] template mapping and expansion demonstration and
59 For [keywordmaps] template mapping and expansion demonstration and
60 control run :hg:`kwdemo`. See :hg:`help templates` for a list of
60 control run :hg:`kwdemo`. See :hg:`help templates` for a list of
61 available templates and filters.
61 available templates and filters.
62
62
63 Three additional date template filters are provided:
63 Three additional date template filters are provided:
64
64
65 :``utcdate``: "2006/09/18 15:13:13"
65 :``utcdate``: "2006/09/18 15:13:13"
66 :``svnutcdate``: "2006-09-18 15:13:13Z"
66 :``svnutcdate``: "2006-09-18 15:13:13Z"
67 :``svnisodate``: "2006-09-18 08:13:13 -0700 (Mon, 18 Sep 2006)"
67 :``svnisodate``: "2006-09-18 08:13:13 -0700 (Mon, 18 Sep 2006)"
68
68
69 The default template mappings (view with :hg:`kwdemo -d`) can be
69 The default template mappings (view with :hg:`kwdemo -d`) can be
70 replaced with customized keywords and templates. Again, run
70 replaced with customized keywords and templates. Again, run
71 :hg:`kwdemo` to control the results of your configuration changes.
71 :hg:`kwdemo` to control the results of your configuration changes.
72
72
73 Before changing/disabling active keywords, you must run :hg:`kwshrink`
73 Before changing/disabling active keywords, you must run :hg:`kwshrink`
74 to avoid storing expanded keywords in the change history.
74 to avoid storing expanded keywords in the change history.
75
75
76 To force expansion after enabling it, or a configuration change, run
76 To force expansion after enabling it, or a configuration change, run
77 :hg:`kwexpand`.
77 :hg:`kwexpand`.
78
78
79 Expansions spanning more than one line and incremental expansions,
79 Expansions spanning more than one line and incremental expansions,
80 like CVS' $Log$, are not supported. A keyword template map "Log =
80 like CVS' $Log$, are not supported. A keyword template map "Log =
81 {desc}" expands to the first line of the changeset description.
81 {desc}" expands to the first line of the changeset description.
82 '''
82 '''
83
83
84 from mercurial import commands, context, cmdutil, dispatch, filelog, extensions
84 from mercurial import commands, context, cmdutil, dispatch, filelog, extensions
85 from mercurial import localrepo, match, patch, templatefilters, templater, util
85 from mercurial import localrepo, match, patch, templatefilters, templater, util
86 from mercurial import scmutil
86 from mercurial import scmutil
87 from mercurial.hgweb import webcommands
87 from mercurial.hgweb import webcommands
88 from mercurial.i18n import _
88 from mercurial.i18n import _
89 import os, re, shutil, tempfile
89 import os, re, shutil, tempfile
90
90
91 commands.optionalrepo += ' kwdemo'
91 commands.optionalrepo += ' kwdemo'
92
92
93 cmdtable = {}
93 cmdtable = {}
94 command = cmdutil.command(cmdtable)
94 command = cmdutil.command(cmdtable)
95 testedwith = 'internal'
95
96
96 # hg commands that do not act on keywords
97 # hg commands that do not act on keywords
97 nokwcommands = ('add addremove annotate bundle export grep incoming init log'
98 nokwcommands = ('add addremove annotate bundle export grep incoming init log'
98 ' outgoing push tip verify convert email glog')
99 ' outgoing push tip verify convert email glog')
99
100
100 # hg commands that trigger expansion only when writing to working dir,
101 # hg commands that trigger expansion only when writing to working dir,
101 # not when reading filelog, and unexpand when reading from working dir
102 # not when reading filelog, and unexpand when reading from working dir
102 restricted = 'merge kwexpand kwshrink record qrecord resolve transplant'
103 restricted = 'merge kwexpand kwshrink record qrecord resolve transplant'
103
104
104 # names of extensions using dorecord
105 # names of extensions using dorecord
105 recordextensions = 'record'
106 recordextensions = 'record'
106
107
107 colortable = {
108 colortable = {
108 'kwfiles.enabled': 'green bold',
109 'kwfiles.enabled': 'green bold',
109 'kwfiles.deleted': 'cyan bold underline',
110 'kwfiles.deleted': 'cyan bold underline',
110 'kwfiles.enabledunknown': 'green',
111 'kwfiles.enabledunknown': 'green',
111 'kwfiles.ignored': 'bold',
112 'kwfiles.ignored': 'bold',
112 'kwfiles.ignoredunknown': 'none'
113 'kwfiles.ignoredunknown': 'none'
113 }
114 }
114
115
115 # date like in cvs' $Date
116 # date like in cvs' $Date
116 def utcdate(text):
117 def utcdate(text):
117 ''':utcdate: Date. Returns a UTC-date in this format: "2009/08/18 11:00:13".
118 ''':utcdate: Date. Returns a UTC-date in this format: "2009/08/18 11:00:13".
118 '''
119 '''
119 return util.datestr((text[0], 0), '%Y/%m/%d %H:%M:%S')
120 return util.datestr((text[0], 0), '%Y/%m/%d %H:%M:%S')
120 # date like in svn's $Date
121 # date like in svn's $Date
121 def svnisodate(text):
122 def svnisodate(text):
122 ''':svnisodate: Date. Returns a date in this format: "2009-08-18 13:00:13
123 ''':svnisodate: Date. Returns a date in this format: "2009-08-18 13:00:13
123 +0200 (Tue, 18 Aug 2009)".
124 +0200 (Tue, 18 Aug 2009)".
124 '''
125 '''
125 return util.datestr(text, '%Y-%m-%d %H:%M:%S %1%2 (%a, %d %b %Y)')
126 return util.datestr(text, '%Y-%m-%d %H:%M:%S %1%2 (%a, %d %b %Y)')
126 # date like in svn's $Id
127 # date like in svn's $Id
127 def svnutcdate(text):
128 def svnutcdate(text):
128 ''':svnutcdate: Date. Returns a UTC-date in this format: "2009-08-18
129 ''':svnutcdate: Date. Returns a UTC-date in this format: "2009-08-18
129 11:00:13Z".
130 11:00:13Z".
130 '''
131 '''
131 return util.datestr((text[0], 0), '%Y-%m-%d %H:%M:%SZ')
132 return util.datestr((text[0], 0), '%Y-%m-%d %H:%M:%SZ')
132
133
133 templatefilters.filters.update({'utcdate': utcdate,
134 templatefilters.filters.update({'utcdate': utcdate,
134 'svnisodate': svnisodate,
135 'svnisodate': svnisodate,
135 'svnutcdate': svnutcdate})
136 'svnutcdate': svnutcdate})
136
137
137 # make keyword tools accessible
138 # make keyword tools accessible
138 kwtools = {'templater': None, 'hgcmd': ''}
139 kwtools = {'templater': None, 'hgcmd': ''}
139
140
140 def _defaultkwmaps(ui):
141 def _defaultkwmaps(ui):
141 '''Returns default keywordmaps according to keywordset configuration.'''
142 '''Returns default keywordmaps according to keywordset configuration.'''
142 templates = {
143 templates = {
143 'Revision': '{node|short}',
144 'Revision': '{node|short}',
144 'Author': '{author|user}',
145 'Author': '{author|user}',
145 }
146 }
146 kwsets = ({
147 kwsets = ({
147 'Date': '{date|utcdate}',
148 'Date': '{date|utcdate}',
148 'RCSfile': '{file|basename},v',
149 'RCSfile': '{file|basename},v',
149 'RCSFile': '{file|basename},v', # kept for backwards compatibility
150 'RCSFile': '{file|basename},v', # kept for backwards compatibility
150 # with hg-keyword
151 # with hg-keyword
151 'Source': '{root}/{file},v',
152 'Source': '{root}/{file},v',
152 'Id': '{file|basename},v {node|short} {date|utcdate} {author|user}',
153 'Id': '{file|basename},v {node|short} {date|utcdate} {author|user}',
153 'Header': '{root}/{file},v {node|short} {date|utcdate} {author|user}',
154 'Header': '{root}/{file},v {node|short} {date|utcdate} {author|user}',
154 }, {
155 }, {
155 'Date': '{date|svnisodate}',
156 'Date': '{date|svnisodate}',
156 'Id': '{file|basename},v {node|short} {date|svnutcdate} {author|user}',
157 'Id': '{file|basename},v {node|short} {date|svnutcdate} {author|user}',
157 'LastChangedRevision': '{node|short}',
158 'LastChangedRevision': '{node|short}',
158 'LastChangedBy': '{author|user}',
159 'LastChangedBy': '{author|user}',
159 'LastChangedDate': '{date|svnisodate}',
160 'LastChangedDate': '{date|svnisodate}',
160 })
161 })
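# configbool returns False or True, selecting the cvs-like or the
# svn-like map from the pair above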
161 templates.update(kwsets[ui.configbool('keywordset', 'svn')])
162 templates.update(kwsets[ui.configbool('keywordset', 'svn')])
162 return templates
163 return templates
163
164
164 def _shrinktext(text, subfunc):
165 def _shrinktext(text, subfunc):
165 '''Helper for keyword expansion removal in text.
166 '''Helper for keyword expansion removal in text.
166 Depending on subfunc also returns number of substitutions.'''
167 Depending on subfunc also returns number of substitutions.'''
167 return subfunc(r'$\1$', text)
168 return subfunc(r'$\1$', text)
168
169
169 def _preselect(wstatus, changed):
170 def _preselect(wstatus, changed):
170 '''Retrieves modified and added files from a working directory state
171 '''Retrieves modified and added files from a working directory state
171 and returns the subset of each contained in given changed files
172 and returns the subset of each contained in given changed files
172 retrieved from a change context.'''
173 retrieved from a change context.'''
173 modified, added = wstatus[:2]
174 modified, added = wstatus[:2]
174 modified = [f for f in modified if f in changed]
175 modified = [f for f in modified if f in changed]
175 added = [f for f in added if f in changed]
176 added = [f for f in added if f in changed]
176 return modified, added
177 return modified, added
177
178
178
179
179 class kwtemplater(object):
180 class kwtemplater(object):
180 '''
181 '''
181 Sets up keyword templates, corresponding keyword regex, and
182 Sets up keyword templates, corresponding keyword regex, and
182 provides keyword substitution functions.
183 provides keyword substitution functions.
183 '''
184 '''
184
185
185 def __init__(self, ui, repo, inc, exc):
186 def __init__(self, ui, repo, inc, exc):
186 self.ui = ui
187 self.ui = ui
187 self.repo = repo
188 self.repo = repo
188 self.match = match.match(repo.root, '', [], inc, exc)
189 self.match = match.match(repo.root, '', [], inc, exc)
189 self.restrict = kwtools['hgcmd'] in restricted.split()
190 self.restrict = kwtools['hgcmd'] in restricted.split()
190 self.record = False
191 self.record = False
191
192
192 kwmaps = self.ui.configitems('keywordmaps')
193 kwmaps = self.ui.configitems('keywordmaps')
193 if kwmaps: # override default templates
194 if kwmaps: # override default templates
194 self.templates = dict((k, templater.parsestring(v, False))
195 self.templates = dict((k, templater.parsestring(v, False))
195 for k, v in kwmaps)
196 for k, v in kwmaps)
196 else:
197 else:
197 self.templates = _defaultkwmaps(self.ui)
198 self.templates = _defaultkwmaps(self.ui)
198
199
199 @util.propertycache
200 @util.propertycache
200 def escape(self):
201 def escape(self):
201 '''Returns bar-separated and escaped keywords.'''
202 '''Returns bar-separated and escaped keywords.'''
202 return '|'.join(map(re.escape, self.templates.keys()))
203 return '|'.join(map(re.escape, self.templates.keys()))
203
204
204 @util.propertycache
205 @util.propertycache
205 def rekw(self):
206 def rekw(self):
206 '''Returns regex for unexpanded keywords.'''
207 '''Returns regex for unexpanded keywords.'''
207 return re.compile(r'\$(%s)\$' % self.escape)
208 return re.compile(r'\$(%s)\$' % self.escape)
208
209
209 @util.propertycache
210 @util.propertycache
210 def rekwexp(self):
211 def rekwexp(self):
211 '''Returns regex for expanded keywords.'''
212 '''Returns regex for expanded keywords.'''
212 return re.compile(r'\$(%s): [^$\n\r]*? \$' % self.escape)
213 return re.compile(r'\$(%s): [^$\n\r]*? \$' % self.escape)
213
214
214 def substitute(self, data, path, ctx, subfunc):
215 def substitute(self, data, path, ctx, subfunc):
215 '''Replaces keywords in data with expanded template.'''
216 '''Replaces keywords in data with expanded template.'''
216 def kwsub(mobj):
217 def kwsub(mobj):
217 kw = mobj.group(1)
218 kw = mobj.group(1)
218 ct = cmdutil.changeset_templater(self.ui, self.repo,
219 ct = cmdutil.changeset_templater(self.ui, self.repo,
219 False, None, '', False)
220 False, None, '', False)
220 ct.use_template(self.templates[kw])
221 ct.use_template(self.templates[kw])
221 self.ui.pushbuffer()
222 self.ui.pushbuffer()
222 ct.show(ctx, root=self.repo.root, file=path)
223 ct.show(ctx, root=self.repo.root, file=path)
223 ekw = templatefilters.firstline(self.ui.popbuffer())
224 ekw = templatefilters.firstline(self.ui.popbuffer())
224 return '$%s: %s $' % (kw, ekw)
225 return '$%s: %s $' % (kw, ekw)
225 return subfunc(kwsub, data)
226 return subfunc(kwsub, data)
226
227
227 def linkctx(self, path, fileid):
228 def linkctx(self, path, fileid):
228 '''Similar to filelog.linkrev, but returns a changectx.'''
229 '''Similar to filelog.linkrev, but returns a changectx.'''
229 return self.repo.filectx(path, fileid=fileid).changectx()
230 return self.repo.filectx(path, fileid=fileid).changectx()
230
231
231 def expand(self, path, node, data):
232 def expand(self, path, node, data):
232 '''Returns data with keywords expanded.'''
233 '''Returns data with keywords expanded.'''
233 if not self.restrict and self.match(path) and not util.binary(data):
234 if not self.restrict and self.match(path) and not util.binary(data):
234 ctx = self.linkctx(path, node)
235 ctx = self.linkctx(path, node)
235 return self.substitute(data, path, ctx, self.rekw.sub)
236 return self.substitute(data, path, ctx, self.rekw.sub)
236 return data
237 return data
237
238
238 def iskwfile(self, cand, ctx):
239 def iskwfile(self, cand, ctx):
239 '''Returns subset of candidates which are configured for keyword
240 '''Returns subset of candidates which are configured for keyword
240 expansion but are not symbolic links.'''
241 expansion but are not symbolic links.'''
241 return [f for f in cand if self.match(f) and 'l' not in ctx.flags(f)]
242 return [f for f in cand if self.match(f) and 'l' not in ctx.flags(f)]
242
243
243 def overwrite(self, ctx, candidates, lookup, expand, rekw=False):
244 def overwrite(self, ctx, candidates, lookup, expand, rekw=False):
244 '''Overwrites selected files expanding/shrinking keywords.'''
245 '''Overwrites selected files expanding/shrinking keywords.'''
245 if self.restrict or lookup or self.record: # exclude kw_copy
246 if self.restrict or lookup or self.record: # exclude kw_copy
246 candidates = self.iskwfile(candidates, ctx)
247 candidates = self.iskwfile(candidates, ctx)
247 if not candidates:
248 if not candidates:
248 return
249 return
249 kwcmd = self.restrict and lookup # kwexpand/kwshrink
250 kwcmd = self.restrict and lookup # kwexpand/kwshrink
250 if self.restrict or expand and lookup:
251 if self.restrict or expand and lookup:
251 mf = ctx.manifest()
252 mf = ctx.manifest()
252 if self.restrict or rekw:
253 if self.restrict or rekw:
253 re_kw = self.rekw
254 re_kw = self.rekw
254 else:
255 else:
255 re_kw = self.rekwexp
256 re_kw = self.rekwexp
256 if expand:
257 if expand:
257 msg = _('overwriting %s expanding keywords\n')
258 msg = _('overwriting %s expanding keywords\n')
258 else:
259 else:
259 msg = _('overwriting %s shrinking keywords\n')
260 msg = _('overwriting %s shrinking keywords\n')
260 for f in candidates:
261 for f in candidates:
261 if self.restrict:
262 if self.restrict:
262 data = self.repo.file(f).read(mf[f])
263 data = self.repo.file(f).read(mf[f])
263 else:
264 else:
264 data = self.repo.wread(f)
265 data = self.repo.wread(f)
265 if util.binary(data):
266 if util.binary(data):
266 continue
267 continue
267 if expand:
268 if expand:
268 if lookup:
269 if lookup:
269 ctx = self.linkctx(f, mf[f])
270 ctx = self.linkctx(f, mf[f])
270 data, found = self.substitute(data, f, ctx, re_kw.subn)
271 data, found = self.substitute(data, f, ctx, re_kw.subn)
271 elif self.restrict:
272 elif self.restrict:
272 found = re_kw.search(data)
273 found = re_kw.search(data)
273 else:
274 else:
274 data, found = _shrinktext(data, re_kw.subn)
275 data, found = _shrinktext(data, re_kw.subn)
275 if found:
276 if found:
276 self.ui.note(msg % f)
277 self.ui.note(msg % f)
277 fp = self.repo.wopener(f, "wb", atomictemp=True)
278 fp = self.repo.wopener(f, "wb", atomictemp=True)
278 fp.write(data)
279 fp.write(data)
279 fp.close()
280 fp.close()
280 if kwcmd:
281 if kwcmd:
281 self.repo.dirstate.normal(f)
282 self.repo.dirstate.normal(f)
282 elif self.record:
283 elif self.record:
283 self.repo.dirstate.normallookup(f)
284 self.repo.dirstate.normallookup(f)
284
285
285 def shrink(self, fname, text):
286 def shrink(self, fname, text):
286 '''Returns text with all keyword substitutions removed.'''
287 '''Returns text with all keyword substitutions removed.'''
287 if self.match(fname) and not util.binary(text):
288 if self.match(fname) and not util.binary(text):
288 return _shrinktext(text, self.rekwexp.sub)
289 return _shrinktext(text, self.rekwexp.sub)
289 return text
290 return text
290
291
291 def shrinklines(self, fname, lines):
292 def shrinklines(self, fname, lines):
292 '''Returns lines with keyword substitutions removed.'''
293 '''Returns lines with keyword substitutions removed.'''
293 if self.match(fname):
294 if self.match(fname):
294 text = ''.join(lines)
295 text = ''.join(lines)
295 if not util.binary(text):
296 if not util.binary(text):
296 return _shrinktext(text, self.rekwexp.sub).splitlines(True)
297 return _shrinktext(text, self.rekwexp.sub).splitlines(True)
297 return lines
298 return lines
298
299
299 def wread(self, fname, data):
300 def wread(self, fname, data):
300 '''If in restricted mode returns data read from wdir with
301 '''If in restricted mode returns data read from wdir with
301 keyword substitutions removed.'''
302 keyword substitutions removed.'''
302 if self.restrict:
303 if self.restrict:
303 return self.shrink(fname, data)
304 return self.shrink(fname, data)
304 return data
305 return data
305
306
306 class kwfilelog(filelog.filelog):
307 class kwfilelog(filelog.filelog):
307 '''
308 '''
308 Subclass of filelog to hook into its read, add, cmp methods.
309 Subclass of filelog to hook into its read, add, cmp methods.
309 Keywords are "stored" unexpanded, and processed on reading.
310 Keywords are "stored" unexpanded, and processed on reading.
310 '''
311 '''
311 def __init__(self, opener, kwt, path):
312 def __init__(self, opener, kwt, path):
312 super(kwfilelog, self).__init__(opener, path)
313 super(kwfilelog, self).__init__(opener, path)
313 self.kwt = kwt
314 self.kwt = kwt
314 self.path = path
315 self.path = path
315
316
316 def read(self, node):
317 def read(self, node):
317 '''Expands keywords when reading filelog.'''
318 '''Expands keywords when reading filelog.'''
318 data = super(kwfilelog, self).read(node)
319 data = super(kwfilelog, self).read(node)
319 if self.renamed(node):
320 if self.renamed(node):
320 return data
321 return data
321 return self.kwt.expand(self.path, node, data)
322 return self.kwt.expand(self.path, node, data)
322
323
323 def add(self, text, meta, tr, link, p1=None, p2=None):
324 def add(self, text, meta, tr, link, p1=None, p2=None):
324 '''Removes keyword substitutions when adding to filelog.'''
325 '''Removes keyword substitutions when adding to filelog.'''
325 text = self.kwt.shrink(self.path, text)
326 text = self.kwt.shrink(self.path, text)
326 return super(kwfilelog, self).add(text, meta, tr, link, p1, p2)
327 return super(kwfilelog, self).add(text, meta, tr, link, p1, p2)
327
328
328 def cmp(self, node, text):
329 def cmp(self, node, text):
329 '''Removes keyword substitutions for comparison.'''
330 '''Removes keyword substitutions for comparison.'''
330 text = self.kwt.shrink(self.path, text)
331 text = self.kwt.shrink(self.path, text)
331 return super(kwfilelog, self).cmp(node, text)
332 return super(kwfilelog, self).cmp(node, text)
332
333
333 def _status(ui, repo, wctx, kwt, *pats, **opts):
334 def _status(ui, repo, wctx, kwt, *pats, **opts):
334 '''Bails out if [keyword] configuration is not active.
335 '''Bails out if [keyword] configuration is not active.
335 Returns status of working directory.'''
336 Returns status of working directory.'''
336 if kwt:
337 if kwt:
337 return repo.status(match=scmutil.match(wctx, pats, opts), clean=True,
338 return repo.status(match=scmutil.match(wctx, pats, opts), clean=True,
338 unknown=opts.get('unknown') or opts.get('all'))
339 unknown=opts.get('unknown') or opts.get('all'))
339 if ui.configitems('keyword'):
340 if ui.configitems('keyword'):
340 raise util.Abort(_('[keyword] patterns cannot match'))
341 raise util.Abort(_('[keyword] patterns cannot match'))
341 raise util.Abort(_('no [keyword] patterns configured'))
342 raise util.Abort(_('no [keyword] patterns configured'))
342
343
343 def _kwfwrite(ui, repo, expand, *pats, **opts):
344 def _kwfwrite(ui, repo, expand, *pats, **opts):
344 '''Selects files and passes them to kwtemplater.overwrite.'''
345 '''Selects files and passes them to kwtemplater.overwrite.'''
345 wctx = repo[None]
346 wctx = repo[None]
346 if len(wctx.parents()) > 1:
347 if len(wctx.parents()) > 1:
347 raise util.Abort(_('outstanding uncommitted merge'))
348 raise util.Abort(_('outstanding uncommitted merge'))
348 kwt = kwtools['templater']
349 kwt = kwtools['templater']
349 wlock = repo.wlock()
350 wlock = repo.wlock()
350 try:
351 try:
351 status = _status(ui, repo, wctx, kwt, *pats, **opts)
352 status = _status(ui, repo, wctx, kwt, *pats, **opts)
352 modified, added, removed, deleted, unknown, ignored, clean = status
353 modified, added, removed, deleted, unknown, ignored, clean = status
353 if modified or added or removed or deleted:
354 if modified or added or removed or deleted:
354 raise util.Abort(_('outstanding uncommitted changes'))
355 raise util.Abort(_('outstanding uncommitted changes'))
355 kwt.overwrite(wctx, clean, True, expand)
356 kwt.overwrite(wctx, clean, True, expand)
356 finally:
357 finally:
357 wlock.release()
358 wlock.release()
358
359
359 @command('kwdemo',
360 @command('kwdemo',
360 [('d', 'default', None, _('show default keyword template maps')),
361 [('d', 'default', None, _('show default keyword template maps')),
361 ('f', 'rcfile', '',
362 ('f', 'rcfile', '',
362 _('read maps from rcfile'), _('FILE'))],
363 _('read maps from rcfile'), _('FILE'))],
363 _('hg kwdemo [-d] [-f RCFILE] [TEMPLATEMAP]...'))
364 _('hg kwdemo [-d] [-f RCFILE] [TEMPLATEMAP]...'))
364 def demo(ui, repo, *args, **opts):
365 def demo(ui, repo, *args, **opts):
365 '''print [keywordmaps] configuration and an expansion example
366 '''print [keywordmaps] configuration and an expansion example
366
367
367 Show current, custom, or default keyword template maps and their
368 Show current, custom, or default keyword template maps and their
368 expansions.
369 expansions.
369
370
370 Extend the current configuration by specifying maps as arguments
371 Extend the current configuration by specifying maps as arguments
371 and using -f/--rcfile to source an external hgrc file.
372 and using -f/--rcfile to source an external hgrc file.
372
373
373 Use -d/--default to disable current configuration.
374 Use -d/--default to disable current configuration.
374
375
375 See :hg:`help templates` for information on templates and filters.
376 See :hg:`help templates` for information on templates and filters.
376 '''
377 '''
377 def demoitems(section, items):
378 def demoitems(section, items):
378 ui.write('[%s]\n' % section)
379 ui.write('[%s]\n' % section)
379 for k, v in sorted(items):
380 for k, v in sorted(items):
380 ui.write('%s = %s\n' % (k, v))
381 ui.write('%s = %s\n' % (k, v))
381
382
382 fn = 'demo.txt'
383 fn = 'demo.txt'
383 tmpdir = tempfile.mkdtemp('', 'kwdemo.')
384 tmpdir = tempfile.mkdtemp('', 'kwdemo.')
384 ui.note(_('creating temporary repository at %s\n') % tmpdir)
385 ui.note(_('creating temporary repository at %s\n') % tmpdir)
385 repo = localrepo.localrepository(ui, tmpdir, True)
386 repo = localrepo.localrepository(ui, tmpdir, True)
386 ui.setconfig('keyword', fn, '')
387 ui.setconfig('keyword', fn, '')
387 svn = ui.configbool('keywordset', 'svn')
388 svn = ui.configbool('keywordset', 'svn')
388 # explicitly set keywordset for demo output
389 # explicitly set keywordset for demo output
389 ui.setconfig('keywordset', 'svn', svn)
390 ui.setconfig('keywordset', 'svn', svn)
390
391
391 uikwmaps = ui.configitems('keywordmaps')
392 uikwmaps = ui.configitems('keywordmaps')
392 if args or opts.get('rcfile'):
393 if args or opts.get('rcfile'):
393 ui.status(_('\n\tconfiguration using custom keyword template maps\n'))
394 ui.status(_('\n\tconfiguration using custom keyword template maps\n'))
394 if uikwmaps:
395 if uikwmaps:
395 ui.status(_('\textending current template maps\n'))
396 ui.status(_('\textending current template maps\n'))
396 if opts.get('default') or not uikwmaps:
397 if opts.get('default') or not uikwmaps:
397 if svn:
398 if svn:
398 ui.status(_('\toverriding default svn keywordset\n'))
399 ui.status(_('\toverriding default svn keywordset\n'))
399 else:
400 else:
400 ui.status(_('\toverriding default cvs keywordset\n'))
401 ui.status(_('\toverriding default cvs keywordset\n'))
401 if opts.get('rcfile'):
402 if opts.get('rcfile'):
402 ui.readconfig(opts.get('rcfile'))
403 ui.readconfig(opts.get('rcfile'))
403 if args:
404 if args:
404 # simulate hgrc parsing
405 # simulate hgrc parsing
405 rcmaps = ['[keywordmaps]\n'] + [a + '\n' for a in args]
406 rcmaps = ['[keywordmaps]\n'] + [a + '\n' for a in args]
406 fp = repo.opener('hgrc', 'w')
407 fp = repo.opener('hgrc', 'w')
407 fp.writelines(rcmaps)
408 fp.writelines(rcmaps)
408 fp.close()
409 fp.close()
409 ui.readconfig(repo.join('hgrc'))
410 ui.readconfig(repo.join('hgrc'))
410 kwmaps = dict(ui.configitems('keywordmaps'))
411 kwmaps = dict(ui.configitems('keywordmaps'))
411 elif opts.get('default'):
412 elif opts.get('default'):
412 if svn:
413 if svn:
413 ui.status(_('\n\tconfiguration using default svn keywordset\n'))
414 ui.status(_('\n\tconfiguration using default svn keywordset\n'))
414 else:
415 else:
415 ui.status(_('\n\tconfiguration using default cvs keywordset\n'))
416 ui.status(_('\n\tconfiguration using default cvs keywordset\n'))
416 kwmaps = _defaultkwmaps(ui)
417 kwmaps = _defaultkwmaps(ui)
417 if uikwmaps:
418 if uikwmaps:
418 ui.status(_('\tdisabling current template maps\n'))
419 ui.status(_('\tdisabling current template maps\n'))
419 for k, v in kwmaps.iteritems():
420 for k, v in kwmaps.iteritems():
420 ui.setconfig('keywordmaps', k, v)
421 ui.setconfig('keywordmaps', k, v)
421 else:
422 else:
422 ui.status(_('\n\tconfiguration using current keyword template maps\n'))
423 ui.status(_('\n\tconfiguration using current keyword template maps\n'))
423 if uikwmaps:
424 if uikwmaps:
424 kwmaps = dict(uikwmaps)
425 kwmaps = dict(uikwmaps)
425 else:
426 else:
426 kwmaps = _defaultkwmaps(ui)
427 kwmaps = _defaultkwmaps(ui)
427
428
428 uisetup(ui)
429 uisetup(ui)
429 reposetup(ui, repo)
430 reposetup(ui, repo)
430 ui.write('[extensions]\nkeyword =\n')
431 ui.write('[extensions]\nkeyword =\n')
431 demoitems('keyword', ui.configitems('keyword'))
432 demoitems('keyword', ui.configitems('keyword'))
432 demoitems('keywordset', ui.configitems('keywordset'))
433 demoitems('keywordset', ui.configitems('keywordset'))
433 demoitems('keywordmaps', kwmaps.iteritems())
434 demoitems('keywordmaps', kwmaps.iteritems())
434 keywords = '$' + '$\n$'.join(sorted(kwmaps.keys())) + '$\n'
435 keywords = '$' + '$\n$'.join(sorted(kwmaps.keys())) + '$\n'
435 repo.wopener.write(fn, keywords)
436 repo.wopener.write(fn, keywords)
436 repo[None].add([fn])
437 repo[None].add([fn])
437 ui.note(_('\nkeywords written to %s:\n') % fn)
438 ui.note(_('\nkeywords written to %s:\n') % fn)
438 ui.note(keywords)
439 ui.note(keywords)
439 repo.dirstate.setbranch('demobranch')
440 repo.dirstate.setbranch('demobranch')
440 for name, cmd in ui.configitems('hooks'):
441 for name, cmd in ui.configitems('hooks'):
441 if name.split('.', 1)[0].find('commit') > -1:
442 if name.split('.', 1)[0].find('commit') > -1:
442 repo.ui.setconfig('hooks', name, '')
443 repo.ui.setconfig('hooks', name, '')
443 msg = _('hg keyword configuration and expansion example')
444 msg = _('hg keyword configuration and expansion example')
444 ui.note("hg ci -m '%s'\n" % msg) # check-code-ignore
445 ui.note("hg ci -m '%s'\n" % msg) # check-code-ignore
445 repo.commit(text=msg)
446 repo.commit(text=msg)
446 ui.status(_('\n\tkeywords expanded\n'))
447 ui.status(_('\n\tkeywords expanded\n'))
447 ui.write(repo.wread(fn))
448 ui.write(repo.wread(fn))
448 shutil.rmtree(tmpdir, ignore_errors=True)
449 shutil.rmtree(tmpdir, ignore_errors=True)
449
450
450 @command('kwexpand', commands.walkopts, _('hg kwexpand [OPTION]... [FILE]...'))
451 @command('kwexpand', commands.walkopts, _('hg kwexpand [OPTION]... [FILE]...'))
451 def expand(ui, repo, *pats, **opts):
452 def expand(ui, repo, *pats, **opts):
452 '''expand keywords in the working directory
453 '''expand keywords in the working directory
453
454
454 Run after (re)enabling keyword expansion.
455 Run after (re)enabling keyword expansion.
455
456
456 kwexpand refuses to run if given files contain local changes.
457 kwexpand refuses to run if given files contain local changes.
457 '''
458 '''
458 # 3rd argument sets expansion to True
459 # 3rd argument sets expansion to True
459 _kwfwrite(ui, repo, True, *pats, **opts)
460 _kwfwrite(ui, repo, True, *pats, **opts)
460
461
461 @command('kwfiles',
462 @command('kwfiles',
462 [('A', 'all', None, _('show keyword status flags of all files')),
463 [('A', 'all', None, _('show keyword status flags of all files')),
463 ('i', 'ignore', None, _('show files excluded from expansion')),
464 ('i', 'ignore', None, _('show files excluded from expansion')),
464 ('u', 'unknown', None, _('only show unknown (not tracked) files')),
465 ('u', 'unknown', None, _('only show unknown (not tracked) files')),
465 ] + commands.walkopts,
466 ] + commands.walkopts,
466 _('hg kwfiles [OPTION]... [FILE]...'))
467 _('hg kwfiles [OPTION]... [FILE]...'))
467 def files(ui, repo, *pats, **opts):
468 def files(ui, repo, *pats, **opts):
468 '''show files configured for keyword expansion
469 '''show files configured for keyword expansion
469
470
470 List which files in the working directory are matched by the
471 List which files in the working directory are matched by the
471 [keyword] configuration patterns.
472 [keyword] configuration patterns.
472
473
473 Useful to prevent inadvertent keyword expansion and to speed up
474 Useful to prevent inadvertent keyword expansion and to speed up
474 execution by including only files that are actual candidates for
475 execution by including only files that are actual candidates for
475 expansion.
476 expansion.
476
477
477 See :hg:`help keyword` on how to construct patterns both for
478 See :hg:`help keyword` on how to construct patterns both for
478 inclusion and exclusion of files.
479 inclusion and exclusion of files.
479
480
480 With -A/--all and -v/--verbose the codes used to show the status
481 With -A/--all and -v/--verbose the codes used to show the status
481 of files are::
482 of files are::
482
483
483 K = keyword expansion candidate
484 K = keyword expansion candidate
484 k = keyword expansion candidate (not tracked)
485 k = keyword expansion candidate (not tracked)
485 I = ignored
486 I = ignored
486 i = ignored (not tracked)
487 i = ignored (not tracked)
487 '''
488 '''
488 kwt = kwtools['templater']
489 kwt = kwtools['templater']
489 wctx = repo[None]
490 wctx = repo[None]
490 status = _status(ui, repo, wctx, kwt, *pats, **opts)
491 status = _status(ui, repo, wctx, kwt, *pats, **opts)
491 cwd = pats and repo.getcwd() or ''
492 cwd = pats and repo.getcwd() or ''
492 modified, added, removed, deleted, unknown, ignored, clean = status
493 modified, added, removed, deleted, unknown, ignored, clean = status
493 files = []
494 files = []
494 if not opts.get('unknown') or opts.get('all'):
495 if not opts.get('unknown') or opts.get('all'):
495 files = sorted(modified + added + clean)
496 files = sorted(modified + added + clean)
496 kwfiles = kwt.iskwfile(files, wctx)
497 kwfiles = kwt.iskwfile(files, wctx)
497 kwdeleted = kwt.iskwfile(deleted, wctx)
498 kwdeleted = kwt.iskwfile(deleted, wctx)
498 kwunknown = kwt.iskwfile(unknown, wctx)
499 kwunknown = kwt.iskwfile(unknown, wctx)
499 if not opts.get('ignore') or opts.get('all'):
500 if not opts.get('ignore') or opts.get('all'):
500 showfiles = kwfiles, kwdeleted, kwunknown
501 showfiles = kwfiles, kwdeleted, kwunknown
501 else:
502 else:
502 showfiles = [], [], []
503 showfiles = [], [], []
503 if opts.get('all') or opts.get('ignore'):
504 if opts.get('all') or opts.get('ignore'):
504 showfiles += ([f for f in files if f not in kwfiles],
505 showfiles += ([f for f in files if f not in kwfiles],
505 [f for f in unknown if f not in kwunknown])
506 [f for f in unknown if f not in kwunknown])
506 kwlabels = 'enabled deleted enabledunknown ignored ignoredunknown'.split()
507 kwlabels = 'enabled deleted enabledunknown ignored ignoredunknown'.split()
507 kwstates = zip('K!kIi', showfiles, kwlabels)
508 kwstates = zip('K!kIi', showfiles, kwlabels)
508 for char, filenames, kwstate in kwstates:
509 for char, filenames, kwstate in kwstates:
509 fmt = (opts.get('all') or ui.verbose) and '%s %%s\n' % char or '%s\n'
510 fmt = (opts.get('all') or ui.verbose) and '%s %%s\n' % char or '%s\n'
510 for f in filenames:
511 for f in filenames:
511 ui.write(fmt % repo.pathto(f, cwd), label='kwfiles.' + kwstate)
512 ui.write(fmt % repo.pathto(f, cwd), label='kwfiles.' + kwstate)
512
513
513 @command('kwshrink', commands.walkopts, _('hg kwshrink [OPTION]... [FILE]...'))
514 @command('kwshrink', commands.walkopts, _('hg kwshrink [OPTION]... [FILE]...'))
514 def shrink(ui, repo, *pats, **opts):
515 def shrink(ui, repo, *pats, **opts):
515 '''revert expanded keywords in the working directory
516 '''revert expanded keywords in the working directory
516
517
517 Must be run before changing/disabling active keywords.
518 Must be run before changing/disabling active keywords.
518
519
519 kwshrink refuses to run if given files contain local changes.
520 kwshrink refuses to run if given files contain local changes.
520 '''
521 '''
521 # 3rd argument sets expansion to False
522 # 3rd argument sets expansion to False
522 _kwfwrite(ui, repo, False, *pats, **opts)
523 _kwfwrite(ui, repo, False, *pats, **opts)
523
524
524
525
525 def uisetup(ui):
526 def uisetup(ui):
526 ''' Monkeypatches dispatch._parse to retrieve user command.'''
527 ''' Monkeypatches dispatch._parse to retrieve user command.'''
527
528
528 def kwdispatch_parse(orig, ui, args):
529 def kwdispatch_parse(orig, ui, args):
529 '''Monkeypatch dispatch._parse to obtain running hg command.'''
530 '''Monkeypatch dispatch._parse to obtain running hg command.'''
530 cmd, func, args, options, cmdoptions = orig(ui, args)
531 cmd, func, args, options, cmdoptions = orig(ui, args)
531 kwtools['hgcmd'] = cmd
532 kwtools['hgcmd'] = cmd
532 return cmd, func, args, options, cmdoptions
533 return cmd, func, args, options, cmdoptions
533
534
534 extensions.wrapfunction(dispatch, '_parse', kwdispatch_parse)
535 extensions.wrapfunction(dispatch, '_parse', kwdispatch_parse)
535
536
536 def reposetup(ui, repo):
537 def reposetup(ui, repo):
537 '''Sets up repo as kwrepo for keyword substitution.
538 '''Sets up repo as kwrepo for keyword substitution.
538 Overrides file method to return kwfilelog instead of filelog
539 Overrides file method to return kwfilelog instead of filelog
539 if file matches user configuration.
540 if file matches user configuration.
540 Wraps commit to overwrite configured files with updated
541 Wraps commit to overwrite configured files with updated
541 keyword substitutions.
542 keyword substitutions.
542 Monkeypatches patch and webcommands.'''
543 Monkeypatches patch and webcommands.'''
543
544
544 try:
545 try:
545 if (not repo.local() or kwtools['hgcmd'] in nokwcommands.split()
546 if (not repo.local() or kwtools['hgcmd'] in nokwcommands.split()
546 or '.hg' in util.splitpath(repo.root)
547 or '.hg' in util.splitpath(repo.root)
547 or repo._url.startswith('bundle:')):
548 or repo._url.startswith('bundle:')):
548 return
549 return
549 except AttributeError:
550 except AttributeError:
550 pass
551 pass
551
552
552 inc, exc = [], ['.hg*']
553 inc, exc = [], ['.hg*']
553 for pat, opt in ui.configitems('keyword'):
554 for pat, opt in ui.configitems('keyword'):
554 if opt != 'ignore':
555 if opt != 'ignore':
555 inc.append(pat)
556 inc.append(pat)
556 else:
557 else:
557 exc.append(pat)
558 exc.append(pat)
558 if not inc:
559 if not inc:
559 return
560 return
560
561
561 kwtools['templater'] = kwt = kwtemplater(ui, repo, inc, exc)
562 kwtools['templater'] = kwt = kwtemplater(ui, repo, inc, exc)
562
563
563 class kwrepo(repo.__class__):
564 class kwrepo(repo.__class__):
564 def file(self, f):
565 def file(self, f):
565 if f[0] == '/':
566 if f[0] == '/':
566 f = f[1:]
567 f = f[1:]
567 return kwfilelog(self.sopener, kwt, f)
568 return kwfilelog(self.sopener, kwt, f)
568
569
569 def wread(self, filename):
570 def wread(self, filename):
570 data = super(kwrepo, self).wread(filename)
571 data = super(kwrepo, self).wread(filename)
571 return kwt.wread(filename, data)
572 return kwt.wread(filename, data)
572
573
573 def commit(self, *args, **opts):
574 def commit(self, *args, **opts):
574 # use custom commitctx for user commands
575 # use custom commitctx for user commands
575 # other extensions can still wrap repo.commitctx directly
576 # other extensions can still wrap repo.commitctx directly
576 self.commitctx = self.kwcommitctx
577 self.commitctx = self.kwcommitctx
577 try:
578 try:
578 return super(kwrepo, self).commit(*args, **opts)
579 return super(kwrepo, self).commit(*args, **opts)
579 finally:
580 finally:
580 del self.commitctx
581 del self.commitctx
581
582
582 def kwcommitctx(self, ctx, error=False):
583 def kwcommitctx(self, ctx, error=False):
583 n = super(kwrepo, self).commitctx(ctx, error)
584 n = super(kwrepo, self).commitctx(ctx, error)
584 # no lock needed, only called from repo.commit() which already locks
585 # no lock needed, only called from repo.commit() which already locks
585 if not kwt.record:
586 if not kwt.record:
586 restrict = kwt.restrict
587 restrict = kwt.restrict
587 kwt.restrict = True
588 kwt.restrict = True
588 kwt.overwrite(self[n], sorted(ctx.added() + ctx.modified()),
589 kwt.overwrite(self[n], sorted(ctx.added() + ctx.modified()),
589 False, True)
590 False, True)
590 kwt.restrict = restrict
591 kwt.restrict = restrict
591 return n
592 return n
592
593
593 def rollback(self, dryrun=False, force=False):
594 def rollback(self, dryrun=False, force=False):
594 wlock = self.wlock()
595 wlock = self.wlock()
595 try:
596 try:
596 if not dryrun:
597 if not dryrun:
597 changed = self['.'].files()
598 changed = self['.'].files()
598 ret = super(kwrepo, self).rollback(dryrun, force)
599 ret = super(kwrepo, self).rollback(dryrun, force)
599 if not dryrun:
600 if not dryrun:
600 ctx = self['.']
601 ctx = self['.']
601 modified, added = _preselect(self[None].status(), changed)
602 modified, added = _preselect(self[None].status(), changed)
602 kwt.overwrite(ctx, modified, True, True)
603 kwt.overwrite(ctx, modified, True, True)
603 kwt.overwrite(ctx, added, True, False)
604 kwt.overwrite(ctx, added, True, False)
604 return ret
605 return ret
605 finally:
606 finally:
606 wlock.release()
607 wlock.release()
607
608
608 # monkeypatches
609 # monkeypatches
609 def kwpatchfile_init(orig, self, ui, gp, backend, store, eolmode=None):
610 def kwpatchfile_init(orig, self, ui, gp, backend, store, eolmode=None):
610 '''Monkeypatch/wrap patch.patchfile.__init__ to avoid
611 '''Monkeypatch/wrap patch.patchfile.__init__ to avoid
611 rejects or conflicts due to expanded keywords in working dir.'''
612 rejects or conflicts due to expanded keywords in working dir.'''
612 orig(self, ui, gp, backend, store, eolmode)
613 orig(self, ui, gp, backend, store, eolmode)
613 # shrink keywords read from working dir
614 # shrink keywords read from working dir
614 self.lines = kwt.shrinklines(self.fname, self.lines)
615 self.lines = kwt.shrinklines(self.fname, self.lines)
615
616
616 def kw_diff(orig, repo, node1=None, node2=None, match=None, changes=None,
617 def kw_diff(orig, repo, node1=None, node2=None, match=None, changes=None,
617 opts=None, prefix=''):
618 opts=None, prefix=''):
618 '''Monkeypatch patch.diff to avoid expansion.'''
619 '''Monkeypatch patch.diff to avoid expansion.'''
619 kwt.restrict = True
620 kwt.restrict = True
620 return orig(repo, node1, node2, match, changes, opts, prefix)
621 return orig(repo, node1, node2, match, changes, opts, prefix)
621
622
622 def kwweb_skip(orig, web, req, tmpl):
623 def kwweb_skip(orig, web, req, tmpl):
623 '''Wraps webcommands.x turning off keyword expansion.'''
624 '''Wraps webcommands.x turning off keyword expansion.'''
624 kwt.match = util.never
625 kwt.match = util.never
625 return orig(web, req, tmpl)
626 return orig(web, req, tmpl)
626
627
627 def kw_copy(orig, ui, repo, pats, opts, rename=False):
628 def kw_copy(orig, ui, repo, pats, opts, rename=False):
628 '''Wraps cmdutil.copy so that copy/rename destinations do not
629 '''Wraps cmdutil.copy so that copy/rename destinations do not
629 contain expanded keywords.
630 contain expanded keywords.
630 Note that the source of a regular file destination may also be a
631 Note that the source of a regular file destination may also be a
631 symlink:
632 symlink:
632 hg cp sym x -> x is symlink
633 hg cp sym x -> x is symlink
633 cp sym x; hg cp -A sym x -> x is file (maybe expanded keywords)
634 cp sym x; hg cp -A sym x -> x is file (maybe expanded keywords)
634 For the latter we have to follow the symlink to find out whether its
635 For the latter we have to follow the symlink to find out whether its
635 target is configured for expansion and we therefore must unexpand the
636 target is configured for expansion and we therefore must unexpand the
636 keywords in the destination.'''
637 keywords in the destination.'''
637 orig(ui, repo, pats, opts, rename)
638 orig(ui, repo, pats, opts, rename)
638 if opts.get('dry_run'):
639 if opts.get('dry_run'):
639 return
640 return
640 wctx = repo[None]
641 wctx = repo[None]
641 cwd = repo.getcwd()
642 cwd = repo.getcwd()
642
643
643 def haskwsource(dest):
644 def haskwsource(dest):
644 '''Returns true if dest is a regular file and configured for
645 '''Returns true if dest is a regular file and configured for
645 expansion or a symlink which points to a file configured for
646 expansion or a symlink which points to a file configured for
646 expansion. '''
647 expansion. '''
647 source = repo.dirstate.copied(dest)
648 source = repo.dirstate.copied(dest)
648 if 'l' in wctx.flags(source):
649 if 'l' in wctx.flags(source):
649 source = scmutil.canonpath(repo.root, cwd,
650 source = scmutil.canonpath(repo.root, cwd,
650 os.path.realpath(source))
651 os.path.realpath(source))
651 return kwt.match(source)
652 return kwt.match(source)
652
653
653 candidates = [f for f in repo.dirstate.copies() if
654 candidates = [f for f in repo.dirstate.copies() if
654 'l' not in wctx.flags(f) and haskwsource(f)]
655 'l' not in wctx.flags(f) and haskwsource(f)]
655 kwt.overwrite(wctx, candidates, False, False)
656 kwt.overwrite(wctx, candidates, False, False)
656
657
657 def kw_dorecord(orig, ui, repo, commitfunc, *pats, **opts):
658 def kw_dorecord(orig, ui, repo, commitfunc, *pats, **opts):
658 '''Wraps record.dorecord expanding keywords after recording.'''
659 '''Wraps record.dorecord expanding keywords after recording.'''
659 wlock = repo.wlock()
660 wlock = repo.wlock()
660 try:
661 try:
661 # record returns 0 even when nothing has changed
662 # record returns 0 even when nothing has changed
662 # therefore compare nodes before and after
663 # therefore compare nodes before and after
663 kwt.record = True
664 kwt.record = True
664 ctx = repo['.']
665 ctx = repo['.']
665 wstatus = repo[None].status()
666 wstatus = repo[None].status()
666 ret = orig(ui, repo, commitfunc, *pats, **opts)
667 ret = orig(ui, repo, commitfunc, *pats, **opts)
667 recctx = repo['.']
668 recctx = repo['.']
668 if ctx != recctx:
669 if ctx != recctx:
669 modified, added = _preselect(wstatus, recctx.files())
670 modified, added = _preselect(wstatus, recctx.files())
670 kwt.restrict = False
671 kwt.restrict = False
671 kwt.overwrite(recctx, modified, False, True)
672 kwt.overwrite(recctx, modified, False, True)
672 kwt.overwrite(recctx, added, False, True, True)
673 kwt.overwrite(recctx, added, False, True, True)
673 kwt.restrict = True
674 kwt.restrict = True
674 return ret
675 return ret
675 finally:
676 finally:
676 wlock.release()
677 wlock.release()
677
678
678 def kwfilectx_cmp(orig, self, fctx):
679 def kwfilectx_cmp(orig, self, fctx):
679 # keyword affects data size, comparing wdir and filelog size does
680 # keyword affects data size, comparing wdir and filelog size does
680 # not make sense
681 # not make sense
681 if (fctx._filerev is None and
682 if (fctx._filerev is None and
682 (self._repo._encodefilterpats or
683 (self._repo._encodefilterpats or
683 kwt.match(fctx.path()) and 'l' not in fctx.flags() or
684 kwt.match(fctx.path()) and 'l' not in fctx.flags() or
684 self.size() - 4 == fctx.size()) or
685 self.size() - 4 == fctx.size()) or
685 self.size() == fctx.size()):
686 self.size() == fctx.size()):
686 return self._filelog.cmp(self._filenode, fctx.data())
687 return self._filelog.cmp(self._filenode, fctx.data())
687 return True
688 return True
688
689
689 extensions.wrapfunction(context.filectx, 'cmp', kwfilectx_cmp)
690 extensions.wrapfunction(context.filectx, 'cmp', kwfilectx_cmp)
690 extensions.wrapfunction(patch.patchfile, '__init__', kwpatchfile_init)
691 extensions.wrapfunction(patch.patchfile, '__init__', kwpatchfile_init)
691 extensions.wrapfunction(patch, 'diff', kw_diff)
692 extensions.wrapfunction(patch, 'diff', kw_diff)
692 extensions.wrapfunction(cmdutil, 'copy', kw_copy)
693 extensions.wrapfunction(cmdutil, 'copy', kw_copy)
693 for c in 'annotate changeset rev filediff diff'.split():
694 for c in 'annotate changeset rev filediff diff'.split():
694 extensions.wrapfunction(webcommands, c, kwweb_skip)
695 extensions.wrapfunction(webcommands, c, kwweb_skip)
695 for name in recordextensions.split():
696 for name in recordextensions.split():
696 try:
697 try:
697 record = extensions.find(name)
698 record = extensions.find(name)
698 extensions.wrapfunction(record, 'dorecord', kw_dorecord)
699 extensions.wrapfunction(record, 'dorecord', kw_dorecord)
699 except KeyError:
700 except KeyError:
700 pass
701 pass
701
702
702 repo.__class__ = kwrepo
703 repo.__class__ = kwrepo
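A minimal standalone sketch of the expansion and shrink machinery defined in
kwtemplater above, assuming a single illustrative 'Id' keyword with a
hard-coded value standing in for the real changeset templates (the sketch is
not part of this changeset)::

    import re

    # illustrative only: real values come from the [keywordmaps] templates
    templates = {'Id': 'demo.txt,v 1.0'}
    escaped = '|'.join(map(re.escape, templates.keys()))
    rekw = re.compile(r'\$(%s)\$' % escaped)                  # unexpanded form
    rekwexp = re.compile(r'\$(%s): [^$\n\r]*? \$' % escaped)  # expanded form

    text = 'header $Id$ footer'
    expanded = rekw.sub(lambda m: '$%s: %s $' % (m.group(1), templates[m.group(1)]),
                        text)
    # expanded == 'header $Id: demo.txt,v 1.0 $ footer'
    shrunk = rekwexp.sub(r'$\1$', expanded)   # what _shrinktext does
    # shrunk == 'header $Id$ footer'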
@@ -1,3571 +1,3572 b''
1 # mq.py - patch queues for mercurial
1 # mq.py - patch queues for mercurial
2 #
2 #
3 # Copyright 2005, 2006 Chris Mason <mason@suse.com>
3 # Copyright 2005, 2006 Chris Mason <mason@suse.com>
4 #
4 #
5 # This software may be used and distributed according to the terms of the
5 # This software may be used and distributed according to the terms of the
6 # GNU General Public License version 2 or any later version.
6 # GNU General Public License version 2 or any later version.
7
7
8 '''manage a stack of patches
8 '''manage a stack of patches
9
9
10 This extension lets you work with a stack of patches in a Mercurial
10 This extension lets you work with a stack of patches in a Mercurial
11 repository. It manages two stacks of patches - all known patches, and
11 repository. It manages two stacks of patches - all known patches, and
12 applied patches (subset of known patches).
12 applied patches (subset of known patches).
13
13
14 Known patches are represented as patch files in the .hg/patches
14 Known patches are represented as patch files in the .hg/patches
15 directory. Applied patches are both patch files and changesets.
15 directory. Applied patches are both patch files and changesets.
16
16
17 Common tasks (use :hg:`help command` for more details)::
17 Common tasks (use :hg:`help command` for more details)::
18
18
19 create new patch qnew
19 create new patch qnew
20 import existing patch qimport
20 import existing patch qimport
21
21
22 print patch series qseries
22 print patch series qseries
23 print applied patches qapplied
23 print applied patches qapplied
24
24
25 add known patch to applied stack qpush
25 add known patch to applied stack qpush
26 remove patch from applied stack qpop
26 remove patch from applied stack qpop
27 refresh contents of top applied patch qrefresh
27 refresh contents of top applied patch qrefresh
28
28
29 By default, mq will automatically use git patches when required to
29 By default, mq will automatically use git patches when required to
30 avoid losing file mode changes, copy records, binary files or empty
30 avoid losing file mode changes, copy records, binary files or empty
31 files creations or deletions. This behaviour can be configured with::
31 files creations or deletions. This behaviour can be configured with::
32
32
33 [mq]
33 [mq]
34 git = auto/keep/yes/no
34 git = auto/keep/yes/no
35
35
36 If set to 'keep', mq will obey the [diff] section configuration while
36 If set to 'keep', mq will obey the [diff] section configuration while
37 preserving existing git patches upon qrefresh. If set to 'yes' or
37 preserving existing git patches upon qrefresh. If set to 'yes' or
38 'no', mq will override the [diff] section and always generate git or
38 'no', mq will override the [diff] section and always generate git or
39 regular patches, possibly losing data in the second case.
39 regular patches, possibly losing data in the second case.
40
40
41 It may be desirable for mq changesets to be kept in the secret phase (see
41 It may be desirable for mq changesets to be kept in the secret phase (see
42 :hg:`help phases`), which can be enabled with the following setting::
42 :hg:`help phases`), which can be enabled with the following setting::
43
43
44 [mq]
44 [mq]
45 secret = True
45 secret = True
46
46
47 You will by default be managing a patch queue named "patches". You can
47 You will by default be managing a patch queue named "patches". You can
48 create other, independent patch queues with the :hg:`qqueue` command.
48 create other, independent patch queues with the :hg:`qqueue` command.
49
49
50 If the working directory contains uncommitted files, qpush, qpop and
50 If the working directory contains uncommitted files, qpush, qpop and
51 qgoto abort immediately. If -f/--force is used, the changes are
51 qgoto abort immediately. If -f/--force is used, the changes are
52 discarded. Setting::
52 discarded. Setting::
53
53
54 [mq]
54 [mq]
55 keepchanges = True
55 keepchanges = True
56
56
57 makes them behave as if --keep-changes were passed, and non-conflicting
57 makes them behave as if --keep-changes were passed, and non-conflicting
58 local changes will be tolerated and preserved. If incompatible options
58 local changes will be tolerated and preserved. If incompatible options
59 such as -f/--force or --exact are passed, this setting is ignored.
59 such as -f/--force or --exact are passed, this setting is ignored.
60 '''
60 '''
61
61
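A rough example session with the commands listed in the help text above,
assuming a single patch named my-fix.patch (illustrative only, not part of
this changeset)::

    hg qnew my-fix.patch     # start a new patch on top of the applied stack
    # ... edit tracked files ...
    hg qrefresh              # fold the working directory changes into the patch
    hg qseries               # list all known patches
    hg qpop -a               # unapply every applied patch
    hg qpush my-fix.patch    # reapply patches up to and including my-fix.patch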
62 from mercurial.i18n import _
62 from mercurial.i18n import _
63 from mercurial.node import bin, hex, short, nullid, nullrev
63 from mercurial.node import bin, hex, short, nullid, nullrev
64 from mercurial.lock import release
64 from mercurial.lock import release
65 from mercurial import commands, cmdutil, hg, scmutil, util, revset
65 from mercurial import commands, cmdutil, hg, scmutil, util, revset
66 from mercurial import repair, extensions, url, error, phases
66 from mercurial import repair, extensions, url, error, phases
67 from mercurial import patch as patchmod
67 from mercurial import patch as patchmod
68 import os, re, errno, shutil
68 import os, re, errno, shutil
69
69
70 commands.norepo += " qclone"
70 commands.norepo += " qclone"
71
71
72 seriesopts = [('s', 'summary', None, _('print first line of patch header'))]
72 seriesopts = [('s', 'summary', None, _('print first line of patch header'))]
73
73
74 cmdtable = {}
74 cmdtable = {}
75 command = cmdutil.command(cmdtable)
75 command = cmdutil.command(cmdtable)
76 testedwith = 'internal'
76
77
77 # Patch names look like unix file names.
78 # Patch names look like unix file names.
78 # They must be joinable with queue directory and result in the patch path.
79 # They must be joinable with queue directory and result in the patch path.
79 normname = util.normpath
80 normname = util.normpath
80
81
81 class statusentry(object):
82 class statusentry(object):
82 def __init__(self, node, name):
83 def __init__(self, node, name):
83 self.node, self.name = node, name
84 self.node, self.name = node, name
84 def __repr__(self):
85 def __repr__(self):
85 return hex(self.node) + ':' + self.name
86 return hex(self.node) + ':' + self.name
86
87
87 class patchheader(object):
88 class patchheader(object):
88 def __init__(self, pf, plainmode=False):
89 def __init__(self, pf, plainmode=False):
89 def eatdiff(lines):
90 def eatdiff(lines):
90 while lines:
91 while lines:
91 l = lines[-1]
92 l = lines[-1]
92 if (l.startswith("diff -") or
93 if (l.startswith("diff -") or
93 l.startswith("Index:") or
94 l.startswith("Index:") or
94 l.startswith("===========")):
95 l.startswith("===========")):
95 del lines[-1]
96 del lines[-1]
96 else:
97 else:
97 break
98 break
98 def eatempty(lines):
99 def eatempty(lines):
99 while lines:
100 while lines:
100 if not lines[-1].strip():
101 if not lines[-1].strip():
101 del lines[-1]
102 del lines[-1]
102 else:
103 else:
103 break
104 break
104
105
105 message = []
106 message = []
106 comments = []
107 comments = []
107 user = None
108 user = None
108 date = None
109 date = None
109 parent = None
110 parent = None
110 format = None
111 format = None
111 subject = None
112 subject = None
112 branch = None
113 branch = None
113 nodeid = None
114 nodeid = None
114 diffstart = 0
115 diffstart = 0
115
116
116 for line in file(pf):
117 for line in file(pf):
117 line = line.rstrip()
118 line = line.rstrip()
118 if (line.startswith('diff --git')
119 if (line.startswith('diff --git')
119 or (diffstart and line.startswith('+++ '))):
120 or (diffstart and line.startswith('+++ '))):
120 diffstart = 2
121 diffstart = 2
121 break
122 break
122 diffstart = 0 # reset
123 diffstart = 0 # reset
123 if line.startswith("--- "):
124 if line.startswith("--- "):
124 diffstart = 1
125 diffstart = 1
125 continue
126 continue
126 elif format == "hgpatch":
127 elif format == "hgpatch":
127 # parse values when importing the result of an hg export
128 # parse values when importing the result of an hg export
128 if line.startswith("# User "):
129 if line.startswith("# User "):
129 user = line[7:]
130 user = line[7:]
130 elif line.startswith("# Date "):
131 elif line.startswith("# Date "):
131 date = line[7:]
132 date = line[7:]
132 elif line.startswith("# Parent "):
133 elif line.startswith("# Parent "):
133 parent = line[9:].lstrip()
134 parent = line[9:].lstrip()
134 elif line.startswith("# Branch "):
135 elif line.startswith("# Branch "):
135 branch = line[9:]
136 branch = line[9:]
136 elif line.startswith("# Node ID "):
137 elif line.startswith("# Node ID "):
137 nodeid = line[10:]
138 nodeid = line[10:]
138 elif not line.startswith("# ") and line:
139 elif not line.startswith("# ") and line:
139 message.append(line)
140 message.append(line)
140 format = None
141 format = None
141 elif line == '# HG changeset patch':
142 elif line == '# HG changeset patch':
142 message = []
143 message = []
143 format = "hgpatch"
144 format = "hgpatch"
144 elif (format != "tagdone" and (line.startswith("Subject: ") or
145 elif (format != "tagdone" and (line.startswith("Subject: ") or
145 line.startswith("subject: "))):
146 line.startswith("subject: "))):
146 subject = line[9:]
147 subject = line[9:]
147 format = "tag"
148 format = "tag"
148 elif (format != "tagdone" and (line.startswith("From: ") or
149 elif (format != "tagdone" and (line.startswith("From: ") or
149 line.startswith("from: "))):
150 line.startswith("from: "))):
150 user = line[6:]
151 user = line[6:]
151 format = "tag"
152 format = "tag"
152 elif (format != "tagdone" and (line.startswith("Date: ") or
153 elif (format != "tagdone" and (line.startswith("Date: ") or
153 line.startswith("date: "))):
154 line.startswith("date: "))):
154 date = line[6:]
155 date = line[6:]
155 format = "tag"
156 format = "tag"
156 elif format == "tag" and line == "":
157 elif format == "tag" and line == "":
157 # when looking for tags (subject: from: etc) they
158 # when looking for tags (subject: from: etc) they
158 # end once you find a blank line in the source
159 # end once you find a blank line in the source
159 format = "tagdone"
160 format = "tagdone"
160 elif message or line:
161 elif message or line:
161 message.append(line)
162 message.append(line)
162 comments.append(line)
163 comments.append(line)
163
164
164 eatdiff(message)
165 eatdiff(message)
165 eatdiff(comments)
166 eatdiff(comments)
166 # Remember the exact starting line of the patch diffs before consuming
167 # Remember the exact starting line of the patch diffs before consuming
167 # empty lines, for external use by TortoiseHg and others
168 # empty lines, for external use by TortoiseHg and others
168 self.diffstartline = len(comments)
169 self.diffstartline = len(comments)
169 eatempty(message)
170 eatempty(message)
170 eatempty(comments)
171 eatempty(comments)
171
172
172 # make sure message isn't empty
173 # make sure message isn't empty
173 if format and format.startswith("tag") and subject:
174 if format and format.startswith("tag") and subject:
174 message.insert(0, "")
175 message.insert(0, "")
175 message.insert(0, subject)
176 message.insert(0, subject)
176
177
177 self.message = message
178 self.message = message
178 self.comments = comments
179 self.comments = comments
179 self.user = user
180 self.user = user
180 self.date = date
181 self.date = date
181 self.parent = parent
182 self.parent = parent
182 # nodeid and branch are for external use by TortoiseHg and others
183 # nodeid and branch are for external use by TortoiseHg and others
183 self.nodeid = nodeid
184 self.nodeid = nodeid
184 self.branch = branch
185 self.branch = branch
185 self.haspatch = diffstart > 1
186 self.haspatch = diffstart > 1
186 self.plainmode = plainmode
187 self.plainmode = plainmode
187
188
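For illustration, the header this loop scans for in an :hg:`export`-style
patch looks roughly like the following; every value shown here is made up::

    # HG changeset patch
    # User Example Hacker <hacker@example.com>
    # Date 1339000000 0
    # Branch stable
    # Node ID 0123456789abcdef0123456789abcdef01234567
    # Parent fedcba9876543210fedcba9876543210fedcba98
    fix a typo in the demo file

    diff --git a/demo.txt b/demo.txt

The '# User', '# Date', '# Parent', '# Branch' and '# Node ID' prefixes feed
the corresponding attributes, free-form lines become the message, and the
'diff --git' (or '--- ') line marks where the diff starts.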
188 def setuser(self, user):
189 def setuser(self, user):
189 if not self.updateheader(['From: ', '# User '], user):
190 if not self.updateheader(['From: ', '# User '], user):
190 try:
191 try:
191 patchheaderat = self.comments.index('# HG changeset patch')
192 patchheaderat = self.comments.index('# HG changeset patch')
192 self.comments.insert(patchheaderat + 1, '# User ' + user)
193 self.comments.insert(patchheaderat + 1, '# User ' + user)
193 except ValueError:
194 except ValueError:
194 if self.plainmode or self._hasheader(['Date: ']):
195 if self.plainmode or self._hasheader(['Date: ']):
195 self.comments = ['From: ' + user] + self.comments
196 self.comments = ['From: ' + user] + self.comments
196 else:
197 else:
197 tmp = ['# HG changeset patch', '# User ' + user, '']
198 tmp = ['# HG changeset patch', '# User ' + user, '']
198 self.comments = tmp + self.comments
199 self.comments = tmp + self.comments
199 self.user = user
200 self.user = user
200
201
201 def setdate(self, date):
202 def setdate(self, date):
202 if not self.updateheader(['Date: ', '# Date '], date):
203 if not self.updateheader(['Date: ', '# Date '], date):
203 try:
204 try:
204 patchheaderat = self.comments.index('# HG changeset patch')
205 patchheaderat = self.comments.index('# HG changeset patch')
205 self.comments.insert(patchheaderat + 1, '# Date ' + date)
206 self.comments.insert(patchheaderat + 1, '# Date ' + date)
206 except ValueError:
207 except ValueError:
207 if self.plainmode or self._hasheader(['From: ']):
208 if self.plainmode or self._hasheader(['From: ']):
208 self.comments = ['Date: ' + date] + self.comments
209 self.comments = ['Date: ' + date] + self.comments
209 else:
210 else:
210 tmp = ['# HG changeset patch', '# Date ' + date, '']
211 tmp = ['# HG changeset patch', '# Date ' + date, '']
211 self.comments = tmp + self.comments
212 self.comments = tmp + self.comments
212 self.date = date
213 self.date = date
213
214
214 def setparent(self, parent):
215 def setparent(self, parent):
215 if not self.updateheader(['# Parent '], parent):
216 if not self.updateheader(['# Parent '], parent):
216 try:
217 try:
217 patchheaderat = self.comments.index('# HG changeset patch')
218 patchheaderat = self.comments.index('# HG changeset patch')
218 self.comments.insert(patchheaderat + 1, '# Parent ' + parent)
219 self.comments.insert(patchheaderat + 1, '# Parent ' + parent)
219 except ValueError:
220 except ValueError:
220 pass
221 pass
221 self.parent = parent
222 self.parent = parent
222
223
223 def setmessage(self, message):
224 def setmessage(self, message):
224 if self.comments:
225 if self.comments:
225 self._delmsg()
226 self._delmsg()
226 self.message = [message]
227 self.message = [message]
227 self.comments += self.message
228 self.comments += self.message
228
229
229 def updateheader(self, prefixes, new):
230 def updateheader(self, prefixes, new):
230 '''Update all references to a field in the patch header.
231 '''Update all references to a field in the patch header.
231 Return whether the field is present.'''
232 Return whether the field is present.'''
232 res = False
233 res = False
233 for prefix in prefixes:
234 for prefix in prefixes:
234 for i in xrange(len(self.comments)):
235 for i in xrange(len(self.comments)):
235 if self.comments[i].startswith(prefix):
236 if self.comments[i].startswith(prefix):
236 self.comments[i] = prefix + new
237 self.comments[i] = prefix + new
237 res = True
238 res = True
238 break
239 break
239 return res
240 return res
240
241
241 def _hasheader(self, prefixes):
242 def _hasheader(self, prefixes):
242 '''Check if a header starts with any of the given prefixes.'''
243 '''Check if a header starts with any of the given prefixes.'''
243 for prefix in prefixes:
244 for prefix in prefixes:
244 for comment in self.comments:
245 for comment in self.comments:
245 if comment.startswith(prefix):
246 if comment.startswith(prefix):
246 return True
247 return True
247 return False
248 return False
248
249
249 def __str__(self):
250 def __str__(self):
250 if not self.comments:
251 if not self.comments:
251 return ''
252 return ''
252 return '\n'.join(self.comments) + '\n\n'
253 return '\n'.join(self.comments) + '\n\n'
253
254
254 def _delmsg(self):
255 def _delmsg(self):
255 '''Remove existing message, keeping the rest of the comment fields.
256 '''Remove existing message, keeping the rest of the comment fields.
256 If comments contains 'subject: ', message will prepend
257 If comments contains 'subject: ', message will prepend
257 the field and a blank line.'''
258 the field and a blank line.'''
258 if self.message:
259 if self.message:
259 subj = 'subject: ' + self.message[0].lower()
260 subj = 'subject: ' + self.message[0].lower()
260 for i in xrange(len(self.comments)):
261 for i in xrange(len(self.comments)):
261 if subj == self.comments[i].lower():
262 if subj == self.comments[i].lower():
262 del self.comments[i]
263 del self.comments[i]
263 self.message = self.message[2:]
264 self.message = self.message[2:]
264 break
265 break
265 ci = 0
266 ci = 0
266 for mi in self.message:
267 for mi in self.message:
267 while mi != self.comments[ci]:
268 while mi != self.comments[ci]:
268 ci += 1
269 ci += 1
269 del self.comments[ci]
270 del self.comments[ci]
270
271
271 def newcommit(repo, phase, *args, **kwargs):
272 def newcommit(repo, phase, *args, **kwargs):
272 """helper dedicated to ensure a commit respect mq.secret setting
273 """helper dedicated to ensure a commit respect mq.secret setting
273
274
274 It should be used instead of repo.commit inside the mq source for operations
275 It should be used instead of repo.commit inside the mq source for operations
275 creating new changesets.
276 creating new changesets.
276 """
277 """
277 if phase is None:
278 if phase is None:
278 if repo.ui.configbool('mq', 'secret', False):
279 if repo.ui.configbool('mq', 'secret', False):
279 phase = phases.secret
280 phase = phases.secret
280 if phase is not None:
281 if phase is not None:
281 backup = repo.ui.backupconfig('phases', 'new-commit')
282 backup = repo.ui.backupconfig('phases', 'new-commit')
282 # Marking the repository as committing an mq patch can be used
283 # Marking the repository as committing an mq patch can be used
283 # to optimize operations like _branchtags().
284 # to optimize operations like _branchtags().
284 repo._committingpatch = True
285 repo._committingpatch = True
285 try:
286 try:
286 if phase is not None:
287 if phase is not None:
287 repo.ui.setconfig('phases', 'new-commit', phase)
288 repo.ui.setconfig('phases', 'new-commit', phase)
288 return repo.commit(*args, **kwargs)
289 return repo.commit(*args, **kwargs)
289 finally:
290 finally:
290 repo._committingpatch = False
291 repo._committingpatch = False
291 if phase is not None:
292 if phase is not None:
292 repo.ui.restoreconfig(backup)
293 repo.ui.restoreconfig(backup)
293
294
294 class AbortNoCleanup(error.Abort):
295 class AbortNoCleanup(error.Abort):
295 pass
296 pass
296
297
297 class queue(object):
298 class queue(object):
298 def __init__(self, ui, path, patchdir=None):
299 def __init__(self, ui, path, patchdir=None):
299 self.basepath = path
300 self.basepath = path
300 try:
301 try:
301 fh = open(os.path.join(path, 'patches.queue'))
302 fh = open(os.path.join(path, 'patches.queue'))
302 cur = fh.read().rstrip()
303 cur = fh.read().rstrip()
303 fh.close()
304 fh.close()
304 if not cur:
305 if not cur:
305 curpath = os.path.join(path, 'patches')
306 curpath = os.path.join(path, 'patches')
306 else:
307 else:
307 curpath = os.path.join(path, 'patches-' + cur)
308 curpath = os.path.join(path, 'patches-' + cur)
308 except IOError:
309 except IOError:
309 curpath = os.path.join(path, 'patches')
310 curpath = os.path.join(path, 'patches')
310 self.path = patchdir or curpath
311 self.path = patchdir or curpath
311 self.opener = scmutil.opener(self.path)
312 self.opener = scmutil.opener(self.path)
312 self.ui = ui
313 self.ui = ui
313 self.applieddirty = False
314 self.applieddirty = False
314 self.seriesdirty = False
315 self.seriesdirty = False
315 self.added = []
316 self.added = []
316 self.seriespath = "series"
317 self.seriespath = "series"
317 self.statuspath = "status"
318 self.statuspath = "status"
318 self.guardspath = "guards"
319 self.guardspath = "guards"
319 self.activeguards = None
320 self.activeguards = None
320 self.guardsdirty = False
321 self.guardsdirty = False
321 # Handle mq.git as a bool with extended values
322 # Handle mq.git as a bool with extended values
322 try:
323 try:
323 gitmode = ui.configbool('mq', 'git', None)
324 gitmode = ui.configbool('mq', 'git', None)
324 if gitmode is None:
325 if gitmode is None:
325 raise error.ConfigError
326 raise error.ConfigError
326 self.gitmode = gitmode and 'yes' or 'no'
327 self.gitmode = gitmode and 'yes' or 'no'
327 except error.ConfigError:
328 except error.ConfigError:
328 self.gitmode = ui.config('mq', 'git', 'auto').lower()
329 self.gitmode = ui.config('mq', 'git', 'auto').lower()
329 self.plainmode = ui.configbool('mq', 'plain', False)
330 self.plainmode = ui.configbool('mq', 'plain', False)
330
331
331 @util.propertycache
332 @util.propertycache
332 def applied(self):
333 def applied(self):
333 def parselines(lines):
334 def parselines(lines):
334 for l in lines:
335 for l in lines:
335 entry = l.split(':', 1)
336 entry = l.split(':', 1)
336 if len(entry) > 1:
337 if len(entry) > 1:
337 n, name = entry
338 n, name = entry
338 yield statusentry(bin(n), name)
339 yield statusentry(bin(n), name)
339 elif l.strip():
340 elif l.strip():
340 self.ui.warn(_('malformatted mq status line: %s\n') % entry)
341 self.ui.warn(_('malformatted mq status line: %s\n') % entry)
341 # else we ignore empty lines
342 # else we ignore empty lines
342 try:
343 try:
343 lines = self.opener.read(self.statuspath).splitlines()
344 lines = self.opener.read(self.statuspath).splitlines()
344 return list(parselines(lines))
345 return list(parselines(lines))
345 except IOError, e:
346 except IOError, e:
346 if e.errno == errno.ENOENT:
347 if e.errno == errno.ENOENT:
347 return []
348 return []
348 raise
349 raise
349
350
350 @util.propertycache
351 @util.propertycache
351 def fullseries(self):
352 def fullseries(self):
352 try:
353 try:
353 return self.opener.read(self.seriespath).splitlines()
354 return self.opener.read(self.seriespath).splitlines()
354 except IOError, e:
355 except IOError, e:
355 if e.errno == errno.ENOENT:
356 if e.errno == errno.ENOENT:
356 return []
357 return []
357 raise
358 raise
358
359
359 @util.propertycache
360 @util.propertycache
360 def series(self):
361 def series(self):
361 self.parseseries()
362 self.parseseries()
362 return self.series
363 return self.series
363
364
364 @util.propertycache
365 @util.propertycache
365 def seriesguards(self):
366 def seriesguards(self):
366 self.parseseries()
367 self.parseseries()
367 return self.seriesguards
368 return self.seriesguards
368
369
369 def invalidate(self):
370 def invalidate(self):
370 for a in 'applied fullseries series seriesguards'.split():
371 for a in 'applied fullseries series seriesguards'.split():
371 if a in self.__dict__:
372 if a in self.__dict__:
372 delattr(self, a)
373 delattr(self, a)
373 self.applieddirty = False
374 self.applieddirty = False
374 self.seriesdirty = False
375 self.seriesdirty = False
375 self.guardsdirty = False
376 self.guardsdirty = False
376 self.activeguards = None
377 self.activeguards = None
377
378
378 def diffopts(self, opts={}, patchfn=None):
379 def diffopts(self, opts={}, patchfn=None):
379 diffopts = patchmod.diffopts(self.ui, opts)
380 diffopts = patchmod.diffopts(self.ui, opts)
380 if self.gitmode == 'auto':
381 if self.gitmode == 'auto':
381 diffopts.upgrade = True
382 diffopts.upgrade = True
382 elif self.gitmode == 'keep':
383 elif self.gitmode == 'keep':
383 pass
384 pass
384 elif self.gitmode in ('yes', 'no'):
385 elif self.gitmode in ('yes', 'no'):
385 diffopts.git = self.gitmode == 'yes'
386 diffopts.git = self.gitmode == 'yes'
386 else:
387 else:
387 raise util.Abort(_('mq.git option can be auto/keep/yes/no'
388 raise util.Abort(_('mq.git option can be auto/keep/yes/no'
388 ' got %s') % self.gitmode)
389 ' got %s') % self.gitmode)
389 if patchfn:
390 if patchfn:
390 diffopts = self.patchopts(diffopts, patchfn)
391 diffopts = self.patchopts(diffopts, patchfn)
391 return diffopts
392 return diffopts
392
393
393 def patchopts(self, diffopts, *patches):
394 def patchopts(self, diffopts, *patches):
394 """Return a copy of input diff options with git set to true if
395 """Return a copy of input diff options with git set to true if
395 referenced patch is a git patch and should be preserved as such.
396 referenced patch is a git patch and should be preserved as such.
396 """
397 """
397 diffopts = diffopts.copy()
398 diffopts = diffopts.copy()
398 if not diffopts.git and self.gitmode == 'keep':
399 if not diffopts.git and self.gitmode == 'keep':
399 for patchfn in patches:
400 for patchfn in patches:
400 patchf = self.opener(patchfn, 'r')
401 patchf = self.opener(patchfn, 'r')
401 # if the patch was a git patch, refresh it as a git patch
402 # if the patch was a git patch, refresh it as a git patch
402 for line in patchf:
403 for line in patchf:
403 if line.startswith('diff --git'):
404 if line.startswith('diff --git'):
404 diffopts.git = True
405 diffopts.git = True
405 break
406 break
406 patchf.close()
407 patchf.close()
407 return diffopts
408 return diffopts
408
409
409 def join(self, *p):
410 def join(self, *p):
410 return os.path.join(self.path, *p)
411 return os.path.join(self.path, *p)
411
412
412 def findseries(self, patch):
413 def findseries(self, patch):
413 def matchpatch(l):
414 def matchpatch(l):
414 l = l.split('#', 1)[0]
415 l = l.split('#', 1)[0]
415 return l.strip() == patch
416 return l.strip() == patch
416 for index, l in enumerate(self.fullseries):
417 for index, l in enumerate(self.fullseries):
417 if matchpatch(l):
418 if matchpatch(l):
418 return index
419 return index
419 return None
420 return None
420
421
421 guard_re = re.compile(r'\s?#([-+][^-+# \t\r\n\f][^# \t\r\n\f]*)')
422 guard_re = re.compile(r'\s?#([-+][^-+# \t\r\n\f][^# \t\r\n\f]*)')
422
423
423 def parseseries(self):
424 def parseseries(self):
424 self.series = []
425 self.series = []
425 self.seriesguards = []
426 self.seriesguards = []
426 for l in self.fullseries:
427 for l in self.fullseries:
427 h = l.find('#')
428 h = l.find('#')
428 if h == -1:
429 if h == -1:
429 patch = l
430 patch = l
430 comment = ''
431 comment = ''
431 elif h == 0:
432 elif h == 0:
432 continue
433 continue
433 else:
434 else:
434 patch = l[:h]
435 patch = l[:h]
435 comment = l[h:]
436 comment = l[h:]
436 patch = patch.strip()
437 patch = patch.strip()
437 if patch:
438 if patch:
438 if patch in self.series:
439 if patch in self.series:
439 raise util.Abort(_('%s appears more than once in %s') %
440 raise util.Abort(_('%s appears more than once in %s') %
440 (patch, self.join(self.seriespath)))
441 (patch, self.join(self.seriespath)))
441 self.series.append(patch)
442 self.series.append(patch)
442 self.seriesguards.append(self.guard_re.findall(comment))
443 self.seriesguards.append(self.guard_re.findall(comment))
443
444
444 def checkguard(self, guard):
445 def checkguard(self, guard):
445 if not guard:
446 if not guard:
446 return _('guard cannot be an empty string')
447 return _('guard cannot be an empty string')
447 bad_chars = '# \t\r\n\f'
448 bad_chars = '# \t\r\n\f'
448 first = guard[0]
449 first = guard[0]
449 if first in '-+':
450 if first in '-+':
450 return (_('guard %r starts with invalid character: %r') %
451 return (_('guard %r starts with invalid character: %r') %
451 (guard, first))
452 (guard, first))
452 for c in bad_chars:
453 for c in bad_chars:
453 if c in guard:
454 if c in guard:
454 return _('invalid character in guard %r: %r') % (guard, c)
455 return _('invalid character in guard %r: %r') % (guard, c)
455
456
456 def setactive(self, guards):
457 def setactive(self, guards):
457 for guard in guards:
458 for guard in guards:
458 bad = self.checkguard(guard)
459 bad = self.checkguard(guard)
459 if bad:
460 if bad:
460 raise util.Abort(bad)
461 raise util.Abort(bad)
461 guards = sorted(set(guards))
462 guards = sorted(set(guards))
462 self.ui.debug('active guards: %s\n' % ' '.join(guards))
463 self.ui.debug('active guards: %s\n' % ' '.join(guards))
463 self.activeguards = guards
464 self.activeguards = guards
464 self.guardsdirty = True
465 self.guardsdirty = True
465
466
466 def active(self):
467 def active(self):
467 if self.activeguards is None:
468 if self.activeguards is None:
468 self.activeguards = []
469 self.activeguards = []
469 try:
470 try:
470 guards = self.opener.read(self.guardspath).split()
471 guards = self.opener.read(self.guardspath).split()
471 except IOError, err:
472 except IOError, err:
472 if err.errno != errno.ENOENT:
473 if err.errno != errno.ENOENT:
473 raise
474 raise
474 guards = []
475 guards = []
475 for i, guard in enumerate(guards):
476 for i, guard in enumerate(guards):
476 bad = self.checkguard(guard)
477 bad = self.checkguard(guard)
477 if bad:
478 if bad:
478 self.ui.warn('%s:%d: %s\n' %
479 self.ui.warn('%s:%d: %s\n' %
479 (self.join(self.guardspath), i + 1, bad))
480 (self.join(self.guardspath), i + 1, bad))
480 else:
481 else:
481 self.activeguards.append(guard)
482 self.activeguards.append(guard)
482 return self.activeguards
483 return self.activeguards
483
484
484 def setguards(self, idx, guards):
485 def setguards(self, idx, guards):
485 for g in guards:
486 for g in guards:
486 if len(g) < 2:
487 if len(g) < 2:
487 raise util.Abort(_('guard %r too short') % g)
488 raise util.Abort(_('guard %r too short') % g)
488 if g[0] not in '-+':
489 if g[0] not in '-+':
489 raise util.Abort(_('guard %r starts with invalid char') % g)
490 raise util.Abort(_('guard %r starts with invalid char') % g)
490 bad = self.checkguard(g[1:])
491 bad = self.checkguard(g[1:])
491 if bad:
492 if bad:
492 raise util.Abort(bad)
493 raise util.Abort(bad)
493 drop = self.guard_re.sub('', self.fullseries[idx])
494 drop = self.guard_re.sub('', self.fullseries[idx])
494 self.fullseries[idx] = drop + ''.join([' #' + g for g in guards])
495 self.fullseries[idx] = drop + ''.join([' #' + g for g in guards])
495 self.parseseries()
496 self.parseseries()
496 self.seriesdirty = True
497 self.seriesdirty = True
497
498
498 def pushable(self, idx):
499 def pushable(self, idx):
499 if isinstance(idx, str):
500 if isinstance(idx, str):
500 idx = self.series.index(idx)
501 idx = self.series.index(idx)
501 patchguards = self.seriesguards[idx]
502 patchguards = self.seriesguards[idx]
502 if not patchguards:
503 if not patchguards:
503 return True, None
504 return True, None
504 guards = self.active()
505 guards = self.active()
505 exactneg = [g for g in patchguards if g[0] == '-' and g[1:] in guards]
506 exactneg = [g for g in patchguards if g[0] == '-' and g[1:] in guards]
506 if exactneg:
507 if exactneg:
507 return False, repr(exactneg[0])
508 return False, repr(exactneg[0])
508 pos = [g for g in patchguards if g[0] == '+']
509 pos = [g for g in patchguards if g[0] == '+']
509 exactpos = [g for g in pos if g[1:] in guards]
510 exactpos = [g for g in pos if g[1:] in guards]
510 if pos:
511 if pos:
511 if exactpos:
512 if exactpos:
512 return True, repr(exactpos[0])
513 return True, repr(exactpos[0])
513 return False, ' '.join(map(repr, pos))
514 return False, ' '.join(map(repr, pos))
514 return True, ''
515 return True, ''
515
516
516 def explainpushable(self, idx, all_patches=False):
517 def explainpushable(self, idx, all_patches=False):
517 write = all_patches and self.ui.write or self.ui.warn
518 write = all_patches and self.ui.write or self.ui.warn
518 if all_patches or self.ui.verbose:
519 if all_patches or self.ui.verbose:
519 if isinstance(idx, str):
520 if isinstance(idx, str):
520 idx = self.series.index(idx)
521 idx = self.series.index(idx)
521 pushable, why = self.pushable(idx)
522 pushable, why = self.pushable(idx)
522 if all_patches and pushable:
523 if all_patches and pushable:
523 if why is None:
524 if why is None:
524 write(_('allowing %s - no guards in effect\n') %
525 write(_('allowing %s - no guards in effect\n') %
525 self.series[idx])
526 self.series[idx])
526 else:
527 else:
527 if not why:
528 if not why:
528 write(_('allowing %s - no matching negative guards\n') %
529 write(_('allowing %s - no matching negative guards\n') %
529 self.series[idx])
530 self.series[idx])
530 else:
531 else:
531 write(_('allowing %s - guarded by %s\n') %
532 write(_('allowing %s - guarded by %s\n') %
532 (self.series[idx], why))
533 (self.series[idx], why))
533 if not pushable:
534 if not pushable:
534 if why:
535 if why:
535 write(_('skipping %s - guarded by %s\n') %
536 write(_('skipping %s - guarded by %s\n') %
536 (self.series[idx], why))
537 (self.series[idx], why))
537 else:
538 else:
538 write(_('skipping %s - no matching guards\n') %
539 write(_('skipping %s - no matching guards\n') %
539 self.series[idx])
540 self.series[idx])
540
541
541 def savedirty(self):
542 def savedirty(self):
542 def writelist(items, path):
543 def writelist(items, path):
543 fp = self.opener(path, 'w')
544 fp = self.opener(path, 'w')
544 for i in items:
545 for i in items:
545 fp.write("%s\n" % i)
546 fp.write("%s\n" % i)
546 fp.close()
547 fp.close()
547 if self.applieddirty:
548 if self.applieddirty:
548 writelist(map(str, self.applied), self.statuspath)
549 writelist(map(str, self.applied), self.statuspath)
549 self.applieddirty = False
550 self.applieddirty = False
550 if self.seriesdirty:
551 if self.seriesdirty:
551 writelist(self.fullseries, self.seriespath)
552 writelist(self.fullseries, self.seriespath)
552 self.seriesdirty = False
553 self.seriesdirty = False
553 if self.guardsdirty:
554 if self.guardsdirty:
554 writelist(self.activeguards, self.guardspath)
555 writelist(self.activeguards, self.guardspath)
555 self.guardsdirty = False
556 self.guardsdirty = False
556 if self.added:
557 if self.added:
557 qrepo = self.qrepo()
558 qrepo = self.qrepo()
558 if qrepo:
559 if qrepo:
559 qrepo[None].add(f for f in self.added if f not in qrepo[None])
560 qrepo[None].add(f for f in self.added if f not in qrepo[None])
560 self.added = []
561 self.added = []
561
562
562 def removeundo(self, repo):
563 def removeundo(self, repo):
563 undo = repo.sjoin('undo')
564 undo = repo.sjoin('undo')
564 if not os.path.exists(undo):
565 if not os.path.exists(undo):
565 return
566 return
566 try:
567 try:
567 os.unlink(undo)
568 os.unlink(undo)
568 except OSError, inst:
569 except OSError, inst:
569 self.ui.warn(_('error removing undo: %s\n') % str(inst))
570 self.ui.warn(_('error removing undo: %s\n') % str(inst))
570
571
571 def backup(self, repo, files, copy=False):
572 def backup(self, repo, files, copy=False):
572 # backup local changes in --force case
573 # backup local changes in --force case
573 for f in sorted(files):
574 for f in sorted(files):
574 absf = repo.wjoin(f)
575 absf = repo.wjoin(f)
575 if os.path.lexists(absf):
576 if os.path.lexists(absf):
576 self.ui.note(_('saving current version of %s as %s\n') %
577 self.ui.note(_('saving current version of %s as %s\n') %
577 (f, f + '.orig'))
578 (f, f + '.orig'))
578 if copy:
579 if copy:
579 util.copyfile(absf, absf + '.orig')
580 util.copyfile(absf, absf + '.orig')
580 else:
581 else:
581 util.rename(absf, absf + '.orig')
582 util.rename(absf, absf + '.orig')
582
583
583 def printdiff(self, repo, diffopts, node1, node2=None, files=None,
584 def printdiff(self, repo, diffopts, node1, node2=None, files=None,
584 fp=None, changes=None, opts={}):
585 fp=None, changes=None, opts={}):
585 stat = opts.get('stat')
586 stat = opts.get('stat')
586 m = scmutil.match(repo[node1], files, opts)
587 m = scmutil.match(repo[node1], files, opts)
587 cmdutil.diffordiffstat(self.ui, repo, diffopts, node1, node2, m,
588 cmdutil.diffordiffstat(self.ui, repo, diffopts, node1, node2, m,
588 changes, stat, fp)
589 changes, stat, fp)
589
590
590 def mergeone(self, repo, mergeq, head, patch, rev, diffopts):
591 def mergeone(self, repo, mergeq, head, patch, rev, diffopts):
591 # first try just applying the patch
592 # first try just applying the patch
592 (err, n) = self.apply(repo, [patch], update_status=False,
593 (err, n) = self.apply(repo, [patch], update_status=False,
593 strict=True, merge=rev)
594 strict=True, merge=rev)
594
595
595 if err == 0:
596 if err == 0:
596 return (err, n)
597 return (err, n)
597
598
598 if n is None:
599 if n is None:
599 raise util.Abort(_("apply failed for patch %s") % patch)
600 raise util.Abort(_("apply failed for patch %s") % patch)
600
601
601 self.ui.warn(_("patch didn't work out, merging %s\n") % patch)
602 self.ui.warn(_("patch didn't work out, merging %s\n") % patch)
602
603
603 # apply failed, strip away that rev and merge.
604 # apply failed, strip away that rev and merge.
604 hg.clean(repo, head)
605 hg.clean(repo, head)
605 self.strip(repo, [n], update=False, backup='strip')
606 self.strip(repo, [n], update=False, backup='strip')
606
607
607 ctx = repo[rev]
608 ctx = repo[rev]
608 ret = hg.merge(repo, rev)
609 ret = hg.merge(repo, rev)
609 if ret:
610 if ret:
610 raise util.Abort(_("update returned %d") % ret)
611 raise util.Abort(_("update returned %d") % ret)
611 n = newcommit(repo, None, ctx.description(), ctx.user(), force=True)
612 n = newcommit(repo, None, ctx.description(), ctx.user(), force=True)
612 if n is None:
613 if n is None:
613 raise util.Abort(_("repo commit failed"))
614 raise util.Abort(_("repo commit failed"))
614 try:
615 try:
615 ph = patchheader(mergeq.join(patch), self.plainmode)
616 ph = patchheader(mergeq.join(patch), self.plainmode)
616 except Exception:
617 except Exception:
617 raise util.Abort(_("unable to read %s") % patch)
618 raise util.Abort(_("unable to read %s") % patch)
618
619
619 diffopts = self.patchopts(diffopts, patch)
620 diffopts = self.patchopts(diffopts, patch)
620 patchf = self.opener(patch, "w")
621 patchf = self.opener(patch, "w")
621 comments = str(ph)
622 comments = str(ph)
622 if comments:
623 if comments:
623 patchf.write(comments)
624 patchf.write(comments)
624 self.printdiff(repo, diffopts, head, n, fp=patchf)
625 self.printdiff(repo, diffopts, head, n, fp=patchf)
625 patchf.close()
626 patchf.close()
626 self.removeundo(repo)
627 self.removeundo(repo)
627 return (0, n)
628 return (0, n)
628
629
629 def qparents(self, repo, rev=None):
630 def qparents(self, repo, rev=None):
630 if rev is None:
631 if rev is None:
631 (p1, p2) = repo.dirstate.parents()
632 (p1, p2) = repo.dirstate.parents()
632 if p2 == nullid:
633 if p2 == nullid:
633 return p1
634 return p1
634 if not self.applied:
635 if not self.applied:
635 return None
636 return None
636 return self.applied[-1].node
637 return self.applied[-1].node
637 p1, p2 = repo.changelog.parents(rev)
638 p1, p2 = repo.changelog.parents(rev)
638 if p2 != nullid and p2 in [x.node for x in self.applied]:
639 if p2 != nullid and p2 in [x.node for x in self.applied]:
639 return p2
640 return p2
640 return p1
641 return p1
641
642
642 def mergepatch(self, repo, mergeq, series, diffopts):
643 def mergepatch(self, repo, mergeq, series, diffopts):
643 if not self.applied:
644 if not self.applied:
644 # each of the patches merged in will have two parents. This
645 # each of the patches merged in will have two parents. This
645 # can confuse the qrefresh, qdiff, and strip code because it
646 # can confuse the qrefresh, qdiff, and strip code because it
646 # needs to know which parent is actually in the patch queue.
647 # needs to know which parent is actually in the patch queue.
647 # so, we insert a merge marker with only one parent. This way
648 # so, we insert a merge marker with only one parent. This way
648 # the first patch in the queue is never a merge patch
649 # the first patch in the queue is never a merge patch
649 #
650 #
650 pname = ".hg.patches.merge.marker"
651 pname = ".hg.patches.merge.marker"
651 n = newcommit(repo, None, '[mq]: merge marker', force=True)
652 n = newcommit(repo, None, '[mq]: merge marker', force=True)
652 self.removeundo(repo)
653 self.removeundo(repo)
653 self.applied.append(statusentry(n, pname))
654 self.applied.append(statusentry(n, pname))
654 self.applieddirty = True
655 self.applieddirty = True
655
656
656 head = self.qparents(repo)
657 head = self.qparents(repo)
657
658
658 for patch in series:
659 for patch in series:
659 patch = mergeq.lookup(patch, strict=True)
660 patch = mergeq.lookup(patch, strict=True)
660 if not patch:
661 if not patch:
661 self.ui.warn(_("patch %s does not exist\n") % patch)
662 self.ui.warn(_("patch %s does not exist\n") % patch)
662 return (1, None)
663 return (1, None)
663 pushable, reason = self.pushable(patch)
664 pushable, reason = self.pushable(patch)
664 if not pushable:
665 if not pushable:
665 self.explainpushable(patch, all_patches=True)
666 self.explainpushable(patch, all_patches=True)
666 continue
667 continue
667 info = mergeq.isapplied(patch)
668 info = mergeq.isapplied(patch)
668 if not info:
669 if not info:
669 self.ui.warn(_("patch %s is not applied\n") % patch)
670 self.ui.warn(_("patch %s is not applied\n") % patch)
670 return (1, None)
671 return (1, None)
671 rev = info[1]
672 rev = info[1]
672 err, head = self.mergeone(repo, mergeq, head, patch, rev, diffopts)
673 err, head = self.mergeone(repo, mergeq, head, patch, rev, diffopts)
673 if head:
674 if head:
674 self.applied.append(statusentry(head, patch))
675 self.applied.append(statusentry(head, patch))
675 self.applieddirty = True
676 self.applieddirty = True
676 if err:
677 if err:
677 return (err, head)
678 return (err, head)
678 self.savedirty()
679 self.savedirty()
679 return (0, head)
680 return (0, head)
680
681
681 def patch(self, repo, patchfile):
682 def patch(self, repo, patchfile):
682 '''Apply patchfile to the working directory.
683 '''Apply patchfile to the working directory.
683 patchfile: name of patch file'''
684 patchfile: name of patch file'''
684 files = set()
685 files = set()
685 try:
686 try:
686 fuzz = patchmod.patch(self.ui, repo, patchfile, strip=1,
687 fuzz = patchmod.patch(self.ui, repo, patchfile, strip=1,
687 files=files, eolmode=None)
688 files=files, eolmode=None)
688 return (True, list(files), fuzz)
689 return (True, list(files), fuzz)
689 except Exception, inst:
690 except Exception, inst:
690 self.ui.note(str(inst) + '\n')
691 self.ui.note(str(inst) + '\n')
691 if not self.ui.verbose:
692 if not self.ui.verbose:
692 self.ui.warn(_("patch failed, unable to continue (try -v)\n"))
693 self.ui.warn(_("patch failed, unable to continue (try -v)\n"))
693 self.ui.traceback()
694 self.ui.traceback()
694 return (False, list(files), False)
695 return (False, list(files), False)
695
696
696 def apply(self, repo, series, list=False, update_status=True,
697 def apply(self, repo, series, list=False, update_status=True,
697 strict=False, patchdir=None, merge=None, all_files=None,
698 strict=False, patchdir=None, merge=None, all_files=None,
698 tobackup=None, keepchanges=False):
699 tobackup=None, keepchanges=False):
699 wlock = lock = tr = None
700 wlock = lock = tr = None
700 try:
701 try:
701 wlock = repo.wlock()
702 wlock = repo.wlock()
702 lock = repo.lock()
703 lock = repo.lock()
703 tr = repo.transaction("qpush")
704 tr = repo.transaction("qpush")
704 try:
705 try:
705 ret = self._apply(repo, series, list, update_status,
706 ret = self._apply(repo, series, list, update_status,
706 strict, patchdir, merge, all_files=all_files,
707 strict, patchdir, merge, all_files=all_files,
707 tobackup=tobackup, keepchanges=keepchanges)
708 tobackup=tobackup, keepchanges=keepchanges)
708 tr.close()
709 tr.close()
709 self.savedirty()
710 self.savedirty()
710 return ret
711 return ret
711 except AbortNoCleanup:
712 except AbortNoCleanup:
712 tr.close()
713 tr.close()
713 self.savedirty()
714 self.savedirty()
714 return 2, repo.dirstate.p1()
715 return 2, repo.dirstate.p1()
715 except: # re-raises
716 except: # re-raises
716 try:
717 try:
717 tr.abort()
718 tr.abort()
718 finally:
719 finally:
719 repo.invalidate()
720 repo.invalidate()
720 repo.dirstate.invalidate()
721 repo.dirstate.invalidate()
721 self.invalidate()
722 self.invalidate()
722 raise
723 raise
723 finally:
724 finally:
724 release(tr, lock, wlock)
725 release(tr, lock, wlock)
725 self.removeundo(repo)
726 self.removeundo(repo)
726
727
727 def _apply(self, repo, series, list=False, update_status=True,
728 def _apply(self, repo, series, list=False, update_status=True,
728 strict=False, patchdir=None, merge=None, all_files=None,
729 strict=False, patchdir=None, merge=None, all_files=None,
729 tobackup=None, keepchanges=False):
730 tobackup=None, keepchanges=False):
730 """returns (error, hash)
731 """returns (error, hash)
731
732
732 error = 1 for unable to read, 2 for patch failed, 3 for patch
733 error = 1 for unable to read, 2 for patch failed, 3 for patch
733 fuzz. tobackup is None or a set of files to backup before they
734 fuzz. tobackup is None or a set of files to backup before they
734 are modified by a patch.
735 are modified by a patch.
735 """
736 """
736 # TODO unify with commands.py
737 # TODO unify with commands.py
737 if not patchdir:
738 if not patchdir:
738 patchdir = self.path
739 patchdir = self.path
739 err = 0
740 err = 0
740 n = None
741 n = None
741 for patchname in series:
742 for patchname in series:
742 pushable, reason = self.pushable(patchname)
743 pushable, reason = self.pushable(patchname)
743 if not pushable:
744 if not pushable:
744 self.explainpushable(patchname, all_patches=True)
745 self.explainpushable(patchname, all_patches=True)
745 continue
746 continue
746 self.ui.status(_("applying %s\n") % patchname)
747 self.ui.status(_("applying %s\n") % patchname)
747 pf = os.path.join(patchdir, patchname)
748 pf = os.path.join(patchdir, patchname)
748
749
749 try:
750 try:
750 ph = patchheader(self.join(patchname), self.plainmode)
751 ph = patchheader(self.join(patchname), self.plainmode)
751 except IOError:
752 except IOError:
752 self.ui.warn(_("unable to read %s\n") % patchname)
753 self.ui.warn(_("unable to read %s\n") % patchname)
753 err = 1
754 err = 1
754 break
755 break
755
756
756 message = ph.message
757 message = ph.message
757 if not message:
758 if not message:
758 # The commit message should not be translated
759 # The commit message should not be translated
759 message = "imported patch %s\n" % patchname
760 message = "imported patch %s\n" % patchname
760 else:
761 else:
761 if list:
762 if list:
762 # The commit message should not be translated
763 # The commit message should not be translated
763 message.append("\nimported patch %s" % patchname)
764 message.append("\nimported patch %s" % patchname)
764 message = '\n'.join(message)
765 message = '\n'.join(message)
765
766
766 if ph.haspatch:
767 if ph.haspatch:
767 if tobackup:
768 if tobackup:
768 touched = patchmod.changedfiles(self.ui, repo, pf)
769 touched = patchmod.changedfiles(self.ui, repo, pf)
769 touched = set(touched) & tobackup
770 touched = set(touched) & tobackup
770 if touched and keepchanges:
771 if touched and keepchanges:
771 raise AbortNoCleanup(
772 raise AbortNoCleanup(
772 _("local changes found, refresh first"))
773 _("local changes found, refresh first"))
773 self.backup(repo, touched, copy=True)
774 self.backup(repo, touched, copy=True)
774 tobackup = tobackup - touched
775 tobackup = tobackup - touched
775 (patcherr, files, fuzz) = self.patch(repo, pf)
776 (patcherr, files, fuzz) = self.patch(repo, pf)
776 if all_files is not None:
777 if all_files is not None:
777 all_files.update(files)
778 all_files.update(files)
778 patcherr = not patcherr
779 patcherr = not patcherr
779 else:
780 else:
780 self.ui.warn(_("patch %s is empty\n") % patchname)
781 self.ui.warn(_("patch %s is empty\n") % patchname)
781 patcherr, files, fuzz = 0, [], 0
782 patcherr, files, fuzz = 0, [], 0
782
783
783 if merge and files:
784 if merge and files:
784 # Mark as removed/merged and update dirstate parent info
785 # Mark as removed/merged and update dirstate parent info
785 removed = []
786 removed = []
786 merged = []
787 merged = []
787 for f in files:
788 for f in files:
788 if os.path.lexists(repo.wjoin(f)):
789 if os.path.lexists(repo.wjoin(f)):
789 merged.append(f)
790 merged.append(f)
790 else:
791 else:
791 removed.append(f)
792 removed.append(f)
792 for f in removed:
793 for f in removed:
793 repo.dirstate.remove(f)
794 repo.dirstate.remove(f)
794 for f in merged:
795 for f in merged:
795 repo.dirstate.merge(f)
796 repo.dirstate.merge(f)
796 p1, p2 = repo.dirstate.parents()
797 p1, p2 = repo.dirstate.parents()
797 repo.setparents(p1, merge)
798 repo.setparents(p1, merge)
798
799
799 match = scmutil.matchfiles(repo, files or [])
800 match = scmutil.matchfiles(repo, files or [])
800 oldtip = repo['tip']
801 oldtip = repo['tip']
801 n = newcommit(repo, None, message, ph.user, ph.date, match=match,
802 n = newcommit(repo, None, message, ph.user, ph.date, match=match,
802 force=True)
803 force=True)
803 if repo['tip'] == oldtip:
804 if repo['tip'] == oldtip:
804 raise util.Abort(_("qpush exactly duplicates child changeset"))
805 raise util.Abort(_("qpush exactly duplicates child changeset"))
805 if n is None:
806 if n is None:
806 raise util.Abort(_("repository commit failed"))
807 raise util.Abort(_("repository commit failed"))
807
808
808 if update_status:
809 if update_status:
809 self.applied.append(statusentry(n, patchname))
810 self.applied.append(statusentry(n, patchname))
810
811
811 if patcherr:
812 if patcherr:
812 self.ui.warn(_("patch failed, rejects left in working dir\n"))
813 self.ui.warn(_("patch failed, rejects left in working dir\n"))
813 err = 2
814 err = 2
814 break
815 break
815
816
816 if fuzz and strict:
817 if fuzz and strict:
817 self.ui.warn(_("fuzz found when applying patch, stopping\n"))
818 self.ui.warn(_("fuzz found when applying patch, stopping\n"))
818 err = 3
819 err = 3
819 break
820 break
820 return (err, n)
821 return (err, n)
821
822
822 def _cleanup(self, patches, numrevs, keep=False):
823 def _cleanup(self, patches, numrevs, keep=False):
823 if not keep:
824 if not keep:
824 r = self.qrepo()
825 r = self.qrepo()
825 if r:
826 if r:
826 r[None].forget(patches)
827 r[None].forget(patches)
827 for p in patches:
828 for p in patches:
828 os.unlink(self.join(p))
829 os.unlink(self.join(p))
829
830
830 qfinished = []
831 qfinished = []
831 if numrevs:
832 if numrevs:
832 qfinished = self.applied[:numrevs]
833 qfinished = self.applied[:numrevs]
833 del self.applied[:numrevs]
834 del self.applied[:numrevs]
834 self.applieddirty = True
835 self.applieddirty = True
835
836
836 unknown = []
837 unknown = []
837
838
838 for (i, p) in sorted([(self.findseries(p), p) for p in patches],
839 for (i, p) in sorted([(self.findseries(p), p) for p in patches],
839 reverse=True):
840 reverse=True):
840 if i is not None:
841 if i is not None:
841 del self.fullseries[i]
842 del self.fullseries[i]
842 else:
843 else:
843 unknown.append(p)
844 unknown.append(p)
844
845
845 if unknown:
846 if unknown:
846 if numrevs:
847 if numrevs:
847 rev = dict((entry.name, entry.node) for entry in qfinished)
848 rev = dict((entry.name, entry.node) for entry in qfinished)
848 for p in unknown:
849 for p in unknown:
849 msg = _('revision %s refers to unknown patches: %s\n')
850 msg = _('revision %s refers to unknown patches: %s\n')
850 self.ui.warn(msg % (short(rev[p]), p))
851 self.ui.warn(msg % (short(rev[p]), p))
851 else:
852 else:
852 msg = _('unknown patches: %s\n')
853 msg = _('unknown patches: %s\n')
853 raise util.Abort(''.join(msg % p for p in unknown))
854 raise util.Abort(''.join(msg % p for p in unknown))
854
855
855 self.parseseries()
856 self.parseseries()
856 self.seriesdirty = True
857 self.seriesdirty = True
857 return [entry.node for entry in qfinished]
858 return [entry.node for entry in qfinished]
858
859
859 def _revpatches(self, repo, revs):
860 def _revpatches(self, repo, revs):
860 firstrev = repo[self.applied[0].node].rev()
861 firstrev = repo[self.applied[0].node].rev()
861 patches = []
862 patches = []
862 for i, rev in enumerate(revs):
863 for i, rev in enumerate(revs):
863
864
864 if rev < firstrev:
865 if rev < firstrev:
865 raise util.Abort(_('revision %d is not managed') % rev)
866 raise util.Abort(_('revision %d is not managed') % rev)
866
867
867 ctx = repo[rev]
868 ctx = repo[rev]
868 base = self.applied[i].node
869 base = self.applied[i].node
869 if ctx.node() != base:
870 if ctx.node() != base:
870 msg = _('cannot delete revision %d above applied patches')
871 msg = _('cannot delete revision %d above applied patches')
871 raise util.Abort(msg % rev)
872 raise util.Abort(msg % rev)
872
873
873 patch = self.applied[i].name
874 patch = self.applied[i].name
874 for fmt in ('[mq]: %s', 'imported patch %s'):
875 for fmt in ('[mq]: %s', 'imported patch %s'):
875 if ctx.description() == fmt % patch:
876 if ctx.description() == fmt % patch:
876 msg = _('patch %s finalized without changeset message\n')
877 msg = _('patch %s finalized without changeset message\n')
877 repo.ui.status(msg % patch)
878 repo.ui.status(msg % patch)
878 break
879 break
879
880
880 patches.append(patch)
881 patches.append(patch)
881 return patches
882 return patches
882
883
883 def finish(self, repo, revs):
884 def finish(self, repo, revs):
884 # Manually trigger phase computation to ensure phasedefaults is
885 # Manually trigger phase computation to ensure phasedefaults is
885 # executed before we remove the patches.
886 # executed before we remove the patches.
886 repo._phasecache
887 repo._phasecache
887 patches = self._revpatches(repo, sorted(revs))
888 patches = self._revpatches(repo, sorted(revs))
888 qfinished = self._cleanup(patches, len(patches))
889 qfinished = self._cleanup(patches, len(patches))
889 if qfinished and repo.ui.configbool('mq', 'secret', False):
890 if qfinished and repo.ui.configbool('mq', 'secret', False):
890 # only use this logic when the secret option is added
891 # only use this logic when the secret option is added
891 oldqbase = repo[qfinished[0]]
892 oldqbase = repo[qfinished[0]]
892 tphase = repo.ui.config('phases', 'new-commit', phases.draft)
893 tphase = repo.ui.config('phases', 'new-commit', phases.draft)
893 if oldqbase.phase() > tphase and oldqbase.p1().phase() <= tphase:
894 if oldqbase.phase() > tphase and oldqbase.p1().phase() <= tphase:
894 phases.advanceboundary(repo, tphase, qfinished)
895 phases.advanceboundary(repo, tphase, qfinished)
895
896
896 def delete(self, repo, patches, opts):
897 def delete(self, repo, patches, opts):
897 if not patches and not opts.get('rev'):
898 if not patches and not opts.get('rev'):
898 raise util.Abort(_('qdelete requires at least one revision or '
899 raise util.Abort(_('qdelete requires at least one revision or '
899 'patch name'))
900 'patch name'))
900
901
901 realpatches = []
902 realpatches = []
902 for patch in patches:
903 for patch in patches:
903 patch = self.lookup(patch, strict=True)
904 patch = self.lookup(patch, strict=True)
904 info = self.isapplied(patch)
905 info = self.isapplied(patch)
905 if info:
906 if info:
906 raise util.Abort(_("cannot delete applied patch %s") % patch)
907 raise util.Abort(_("cannot delete applied patch %s") % patch)
907 if patch not in self.series:
908 if patch not in self.series:
908 raise util.Abort(_("patch %s not in series file") % patch)
909 raise util.Abort(_("patch %s not in series file") % patch)
909 if patch not in realpatches:
910 if patch not in realpatches:
910 realpatches.append(patch)
911 realpatches.append(patch)
911
912
912 numrevs = 0
913 numrevs = 0
913 if opts.get('rev'):
914 if opts.get('rev'):
914 if not self.applied:
915 if not self.applied:
915 raise util.Abort(_('no patches applied'))
916 raise util.Abort(_('no patches applied'))
916 revs = scmutil.revrange(repo, opts.get('rev'))
917 revs = scmutil.revrange(repo, opts.get('rev'))
917 if len(revs) > 1 and revs[0] > revs[1]:
918 if len(revs) > 1 and revs[0] > revs[1]:
918 revs.reverse()
919 revs.reverse()
919 revpatches = self._revpatches(repo, revs)
920 revpatches = self._revpatches(repo, revs)
920 realpatches += revpatches
921 realpatches += revpatches
921 numrevs = len(revpatches)
922 numrevs = len(revpatches)
922
923
923 self._cleanup(realpatches, numrevs, opts.get('keep'))
924 self._cleanup(realpatches, numrevs, opts.get('keep'))
924
925
925 def checktoppatch(self, repo):
926 def checktoppatch(self, repo):
926 if self.applied:
927 if self.applied:
927 top = self.applied[-1].node
928 top = self.applied[-1].node
928 patch = self.applied[-1].name
929 patch = self.applied[-1].name
929 pp = repo.dirstate.parents()
930 pp = repo.dirstate.parents()
930 if top not in pp:
931 if top not in pp:
931 raise util.Abort(_("working directory revision is not qtip"))
932 raise util.Abort(_("working directory revision is not qtip"))
932 return top, patch
933 return top, patch
933 return None, None
934 return None, None
934
935
935 def checksubstate(self, repo):
936 def checksubstate(self, repo):
936 '''return list of subrepos at a different revision than substate.
937 '''return list of subrepos at a different revision than substate.
937 Abort if any subrepos have uncommitted changes.'''
938 Abort if any subrepos have uncommitted changes.'''
938 inclsubs = []
939 inclsubs = []
939 wctx = repo[None]
940 wctx = repo[None]
940 for s in wctx.substate:
941 for s in wctx.substate:
941 if wctx.sub(s).dirty(True):
942 if wctx.sub(s).dirty(True):
942 raise util.Abort(
943 raise util.Abort(
943 _("uncommitted changes in subrepository %s") % s)
944 _("uncommitted changes in subrepository %s") % s)
944 elif wctx.sub(s).dirty():
945 elif wctx.sub(s).dirty():
945 inclsubs.append(s)
946 inclsubs.append(s)
946 return inclsubs
947 return inclsubs
947
948
948 def localchangesfound(self, refresh=True):
949 def localchangesfound(self, refresh=True):
949 if refresh:
950 if refresh:
950 raise util.Abort(_("local changes found, refresh first"))
951 raise util.Abort(_("local changes found, refresh first"))
951 else:
952 else:
952 raise util.Abort(_("local changes found"))
953 raise util.Abort(_("local changes found"))
953
954
954 def checklocalchanges(self, repo, force=False, refresh=True):
955 def checklocalchanges(self, repo, force=False, refresh=True):
955 m, a, r, d = repo.status()[:4]
956 m, a, r, d = repo.status()[:4]
956 if (m or a or r or d) and not force:
957 if (m or a or r or d) and not force:
957 self.localchangesfound(refresh)
958 self.localchangesfound(refresh)
958 return m, a, r, d
959 return m, a, r, d
959
960
960 _reserved = ('series', 'status', 'guards', '.', '..')
961 _reserved = ('series', 'status', 'guards', '.', '..')
961 def checkreservedname(self, name):
962 def checkreservedname(self, name):
962 if name in self._reserved:
963 if name in self._reserved:
963 raise util.Abort(_('"%s" cannot be used as the name of a patch')
964 raise util.Abort(_('"%s" cannot be used as the name of a patch')
964 % name)
965 % name)
965 for prefix in ('.hg', '.mq'):
966 for prefix in ('.hg', '.mq'):
966 if name.startswith(prefix):
967 if name.startswith(prefix):
967 raise util.Abort(_('patch name cannot begin with "%s"')
968 raise util.Abort(_('patch name cannot begin with "%s"')
968 % prefix)
969 % prefix)
969 for c in ('#', ':'):
970 for c in ('#', ':'):
970 if c in name:
971 if c in name:
971 raise util.Abort(_('"%s" cannot be used in the name of a patch')
972 raise util.Abort(_('"%s" cannot be used in the name of a patch')
972 % c)
973 % c)
973
974
974 def checkpatchname(self, name, force=False):
975 def checkpatchname(self, name, force=False):
975 self.checkreservedname(name)
976 self.checkreservedname(name)
976 if not force and os.path.exists(self.join(name)):
977 if not force and os.path.exists(self.join(name)):
977 if os.path.isdir(self.join(name)):
978 if os.path.isdir(self.join(name)):
978 raise util.Abort(_('"%s" already exists as a directory')
979 raise util.Abort(_('"%s" already exists as a directory')
979 % name)
980 % name)
980 else:
981 else:
981 raise util.Abort(_('patch "%s" already exists') % name)
982 raise util.Abort(_('patch "%s" already exists') % name)
982
983
983 def checkkeepchanges(self, keepchanges, force):
984 def checkkeepchanges(self, keepchanges, force):
984 if force and keepchanges:
985 if force and keepchanges:
985 raise util.Abort(_('cannot use both --force and --keep-changes'))
986 raise util.Abort(_('cannot use both --force and --keep-changes'))
986
987
987 def new(self, repo, patchfn, *pats, **opts):
988 def new(self, repo, patchfn, *pats, **opts):
988 """options:
989 """options:
989 msg: a string or a no-argument function returning a string
990 msg: a string or a no-argument function returning a string
990 """
991 """
991 msg = opts.get('msg')
992 msg = opts.get('msg')
992 user = opts.get('user')
993 user = opts.get('user')
993 date = opts.get('date')
994 date = opts.get('date')
994 if date:
995 if date:
995 date = util.parsedate(date)
996 date = util.parsedate(date)
996 diffopts = self.diffopts({'git': opts.get('git')})
997 diffopts = self.diffopts({'git': opts.get('git')})
997 if opts.get('checkname', True):
998 if opts.get('checkname', True):
998 self.checkpatchname(patchfn)
999 self.checkpatchname(patchfn)
999 inclsubs = self.checksubstate(repo)
1000 inclsubs = self.checksubstate(repo)
1000 if inclsubs:
1001 if inclsubs:
1001 inclsubs.append('.hgsubstate')
1002 inclsubs.append('.hgsubstate')
1002 substatestate = repo.dirstate['.hgsubstate']
1003 substatestate = repo.dirstate['.hgsubstate']
1003 if opts.get('include') or opts.get('exclude') or pats:
1004 if opts.get('include') or opts.get('exclude') or pats:
1004 if inclsubs:
1005 if inclsubs:
1005 pats = list(pats or []) + inclsubs
1006 pats = list(pats or []) + inclsubs
1006 match = scmutil.match(repo[None], pats, opts)
1007 match = scmutil.match(repo[None], pats, opts)
1007 # detect missing files in pats
1008 # detect missing files in pats
1008 def badfn(f, msg):
1009 def badfn(f, msg):
1009 if f != '.hgsubstate': # .hgsubstate is auto-created
1010 if f != '.hgsubstate': # .hgsubstate is auto-created
1010 raise util.Abort('%s: %s' % (f, msg))
1011 raise util.Abort('%s: %s' % (f, msg))
1011 match.bad = badfn
1012 match.bad = badfn
1012 changes = repo.status(match=match)
1013 changes = repo.status(match=match)
1013 m, a, r, d = changes[:4]
1014 m, a, r, d = changes[:4]
1014 else:
1015 else:
1015 changes = self.checklocalchanges(repo, force=True)
1016 changes = self.checklocalchanges(repo, force=True)
1016 m, a, r, d = changes
1017 m, a, r, d = changes
1017 match = scmutil.matchfiles(repo, m + a + r + inclsubs)
1018 match = scmutil.matchfiles(repo, m + a + r + inclsubs)
1018 if len(repo[None].parents()) > 1:
1019 if len(repo[None].parents()) > 1:
1019 raise util.Abort(_('cannot manage merge changesets'))
1020 raise util.Abort(_('cannot manage merge changesets'))
1020 commitfiles = m + a + r
1021 commitfiles = m + a + r
1021 self.checktoppatch(repo)
1022 self.checktoppatch(repo)
1022 insert = self.fullseriesend()
1023 insert = self.fullseriesend()
1023 wlock = repo.wlock()
1024 wlock = repo.wlock()
1024 try:
1025 try:
1025 try:
1026 try:
1026 # if patch file write fails, abort early
1027 # if patch file write fails, abort early
1027 p = self.opener(patchfn, "w")
1028 p = self.opener(patchfn, "w")
1028 except IOError, e:
1029 except IOError, e:
1029 raise util.Abort(_('cannot write patch "%s": %s')
1030 raise util.Abort(_('cannot write patch "%s": %s')
1030 % (patchfn, e.strerror))
1031 % (patchfn, e.strerror))
1031 try:
1032 try:
1032 if self.plainmode:
1033 if self.plainmode:
1033 if user:
1034 if user:
1034 p.write("From: " + user + "\n")
1035 p.write("From: " + user + "\n")
1035 if not date:
1036 if not date:
1036 p.write("\n")
1037 p.write("\n")
1037 if date:
1038 if date:
1038 p.write("Date: %d %d\n\n" % date)
1039 p.write("Date: %d %d\n\n" % date)
1039 else:
1040 else:
1040 p.write("# HG changeset patch\n")
1041 p.write("# HG changeset patch\n")
1041 p.write("# Parent "
1042 p.write("# Parent "
1042 + hex(repo[None].p1().node()) + "\n")
1043 + hex(repo[None].p1().node()) + "\n")
1043 if user:
1044 if user:
1044 p.write("# User " + user + "\n")
1045 p.write("# User " + user + "\n")
1045 if date:
1046 if date:
1046 p.write("# Date %s %s\n\n" % date)
1047 p.write("# Date %s %s\n\n" % date)
1047 if util.safehasattr(msg, '__call__'):
1048 if util.safehasattr(msg, '__call__'):
1048 msg = msg()
1049 msg = msg()
1049 commitmsg = msg and msg or ("[mq]: %s" % patchfn)
1050 commitmsg = msg and msg or ("[mq]: %s" % patchfn)
1050 n = newcommit(repo, None, commitmsg, user, date, match=match,
1051 n = newcommit(repo, None, commitmsg, user, date, match=match,
1051 force=True)
1052 force=True)
1052 if n is None:
1053 if n is None:
1053 raise util.Abort(_("repo commit failed"))
1054 raise util.Abort(_("repo commit failed"))
1054 try:
1055 try:
1055 self.fullseries[insert:insert] = [patchfn]
1056 self.fullseries[insert:insert] = [patchfn]
1056 self.applied.append(statusentry(n, patchfn))
1057 self.applied.append(statusentry(n, patchfn))
1057 self.parseseries()
1058 self.parseseries()
1058 self.seriesdirty = True
1059 self.seriesdirty = True
1059 self.applieddirty = True
1060 self.applieddirty = True
1060 if msg:
1061 if msg:
1061 msg = msg + "\n\n"
1062 msg = msg + "\n\n"
1062 p.write(msg)
1063 p.write(msg)
1063 if commitfiles:
1064 if commitfiles:
1064 parent = self.qparents(repo, n)
1065 parent = self.qparents(repo, n)
1065 if inclsubs:
1066 if inclsubs:
1066 if substatestate in 'a?':
1067 if substatestate in 'a?':
1067 changes[1].append('.hgsubstate')
1068 changes[1].append('.hgsubstate')
1068 elif substatestate in 'r':
1069 elif substatestate in 'r':
1069 changes[2].append('.hgsubstate')
1070 changes[2].append('.hgsubstate')
1070 else: # modified
1071 else: # modified
1071 changes[0].append('.hgsubstate')
1072 changes[0].append('.hgsubstate')
1072 chunks = patchmod.diff(repo, node1=parent, node2=n,
1073 chunks = patchmod.diff(repo, node1=parent, node2=n,
1073 changes=changes, opts=diffopts)
1074 changes=changes, opts=diffopts)
1074 for chunk in chunks:
1075 for chunk in chunks:
1075 p.write(chunk)
1076 p.write(chunk)
1076 p.close()
1077 p.close()
1077 r = self.qrepo()
1078 r = self.qrepo()
1078 if r:
1079 if r:
1079 r[None].add([patchfn])
1080 r[None].add([patchfn])
1080 except: # re-raises
1081 except: # re-raises
1081 repo.rollback()
1082 repo.rollback()
1082 raise
1083 raise
1083 except Exception:
1084 except Exception:
1084 patchpath = self.join(patchfn)
1085 patchpath = self.join(patchfn)
1085 try:
1086 try:
1086 os.unlink(patchpath)
1087 os.unlink(patchpath)
1087 except OSError:
1088 except OSError:
1088 self.ui.warn(_('error unlinking %s\n') % patchpath)
1089 self.ui.warn(_('error unlinking %s\n') % patchpath)
1089 raise
1090 raise
1090 self.removeundo(repo)
1091 self.removeundo(repo)
1091 finally:
1092 finally:
1092 release(wlock)
1093 release(wlock)
1093
1094
1094 def strip(self, repo, revs, update=True, backup="all", force=None):
1095 def strip(self, repo, revs, update=True, backup="all", force=None):
1095 wlock = lock = None
1096 wlock = lock = None
1096 try:
1097 try:
1097 wlock = repo.wlock()
1098 wlock = repo.wlock()
1098 lock = repo.lock()
1099 lock = repo.lock()
1099
1100
1100 if update:
1101 if update:
1101 self.checklocalchanges(repo, force=force, refresh=False)
1102 self.checklocalchanges(repo, force=force, refresh=False)
1102 urev = self.qparents(repo, revs[0])
1103 urev = self.qparents(repo, revs[0])
1103 hg.clean(repo, urev)
1104 hg.clean(repo, urev)
1104 repo.dirstate.write()
1105 repo.dirstate.write()
1105
1106
1106 repair.strip(self.ui, repo, revs, backup)
1107 repair.strip(self.ui, repo, revs, backup)
1107 finally:
1108 finally:
1108 release(lock, wlock)
1109 release(lock, wlock)
1109
1110
1110 def isapplied(self, patch):
1111 def isapplied(self, patch):
1111 """returns (index, rev, patch)"""
1112 """returns (index, rev, patch)"""
1112 for i, a in enumerate(self.applied):
1113 for i, a in enumerate(self.applied):
1113 if a.name == patch:
1114 if a.name == patch:
1114 return (i, a.node, a.name)
1115 return (i, a.node, a.name)
1115 return None
1116 return None
1116
1117
1117 # if the exact patch name does not exist, we try a few
1118 # if the exact patch name does not exist, we try a few
1118 # variations. If strict is passed, we try only #1
1119 # variations. If strict is passed, we try only #1
1119 #
1120 #
1120 # 1) a number (as string) to indicate an offset in the series file
1121 # 1) a number (as string) to indicate an offset in the series file
1121 # 2) a unique substring of the patch name was given
1122 # 2) a unique substring of the patch name was given
1122 # 3) patchname[-+]num to indicate an offset in the series file
1123 # 3) patchname[-+]num to indicate an offset in the series file
1123 def lookup(self, patch, strict=False):
1124 def lookup(self, patch, strict=False):
1124 def partialname(s):
1125 def partialname(s):
1125 if s in self.series:
1126 if s in self.series:
1126 return s
1127 return s
1127 matches = [x for x in self.series if s in x]
1128 matches = [x for x in self.series if s in x]
1128 if len(matches) > 1:
1129 if len(matches) > 1:
1129 self.ui.warn(_('patch name "%s" is ambiguous:\n') % s)
1130 self.ui.warn(_('patch name "%s" is ambiguous:\n') % s)
1130 for m in matches:
1131 for m in matches:
1131 self.ui.warn(' %s\n' % m)
1132 self.ui.warn(' %s\n' % m)
1132 return None
1133 return None
1133 if matches:
1134 if matches:
1134 return matches[0]
1135 return matches[0]
1135 if self.series and self.applied:
1136 if self.series and self.applied:
1136 if s == 'qtip':
1137 if s == 'qtip':
1137 return self.series[self.seriesend(True)-1]
1138 return self.series[self.seriesend(True)-1]
1138 if s == 'qbase':
1139 if s == 'qbase':
1139 return self.series[0]
1140 return self.series[0]
1140 return None
1141 return None
1141
1142
1142 if patch in self.series:
1143 if patch in self.series:
1143 return patch
1144 return patch
1144
1145
1145 if not os.path.isfile(self.join(patch)):
1146 if not os.path.isfile(self.join(patch)):
1146 try:
1147 try:
1147 sno = int(patch)
1148 sno = int(patch)
1148 except (ValueError, OverflowError):
1149 except (ValueError, OverflowError):
1149 pass
1150 pass
1150 else:
1151 else:
1151 if -len(self.series) <= sno < len(self.series):
1152 if -len(self.series) <= sno < len(self.series):
1152 return self.series[sno]
1153 return self.series[sno]
1153
1154
1154 if not strict:
1155 if not strict:
1155 res = partialname(patch)
1156 res = partialname(patch)
1156 if res:
1157 if res:
1157 return res
1158 return res
1158 minus = patch.rfind('-')
1159 minus = patch.rfind('-')
1159 if minus >= 0:
1160 if minus >= 0:
1160 res = partialname(patch[:minus])
1161 res = partialname(patch[:minus])
1161 if res:
1162 if res:
1162 i = self.series.index(res)
1163 i = self.series.index(res)
1163 try:
1164 try:
1164 off = int(patch[minus + 1:] or 1)
1165 off = int(patch[minus + 1:] or 1)
1165 except (ValueError, OverflowError):
1166 except (ValueError, OverflowError):
1166 pass
1167 pass
1167 else:
1168 else:
1168 if i - off >= 0:
1169 if i - off >= 0:
1169 return self.series[i - off]
1170 return self.series[i - off]
1170 plus = patch.rfind('+')
1171 plus = patch.rfind('+')
1171 if plus >= 0:
1172 if plus >= 0:
1172 res = partialname(patch[:plus])
1173 res = partialname(patch[:plus])
1173 if res:
1174 if res:
1174 i = self.series.index(res)
1175 i = self.series.index(res)
1175 try:
1176 try:
1176 off = int(patch[plus + 1:] or 1)
1177 off = int(patch[plus + 1:] or 1)
1177 except (ValueError, OverflowError):
1178 except (ValueError, OverflowError):
1178 pass
1179 pass
1179 else:
1180 else:
1180 if i + off < len(self.series):
1181 if i + off < len(self.series):
1181 return self.series[i + off]
1182 return self.series[i + off]
1182 raise util.Abort(_("patch %s not in series") % patch)
1183 raise util.Abort(_("patch %s not in series") % patch)
1183
1184
1184 def push(self, repo, patch=None, force=False, list=False, mergeq=None,
1185 def push(self, repo, patch=None, force=False, list=False, mergeq=None,
1185 all=False, move=False, exact=False, nobackup=False,
1186 all=False, move=False, exact=False, nobackup=False,
1186 keepchanges=False):
1187 keepchanges=False):
1187 self.checkkeepchanges(keepchanges, force)
1188 self.checkkeepchanges(keepchanges, force)
1188 diffopts = self.diffopts()
1189 diffopts = self.diffopts()
1189 wlock = repo.wlock()
1190 wlock = repo.wlock()
1190 try:
1191 try:
1191 heads = []
1192 heads = []
1192 for b, ls in repo.branchmap().iteritems():
1193 for b, ls in repo.branchmap().iteritems():
1193 heads += ls
1194 heads += ls
1194 if not heads:
1195 if not heads:
1195 heads = [nullid]
1196 heads = [nullid]
1196 if repo.dirstate.p1() not in heads and not exact:
1197 if repo.dirstate.p1() not in heads and not exact:
1197 self.ui.status(_("(working directory not at a head)\n"))
1198 self.ui.status(_("(working directory not at a head)\n"))
1198
1199
1199 if not self.series:
1200 if not self.series:
1200 self.ui.warn(_('no patches in series\n'))
1201 self.ui.warn(_('no patches in series\n'))
1201 return 0
1202 return 0
1202
1203
1203 # Suppose our series file is: A B C and the current 'top'
1204 # Suppose our series file is: A B C and the current 'top'
1204 # patch is B. qpush C should be performed (moving forward)
1205 # patch is B. qpush C should be performed (moving forward)
1205 # qpush B is a NOP (no change) qpush A is an error (can't
1206 # qpush B is a NOP (no change) qpush A is an error (can't
1206 # go backwards with qpush)
1207 # go backwards with qpush)
1207 if patch:
1208 if patch:
1208 patch = self.lookup(patch)
1209 patch = self.lookup(patch)
1209 info = self.isapplied(patch)
1210 info = self.isapplied(patch)
1210 if info and info[0] >= len(self.applied) - 1:
1211 if info and info[0] >= len(self.applied) - 1:
1211 self.ui.warn(
1212 self.ui.warn(
1212 _('qpush: %s is already at the top\n') % patch)
1213 _('qpush: %s is already at the top\n') % patch)
1213 return 0
1214 return 0
1214
1215
1215 pushable, reason = self.pushable(patch)
1216 pushable, reason = self.pushable(patch)
1216 if pushable:
1217 if pushable:
1217 if self.series.index(patch) < self.seriesend():
1218 if self.series.index(patch) < self.seriesend():
1218 raise util.Abort(
1219 raise util.Abort(
1219 _("cannot push to a previous patch: %s") % patch)
1220 _("cannot push to a previous patch: %s") % patch)
1220 else:
1221 else:
1221 if reason:
1222 if reason:
1222 reason = _('guarded by %s') % reason
1223 reason = _('guarded by %s') % reason
1223 else:
1224 else:
1224 reason = _('no matching guards')
1225 reason = _('no matching guards')
1225 self.ui.warn(_("cannot push '%s' - %s\n") % (patch, reason))
1226 self.ui.warn(_("cannot push '%s' - %s\n") % (patch, reason))
1226 return 1
1227 return 1
1227 elif all:
1228 elif all:
1228 patch = self.series[-1]
1229 patch = self.series[-1]
1229 if self.isapplied(patch):
1230 if self.isapplied(patch):
1230 self.ui.warn(_('all patches are currently applied\n'))
1231 self.ui.warn(_('all patches are currently applied\n'))
1231 return 0
1232 return 0
1232
1233
1233 # Following the above example, starting at 'top' of B:
1234 # Following the above example, starting at 'top' of B:
1234 # qpush should be performed (pushes C), but a subsequent
1235 # qpush should be performed (pushes C), but a subsequent
1235 # qpush without an argument is an error (nothing to
1236 # qpush without an argument is an error (nothing to
1236 # apply). This allows a loop of "...while hg qpush..." to
1237 # apply). This allows a loop of "...while hg qpush..." to
1237 # work as it detects an error when done
1238 # work as it detects an error when done
1238 start = self.seriesend()
1239 start = self.seriesend()
1239 if start == len(self.series):
1240 if start == len(self.series):
1240 self.ui.warn(_('patch series already fully applied\n'))
1241 self.ui.warn(_('patch series already fully applied\n'))
1241 return 1
1242 return 1
1242 if not force and not keepchanges:
1243 if not force and not keepchanges:
1243 self.checklocalchanges(repo, refresh=self.applied)
1244 self.checklocalchanges(repo, refresh=self.applied)
1244
1245
1245 if exact:
1246 if exact:
1246 if keepchanges:
1247 if keepchanges:
1247 raise util.Abort(
1248 raise util.Abort(
1248 _("cannot use --exact and --keep-changes together"))
1249 _("cannot use --exact and --keep-changes together"))
1249 if move:
1250 if move:
1250 raise util.Abort(_('cannot use --exact and --move '
1251 raise util.Abort(_('cannot use --exact and --move '
1251 'together'))
1252 'together'))
1252 if self.applied:
1253 if self.applied:
1253 raise util.Abort(_('cannot push --exact with applied '
1254 raise util.Abort(_('cannot push --exact with applied '
1254 'patches'))
1255 'patches'))
1255 root = self.series[start]
1256 root = self.series[start]
1256 target = patchheader(self.join(root), self.plainmode).parent
1257 target = patchheader(self.join(root), self.plainmode).parent
1257 if not target:
1258 if not target:
1258 raise util.Abort(
1259 raise util.Abort(
1259 _("%s does not have a parent recorded") % root)
1260 _("%s does not have a parent recorded") % root)
1260 if not repo[target] == repo['.']:
1261 if not repo[target] == repo['.']:
1261 hg.update(repo, target)
1262 hg.update(repo, target)
1262
1263
1263 if move:
1264 if move:
1264 if not patch:
1265 if not patch:
1265 raise util.Abort(_("please specify the patch to move"))
1266 raise util.Abort(_("please specify the patch to move"))
1266 for fullstart, rpn in enumerate(self.fullseries):
1267 for fullstart, rpn in enumerate(self.fullseries):
1267 # strip markers for patch guards
1268 # strip markers for patch guards
1268 if self.guard_re.split(rpn, 1)[0] == self.series[start]:
1269 if self.guard_re.split(rpn, 1)[0] == self.series[start]:
1269 break
1270 break
1270 for i, rpn in enumerate(self.fullseries[fullstart:]):
1271 for i, rpn in enumerate(self.fullseries[fullstart:]):
1271 # strip markers for patch guards
1272 # strip markers for patch guards
1272 if self.guard_re.split(rpn, 1)[0] == patch:
1273 if self.guard_re.split(rpn, 1)[0] == patch:
1273 break
1274 break
1274 index = fullstart + i
1275 index = fullstart + i
1275 assert index < len(self.fullseries)
1276 assert index < len(self.fullseries)
1276 fullpatch = self.fullseries[index]
1277 fullpatch = self.fullseries[index]
1277 del self.fullseries[index]
1278 del self.fullseries[index]
1278 self.fullseries.insert(fullstart, fullpatch)
1279 self.fullseries.insert(fullstart, fullpatch)
1279 self.parseseries()
1280 self.parseseries()
1280 self.seriesdirty = True
1281 self.seriesdirty = True
1281
1282
1282 self.applieddirty = True
1283 self.applieddirty = True
1283 if start > 0:
1284 if start > 0:
1284 self.checktoppatch(repo)
1285 self.checktoppatch(repo)
1285 if not patch:
1286 if not patch:
1286 patch = self.series[start]
1287 patch = self.series[start]
1287 end = start + 1
1288 end = start + 1
1288 else:
1289 else:
1289 end = self.series.index(patch, start) + 1
1290 end = self.series.index(patch, start) + 1
1290
1291
1291 tobackup = set()
1292 tobackup = set()
1292 if (not nobackup and force) or keepchanges:
1293 if (not nobackup and force) or keepchanges:
1293 m, a, r, d = self.checklocalchanges(repo, force=True)
1294 m, a, r, d = self.checklocalchanges(repo, force=True)
1294 if keepchanges:
1295 if keepchanges:
1295 tobackup.update(m + a + r + d)
1296 tobackup.update(m + a + r + d)
1296 else:
1297 else:
1297 tobackup.update(m + a)
1298 tobackup.update(m + a)
1298
1299
1299 s = self.series[start:end]
1300 s = self.series[start:end]
1300 all_files = set()
1301 all_files = set()
1301 try:
1302 try:
1302 if mergeq:
1303 if mergeq:
1303 ret = self.mergepatch(repo, mergeq, s, diffopts)
1304 ret = self.mergepatch(repo, mergeq, s, diffopts)
1304 else:
1305 else:
1305 ret = self.apply(repo, s, list, all_files=all_files,
1306 ret = self.apply(repo, s, list, all_files=all_files,
1306 tobackup=tobackup, keepchanges=keepchanges)
1307 tobackup=tobackup, keepchanges=keepchanges)
1307 except: # re-raises
1308 except: # re-raises
1308 self.ui.warn(_('cleaning up working directory...'))
1309 self.ui.warn(_('cleaning up working directory...'))
1309 node = repo.dirstate.p1()
1310 node = repo.dirstate.p1()
1310 hg.revert(repo, node, None)
1311 hg.revert(repo, node, None)
1311 # only remove unknown files that we know we touched or
1312 # only remove unknown files that we know we touched or
1312 # created while patching
1313 # created while patching
1313 for f in all_files:
1314 for f in all_files:
1314 if f not in repo.dirstate:
1315 if f not in repo.dirstate:
1315 try:
1316 try:
1316 util.unlinkpath(repo.wjoin(f))
1317 util.unlinkpath(repo.wjoin(f))
1317 except OSError, inst:
1318 except OSError, inst:
1318 if inst.errno != errno.ENOENT:
1319 if inst.errno != errno.ENOENT:
1319 raise
1320 raise
1320 self.ui.warn(_('done\n'))
1321 self.ui.warn(_('done\n'))
1321 raise
1322 raise
1322
1323
1323 if not self.applied:
1324 if not self.applied:
1324 return ret[0]
1325 return ret[0]
1325 top = self.applied[-1].name
1326 top = self.applied[-1].name
1326 if ret[0] and ret[0] > 1:
1327 if ret[0] and ret[0] > 1:
1327 msg = _("errors during apply, please fix and refresh %s\n")
1328 msg = _("errors during apply, please fix and refresh %s\n")
1328 self.ui.write(msg % top)
1329 self.ui.write(msg % top)
1329 else:
1330 else:
1330 self.ui.write(_("now at: %s\n") % top)
1331 self.ui.write(_("now at: %s\n") % top)
1331 return ret[0]
1332 return ret[0]
1332
1333
1333 finally:
1334 finally:
1334 wlock.release()
1335 wlock.release()
1335
1336
1336 def pop(self, repo, patch=None, force=False, update=True, all=False,
1337 def pop(self, repo, patch=None, force=False, update=True, all=False,
1337 nobackup=False, keepchanges=False):
1338 nobackup=False, keepchanges=False):
1338 self.checkkeepchanges(keepchanges, force)
1339 self.checkkeepchanges(keepchanges, force)
1339 wlock = repo.wlock()
1340 wlock = repo.wlock()
1340 try:
1341 try:
1341 if patch:
1342 if patch:
1342 # index, rev, patch
1343 # index, rev, patch
1343 info = self.isapplied(patch)
1344 info = self.isapplied(patch)
1344 if not info:
1345 if not info:
1345 patch = self.lookup(patch)
1346 patch = self.lookup(patch)
1346 info = self.isapplied(patch)
1347 info = self.isapplied(patch)
1347 if not info:
1348 if not info:
1348 raise util.Abort(_("patch %s is not applied") % patch)
1349 raise util.Abort(_("patch %s is not applied") % patch)
1349
1350
1350 if not self.applied:
1351 if not self.applied:
1351 # Allow qpop -a to work repeatedly,
1352 # Allow qpop -a to work repeatedly,
1352 # but not qpop without an argument
1353 # but not qpop without an argument
1353 self.ui.warn(_("no patches applied\n"))
1354 self.ui.warn(_("no patches applied\n"))
1354 return not all
1355 return not all
1355
1356
1356 if all:
1357 if all:
1357 start = 0
1358 start = 0
1358 elif patch:
1359 elif patch:
1359 start = info[0] + 1
1360 start = info[0] + 1
1360 else:
1361 else:
1361 start = len(self.applied) - 1
1362 start = len(self.applied) - 1
1362
1363
1363 if start >= len(self.applied):
1364 if start >= len(self.applied):
1364 self.ui.warn(_("qpop: %s is already at the top\n") % patch)
1365 self.ui.warn(_("qpop: %s is already at the top\n") % patch)
1365 return
1366 return
1366
1367
1367 if not update:
1368 if not update:
1368 parents = repo.dirstate.parents()
1369 parents = repo.dirstate.parents()
1369 rr = [x.node for x in self.applied]
1370 rr = [x.node for x in self.applied]
1370 for p in parents:
1371 for p in parents:
1371 if p in rr:
1372 if p in rr:
1372 self.ui.warn(_("qpop: forcing dirstate update\n"))
1373 self.ui.warn(_("qpop: forcing dirstate update\n"))
1373 update = True
1374 update = True
1374 else:
1375 else:
1375 parents = [p.node() for p in repo[None].parents()]
1376 parents = [p.node() for p in repo[None].parents()]
1376 needupdate = False
1377 needupdate = False
1377 for entry in self.applied[start:]:
1378 for entry in self.applied[start:]:
1378 if entry.node in parents:
1379 if entry.node in parents:
1379 needupdate = True
1380 needupdate = True
1380 break
1381 break
1381 update = needupdate
1382 update = needupdate
1382
1383
1383 tobackup = set()
1384 tobackup = set()
1384 if update:
1385 if update:
1385 m, a, r, d = self.checklocalchanges(
1386 m, a, r, d = self.checklocalchanges(
1386 repo, force=force or keepchanges)
1387 repo, force=force or keepchanges)
1387 if force:
1388 if force:
1388 if not nobackup:
1389 if not nobackup:
1389 tobackup.update(m + a)
1390 tobackup.update(m + a)
1390 elif keepchanges:
1391 elif keepchanges:
1391 tobackup.update(m + a + r + d)
1392 tobackup.update(m + a + r + d)
1392
1393
1393 self.applieddirty = True
1394 self.applieddirty = True
1394 end = len(self.applied)
1395 end = len(self.applied)
1395 rev = self.applied[start].node
1396 rev = self.applied[start].node
1396 if update:
1397 if update:
1397 top = self.checktoppatch(repo)[0]
1398 top = self.checktoppatch(repo)[0]
1398
1399
1399 try:
1400 try:
1400 heads = repo.changelog.heads(rev)
1401 heads = repo.changelog.heads(rev)
1401 except error.LookupError:
1402 except error.LookupError:
1402 node = short(rev)
1403 node = short(rev)
1403 raise util.Abort(_('trying to pop unknown node %s') % node)
1404 raise util.Abort(_('trying to pop unknown node %s') % node)
1404
1405
1405 if heads != [self.applied[-1].node]:
1406 if heads != [self.applied[-1].node]:
1406 raise util.Abort(_("popping would remove a revision not "
1407 raise util.Abort(_("popping would remove a revision not "
1407 "managed by this patch queue"))
1408 "managed by this patch queue"))
1408 if not repo[self.applied[-1].node].mutable():
1409 if not repo[self.applied[-1].node].mutable():
1409 raise util.Abort(
1410 raise util.Abort(
1410 _("popping would remove an immutable revision"),
1411 _("popping would remove an immutable revision"),
1411 hint=_('see "hg help phases" for details'))
1412 hint=_('see "hg help phases" for details'))
1412
1413
1413 # we know there are no local changes, so we can make a simplified
1414 # we know there are no local changes, so we can make a simplified
1414 # form of hg.update.
1415 # form of hg.update.
1415 if update:
1416 if update:
1416 qp = self.qparents(repo, rev)
1417 qp = self.qparents(repo, rev)
1417 ctx = repo[qp]
1418 ctx = repo[qp]
1418 m, a, r, d = repo.status(qp, top)[:4]
1419 m, a, r, d = repo.status(qp, top)[:4]
1419 if d:
1420 if d:
1420 raise util.Abort(_("deletions found between repo revs"))
1421 raise util.Abort(_("deletions found between repo revs"))
1421
1422
1422 tobackup = set(a + m + r) & tobackup
1423 tobackup = set(a + m + r) & tobackup
1423 if keepchanges and tobackup:
1424 if keepchanges and tobackup:
1424 self.localchangesfound()
1425 self.localchangesfound()
1425 self.backup(repo, tobackup)
1426 self.backup(repo, tobackup)
1426
1427
1427 for f in a:
1428 for f in a:
1428 try:
1429 try:
1429 util.unlinkpath(repo.wjoin(f))
1430 util.unlinkpath(repo.wjoin(f))
1430 except OSError, e:
1431 except OSError, e:
1431 if e.errno != errno.ENOENT:
1432 if e.errno != errno.ENOENT:
1432 raise
1433 raise
1433 repo.dirstate.drop(f)
1434 repo.dirstate.drop(f)
1434 for f in m + r:
1435 for f in m + r:
1435 fctx = ctx[f]
1436 fctx = ctx[f]
1436 repo.wwrite(f, fctx.data(), fctx.flags())
1437 repo.wwrite(f, fctx.data(), fctx.flags())
1437 repo.dirstate.normal(f)
1438 repo.dirstate.normal(f)
1438 repo.setparents(qp, nullid)
1439 repo.setparents(qp, nullid)
1439 for patch in reversed(self.applied[start:end]):
1440 for patch in reversed(self.applied[start:end]):
1440 self.ui.status(_("popping %s\n") % patch.name)
1441 self.ui.status(_("popping %s\n") % patch.name)
1441 del self.applied[start:end]
1442 del self.applied[start:end]
1442 self.strip(repo, [rev], update=False, backup='strip')
1443 self.strip(repo, [rev], update=False, backup='strip')
1443 if self.applied:
1444 if self.applied:
1444 self.ui.write(_("now at: %s\n") % self.applied[-1].name)
1445 self.ui.write(_("now at: %s\n") % self.applied[-1].name)
1445 else:
1446 else:
1446 self.ui.write(_("patch queue now empty\n"))
1447 self.ui.write(_("patch queue now empty\n"))
1447 finally:
1448 finally:
1448 wlock.release()
1449 wlock.release()
1449
1450
1450 def diff(self, repo, pats, opts):
1451 def diff(self, repo, pats, opts):
1451 top, patch = self.checktoppatch(repo)
1452 top, patch = self.checktoppatch(repo)
1452 if not top:
1453 if not top:
1453 self.ui.write(_("no patches applied\n"))
1454 self.ui.write(_("no patches applied\n"))
1454 return
1455 return
1455 qp = self.qparents(repo, top)
1456 qp = self.qparents(repo, top)
1456 if opts.get('reverse'):
1457 if opts.get('reverse'):
1457 node1, node2 = None, qp
1458 node1, node2 = None, qp
1458 else:
1459 else:
1459 node1, node2 = qp, None
1460 node1, node2 = qp, None
1460 diffopts = self.diffopts(opts, patch)
1461 diffopts = self.diffopts(opts, patch)
1461 self.printdiff(repo, diffopts, node1, node2, files=pats, opts=opts)
1462 self.printdiff(repo, diffopts, node1, node2, files=pats, opts=opts)
1462
1463
1463 def refresh(self, repo, pats=None, **opts):
1464 def refresh(self, repo, pats=None, **opts):
1464 if not self.applied:
1465 if not self.applied:
1465 self.ui.write(_("no patches applied\n"))
1466 self.ui.write(_("no patches applied\n"))
1466 return 1
1467 return 1
1467 msg = opts.get('msg', '').rstrip()
1468 msg = opts.get('msg', '').rstrip()
1468 newuser = opts.get('user')
1469 newuser = opts.get('user')
1469 newdate = opts.get('date')
1470 newdate = opts.get('date')
1470 if newdate:
1471 if newdate:
1471 newdate = '%d %d' % util.parsedate(newdate)
1472 newdate = '%d %d' % util.parsedate(newdate)
1472 wlock = repo.wlock()
1473 wlock = repo.wlock()
1473
1474
1474 try:
1475 try:
1475 self.checktoppatch(repo)
1476 self.checktoppatch(repo)
1476 (top, patchfn) = (self.applied[-1].node, self.applied[-1].name)
1477 (top, patchfn) = (self.applied[-1].node, self.applied[-1].name)
1477 if repo.changelog.heads(top) != [top]:
1478 if repo.changelog.heads(top) != [top]:
1478 raise util.Abort(_("cannot refresh a revision with children"))
1479 raise util.Abort(_("cannot refresh a revision with children"))
1479 if not repo[top].mutable():
1480 if not repo[top].mutable():
1480 raise util.Abort(_("cannot refresh immutable revision"),
1481 raise util.Abort(_("cannot refresh immutable revision"),
1481 hint=_('see "hg help phases" for details'))
1482 hint=_('see "hg help phases" for details'))
1482
1483
1483 inclsubs = self.checksubstate(repo)
1484 inclsubs = self.checksubstate(repo)
1484
1485
1485 cparents = repo.changelog.parents(top)
1486 cparents = repo.changelog.parents(top)
1486 patchparent = self.qparents(repo, top)
1487 patchparent = self.qparents(repo, top)
1487 ph = patchheader(self.join(patchfn), self.plainmode)
1488 ph = patchheader(self.join(patchfn), self.plainmode)
1488 diffopts = self.diffopts({'git': opts.get('git')}, patchfn)
1489 diffopts = self.diffopts({'git': opts.get('git')}, patchfn)
1489 if msg:
1490 if msg:
1490 ph.setmessage(msg)
1491 ph.setmessage(msg)
1491 if newuser:
1492 if newuser:
1492 ph.setuser(newuser)
1493 ph.setuser(newuser)
1493 if newdate:
1494 if newdate:
1494 ph.setdate(newdate)
1495 ph.setdate(newdate)
1495 ph.setparent(hex(patchparent))
1496 ph.setparent(hex(patchparent))
1496
1497
1497 # only commit new patch when write is complete
1498 # only commit new patch when write is complete
1498 patchf = self.opener(patchfn, 'w', atomictemp=True)
1499 patchf = self.opener(patchfn, 'w', atomictemp=True)
1499
1500
1500 comments = str(ph)
1501 comments = str(ph)
1501 if comments:
1502 if comments:
1502 patchf.write(comments)
1503 patchf.write(comments)
1503
1504
1504 # update the dirstate in place, strip off the qtip commit
1505 # update the dirstate in place, strip off the qtip commit
1505 # and then commit.
1506 # and then commit.
1506 #
1507 #
1507 # this should really read:
1508 # this should really read:
1508 # mm, dd, aa = repo.status(top, patchparent)[:3]
1509 # mm, dd, aa = repo.status(top, patchparent)[:3]
1509 # but we do it backwards to take advantage of manifest/chlog
1510 # but we do it backwards to take advantage of manifest/chlog
1510 # caching against the next repo.status call
1511 # caching against the next repo.status call
1511 mm, aa, dd = repo.status(patchparent, top)[:3]
1512 mm, aa, dd = repo.status(patchparent, top)[:3]
1512 changes = repo.changelog.read(top)
1513 changes = repo.changelog.read(top)
1513 man = repo.manifest.read(changes[0])
1514 man = repo.manifest.read(changes[0])
1514 aaa = aa[:]
1515 aaa = aa[:]
1515 matchfn = scmutil.match(repo[None], pats, opts)
1516 matchfn = scmutil.match(repo[None], pats, opts)
1516 # in short mode, we only diff the files included in the
1517 # in short mode, we only diff the files included in the
1517 # patch already plus specified files
1518 # patch already plus specified files
1518 if opts.get('short'):
1519 if opts.get('short'):
1519 # if amending a patch, we start with existing
1520 # if amending a patch, we start with existing
1520 # files plus specified files - unfiltered
1521 # files plus specified files - unfiltered
1521 match = scmutil.matchfiles(repo, mm + aa + dd + matchfn.files())
1522 match = scmutil.matchfiles(repo, mm + aa + dd + matchfn.files())
1522 # filter with include/exclude options
1523 # filter with include/exclude options
1523 matchfn = scmutil.match(repo[None], opts=opts)
1524 matchfn = scmutil.match(repo[None], opts=opts)
1524 else:
1525 else:
1525 match = scmutil.matchall(repo)
1526 match = scmutil.matchall(repo)
1526 m, a, r, d = repo.status(match=match)[:4]
1527 m, a, r, d = repo.status(match=match)[:4]
1527 mm = set(mm)
1528 mm = set(mm)
1528 aa = set(aa)
1529 aa = set(aa)
1529 dd = set(dd)
1530 dd = set(dd)
1530
1531
1531 # we might end up with files that were added between
1532 # we might end up with files that were added between
1532 # qtip and the dirstate parent, but then changed in the
1533 # qtip and the dirstate parent, but then changed in the
1533 # local dirstate. in this case, we want them to only
1534 # local dirstate. in this case, we want them to only
1534 # show up in the added section
1535 # show up in the added section
1535 for x in m:
1536 for x in m:
1536 if x not in aa:
1537 if x not in aa:
1537 mm.add(x)
1538 mm.add(x)
1538 # we might end up with files added by the local dirstate that
1539 # we might end up with files added by the local dirstate that
1539 # were deleted by the patch. In this case, they should only
1540 # were deleted by the patch. In this case, they should only
1540 # show up in the changed section.
1541 # show up in the changed section.
1541 for x in a:
1542 for x in a:
1542 if x in dd:
1543 if x in dd:
1543 dd.remove(x)
1544 dd.remove(x)
1544 mm.add(x)
1545 mm.add(x)
1545 else:
1546 else:
1546 aa.add(x)
1547 aa.add(x)
1547 # make sure any files deleted in the local dirstate
1548 # make sure any files deleted in the local dirstate
1548 # are not in the add or change column of the patch
1549 # are not in the add or change column of the patch
1549 forget = []
1550 forget = []
1550 for x in d + r:
1551 for x in d + r:
1551 if x in aa:
1552 if x in aa:
1552 aa.remove(x)
1553 aa.remove(x)
1553 forget.append(x)
1554 forget.append(x)
1554 continue
1555 continue
1555 else:
1556 else:
1556 mm.discard(x)
1557 mm.discard(x)
1557 dd.add(x)
1558 dd.add(x)
1558
1559
1559 m = list(mm)
1560 m = list(mm)
1560 r = list(dd)
1561 r = list(dd)
1561 a = list(aa)
1562 a = list(aa)
1562 c = [filter(matchfn, l) for l in (m, a, r)]
1563 c = [filter(matchfn, l) for l in (m, a, r)]
1563 match = scmutil.matchfiles(repo, set(c[0] + c[1] + c[2] + inclsubs))
1564 match = scmutil.matchfiles(repo, set(c[0] + c[1] + c[2] + inclsubs))
1564 chunks = patchmod.diff(repo, patchparent, match=match,
1565 chunks = patchmod.diff(repo, patchparent, match=match,
1565 changes=c, opts=diffopts)
1566 changes=c, opts=diffopts)
1566 for chunk in chunks:
1567 for chunk in chunks:
1567 patchf.write(chunk)
1568 patchf.write(chunk)
1568
1569
1569 try:
1570 try:
1570 if diffopts.git or diffopts.upgrade:
1571 if diffopts.git or diffopts.upgrade:
1571 copies = {}
1572 copies = {}
1572 for dst in a:
1573 for dst in a:
1573 src = repo.dirstate.copied(dst)
1574 src = repo.dirstate.copied(dst)
1574 # during qfold, the source file for copies may
1575 # during qfold, the source file for copies may
1575 # be removed. Treat this as a simple add.
1576 # be removed. Treat this as a simple add.
1576 if src is not None and src in repo.dirstate:
1577 if src is not None and src in repo.dirstate:
1577 copies.setdefault(src, []).append(dst)
1578 copies.setdefault(src, []).append(dst)
1578 repo.dirstate.add(dst)
1579 repo.dirstate.add(dst)
1579 # remember the copies between patchparent and qtip
1580 # remember the copies between patchparent and qtip
1580 for dst in aaa:
1581 for dst in aaa:
1581 f = repo.file(dst)
1582 f = repo.file(dst)
1582 src = f.renamed(man[dst])
1583 src = f.renamed(man[dst])
1583 if src:
1584 if src:
1584 copies.setdefault(src[0], []).extend(
1585 copies.setdefault(src[0], []).extend(
1585 copies.get(dst, []))
1586 copies.get(dst, []))
1586 if dst in a:
1587 if dst in a:
1587 copies[src[0]].append(dst)
1588 copies[src[0]].append(dst)
1588 # we can't copy a file created by the patch itself
1589 # we can't copy a file created by the patch itself
1589 if dst in copies:
1590 if dst in copies:
1590 del copies[dst]
1591 del copies[dst]
1591 for src, dsts in copies.iteritems():
1592 for src, dsts in copies.iteritems():
1592 for dst in dsts:
1593 for dst in dsts:
1593 repo.dirstate.copy(src, dst)
1594 repo.dirstate.copy(src, dst)
1594 else:
1595 else:
1595 for dst in a:
1596 for dst in a:
1596 repo.dirstate.add(dst)
1597 repo.dirstate.add(dst)
1597 # Drop useless copy information
1598 # Drop useless copy information
1598 for f in list(repo.dirstate.copies()):
1599 for f in list(repo.dirstate.copies()):
1599 repo.dirstate.copy(None, f)
1600 repo.dirstate.copy(None, f)
1600 for f in r:
1601 for f in r:
1601 repo.dirstate.remove(f)
1602 repo.dirstate.remove(f)
1602 # if the patch excludes a modified file, mark that
1603 # if the patch excludes a modified file, mark that
1603 # file with mtime=0 so status can see it.
1604 # file with mtime=0 so status can see it.
1604 mm = []
1605 mm = []
1605 for i in xrange(len(m)-1, -1, -1):
1606 for i in xrange(len(m)-1, -1, -1):
1606 if not matchfn(m[i]):
1607 if not matchfn(m[i]):
1607 mm.append(m[i])
1608 mm.append(m[i])
1608 del m[i]
1609 del m[i]
1609 for f in m:
1610 for f in m:
1610 repo.dirstate.normal(f)
1611 repo.dirstate.normal(f)
1611 for f in mm:
1612 for f in mm:
1612 repo.dirstate.normallookup(f)
1613 repo.dirstate.normallookup(f)
1613 for f in forget:
1614 for f in forget:
1614 repo.dirstate.drop(f)
1615 repo.dirstate.drop(f)
1615
1616
1616 if not msg:
1617 if not msg:
1617 if not ph.message:
1618 if not ph.message:
1618 message = "[mq]: %s\n" % patchfn
1619 message = "[mq]: %s\n" % patchfn
1619 else:
1620 else:
1620 message = "\n".join(ph.message)
1621 message = "\n".join(ph.message)
1621 else:
1622 else:
1622 message = msg
1623 message = msg
1623
1624
1624 user = ph.user or changes[1]
1625 user = ph.user or changes[1]
1625
1626
1626 oldphase = repo[top].phase()
1627 oldphase = repo[top].phase()
1627
1628
1628 # assumes strip can roll itself back if interrupted
1629 # assumes strip can roll itself back if interrupted
1629 repo.setparents(*cparents)
1630 repo.setparents(*cparents)
1630 self.applied.pop()
1631 self.applied.pop()
1631 self.applieddirty = True
1632 self.applieddirty = True
1632 self.strip(repo, [top], update=False,
1633 self.strip(repo, [top], update=False,
1633 backup='strip')
1634 backup='strip')
1634 except: # re-raises
1635 except: # re-raises
1635 repo.dirstate.invalidate()
1636 repo.dirstate.invalidate()
1636 raise
1637 raise
1637
1638
1638 try:
1639 try:
1639 # might be nice to attempt to roll back strip after this
1640 # might be nice to attempt to roll back strip after this
1640
1641
1641 # Ensure we create a new changeset in the same phase as
1642 # Ensure we create a new changeset in the same phase as
1642 # the old one.
1643 # the old one.
1643 n = newcommit(repo, oldphase, message, user, ph.date,
1644 n = newcommit(repo, oldphase, message, user, ph.date,
1644 match=match, force=True)
1645 match=match, force=True)
1645 # only write patch after a successful commit
1646 # only write patch after a successful commit
1646 patchf.close()
1647 patchf.close()
1647 self.applied.append(statusentry(n, patchfn))
1648 self.applied.append(statusentry(n, patchfn))
1648 except: # re-raises
1649 except: # re-raises
1649 ctx = repo[cparents[0]]
1650 ctx = repo[cparents[0]]
1650 repo.dirstate.rebuild(ctx.node(), ctx.manifest())
1651 repo.dirstate.rebuild(ctx.node(), ctx.manifest())
1651 self.savedirty()
1652 self.savedirty()
1652 self.ui.warn(_('refresh interrupted while patch was popped! '
1653 self.ui.warn(_('refresh interrupted while patch was popped! '
1653 '(revert --all, qpush to recover)\n'))
1654 '(revert --all, qpush to recover)\n'))
1654 raise
1655 raise
1655 finally:
1656 finally:
1656 wlock.release()
1657 wlock.release()
1657 self.removeundo(repo)
1658 self.removeundo(repo)
1658
1659
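# ---------------------------------------------------------------------------
# Editor's illustration (not part of mq.py): a standalone, runnable sketch of
# the status reconciliation that refresh() performs above, using plain Python
# sets in place of repo.status() results.  The names mirror the locals in the
# method: mm/aa/dd are modified/added/deleted between patchparent and qtip,
# m/a/r/d are modified/added/removed/deleted in the working directory.
def _reconcile_refresh_status(mm, aa, dd, m, a, r, d):
    mm, aa, dd = set(mm), set(aa), set(dd)
    # files added by the patch and then changed locally stay in the added
    # section; all other locally modified files go to the changed section
    for x in m:
        if x not in aa:
            mm.add(x)
    # files added locally but deleted by the patch move to the changed section
    for x in a:
        if x in dd:
            dd.remove(x)
            mm.add(x)
        else:
            aa.add(x)
    # files deleted or removed locally must not stay in the added or changed
    # sections; files the patch had added are forgotten instead
    forget = []
    for x in list(d) + list(r):
        if x in aa:
            aa.remove(x)
            forget.append(x)
        else:
            mm.discard(x)
            dd.add(x)
    return sorted(mm), sorted(aa), sorted(dd), forget
# Example: _reconcile_refresh_status({'f1'}, {'f2'}, set(), {'f2'}, {'f3'},
# set(), set()) keeps f2 in the added section and records f3 as newly added.
# ---------------------------------------------------------------------------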
1659 def init(self, repo, create=False):
1660 def init(self, repo, create=False):
1660 if not create and os.path.isdir(self.path):
1661 if not create and os.path.isdir(self.path):
1661 raise util.Abort(_("patch queue directory already exists"))
1662 raise util.Abort(_("patch queue directory already exists"))
1662 try:
1663 try:
1663 os.mkdir(self.path)
1664 os.mkdir(self.path)
1664 except OSError, inst:
1665 except OSError, inst:
1665 if inst.errno != errno.EEXIST or not create:
1666 if inst.errno != errno.EEXIST or not create:
1666 raise
1667 raise
1667 if create:
1668 if create:
1668 return self.qrepo(create=True)
1669 return self.qrepo(create=True)
1669
1670
1670 def unapplied(self, repo, patch=None):
1671 def unapplied(self, repo, patch=None):
1671 if patch and patch not in self.series:
1672 if patch and patch not in self.series:
1672 raise util.Abort(_("patch %s is not in series file") % patch)
1673 raise util.Abort(_("patch %s is not in series file") % patch)
1673 if not patch:
1674 if not patch:
1674 start = self.seriesend()
1675 start = self.seriesend()
1675 else:
1676 else:
1676 start = self.series.index(patch) + 1
1677 start = self.series.index(patch) + 1
1677 unapplied = []
1678 unapplied = []
1678 for i in xrange(start, len(self.series)):
1679 for i in xrange(start, len(self.series)):
1679 pushable, reason = self.pushable(i)
1680 pushable, reason = self.pushable(i)
1680 if pushable:
1681 if pushable:
1681 unapplied.append((i, self.series[i]))
1682 unapplied.append((i, self.series[i]))
1682 self.explainpushable(i)
1683 self.explainpushable(i)
1683 return unapplied
1684 return unapplied
1684
1685
1685 def qseries(self, repo, missing=None, start=0, length=None, status=None,
1686 def qseries(self, repo, missing=None, start=0, length=None, status=None,
1686 summary=False):
1687 summary=False):
1687 def displayname(pfx, patchname, state):
1688 def displayname(pfx, patchname, state):
1688 if pfx:
1689 if pfx:
1689 self.ui.write(pfx)
1690 self.ui.write(pfx)
1690 if summary:
1691 if summary:
1691 ph = patchheader(self.join(patchname), self.plainmode)
1692 ph = patchheader(self.join(patchname), self.plainmode)
1692 msg = ph.message and ph.message[0] or ''
1693 msg = ph.message and ph.message[0] or ''
1693 if self.ui.formatted():
1694 if self.ui.formatted():
1694 width = self.ui.termwidth() - len(pfx) - len(patchname) - 2
1695 width = self.ui.termwidth() - len(pfx) - len(patchname) - 2
1695 if width > 0:
1696 if width > 0:
1696 msg = util.ellipsis(msg, width)
1697 msg = util.ellipsis(msg, width)
1697 else:
1698 else:
1698 msg = ''
1699 msg = ''
1699 self.ui.write(patchname, label='qseries.' + state)
1700 self.ui.write(patchname, label='qseries.' + state)
1700 self.ui.write(': ')
1701 self.ui.write(': ')
1701 self.ui.write(msg, label='qseries.message.' + state)
1702 self.ui.write(msg, label='qseries.message.' + state)
1702 else:
1703 else:
1703 self.ui.write(patchname, label='qseries.' + state)
1704 self.ui.write(patchname, label='qseries.' + state)
1704 self.ui.write('\n')
1705 self.ui.write('\n')
1705
1706
1706 applied = set([p.name for p in self.applied])
1707 applied = set([p.name for p in self.applied])
1707 if length is None:
1708 if length is None:
1708 length = len(self.series) - start
1709 length = len(self.series) - start
1709 if not missing:
1710 if not missing:
1710 if self.ui.verbose:
1711 if self.ui.verbose:
1711 idxwidth = len(str(start + length - 1))
1712 idxwidth = len(str(start + length - 1))
1712 for i in xrange(start, start + length):
1713 for i in xrange(start, start + length):
1713 patch = self.series[i]
1714 patch = self.series[i]
1714 if patch in applied:
1715 if patch in applied:
1715 char, state = 'A', 'applied'
1716 char, state = 'A', 'applied'
1716 elif self.pushable(i)[0]:
1717 elif self.pushable(i)[0]:
1717 char, state = 'U', 'unapplied'
1718 char, state = 'U', 'unapplied'
1718 else:
1719 else:
1719 char, state = 'G', 'guarded'
1720 char, state = 'G', 'guarded'
1720 pfx = ''
1721 pfx = ''
1721 if self.ui.verbose:
1722 if self.ui.verbose:
1722 pfx = '%*d %s ' % (idxwidth, i, char)
1723 pfx = '%*d %s ' % (idxwidth, i, char)
1723 elif status and status != char:
1724 elif status and status != char:
1724 continue
1725 continue
1725 displayname(pfx, patch, state)
1726 displayname(pfx, patch, state)
1726 else:
1727 else:
1727 msng_list = []
1728 msng_list = []
1728 for root, dirs, files in os.walk(self.path):
1729 for root, dirs, files in os.walk(self.path):
1729 d = root[len(self.path) + 1:]
1730 d = root[len(self.path) + 1:]
1730 for f in files:
1731 for f in files:
1731 fl = os.path.join(d, f)
1732 fl = os.path.join(d, f)
1732 if (fl not in self.series and
1733 if (fl not in self.series and
1733 fl not in (self.statuspath, self.seriespath,
1734 fl not in (self.statuspath, self.seriespath,
1734 self.guardspath)
1735 self.guardspath)
1735 and not fl.startswith('.')):
1736 and not fl.startswith('.')):
1736 msng_list.append(fl)
1737 msng_list.append(fl)
1737 for x in sorted(msng_list):
1738 for x in sorted(msng_list):
1738 pfx = self.ui.verbose and ('D ') or ''
1739 pfx = self.ui.verbose and ('D ') or ''
1739 displayname(pfx, x, 'missing')
1740 displayname(pfx, x, 'missing')
1740
1741
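# Editor's note (not part of mq.py): a hypothetical summary of the one-letter
# status column printed by qseries() above in verbose mode.
QSERIES_STATUS_CHARS = {
    'A': 'applied',
    'U': 'unapplied and pushable',
    'G': 'guarded, skipped by qpush',
    'D': 'present in the patch directory but missing from the series file',
}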
1741 def issaveline(self, l):
1742 def issaveline(self, l):
1742 if l.name == '.hg.patches.save.line':
1743 if l.name == '.hg.patches.save.line':
1743 return True
1744 return True
1744
1745
1745 def qrepo(self, create=False):
1746 def qrepo(self, create=False):
1746 ui = self.ui.copy()
1747 ui = self.ui.copy()
1747 ui.setconfig('paths', 'default', '', overlay=False)
1748 ui.setconfig('paths', 'default', '', overlay=False)
1748 ui.setconfig('paths', 'default-push', '', overlay=False)
1749 ui.setconfig('paths', 'default-push', '', overlay=False)
1749 if create or os.path.isdir(self.join(".hg")):
1750 if create or os.path.isdir(self.join(".hg")):
1750 return hg.repository(ui, path=self.path, create=create)
1751 return hg.repository(ui, path=self.path, create=create)
1751
1752
1752 def restore(self, repo, rev, delete=None, qupdate=None):
1753 def restore(self, repo, rev, delete=None, qupdate=None):
1753 desc = repo[rev].description().strip()
1754 desc = repo[rev].description().strip()
1754 lines = desc.splitlines()
1755 lines = desc.splitlines()
1755 i = 0
1756 i = 0
1756 datastart = None
1757 datastart = None
1757 series = []
1758 series = []
1758 applied = []
1759 applied = []
1759 qpp = None
1760 qpp = None
1760 for i, line in enumerate(lines):
1761 for i, line in enumerate(lines):
1761 if line == 'Patch Data:':
1762 if line == 'Patch Data:':
1762 datastart = i + 1
1763 datastart = i + 1
1763 elif line.startswith('Dirstate:'):
1764 elif line.startswith('Dirstate:'):
1764 l = line.rstrip()
1765 l = line.rstrip()
1765 l = l[10:].split(' ')
1766 l = l[10:].split(' ')
1766 qpp = [bin(x) for x in l]
1767 qpp = [bin(x) for x in l]
1767 elif datastart is not None:
1768 elif datastart is not None:
1768 l = line.rstrip()
1769 l = line.rstrip()
1769 n, name = l.split(':', 1)
1770 n, name = l.split(':', 1)
1770 if n:
1771 if n:
1771 applied.append(statusentry(bin(n), name))
1772 applied.append(statusentry(bin(n), name))
1772 else:
1773 else:
1773 series.append(l)
1774 series.append(l)
1774 if datastart is None:
1775 if datastart is None:
1775 self.ui.warn(_("No saved patch data found\n"))
1776 self.ui.warn(_("No saved patch data found\n"))
1776 return 1
1777 return 1
1777 self.ui.warn(_("restoring status: %s\n") % lines[0])
1778 self.ui.warn(_("restoring status: %s\n") % lines[0])
1778 self.fullseries = series
1779 self.fullseries = series
1779 self.applied = applied
1780 self.applied = applied
1780 self.parseseries()
1781 self.parseseries()
1781 self.seriesdirty = True
1782 self.seriesdirty = True
1782 self.applieddirty = True
1783 self.applieddirty = True
1783 heads = repo.changelog.heads()
1784 heads = repo.changelog.heads()
1784 if delete:
1785 if delete:
1785 if rev not in heads:
1786 if rev not in heads:
1786 self.ui.warn(_("save entry has children, leaving it alone\n"))
1787 self.ui.warn(_("save entry has children, leaving it alone\n"))
1787 else:
1788 else:
1788 self.ui.warn(_("removing save entry %s\n") % short(rev))
1789 self.ui.warn(_("removing save entry %s\n") % short(rev))
1789 pp = repo.dirstate.parents()
1790 pp = repo.dirstate.parents()
1790 if rev in pp:
1791 if rev in pp:
1791 update = True
1792 update = True
1792 else:
1793 else:
1793 update = False
1794 update = False
1794 self.strip(repo, [rev], update=update, backup='strip')
1795 self.strip(repo, [rev], update=update, backup='strip')
1795 if qpp:
1796 if qpp:
1796 self.ui.warn(_("saved queue repository parents: %s %s\n") %
1797 self.ui.warn(_("saved queue repository parents: %s %s\n") %
1797 (short(qpp[0]), short(qpp[1])))
1798 (short(qpp[0]), short(qpp[1])))
1798 if qupdate:
1799 if qupdate:
1799 self.ui.status(_("updating queue directory\n"))
1800 self.ui.status(_("updating queue directory\n"))
1800 r = self.qrepo()
1801 r = self.qrepo()
1801 if not r:
1802 if not r:
1802 self.ui.warn(_("Unable to load queue repository\n"))
1803 self.ui.warn(_("Unable to load queue repository\n"))
1803 return 1
1804 return 1
1804 hg.clean(r, qpp[0])
1805 hg.clean(r, qpp[0])
1805
1806
1806 def save(self, repo, msg=None):
1807 def save(self, repo, msg=None):
1807 if not self.applied:
1808 if not self.applied:
1808 self.ui.warn(_("save: no patches applied, exiting\n"))
1809 self.ui.warn(_("save: no patches applied, exiting\n"))
1809 return 1
1810 return 1
1810 if self.issaveline(self.applied[-1]):
1811 if self.issaveline(self.applied[-1]):
1811 self.ui.warn(_("status is already saved\n"))
1812 self.ui.warn(_("status is already saved\n"))
1812 return 1
1813 return 1
1813
1814
1814 if not msg:
1815 if not msg:
1815 msg = _("hg patches saved state")
1816 msg = _("hg patches saved state")
1816 else:
1817 else:
1817 msg = "hg patches: " + msg.rstrip('\r\n')
1818 msg = "hg patches: " + msg.rstrip('\r\n')
1818 r = self.qrepo()
1819 r = self.qrepo()
1819 if r:
1820 if r:
1820 pp = r.dirstate.parents()
1821 pp = r.dirstate.parents()
1821 msg += "\nDirstate: %s %s" % (hex(pp[0]), hex(pp[1]))
1822 msg += "\nDirstate: %s %s" % (hex(pp[0]), hex(pp[1]))
1822 msg += "\n\nPatch Data:\n"
1823 msg += "\n\nPatch Data:\n"
1823 msg += ''.join('%s\n' % x for x in self.applied)
1824 msg += ''.join('%s\n' % x for x in self.applied)
1824 msg += ''.join(':%s\n' % x for x in self.fullseries)
1825 msg += ''.join(':%s\n' % x for x in self.fullseries)
1825 n = repo.commit(msg, force=True)
1826 n = repo.commit(msg, force=True)
1826 if not n:
1827 if not n:
1827 self.ui.warn(_("repo commit failed\n"))
1828 self.ui.warn(_("repo commit failed\n"))
1828 return 1
1829 return 1
1829 self.applied.append(statusentry(n, '.hg.patches.save.line'))
1830 self.applied.append(statusentry(n, '.hg.patches.save.line'))
1830 self.applieddirty = True
1831 self.applieddirty = True
1831 self.removeundo(repo)
1832 self.removeundo(repo)
1832
1833
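# Editor's illustration (not part of mq.py): the rough shape of the commit
# message that save() above builds and that restore() earlier in this class
# parses back.  The node values are placeholders, not real hashes.
EXAMPLE_SAVE_MESSAGE = (
    "hg patches saved state\n"
    "Dirstate: <hex-parent-1> <hex-parent-2>\n"
    "\n"
    "Patch Data:\n"
    "<hex-node>:applied-patch.diff\n"   # applied patches carry their node
    ":not-yet-applied-patch.diff\n"     # unapplied series entries have none
)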
1833 def fullseriesend(self):
1834 def fullseriesend(self):
1834 if self.applied:
1835 if self.applied:
1835 p = self.applied[-1].name
1836 p = self.applied[-1].name
1836 end = self.findseries(p)
1837 end = self.findseries(p)
1837 if end is None:
1838 if end is None:
1838 return len(self.fullseries)
1839 return len(self.fullseries)
1839 return end + 1
1840 return end + 1
1840 return 0
1841 return 0
1841
1842
1842 def seriesend(self, all_patches=False):
1843 def seriesend(self, all_patches=False):
1843 """If all_patches is False, return the index of the next pushable patch
1844 """If all_patches is False, return the index of the next pushable patch
1844 in the series, or the series length. If all_patches is True, return the
1845 in the series, or the series length. If all_patches is True, return the
1845 index of the first patch past the last applied one.
1846 index of the first patch past the last applied one.
1846 """
1847 """
1847 end = 0
1848 end = 0
1848 def next(start):
1849 def next(start):
1849 if all_patches or start >= len(self.series):
1850 if all_patches or start >= len(self.series):
1850 return start
1851 return start
1851 for i in xrange(start, len(self.series)):
1852 for i in xrange(start, len(self.series)):
1852 p, reason = self.pushable(i)
1853 p, reason = self.pushable(i)
1853 if p:
1854 if p:
1854 return i
1855 return i
1855 self.explainpushable(i)
1856 self.explainpushable(i)
1856 return len(self.series)
1857 return len(self.series)
1857 if self.applied:
1858 if self.applied:
1858 p = self.applied[-1].name
1859 p = self.applied[-1].name
1859 try:
1860 try:
1860 end = self.series.index(p)
1861 end = self.series.index(p)
1861 except ValueError:
1862 except ValueError:
1862 return 0
1863 return 0
1863 return next(end + 1)
1864 return next(end + 1)
1864 return next(end)
1865 return next(end)
1865
1866
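# Editor's walk-through (not part of mq.py) of the seriesend() contract
# documented above, assuming a three-patch series where only patch 0 is
# applied and patch 1 is guarded off:
#
#   series = ['p0.diff', 'p1.diff', 'p2.diff']
#   seriesend()      -> 2    # index of the next pushable patch (p1 is guarded)
#   seriesend(True)  -> 1    # first index past the last applied patch
#
# With the whole series applied, both forms return len(series).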
1866 def appliedname(self, index):
1867 def appliedname(self, index):
1867 pname = self.applied[index].name
1868 pname = self.applied[index].name
1868 if not self.ui.verbose:
1869 if not self.ui.verbose:
1869 p = pname
1870 p = pname
1870 else:
1871 else:
1871 p = str(self.series.index(pname)) + " " + pname
1872 p = str(self.series.index(pname)) + " " + pname
1872 return p
1873 return p
1873
1874
1874 def qimport(self, repo, files, patchname=None, rev=None, existing=None,
1875 def qimport(self, repo, files, patchname=None, rev=None, existing=None,
1875 force=None, git=False):
1876 force=None, git=False):
1876 def checkseries(patchname):
1877 def checkseries(patchname):
1877 if patchname in self.series:
1878 if patchname in self.series:
1878 raise util.Abort(_('patch %s is already in the series file')
1879 raise util.Abort(_('patch %s is already in the series file')
1879 % patchname)
1880 % patchname)
1880
1881
1881 if rev:
1882 if rev:
1882 if files:
1883 if files:
1883 raise util.Abort(_('option "-r" not valid when importing '
1884 raise util.Abort(_('option "-r" not valid when importing '
1884 'files'))
1885 'files'))
1885 rev = scmutil.revrange(repo, rev)
1886 rev = scmutil.revrange(repo, rev)
1886 rev.sort(reverse=True)
1887 rev.sort(reverse=True)
1887 if (len(files) > 1 or len(rev) > 1) and patchname:
1888 if (len(files) > 1 or len(rev) > 1) and patchname:
1888 raise util.Abort(_('option "-n" not valid when importing multiple '
1889 raise util.Abort(_('option "-n" not valid when importing multiple '
1889 'patches'))
1890 'patches'))
1890 imported = []
1891 imported = []
1891 if rev:
1892 if rev:
1892 # If mq patches are applied, we can only import revisions
1893 # If mq patches are applied, we can only import revisions
1893 # that form a linear path to qbase.
1894 # that form a linear path to qbase.
1894 # Otherwise, they should form a linear path to a head.
1895 # Otherwise, they should form a linear path to a head.
1895 heads = repo.changelog.heads(repo.changelog.node(rev[-1]))
1896 heads = repo.changelog.heads(repo.changelog.node(rev[-1]))
1896 if len(heads) > 1:
1897 if len(heads) > 1:
1897 raise util.Abort(_('revision %d is the root of more than one '
1898 raise util.Abort(_('revision %d is the root of more than one '
1898 'branch') % rev[-1])
1899 'branch') % rev[-1])
1899 if self.applied:
1900 if self.applied:
1900 base = repo.changelog.node(rev[0])
1901 base = repo.changelog.node(rev[0])
1901 if base in [n.node for n in self.applied]:
1902 if base in [n.node for n in self.applied]:
1902 raise util.Abort(_('revision %d is already managed')
1903 raise util.Abort(_('revision %d is already managed')
1903 % rev[0])
1904 % rev[0])
1904 if heads != [self.applied[-1].node]:
1905 if heads != [self.applied[-1].node]:
1905 raise util.Abort(_('revision %d is not the parent of '
1906 raise util.Abort(_('revision %d is not the parent of '
1906 'the queue') % rev[0])
1907 'the queue') % rev[0])
1907 base = repo.changelog.rev(self.applied[0].node)
1908 base = repo.changelog.rev(self.applied[0].node)
1908 lastparent = repo.changelog.parentrevs(base)[0]
1909 lastparent = repo.changelog.parentrevs(base)[0]
1909 else:
1910 else:
1910 if heads != [repo.changelog.node(rev[0])]:
1911 if heads != [repo.changelog.node(rev[0])]:
1911 raise util.Abort(_('revision %d has unmanaged children')
1912 raise util.Abort(_('revision %d has unmanaged children')
1912 % rev[0])
1913 % rev[0])
1913 lastparent = None
1914 lastparent = None
1914
1915
1915 diffopts = self.diffopts({'git': git})
1916 diffopts = self.diffopts({'git': git})
1916 for r in rev:
1917 for r in rev:
1917 if not repo[r].mutable():
1918 if not repo[r].mutable():
1918 raise util.Abort(_('revision %d is not mutable') % r,
1919 raise util.Abort(_('revision %d is not mutable') % r,
1919 hint=_('see "hg help phases" for details'))
1920 hint=_('see "hg help phases" for details'))
1920 p1, p2 = repo.changelog.parentrevs(r)
1921 p1, p2 = repo.changelog.parentrevs(r)
1921 n = repo.changelog.node(r)
1922 n = repo.changelog.node(r)
1922 if p2 != nullrev:
1923 if p2 != nullrev:
1923 raise util.Abort(_('cannot import merge revision %d') % r)
1924 raise util.Abort(_('cannot import merge revision %d') % r)
1924 if lastparent and lastparent != r:
1925 if lastparent and lastparent != r:
1925 raise util.Abort(_('revision %d is not the parent of %d')
1926 raise util.Abort(_('revision %d is not the parent of %d')
1926 % (r, lastparent))
1927 % (r, lastparent))
1927 lastparent = p1
1928 lastparent = p1
1928
1929
1929 if not patchname:
1930 if not patchname:
1930 patchname = normname('%d.diff' % r)
1931 patchname = normname('%d.diff' % r)
1931 checkseries(patchname)
1932 checkseries(patchname)
1932 self.checkpatchname(patchname, force)
1933 self.checkpatchname(patchname, force)
1933 self.fullseries.insert(0, patchname)
1934 self.fullseries.insert(0, patchname)
1934
1935
1935 patchf = self.opener(patchname, "w")
1936 patchf = self.opener(patchname, "w")
1936 cmdutil.export(repo, [n], fp=patchf, opts=diffopts)
1937 cmdutil.export(repo, [n], fp=patchf, opts=diffopts)
1937 patchf.close()
1938 patchf.close()
1938
1939
1939 se = statusentry(n, patchname)
1940 se = statusentry(n, patchname)
1940 self.applied.insert(0, se)
1941 self.applied.insert(0, se)
1941
1942
1942 self.added.append(patchname)
1943 self.added.append(patchname)
1943 imported.append(patchname)
1944 imported.append(patchname)
1944 patchname = None
1945 patchname = None
1945 if rev and repo.ui.configbool('mq', 'secret', False):
1946 if rev and repo.ui.configbool('mq', 'secret', False):
1946 # if we added anything with --rev, we must move the secret root
1947 # if we added anything with --rev, we must move the secret root
1947 phases.retractboundary(repo, phases.secret, [n])
1948 phases.retractboundary(repo, phases.secret, [n])
1948 self.parseseries()
1949 self.parseseries()
1949 self.applieddirty = True
1950 self.applieddirty = True
1950 self.seriesdirty = True
1951 self.seriesdirty = True
1951
1952
1952 for i, filename in enumerate(files):
1953 for i, filename in enumerate(files):
1953 if existing:
1954 if existing:
1954 if filename == '-':
1955 if filename == '-':
1955 raise util.Abort(_('-e is incompatible with import from -'))
1956 raise util.Abort(_('-e is incompatible with import from -'))
1956 filename = normname(filename)
1957 filename = normname(filename)
1957 self.checkreservedname(filename)
1958 self.checkreservedname(filename)
1958 originpath = self.join(filename)
1959 originpath = self.join(filename)
1959 if not os.path.isfile(originpath):
1960 if not os.path.isfile(originpath):
1960 raise util.Abort(_("patch %s does not exist") % filename)
1961 raise util.Abort(_("patch %s does not exist") % filename)
1961
1962
1962 if patchname:
1963 if patchname:
1963 self.checkpatchname(patchname, force)
1964 self.checkpatchname(patchname, force)
1964
1965
1965 self.ui.write(_('renaming %s to %s\n')
1966 self.ui.write(_('renaming %s to %s\n')
1966 % (filename, patchname))
1967 % (filename, patchname))
1967 util.rename(originpath, self.join(patchname))
1968 util.rename(originpath, self.join(patchname))
1968 else:
1969 else:
1969 patchname = filename
1970 patchname = filename
1970
1971
1971 else:
1972 else:
1972 if filename == '-' and not patchname:
1973 if filename == '-' and not patchname:
1973 raise util.Abort(_('need --name to import a patch from -'))
1974 raise util.Abort(_('need --name to import a patch from -'))
1974 elif not patchname:
1975 elif not patchname:
1975 patchname = normname(os.path.basename(filename.rstrip('/')))
1976 patchname = normname(os.path.basename(filename.rstrip('/')))
1976 self.checkpatchname(patchname, force)
1977 self.checkpatchname(patchname, force)
1977 try:
1978 try:
1978 if filename == '-':
1979 if filename == '-':
1979 text = self.ui.fin.read()
1980 text = self.ui.fin.read()
1980 else:
1981 else:
1981 fp = url.open(self.ui, filename)
1982 fp = url.open(self.ui, filename)
1982 text = fp.read()
1983 text = fp.read()
1983 fp.close()
1984 fp.close()
1984 except (OSError, IOError):
1985 except (OSError, IOError):
1985 raise util.Abort(_("unable to read file %s") % filename)
1986 raise util.Abort(_("unable to read file %s") % filename)
1986 patchf = self.opener(patchname, "w")
1987 patchf = self.opener(patchname, "w")
1987 patchf.write(text)
1988 patchf.write(text)
1988 patchf.close()
1989 patchf.close()
1989 if not force:
1990 if not force:
1990 checkseries(patchname)
1991 checkseries(patchname)
1991 if patchname not in self.series:
1992 if patchname not in self.series:
1992 index = self.fullseriesend() + i
1993 index = self.fullseriesend() + i
1993 self.fullseries[index:index] = [patchname]
1994 self.fullseries[index:index] = [patchname]
1994 self.parseseries()
1995 self.parseseries()
1995 self.seriesdirty = True
1996 self.seriesdirty = True
1996 self.ui.warn(_("adding %s to series file\n") % patchname)
1997 self.ui.warn(_("adding %s to series file\n") % patchname)
1997 self.added.append(patchname)
1998 self.added.append(patchname)
1998 imported.append(patchname)
1999 imported.append(patchname)
1999 patchname = None
2000 patchname = None
2000
2001
2001 self.removeundo(repo)
2002 self.removeundo(repo)
2002 return imported
2003 return imported
2003
2004
2004 def fixkeepchangesopts(ui, opts):
2005 def fixkeepchangesopts(ui, opts):
2005 if (not ui.configbool('mq', 'keepchanges') or opts.get('force')
2006 if (not ui.configbool('mq', 'keepchanges') or opts.get('force')
2006 or opts.get('exact')):
2007 or opts.get('exact')):
2007 return opts
2008 return opts
2008 opts = dict(opts)
2009 opts = dict(opts)
2009 opts['keep_changes'] = True
2010 opts['keep_changes'] = True
2010 return opts
2011 return opts
2011
2012
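# Editor's illustration (not part of mq.py) of fixkeepchangesopts() above:
# with the mq.keepchanges config option set, commands that honour
# --keep-changes behave as if the flag had been given, unless --force or
# --exact overrides it.
#
#   opts = {'force': None, 'exact': None}
#   mq.keepchanges=True   -> {'force': None, 'exact': None, 'keep_changes': True}
#   plus --force/--exact  -> opts returned unchanged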
2012 @command("qdelete|qremove|qrm",
2013 @command("qdelete|qremove|qrm",
2013 [('k', 'keep', None, _('keep patch file')),
2014 [('k', 'keep', None, _('keep patch file')),
2014 ('r', 'rev', [],
2015 ('r', 'rev', [],
2015 _('stop managing a revision (DEPRECATED)'), _('REV'))],
2016 _('stop managing a revision (DEPRECATED)'), _('REV'))],
2016 _('hg qdelete [-k] [PATCH]...'))
2017 _('hg qdelete [-k] [PATCH]...'))
2017 def delete(ui, repo, *patches, **opts):
2018 def delete(ui, repo, *patches, **opts):
2018 """remove patches from queue
2019 """remove patches from queue
2019
2020
2020 The patches must not be applied, and at least one patch is required. Exact
2021 The patches must not be applied, and at least one patch is required. Exact
2021 patch identifiers must be given. With -k/--keep, the patch files are
2022 patch identifiers must be given. With -k/--keep, the patch files are
2022 preserved in the patch directory.
2023 preserved in the patch directory.
2023
2024
2024 To stop managing a patch and move it into permanent history,
2025 To stop managing a patch and move it into permanent history,
2025 use the :hg:`qfinish` command."""
2026 use the :hg:`qfinish` command."""
2026 q = repo.mq
2027 q = repo.mq
2027 q.delete(repo, patches, opts)
2028 q.delete(repo, patches, opts)
2028 q.savedirty()
2029 q.savedirty()
2029 return 0
2030 return 0
2030
2031
2031 @command("qapplied",
2032 @command("qapplied",
2032 [('1', 'last', None, _('show only the preceding applied patch'))
2033 [('1', 'last', None, _('show only the preceding applied patch'))
2033 ] + seriesopts,
2034 ] + seriesopts,
2034 _('hg qapplied [-1] [-s] [PATCH]'))
2035 _('hg qapplied [-1] [-s] [PATCH]'))
2035 def applied(ui, repo, patch=None, **opts):
2036 def applied(ui, repo, patch=None, **opts):
2036 """print the patches already applied
2037 """print the patches already applied
2037
2038
2038 Returns 0 on success."""
2039 Returns 0 on success."""
2039
2040
2040 q = repo.mq
2041 q = repo.mq
2041
2042
2042 if patch:
2043 if patch:
2043 if patch not in q.series:
2044 if patch not in q.series:
2044 raise util.Abort(_("patch %s is not in series file") % patch)
2045 raise util.Abort(_("patch %s is not in series file") % patch)
2045 end = q.series.index(patch) + 1
2046 end = q.series.index(patch) + 1
2046 else:
2047 else:
2047 end = q.seriesend(True)
2048 end = q.seriesend(True)
2048
2049
2049 if opts.get('last') and not end:
2050 if opts.get('last') and not end:
2050 ui.write(_("no patches applied\n"))
2051 ui.write(_("no patches applied\n"))
2051 return 1
2052 return 1
2052 elif opts.get('last') and end == 1:
2053 elif opts.get('last') and end == 1:
2053 ui.write(_("only one patch applied\n"))
2054 ui.write(_("only one patch applied\n"))
2054 return 1
2055 return 1
2055 elif opts.get('last'):
2056 elif opts.get('last'):
2056 start = end - 2
2057 start = end - 2
2057 end = 1
2058 end = 1
2058 else:
2059 else:
2059 start = 0
2060 start = 0
2060
2061
2061 q.qseries(repo, length=end, start=start, status='A',
2062 q.qseries(repo, length=end, start=start, status='A',
2062 summary=opts.get('summary'))
2063 summary=opts.get('summary'))
2063
2064
2064
2065
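# Editor's sketch (not part of mq.py) of the index arithmetic behind
# `hg qapplied --last` above: with `end` pointing one past the top patch,
# the command lists a single entry starting just below the current top.
#
#   series = ['p0.diff', 'p1.diff', 'p2.diff']   # all three applied, end == 3
#   start, length = end - 2, 1                   # -> prints 'p1.diff'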
2065 @command("qunapplied",
2066 @command("qunapplied",
2066 [('1', 'first', None, _('show only the first patch'))] + seriesopts,
2067 [('1', 'first', None, _('show only the first patch'))] + seriesopts,
2067 _('hg qunapplied [-1] [-s] [PATCH]'))
2068 _('hg qunapplied [-1] [-s] [PATCH]'))
2068 def unapplied(ui, repo, patch=None, **opts):
2069 def unapplied(ui, repo, patch=None, **opts):
2069 """print the patches not yet applied
2070 """print the patches not yet applied
2070
2071
2071 Returns 0 on success."""
2072 Returns 0 on success."""
2072
2073
2073 q = repo.mq
2074 q = repo.mq
2074 if patch:
2075 if patch:
2075 if patch not in q.series:
2076 if patch not in q.series:
2076 raise util.Abort(_("patch %s is not in series file") % patch)
2077 raise util.Abort(_("patch %s is not in series file") % patch)
2077 start = q.series.index(patch) + 1
2078 start = q.series.index(patch) + 1
2078 else:
2079 else:
2079 start = q.seriesend(True)
2080 start = q.seriesend(True)
2080
2081
2081 if start == len(q.series) and opts.get('first'):
2082 if start == len(q.series) and opts.get('first'):
2082 ui.write(_("all patches applied\n"))
2083 ui.write(_("all patches applied\n"))
2083 return 1
2084 return 1
2084
2085
2085 length = opts.get('first') and 1 or None
2086 length = opts.get('first') and 1 or None
2086 q.qseries(repo, start=start, length=length, status='U',
2087 q.qseries(repo, start=start, length=length, status='U',
2087 summary=opts.get('summary'))
2088 summary=opts.get('summary'))
2088
2089
2089 @command("qimport",
2090 @command("qimport",
2090 [('e', 'existing', None, _('import file in patch directory')),
2091 [('e', 'existing', None, _('import file in patch directory')),
2091 ('n', 'name', '',
2092 ('n', 'name', '',
2092 _('name of patch file'), _('NAME')),
2093 _('name of patch file'), _('NAME')),
2093 ('f', 'force', None, _('overwrite existing files')),
2094 ('f', 'force', None, _('overwrite existing files')),
2094 ('r', 'rev', [],
2095 ('r', 'rev', [],
2095 _('place existing revisions under mq control'), _('REV')),
2096 _('place existing revisions under mq control'), _('REV')),
2096 ('g', 'git', None, _('use git extended diff format')),
2097 ('g', 'git', None, _('use git extended diff format')),
2097 ('P', 'push', None, _('qpush after importing'))],
2098 ('P', 'push', None, _('qpush after importing'))],
2098 _('hg qimport [-e] [-n NAME] [-f] [-g] [-P] [-r REV]... FILE...'))
2099 _('hg qimport [-e] [-n NAME] [-f] [-g] [-P] [-r REV]... FILE...'))
2099 def qimport(ui, repo, *filename, **opts):
2100 def qimport(ui, repo, *filename, **opts):
2100 """import a patch or existing changeset
2101 """import a patch or existing changeset
2101
2102
2102 The patch is inserted into the series after the last applied
2103 The patch is inserted into the series after the last applied
2103 patch. If no patches have been applied, qimport prepends the patch
2104 patch. If no patches have been applied, qimport prepends the patch
2104 to the series.
2105 to the series.
2105
2106
2106 The patch will have the same name as its source file unless you
2107 The patch will have the same name as its source file unless you
2107 give it a new one with -n/--name.
2108 give it a new one with -n/--name.
2108
2109
2109 You can register an existing patch inside the patch directory with
2110 You can register an existing patch inside the patch directory with
2110 the -e/--existing flag.
2111 the -e/--existing flag.
2111
2112
2112 With -f/--force, an existing patch of the same name will be
2113 With -f/--force, an existing patch of the same name will be
2113 overwritten.
2114 overwritten.
2114
2115
2115 An existing changeset may be placed under mq control with -r/--rev
2116 An existing changeset may be placed under mq control with -r/--rev
2116 (e.g. qimport --rev tip -n patch will place tip under mq control).
2117 (e.g. qimport --rev tip -n patch will place tip under mq control).
2117 With -g/--git, patches imported with --rev will use the git diff
2118 With -g/--git, patches imported with --rev will use the git diff
2118 format. See the diffs help topic for information on why this is
2119 format. See the diffs help topic for information on why this is
2119 important for preserving rename/copy information and permission
2120 important for preserving rename/copy information and permission
2120 changes. Use :hg:`qfinish` to remove changesets from mq control.
2121 changes. Use :hg:`qfinish` to remove changesets from mq control.
2121
2122
2122 To import a patch from standard input, pass - as the patch file.
2123 To import a patch from standard input, pass - as the patch file.
2123 When importing from standard input, a patch name must be specified
2124 When importing from standard input, a patch name must be specified
2124 using the --name flag.
2125 using the --name flag.
2125
2126
2126 To import an existing patch while renaming it::
2127 To import an existing patch while renaming it::
2127
2128
2128 hg qimport -e existing-patch -n new-name
2129 hg qimport -e existing-patch -n new-name
2129
2130
2130 Returns 0 if import succeeded.
2131 Returns 0 if import succeeded.
2131 """
2132 """
2132 lock = repo.lock() # because this may move the phase boundary
2133 lock = repo.lock() # because this may move the phase boundary
2133 try:
2134 try:
2134 q = repo.mq
2135 q = repo.mq
2135 try:
2136 try:
2136 imported = q.qimport(
2137 imported = q.qimport(
2137 repo, filename, patchname=opts.get('name'),
2138 repo, filename, patchname=opts.get('name'),
2138 existing=opts.get('existing'), force=opts.get('force'),
2139 existing=opts.get('existing'), force=opts.get('force'),
2139 rev=opts.get('rev'), git=opts.get('git'))
2140 rev=opts.get('rev'), git=opts.get('git'))
2140 finally:
2141 finally:
2141 q.savedirty()
2142 q.savedirty()
2142 finally:
2143 finally:
2143 lock.release()
2144 lock.release()
2144
2145
2145 if imported and opts.get('push') and not opts.get('rev'):
2146 if imported and opts.get('push') and not opts.get('rev'):
2146 return q.push(repo, imported[-1])
2147 return q.push(repo, imported[-1])
2147 return 0
2148 return 0
2148
2149
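# Editor's illustration (not part of mq.py): where qimport above places a new
# patch file in the series, per its docstring ("after the last applied patch",
# or prepended when nothing is applied).  The file names are made up.
example_series = ['a.diff', 'b.diff', 'c.diff']   # a.diff and b.diff applied
insert_at = 2                                     # fullseriesend() with two applied
example_series[insert_at:insert_at] = ['fix.diff']
# example_series is now ['a.diff', 'b.diff', 'fix.diff', 'c.diff'];
# with no patches applied, fix.diff would instead become the first entry.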
2149 def qinit(ui, repo, create):
2150 def qinit(ui, repo, create):
2150 """initialize a new queue repository
2151 """initialize a new queue repository
2151
2152
2152 This command also creates a series file for ordering patches, and
2153 This command also creates a series file for ordering patches, and
2153 an mq-specific .hgignore file in the queue repository, to exclude
2154 an mq-specific .hgignore file in the queue repository, to exclude
2154 the status and guards files (these contain mostly transient state).
2155 the status and guards files (these contain mostly transient state).
2155
2156
2156 Returns 0 if initialization succeeded."""
2157 Returns 0 if initialization succeeded."""
2157 q = repo.mq
2158 q = repo.mq
2158 r = q.init(repo, create)
2159 r = q.init(repo, create)
2159 q.savedirty()
2160 q.savedirty()
2160 if r:
2161 if r:
2161 if not os.path.exists(r.wjoin('.hgignore')):
2162 if not os.path.exists(r.wjoin('.hgignore')):
2162 fp = r.wopener('.hgignore', 'w')
2163 fp = r.wopener('.hgignore', 'w')
2163 fp.write('^\\.hg\n')
2164 fp.write('^\\.hg\n')
2164 fp.write('^\\.mq\n')
2165 fp.write('^\\.mq\n')
2165 fp.write('syntax: glob\n')
2166 fp.write('syntax: glob\n')
2166 fp.write('status\n')
2167 fp.write('status\n')
2167 fp.write('guards\n')
2168 fp.write('guards\n')
2168 fp.close()
2169 fp.close()
2169 if not os.path.exists(r.wjoin('series')):
2170 if not os.path.exists(r.wjoin('series')):
2170 r.wopener('series', 'w').close()
2171 r.wopener('series', 'w').close()
2171 r[None].add(['.hgignore', 'series'])
2172 r[None].add(['.hgignore', 'series'])
2172 commands.add(ui, r)
2173 commands.add(ui, r)
2173 return 0
2174 return 0
2174
2175
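# For reference (editor's note, not part of mq.py): the .hgignore that qinit()
# above writes into a newly created, versioned patch repository.  It mixes the
# default regexp syntax with an explicit glob section for the state files:
#
#   ^\.hg
#   ^\.mq
#   syntax: glob
#   status
#   guards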
@command("^qinit",
         [('c', 'create-repo', None, _('create queue repository'))],
         _('hg qinit [-c]'))
def init(ui, repo, **opts):
    """init a new queue repository (DEPRECATED)

    The queue repository is unversioned by default. If
    -c/--create-repo is specified, qinit will create a separate nested
    repository for patches (qinit -c may also be run later to convert
    an unversioned patch repository into a versioned one). You can use
    qcommit to commit changes to this queue repository.

    This command is deprecated. Without -c, it's implied by other relevant
    commands. With -c, use :hg:`init --mq` instead."""
    return qinit(ui, repo, create=opts.get('create_repo'))

@command("qclone",
         [('', 'pull', None, _('use pull protocol to copy metadata')),
          ('U', 'noupdate', None,
           _('do not update the new working directories')),
          ('', 'uncompressed', None,
           _('use uncompressed transfer (fast over LAN)')),
          ('p', 'patches', '',
           _('location of source patch repository'), _('REPO')),
         ] + commands.remoteopts,
         _('hg qclone [OPTION]... SOURCE [DEST]'))
def clone(ui, source, dest=None, **opts):
    '''clone main and patch repository at same time

    If source is local, destination will have no patches applied. If
    source is remote, this command cannot check whether patches are
    applied in source, so it cannot guarantee that patches are not
    applied in destination. If you clone a remote repository, make
    sure it has no patches applied before cloning.

    The source patch repository is looked for in <src>/.hg/patches by
    default. Use -p <url> to change it.

    The patch directory must be a nested Mercurial repository, as
    would be created by :hg:`init --mq`.

    Return 0 on success.
    '''
    def patchdir(repo):
        """compute a patch repo url from a repo object"""
        url = repo.url()
        if url.endswith('/'):
            url = url[:-1]
        return url + '/.hg/patches'

    # main repo (destination and sources)
    if dest is None:
        dest = hg.defaultdest(source)
    sr = hg.repository(hg.remoteui(ui, opts), ui.expandpath(source))

    # patches repo (source only)
    if opts.get('patches'):
        patchespath = ui.expandpath(opts.get('patches'))
    else:
        patchespath = patchdir(sr)
    try:
        hg.repository(ui, patchespath)
    except error.RepoError:
        raise util.Abort(_('versioned patch repository not found'
                           ' (see init --mq)'))
    qbase, destrev = None, None
    if sr.local():
        if sr.mq.applied and sr[qbase].phase() != phases.secret:
            qbase = sr.mq.applied[0].node
            if not hg.islocal(dest):
                heads = set(sr.heads())
                destrev = list(heads.difference(sr.heads(qbase)))
                destrev.append(sr.changelog.parents(qbase)[0])
    elif sr.capable('lookup'):
        try:
            qbase = sr.lookup('qbase')
        except error.RepoError:
            pass

    ui.note(_('cloning main repository\n'))
    sr, dr = hg.clone(ui, opts, sr.url(), dest,
                      pull=opts.get('pull'),
                      rev=destrev,
                      update=False,
                      stream=opts.get('uncompressed'))

    ui.note(_('cloning patch repository\n'))
    hg.clone(ui, opts, opts.get('patches') or patchdir(sr), patchdir(dr),
             pull=opts.get('pull'), update=not opts.get('noupdate'),
             stream=opts.get('uncompressed'))

    if dr.local():
        if qbase:
            ui.note(_('stripping applied patches from destination '
                      'repository\n'))
            dr.mq.strip(dr, [qbase], update=False, backup=None)
        if not opts.get('noupdate'):
            ui.note(_('updating destination repository\n'))
            hg.update(dr, dr.changelog.tip())

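# Illustrative usage of qclone (defined above), not part of the module; the
# URLs and directory name are made up for the example:
#
#   hg qclone http://example.org/repo repo-with-mq
#   hg qclone -p http://example.org/patches http://example.org/repo repo-with-mq
#
# The second form points -p/--patches at a patch repository outside the
# default <src>/.hg/patches location described in the docstring.
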
@command("qcommit|qci",
         commands.table["^commit|ci"][1],
         _('hg qcommit [OPTION]... [FILE]...'))
def commit(ui, repo, *pats, **opts):
    """commit changes in the queue repository (DEPRECATED)

    This command is deprecated; use :hg:`commit --mq` instead."""
    q = repo.mq
    r = q.qrepo()
    if not r:
        raise util.Abort('no queue repository')
    commands.commit(r.ui, r, *pats, **opts)

@command("qseries",
         [('m', 'missing', None, _('print patches not in series')),
         ] + seriesopts,
         _('hg qseries [-ms]'))
def series(ui, repo, **opts):
    """print the entire series file

    Returns 0 on success."""
    repo.mq.qseries(repo, missing=opts.get('missing'),
                    summary=opts.get('summary'))
    return 0

@command("qtop", seriesopts, _('hg qtop [-s]'))
def top(ui, repo, **opts):
    """print the name of the current patch

    Returns 0 on success."""
    q = repo.mq
    t = q.applied and q.seriesend(True) or 0
    if t:
        q.qseries(repo, start=t - 1, length=1, status='A',
                  summary=opts.get('summary'))
    else:
        ui.write(_("no patches applied\n"))
        return 1

@command("qnext", seriesopts, _('hg qnext [-s]'))
def next(ui, repo, **opts):
    """print the name of the next pushable patch

    Returns 0 on success."""
    q = repo.mq
    end = q.seriesend()
    if end == len(q.series):
        ui.write(_("all patches applied\n"))
        return 1
    q.qseries(repo, start=end, length=1, summary=opts.get('summary'))

@command("qprev", seriesopts, _('hg qprev [-s]'))
def prev(ui, repo, **opts):
    """print the name of the preceding applied patch

    Returns 0 on success."""
    q = repo.mq
    l = len(q.applied)
    if l == 1:
        ui.write(_("only one patch applied\n"))
        return 1
    if not l:
        ui.write(_("no patches applied\n"))
        return 1
    idx = q.series.index(q.applied[-2].name)
    q.qseries(repo, start=idx, length=1, status='A',
              summary=opts.get('summary'))

def setupheaderopts(ui, opts):
    if not opts.get('user') and opts.get('currentuser'):
        opts['user'] = ui.username()
    if not opts.get('date') and opts.get('currentdate'):
        opts['date'] = "%d %d" % util.makedate()

@command("^qnew",
         [('e', 'edit', None, _('edit commit message')),
          ('f', 'force', None, _('import uncommitted changes (DEPRECATED)')),
          ('g', 'git', None, _('use git extended diff format')),
          ('U', 'currentuser', None, _('add "From: <current user>" to patch')),
          ('u', 'user', '',
           _('add "From: <USER>" to patch'), _('USER')),
          ('D', 'currentdate', None, _('add "Date: <current date>" to patch')),
          ('d', 'date', '',
           _('add "Date: <DATE>" to patch'), _('DATE'))
          ] + commands.walkopts + commands.commitopts,
         _('hg qnew [-e] [-m TEXT] [-l FILE] PATCH [FILE]...'))
def new(ui, repo, patch, *args, **opts):
    """create a new patch

    qnew creates a new patch on top of the currently-applied patch (if
    any). The patch will be initialized with any outstanding changes
    in the working directory. You may also use -I/--include,
    -X/--exclude, and/or a list of files after the patch name to add
    only changes to matching files to the new patch, leaving the rest
    as uncommitted modifications.

    -u/--user and -d/--date can be used to set the (given) user and
    date, respectively. -U/--currentuser and -D/--currentdate set user
    to current user and date to current date.

    -e/--edit, -m/--message or -l/--logfile set the patch header as
    well as the commit message. If none is specified, the header is
    empty and the commit message is '[mq]: PATCH'.

    Use the -g/--git option to keep the patch in the git extended diff
    format. Read the diffs help topic for more information on why this
    is important for preserving permission changes and copy/rename
    information.

    Returns 0 on successful creation of a new patch.
    """
    msg = cmdutil.logmessage(ui, opts)
    def getmsg():
        return ui.edit(msg, opts.get('user') or ui.username())
    q = repo.mq
    opts['msg'] = msg
    if opts.get('edit'):
        opts['msg'] = getmsg
    else:
        opts['msg'] = msg
    setupheaderopts(ui, opts)
    q.new(repo, patch, *args, **opts)
    q.savedirty()
    return 0

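# Illustrative usage of qnew (defined above), not part of the module; the
# patch name and file path are hypothetical:
#
#   hg qnew -U -e -g fix-encoding.patch src/encoding.py
#
# This starts a patch containing only the outstanding changes to the named
# file, records the current user, and opens an editor for the message.
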
@command("^qrefresh",
         [('e', 'edit', None, _('edit commit message')),
          ('g', 'git', None, _('use git extended diff format')),
          ('s', 'short', None,
           _('refresh only files already in the patch and specified files')),
          ('U', 'currentuser', None,
           _('add/update author field in patch with current user')),
          ('u', 'user', '',
           _('add/update author field in patch with given user'), _('USER')),
          ('D', 'currentdate', None,
           _('add/update date field in patch with current date')),
          ('d', 'date', '',
           _('add/update date field in patch with given date'), _('DATE'))
          ] + commands.walkopts + commands.commitopts,
         _('hg qrefresh [-I] [-X] [-e] [-m TEXT] [-l FILE] [-s] [FILE]...'))
def refresh(ui, repo, *pats, **opts):
    """update the current patch

    If any file patterns are provided, the refreshed patch will
    contain only the modifications that match those patterns; the
    remaining modifications will remain in the working directory.

    If -s/--short is specified, files currently included in the patch
    will be refreshed just like matched files and remain in the patch.

    If -e/--edit is specified, Mercurial will start your configured editor for
    you to enter a message. In case qrefresh fails, you will find a backup of
    your message in ``.hg/last-message.txt``.

    hg add/remove/copy/rename work as usual, though you might want to
    use git-style patches (-g/--git or [diff] git=1) to track copies
    and renames. See the diffs help topic for more information on the
    git diff format.

    Returns 0 on success.
    """
    q = repo.mq
    message = cmdutil.logmessage(ui, opts)
    if opts.get('edit'):
        if not q.applied:
            ui.write(_("no patches applied\n"))
            return 1
        if message:
            raise util.Abort(_('option "-e" incompatible with "-m" or "-l"'))
        patch = q.applied[-1].name
        ph = patchheader(q.join(patch), q.plainmode)
        message = ui.edit('\n'.join(ph.message), ph.user or ui.username())
        # We don't want to lose the patch message if qrefresh fails (issue2062)
        repo.savecommitmessage(message)
    setupheaderopts(ui, opts)
    wlock = repo.wlock()
    try:
        ret = q.refresh(repo, pats, msg=message, **opts)
        q.savedirty()
        return ret
    finally:
        wlock.release()

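# Illustrative usage of qrefresh (defined above), not part of the module:
#
#   hg qrefresh          # fold all outstanding changes into the top patch
#   hg qrefresh -s -e    # touch only files already in the patch, edit message
#
# The flags are the ones documented in the @command table; if the edit step
# fails, the message is kept in .hg/last-message.txt as the docstring notes.
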
@command("^qdiff",
         commands.diffopts + commands.diffopts2 + commands.walkopts,
         _('hg qdiff [OPTION]... [FILE]...'))
def diff(ui, repo, *pats, **opts):
    """diff of the current patch and subsequent modifications

    Shows a diff which includes the current patch as well as any
    changes which have been made in the working directory since the
    last refresh (thus showing what the current patch would become
    after a qrefresh).

    Use :hg:`diff` if you only want to see the changes made since the
    last qrefresh, or :hg:`export qtip` if you want to see changes
    made by the current patch without including changes made since the
    qrefresh.

    Returns 0 on success.
    """
    repo.mq.diff(repo, pats, opts)
    return 0

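# Illustrative comparison of the three views mentioned in the qdiff docstring,
# not part of the module:
#
#   hg qdiff          # current patch plus unrefreshed working-directory changes
#   hg diff           # only changes made since the last qrefresh
#   hg export qtip    # only what the current patch already records
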
@command('qfold',
         [('e', 'edit', None, _('edit patch header')),
          ('k', 'keep', None, _('keep folded patch files')),
         ] + commands.commitopts,
         _('hg qfold [-e] [-k] [-m TEXT] [-l FILE] PATCH...'))
def fold(ui, repo, *files, **opts):
    """fold the named patches into the current patch

    Patches must not yet be applied. Each patch will be successively
    applied to the current patch in the order given. If all the
    patches apply successfully, the current patch will be refreshed
    with the new cumulative patch, and the folded patches will be
    deleted. With -k/--keep, the folded patch files will not be
    removed afterwards.

    The header for each folded patch will be concatenated with the
    current patch header, separated by a line of ``* * *``.

    Returns 0 on success."""
    q = repo.mq
    if not files:
        raise util.Abort(_('qfold requires at least one patch name'))
    if not q.checktoppatch(repo)[0]:
        raise util.Abort(_('no patches applied'))
    q.checklocalchanges(repo)

    message = cmdutil.logmessage(ui, opts)
    if opts.get('edit'):
        if message:
            raise util.Abort(_('option "-e" incompatible with "-m" or "-l"'))

    parent = q.lookup('qtip')
    patches = []
    messages = []
    for f in files:
        p = q.lookup(f)
        if p in patches or p == parent:
            ui.warn(_('Skipping already folded patch %s\n') % p)
        if q.isapplied(p):
            raise util.Abort(_('qfold cannot fold already applied patch %s')
                             % p)
        patches.append(p)

    for p in patches:
        if not message:
            ph = patchheader(q.join(p), q.plainmode)
            if ph.message:
                messages.append(ph.message)
        pf = q.join(p)
        (patchsuccess, files, fuzz) = q.patch(repo, pf)
        if not patchsuccess:
            raise util.Abort(_('error folding patch %s') % p)

    if not message:
        ph = patchheader(q.join(parent), q.plainmode)
        message, user = ph.message, ph.user
        for msg in messages:
            message.append('* * *')
            message.extend(msg)
        message = '\n'.join(message)

    if opts.get('edit'):
        message = ui.edit(message, user or ui.username())

    diffopts = q.patchopts(q.diffopts(), *patches)
    wlock = repo.wlock()
    try:
        q.refresh(repo, msg=message, git=diffopts.git)
        q.delete(repo, patches, opts)
        q.savedirty()
    finally:
        wlock.release()

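# Illustrative usage of qfold (defined above), not part of the module; the
# patch names are hypothetical:
#
#   hg qfold -e second.patch third.patch
#
# Both patches are applied onto the current top patch, their headers are
# joined with '* * *', and the folded patch files are deleted unless
# -k/--keep is given.
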
@command("qgoto",
         [('', 'keep-changes', None,
           _('tolerate non-conflicting local changes')),
          ('f', 'force', None, _('overwrite any local changes')),
          ('', 'no-backup', None, _('do not save backup copies of files'))],
         _('hg qgoto [OPTION]... PATCH'))
def goto(ui, repo, patch, **opts):
    '''push or pop patches until named patch is at top of stack

    Returns 0 on success.'''
    opts = fixkeepchangesopts(ui, opts)
    q = repo.mq
    patch = q.lookup(patch)
    nobackup = opts.get('no_backup')
    keepchanges = opts.get('keep_changes')
    if q.isapplied(patch):
        ret = q.pop(repo, patch, force=opts.get('force'), nobackup=nobackup,
                    keepchanges=keepchanges)
    else:
        ret = q.push(repo, patch, force=opts.get('force'), nobackup=nobackup,
                     keepchanges=keepchanges)
    q.savedirty()
    return ret

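# Illustrative usage of qgoto (defined above), not part of the module; the
# patch name is hypothetical:
#
#   hg qgoto feature-x.patch                  # push or pop until it is on top
#   hg qgoto --keep-changes feature-x.patch   # tolerate non-conflicting edits
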
@command("qguard",
         [('l', 'list', None, _('list all patches and guards')),
          ('n', 'none', None, _('drop all guards'))],
         _('hg qguard [-l] [-n] [PATCH] [-- [+GUARD]... [-GUARD]...]'))
def guard(ui, repo, *args, **opts):
    '''set or print guards for a patch

    Guards control whether a patch can be pushed. A patch with no
    guards is always pushed. A patch with a positive guard ("+foo") is
    pushed only if the :hg:`qselect` command has activated it. A patch with
    a negative guard ("-foo") is never pushed if the :hg:`qselect` command
    has activated it.

    With no arguments, print the currently active guards.
    With arguments, set guards for the named patch.

    .. note::
       Specifying negative guards now requires '--'.

    To set guards on another patch::

      hg qguard other.patch -- +2.6.17 -stable

    Returns 0 on success.
    '''
    def status(idx):
        guards = q.seriesguards[idx] or ['unguarded']
        if q.series[idx] in applied:
            state = 'applied'
        elif q.pushable(idx)[0]:
            state = 'unapplied'
        else:
            state = 'guarded'
        label = 'qguard.patch qguard.%s qseries.%s' % (state, state)
        ui.write('%s: ' % ui.label(q.series[idx], label))

        for i, guard in enumerate(guards):
            if guard.startswith('+'):
                ui.write(guard, label='qguard.positive')
            elif guard.startswith('-'):
                ui.write(guard, label='qguard.negative')
            else:
                ui.write(guard, label='qguard.unguarded')
            if i != len(guards) - 1:
                ui.write(' ')
        ui.write('\n')
    q = repo.mq
    applied = set(p.name for p in q.applied)
    patch = None
    args = list(args)
    if opts.get('list'):
        if args or opts.get('none'):
            raise util.Abort(_('cannot mix -l/--list with options or '
                               'arguments'))
        for i in xrange(len(q.series)):
            status(i)
        return
    if not args or args[0][0:1] in '-+':
        if not q.applied:
            raise util.Abort(_('no patches applied'))
        patch = q.applied[-1].name
    if patch is None and args[0][0:1] not in '-+':
        patch = args.pop(0)
    if patch is None:
        raise util.Abort(_('no patch to work with'))
    if args or opts.get('none'):
        idx = q.findseries(patch)
        if idx is None:
            raise util.Abort(_('no patch named %s') % patch)
        q.setguards(idx, args)
        q.savedirty()
    else:
        status(q.series.index(q.lookup(patch)))

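# Illustrative pairing of qguard with qselect (defined later in this file),
# not part of the module; the guard and patch names are hypothetical:
#
#   hg qguard experimental.patch -- +experimental
#   hg qselect experimental     # the guarded patch is now pushable
#
# The '--' separates the patch name from the guard list and is mandatory when
# negative guards (e.g. -stable) are given, as the docstring above notes.
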
@command("qheader", [], _('hg qheader [PATCH]'))
def header(ui, repo, patch=None):
    """print the header of the topmost or specified patch

    Returns 0 on success."""
    q = repo.mq

    if patch:
        patch = q.lookup(patch)
    else:
        if not q.applied:
            ui.write(_('no patches applied\n'))
            return 1
        patch = q.lookup('qtip')
    ph = patchheader(q.join(patch), q.plainmode)

    ui.write('\n'.join(ph.message) + '\n')

def lastsavename(path):
    (directory, base) = os.path.split(path)
    names = os.listdir(directory)
    namere = re.compile("%s.([0-9]+)" % base)
    maxindex = None
    maxname = None
    for f in names:
        m = namere.match(f)
        if m:
            index = int(m.group(1))
            if maxindex is None or index > maxindex:
                maxindex = index
                maxname = f
    if maxname:
        return (os.path.join(directory, maxname), maxindex)
    return (None, None)

def savename(path):
    (last, index) = lastsavename(path)
    if last is None:
        index = 0
    newpath = path + ".%d" % (index + 1)
    return newpath

@command("^qpush",
         [('', 'keep-changes', None,
           _('tolerate non-conflicting local changes')),
          ('f', 'force', None, _('apply on top of local changes')),
          ('e', 'exact', None,
           _('apply the target patch to its recorded parent')),
          ('l', 'list', None, _('list patch name in commit text')),
          ('a', 'all', None, _('apply all patches')),
          ('m', 'merge', None, _('merge from another queue (DEPRECATED)')),
          ('n', 'name', '',
           _('merge queue name (DEPRECATED)'), _('NAME')),
          ('', 'move', None,
           _('reorder patch series and apply only the patch')),
          ('', 'no-backup', None, _('do not save backup copies of files'))],
         _('hg qpush [-f] [-l] [-a] [--move] [PATCH | INDEX]'))
def push(ui, repo, patch=None, **opts):
    """push the next patch onto the stack

    By default, abort if the working directory contains uncommitted
    changes. With --keep-changes, abort only if the uncommitted files
    overlap with patched files. With -f/--force, backup and patch over
    uncommitted changes.

    Return 0 on success.
    """
    q = repo.mq
    mergeq = None

    opts = fixkeepchangesopts(ui, opts)
    if opts.get('merge'):
        if opts.get('name'):
            newpath = repo.join(opts.get('name'))
        else:
            newpath, i = lastsavename(q.path)
        if not newpath:
            ui.warn(_("no saved queues found, please use -n\n"))
            return 1
        mergeq = queue(ui, repo.path, newpath)
        ui.warn(_("merging with queue at: %s\n") % mergeq.path)
    ret = q.push(repo, patch, force=opts.get('force'), list=opts.get('list'),
                 mergeq=mergeq, all=opts.get('all'), move=opts.get('move'),
                 exact=opts.get('exact'), nobackup=opts.get('no_backup'),
                 keepchanges=opts.get('keep_changes'))
    return ret

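# Illustrative usage of qpush (defined above), not part of the module; the
# patch name is hypothetical:
#
#   hg qpush                        # apply the next patch in the series
#   hg qpush -a                     # apply every remaining patch
#   hg qpush --move bugfix.patch    # reorder the series, apply just that patch
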
@command("^qpop",
         [('a', 'all', None, _('pop all patches')),
          ('n', 'name', '',
           _('queue name to pop (DEPRECATED)'), _('NAME')),
          ('', 'keep-changes', None,
           _('tolerate non-conflicting local changes')),
          ('f', 'force', None, _('forget any local changes to patched files')),
          ('', 'no-backup', None, _('do not save backup copies of files'))],
         _('hg qpop [-a] [-f] [PATCH | INDEX]'))
def pop(ui, repo, patch=None, **opts):
    """pop the current patch off the stack

    Without argument, pops off the top of the patch stack. If given a
    patch name, keeps popping off patches until the named patch is at
    the top of the stack.

    By default, abort if the working directory contains uncommitted
    changes. With --keep-changes, abort only if the uncommitted files
    overlap with patched files. With -f/--force, backup and discard
    changes made to such files.

    Return 0 on success.
    """
    opts = fixkeepchangesopts(ui, opts)
    localupdate = True
    if opts.get('name'):
        q = queue(ui, repo.path, repo.join(opts.get('name')))
        ui.warn(_('using patch queue: %s\n') % q.path)
        localupdate = False
    else:
        q = repo.mq
    ret = q.pop(repo, patch, force=opts.get('force'), update=localupdate,
                all=opts.get('all'), nobackup=opts.get('no_backup'),
                keepchanges=opts.get('keep_changes'))
    q.savedirty()
    return ret

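# Illustrative usage of qpop (defined above), not part of the module; the
# patch name is hypothetical:
#
#   hg qpop                              # unapply the top patch
#   hg qpop -a                           # unapply the whole stack
#   hg qpop --keep-changes some.patch    # pop until some.patch is on top,
#                                        # keeping non-conflicting local edits
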
@command("qrename|qmv", [], _('hg qrename PATCH1 [PATCH2]'))
def rename(ui, repo, patch, name=None, **opts):
    """rename a patch

    With one argument, renames the current patch to PATCH1.
    With two arguments, renames PATCH1 to PATCH2.

    Returns 0 on success."""
    q = repo.mq
    if not name:
        name = patch
        patch = None

    if patch:
        patch = q.lookup(patch)
    else:
        if not q.applied:
            ui.write(_('no patches applied\n'))
            return
        patch = q.lookup('qtip')
    absdest = q.join(name)
    if os.path.isdir(absdest):
        name = normname(os.path.join(name, os.path.basename(patch)))
        absdest = q.join(name)
    q.checkpatchname(name)

    ui.note(_('renaming %s to %s\n') % (patch, name))
    i = q.findseries(patch)
    guards = q.guard_re.findall(q.fullseries[i])
    q.fullseries[i] = name + ''.join([' #' + g for g in guards])
    q.parseseries()
    q.seriesdirty = True

    info = q.isapplied(patch)
    if info:
        q.applied[info[0]] = statusentry(info[1], name)
        q.applieddirty = True

    destdir = os.path.dirname(absdest)
    if not os.path.isdir(destdir):
        os.makedirs(destdir)
    util.rename(q.join(patch), absdest)
    r = q.qrepo()
    if r and patch in r.dirstate:
        wctx = r[None]
        wlock = r.wlock()
        try:
            if r.dirstate[patch] == 'a':
                r.dirstate.drop(patch)
                r.dirstate.add(name)
            else:
                wctx.copy(patch, name)
                wctx.forget([patch])
        finally:
            wlock.release()

    q.savedirty()

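# Illustrative usage of qrename (defined above), not part of the module; the
# patch names are hypothetical:
#
#   hg qrename better-name.patch              # rename the current top patch
#   hg qrename old-name.patch new-name.patch  # rename a specific patch
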
@command("qrestore",
         [('d', 'delete', None, _('delete save entry')),
          ('u', 'update', None, _('update queue working directory'))],
         _('hg qrestore [-d] [-u] REV'))
def restore(ui, repo, rev, **opts):
    """restore the queue state saved by a revision (DEPRECATED)

    This command is deprecated, use :hg:`rebase` instead."""
    rev = repo.lookup(rev)
    q = repo.mq
    q.restore(repo, rev, delete=opts.get('delete'),
              qupdate=opts.get('update'))
    q.savedirty()
    return 0

@command("qsave",
         [('c', 'copy', None, _('copy patch directory')),
          ('n', 'name', '',
           _('copy directory name'), _('NAME')),
          ('e', 'empty', None, _('clear queue status file')),
          ('f', 'force', None, _('force copy'))] + commands.commitopts,
         _('hg qsave [-m TEXT] [-l FILE] [-c] [-n NAME] [-e] [-f]'))
def save(ui, repo, **opts):
    """save current queue state (DEPRECATED)

    This command is deprecated, use :hg:`rebase` instead."""
    q = repo.mq
    message = cmdutil.logmessage(ui, opts)
    ret = q.save(repo, msg=message)
    if ret:
        return ret
    q.savedirty() # save to .hg/patches before copying
    if opts.get('copy'):
        path = q.path
        if opts.get('name'):
            newpath = os.path.join(q.basepath, opts.get('name'))
            if os.path.exists(newpath):
                if not os.path.isdir(newpath):
                    raise util.Abort(_('destination %s exists and is not '
                                       'a directory') % newpath)
                if not opts.get('force'):
                    raise util.Abort(_('destination %s exists, '
                                       'use -f to force') % newpath)
        else:
            newpath = savename(path)
        ui.warn(_("copy %s to %s\n") % (path, newpath))
        util.copyfiles(path, newpath)
    if opts.get('empty'):
        del q.applied[:]
        q.applieddirty = True
        q.savedirty()
    return 0

@command("strip",
         [
          ('r', 'rev', [], _('strip specified revision (optional, '
                             'can specify revisions without this '
                             'option)'), _('REV')),
          ('f', 'force', None, _('force removal of changesets, discard '
                                 'uncommitted changes (no backup)')),
          ('b', 'backup', None, _('bundle only changesets with local revision'
                                  ' number greater than REV which are not'
                                  ' descendants of REV (DEPRECATED)')),
          ('', 'no-backup', None, _('no backups')),
          ('', 'nobackup', None, _('no backups (DEPRECATED)')),
          ('n', '', None, _('ignored (DEPRECATED)')),
          ('k', 'keep', None, _("do not modify working copy during strip")),
          ('B', 'bookmark', '', _("remove revs only reachable from given"
                                  " bookmark"))],
         _('hg strip [-k] [-f] [-n] [-B bookmark] REV...'))
def strip(ui, repo, *revs, **opts):
    """strip changesets and all their descendants from the repository

    The strip command removes the specified changesets and all their
    descendants. If the working directory has uncommitted changes, the
    operation is aborted unless the --force flag is supplied, in which
    case changes will be discarded.

    If a parent of the working directory is stripped, then the working
    directory will automatically be updated to the most recent
    available ancestor of the stripped parent after the operation
    completes.

    Any stripped changesets are stored in ``.hg/strip-backup`` as a
    bundle (see :hg:`help bundle` and :hg:`help unbundle`). They can
    be restored by running :hg:`unbundle .hg/strip-backup/BUNDLE`,
    where BUNDLE is the bundle file created by the strip. Note that
    the local revision numbers will in general be different after the
    restore.

    Use the --no-backup option to discard the backup bundle once the
    operation completes.

    Return 0 on success.
    """
    backup = 'all'
    if opts.get('backup'):
        backup = 'strip'
    elif opts.get('no_backup') or opts.get('nobackup'):
        backup = 'none'

    cl = repo.changelog
    revs = list(revs) + opts.get('rev')
    revs = set(scmutil.revrange(repo, revs))

    if opts.get('bookmark'):
        mark = opts.get('bookmark')
        marks = repo._bookmarks
        if mark not in marks:
            raise util.Abort(_("bookmark '%s' not found") % mark)

        # If the requested bookmark is not the only one pointing to
        # a revision we have to only delete the bookmark and not strip
        # anything. revsets cannot detect that case.
        uniquebm = True
        for m, n in marks.iteritems():
            if m != mark and n == repo[mark].node():
                uniquebm = False
                break
        if uniquebm:
            rsrevs = repo.revs("ancestors(bookmark(%s)) - "
                               "ancestors(head() and not bookmark(%s)) - "
                               "ancestors(bookmark() and not bookmark(%s))",
                               mark, mark, mark)
            revs.update(set(rsrevs))
2958 del marks[mark]
2959 del marks[mark]
2959 repo._writebookmarks(mark)
2960 repo._writebookmarks(mark)
2960 ui.write(_("bookmark '%s' deleted\n") % mark)
2961 ui.write(_("bookmark '%s' deleted\n") % mark)
2961
2962
2962 if not revs:
2963 if not revs:
2963 raise util.Abort(_('empty revision set'))
2964 raise util.Abort(_('empty revision set'))
2964
2965
2965 descendants = set(cl.descendants(*revs))
2966 descendants = set(cl.descendants(*revs))
2966 strippedrevs = revs.union(descendants)
2967 strippedrevs = revs.union(descendants)
2967 roots = revs.difference(descendants)
2968 roots = revs.difference(descendants)
2968
2969
2969 update = False
2970 update = False
2970 # if one of the wdir parent is stripped we'll need
2971 # if one of the wdir parent is stripped we'll need
2971 # to update away to an earlier revision
2972 # to update away to an earlier revision
2972 for p in repo.dirstate.parents():
2973 for p in repo.dirstate.parents():
2973 if p != nullid and cl.rev(p) in strippedrevs:
2974 if p != nullid and cl.rev(p) in strippedrevs:
2974 update = True
2975 update = True
2975 break
2976 break
2976
2977
2977 rootnodes = set(cl.node(r) for r in roots)
2978 rootnodes = set(cl.node(r) for r in roots)
2978
2979
2979 q = repo.mq
2980 q = repo.mq
2980 if q.applied:
2981 if q.applied:
2981 # refresh queue state if we're about to strip
2982 # refresh queue state if we're about to strip
2982 # applied patches
2983 # applied patches
2983 if cl.rev(repo.lookup('qtip')) in strippedrevs:
2984 if cl.rev(repo.lookup('qtip')) in strippedrevs:
2984 q.applieddirty = True
2985 q.applieddirty = True
2985 start = 0
2986 start = 0
2986 end = len(q.applied)
2987 end = len(q.applied)
2987 for i, statusentry in enumerate(q.applied):
2988 for i, statusentry in enumerate(q.applied):
2988 if statusentry.node in rootnodes:
2989 if statusentry.node in rootnodes:
2989 # if one of the stripped roots is an applied
2990 # if one of the stripped roots is an applied
2990 # patch, only part of the queue is stripped
2991 # patch, only part of the queue is stripped
2991 start = i
2992 start = i
2992 break
2993 break
2993 del q.applied[start:end]
2994 del q.applied[start:end]
2994 q.savedirty()
2995 q.savedirty()
2995
2996
2996 revs = list(rootnodes)
2997 revs = list(rootnodes)
2997 if update and opts.get('keep'):
2998 if update and opts.get('keep'):
2998 wlock = repo.wlock()
2999 wlock = repo.wlock()
2999 try:
3000 try:
3000 urev = repo.mq.qparents(repo, revs[0])
3001 urev = repo.mq.qparents(repo, revs[0])
3001 repo.dirstate.rebuild(urev, repo[urev].manifest())
3002 repo.dirstate.rebuild(urev, repo[urev].manifest())
3002 repo.dirstate.write()
3003 repo.dirstate.write()
3003 update = False
3004 update = False
3004 finally:
3005 finally:
3005 wlock.release()
3006 wlock.release()
3006
3007
3007 repo.mq.strip(repo, revs, backup=backup, update=update,
3008 repo.mq.strip(repo, revs, backup=backup, update=update,
3008 force=opts.get('force'))
3009 force=opts.get('force'))
3009
3010
3010 if opts.get('bookmark'):
3011 if opts.get('bookmark'):
3011 del marks[mark]
3012 del marks[mark]
3012 repo._writebookmarks(marks)
3013 repo._writebookmarks(marks)
3013 ui.write(_("bookmark '%s' deleted\n") % mark)
3014 ui.write(_("bookmark '%s' deleted\n") % mark)
3014
3015
3015 return 0
3016 return 0
3016
3017
3017 @command("qselect",
3018 @command("qselect",
3018 [('n', 'none', None, _('disable all guards')),
3019 [('n', 'none', None, _('disable all guards')),
3019 ('s', 'series', None, _('list all guards in series file')),
3020 ('s', 'series', None, _('list all guards in series file')),
3020 ('', 'pop', None, _('pop to before first guarded applied patch')),
3021 ('', 'pop', None, _('pop to before first guarded applied patch')),
3021 ('', 'reapply', None, _('pop, then reapply patches'))],
3022 ('', 'reapply', None, _('pop, then reapply patches'))],
3022 _('hg qselect [OPTION]... [GUARD]...'))
3023 _('hg qselect [OPTION]... [GUARD]...'))
3023 def select(ui, repo, *args, **opts):
3024 def select(ui, repo, *args, **opts):
3024 '''set or print guarded patches to push
3025 '''set or print guarded patches to push
3025
3026
3026 Use the :hg:`qguard` command to set or print guards on a patch, then use
3027 Use the :hg:`qguard` command to set or print guards on a patch, then use
3027 qselect to tell mq which guards to use. A patch will be pushed if
3028 qselect to tell mq which guards to use. A patch will be pushed if
3028 it has no guards or any positive guards match the currently
3029 it has no guards or any positive guards match the currently
3029 selected guard, but will not be pushed if any negative guards
3030 selected guard, but will not be pushed if any negative guards
3030 match the current guard. For example::
3031 match the current guard. For example::
3031
3032
3032 qguard foo.patch -- -stable (negative guard)
3033 qguard foo.patch -- -stable (negative guard)
3033 qguard bar.patch +stable (positive guard)
3034 qguard bar.patch +stable (positive guard)
3034 qselect stable
3035 qselect stable
3035
3036
3036 This activates the "stable" guard. mq will skip foo.patch (because
3037 This activates the "stable" guard. mq will skip foo.patch (because
3037 it has a negative match) but push bar.patch (because it has a
3038 it has a negative match) but push bar.patch (because it has a
3038 positive match).
3039 positive match).
3039
3040
3040 With no arguments, prints the currently active guards.
3041 With no arguments, prints the currently active guards.
3041 With one argument, sets the active guard.
3042 With one argument, sets the active guard.
3042
3043
3043 Use -n/--none to deactivate guards (no other arguments needed).
3044 Use -n/--none to deactivate guards (no other arguments needed).
3044 When no guards are active, patches with positive guards are
3045 When no guards are active, patches with positive guards are
3045 skipped and patches with negative guards are pushed.
3046 skipped and patches with negative guards are pushed.
3046
3047
3047 qselect can change the guards on applied patches. It does not pop
3048 qselect can change the guards on applied patches. It does not pop
3048 guarded patches by default. Use --pop to pop back to the last
3049 guarded patches by default. Use --pop to pop back to the last
3049 applied patch that is not guarded. Use --reapply (which implies
3050 applied patch that is not guarded. Use --reapply (which implies
3050 --pop) to push back to the current patch afterwards, but skip
3051 --pop) to push back to the current patch afterwards, but skip
3051 guarded patches.
3052 guarded patches.
3052
3053
3053 Use -s/--series to print a list of all guards in the series file
3054 Use -s/--series to print a list of all guards in the series file
3054 (no other arguments needed). Use -v for more information.
3055 (no other arguments needed). Use -v for more information.
3055
3056
3056 Returns 0 on success.'''
3057 Returns 0 on success.'''
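# A minimal illustrative guard session (hypothetical patch and guard names):
#   hg qguard feature-x.patch +experimental   # guard the patch
#   hg qselect experimental                   # activate the "experimental" guard
#   hg qselect --pop                          # pop back below any guarded applied patch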
3057
3058
3058 q = repo.mq
3059 q = repo.mq
3059 guards = q.active()
3060 guards = q.active()
3060 if args or opts.get('none'):
3061 if args or opts.get('none'):
3061 old_unapplied = q.unapplied(repo)
3062 old_unapplied = q.unapplied(repo)
3062 old_guarded = [i for i in xrange(len(q.applied)) if
3063 old_guarded = [i for i in xrange(len(q.applied)) if
3063 not q.pushable(i)[0]]
3064 not q.pushable(i)[0]]
3064 q.setactive(args)
3065 q.setactive(args)
3065 q.savedirty()
3066 q.savedirty()
3066 if not args:
3067 if not args:
3067 ui.status(_('guards deactivated\n'))
3068 ui.status(_('guards deactivated\n'))
3068 if not opts.get('pop') and not opts.get('reapply'):
3069 if not opts.get('pop') and not opts.get('reapply'):
3069 unapplied = q.unapplied(repo)
3070 unapplied = q.unapplied(repo)
3070 guarded = [i for i in xrange(len(q.applied))
3071 guarded = [i for i in xrange(len(q.applied))
3071 if not q.pushable(i)[0]]
3072 if not q.pushable(i)[0]]
3072 if len(unapplied) != len(old_unapplied):
3073 if len(unapplied) != len(old_unapplied):
3073 ui.status(_('number of unguarded, unapplied patches has '
3074 ui.status(_('number of unguarded, unapplied patches has '
3074 'changed from %d to %d\n') %
3075 'changed from %d to %d\n') %
3075 (len(old_unapplied), len(unapplied)))
3076 (len(old_unapplied), len(unapplied)))
3076 if len(guarded) != len(old_guarded):
3077 if len(guarded) != len(old_guarded):
3077 ui.status(_('number of guarded, applied patches has changed '
3078 ui.status(_('number of guarded, applied patches has changed '
3078 'from %d to %d\n') %
3079 'from %d to %d\n') %
3079 (len(old_guarded), len(guarded)))
3080 (len(old_guarded), len(guarded)))
3080 elif opts.get('series'):
3081 elif opts.get('series'):
3081 guards = {}
3082 guards = {}
3082 noguards = 0
3083 noguards = 0
3083 for gs in q.seriesguards:
3084 for gs in q.seriesguards:
3084 if not gs:
3085 if not gs:
3085 noguards += 1
3086 noguards += 1
3086 for g in gs:
3087 for g in gs:
3087 guards.setdefault(g, 0)
3088 guards.setdefault(g, 0)
3088 guards[g] += 1
3089 guards[g] += 1
3089 if ui.verbose:
3090 if ui.verbose:
3090 guards['NONE'] = noguards
3091 guards['NONE'] = noguards
3091 guards = guards.items()
3092 guards = guards.items()
3092 guards.sort(key=lambda x: x[0][1:])
3093 guards.sort(key=lambda x: x[0][1:])
3093 if guards:
3094 if guards:
3094 ui.note(_('guards in series file:\n'))
3095 ui.note(_('guards in series file:\n'))
3095 for guard, count in guards:
3096 for guard, count in guards:
3096 ui.note('%2d ' % count)
3097 ui.note('%2d ' % count)
3097 ui.write(guard, '\n')
3098 ui.write(guard, '\n')
3098 else:
3099 else:
3099 ui.note(_('no guards in series file\n'))
3100 ui.note(_('no guards in series file\n'))
3100 else:
3101 else:
3101 if guards:
3102 if guards:
3102 ui.note(_('active guards:\n'))
3103 ui.note(_('active guards:\n'))
3103 for g in guards:
3104 for g in guards:
3104 ui.write(g, '\n')
3105 ui.write(g, '\n')
3105 else:
3106 else:
3106 ui.write(_('no active guards\n'))
3107 ui.write(_('no active guards\n'))
3107 reapply = opts.get('reapply') and q.applied and q.appliedname(-1)
3108 reapply = opts.get('reapply') and q.applied and q.appliedname(-1)
3108 popped = False
3109 popped = False
3109 if opts.get('pop') or opts.get('reapply'):
3110 if opts.get('pop') or opts.get('reapply'):
3110 for i in xrange(len(q.applied)):
3111 for i in xrange(len(q.applied)):
3111 pushable, reason = q.pushable(i)
3112 pushable, reason = q.pushable(i)
3112 if not pushable:
3113 if not pushable:
3113 ui.status(_('popping guarded patches\n'))
3114 ui.status(_('popping guarded patches\n'))
3114 popped = True
3115 popped = True
3115 if i == 0:
3116 if i == 0:
3116 q.pop(repo, all=True)
3117 q.pop(repo, all=True)
3117 else:
3118 else:
3118 q.pop(repo, str(i - 1))
3119 q.pop(repo, str(i - 1))
3119 break
3120 break
3120 if popped:
3121 if popped:
3121 try:
3122 try:
3122 if reapply:
3123 if reapply:
3123 ui.status(_('reapplying unguarded patches\n'))
3124 ui.status(_('reapplying unguarded patches\n'))
3124 q.push(repo, reapply)
3125 q.push(repo, reapply)
3125 finally:
3126 finally:
3126 q.savedirty()
3127 q.savedirty()
3127
3128
3128 @command("qfinish",
3129 @command("qfinish",
3129 [('a', 'applied', None, _('finish all applied changesets'))],
3130 [('a', 'applied', None, _('finish all applied changesets'))],
3130 _('hg qfinish [-a] [REV]...'))
3131 _('hg qfinish [-a] [REV]...'))
3131 def finish(ui, repo, *revrange, **opts):
3132 def finish(ui, repo, *revrange, **opts):
3132 """move applied patches into repository history
3133 """move applied patches into repository history
3133
3134
3134 Finishes the specified revisions (corresponding to applied
3135 Finishes the specified revisions (corresponding to applied
3135 patches) by moving them out of mq control into regular repository
3136 patches) by moving them out of mq control into regular repository
3136 history.
3137 history.
3137
3138
3138 Accepts a revision range or the -a/--applied option. If --applied
3139 Accepts a revision range or the -a/--applied option. If --applied
3139 is specified, all applied mq revisions are removed from mq
3140 is specified, all applied mq revisions are removed from mq
3140 control. Otherwise, the given revisions must be at the base of the
3141 control. Otherwise, the given revisions must be at the base of the
3141 stack of applied patches.
3142 stack of applied patches.
3142
3143
3143 This can be especially useful if your changes have been applied to
3144 This can be especially useful if your changes have been applied to
3144 an upstream repository, or if you are about to push your changes
3145 an upstream repository, or if you are about to push your changes
3145 to upstream.
3146 to upstream.
3146
3147
3147 Returns 0 on success.
3148 Returns 0 on success.
3148 """
3149 """
3149 if not opts.get('applied') and not revrange:
3150 if not opts.get('applied') and not revrange:
3150 raise util.Abort(_('no revisions specified'))
3151 raise util.Abort(_('no revisions specified'))
3151 elif opts.get('applied'):
3152 elif opts.get('applied'):
3152 revrange = ('qbase::qtip',) + revrange
3153 revrange = ('qbase::qtip',) + revrange
3153
3154
3154 q = repo.mq
3155 q = repo.mq
3155 if not q.applied:
3156 if not q.applied:
3156 ui.status(_('no patches applied\n'))
3157 ui.status(_('no patches applied\n'))
3157 return 0
3158 return 0
3158
3159
3159 revs = scmutil.revrange(repo, revrange)
3160 revs = scmutil.revrange(repo, revrange)
3160 if repo['.'].rev() in revs and repo[None].files():
3161 if repo['.'].rev() in revs and repo[None].files():
3161 ui.warn(_('warning: uncommitted changes in the working directory\n'))
3162 ui.warn(_('warning: uncommitted changes in the working directory\n'))
3162 # queue.finish may change phases but leaves the responsibility to lock the
3163 # queue.finish may change phases but leaves the responsibility to lock the
3163 # repo to the caller to avoid deadlock with wlock. This command code is
3164 # repo to the caller to avoid deadlock with wlock. This command code is
3164 # responsible for this locking.
3165 # responsible for this locking.
3165 lock = repo.lock()
3166 lock = repo.lock()
3166 try:
3167 try:
3167 q.finish(repo, revs)
3168 q.finish(repo, revs)
3168 q.savedirty()
3169 q.savedirty()
3169 finally:
3170 finally:
3170 lock.release()
3171 lock.release()
3171 return 0
3172 return 0
3172
3173
3173 @command("qqueue",
3174 @command("qqueue",
3174 [('l', 'list', False, _('list all available queues')),
3175 [('l', 'list', False, _('list all available queues')),
3175 ('', 'active', False, _('print name of active queue')),
3176 ('', 'active', False, _('print name of active queue')),
3176 ('c', 'create', False, _('create new queue')),
3177 ('c', 'create', False, _('create new queue')),
3177 ('', 'rename', False, _('rename active queue')),
3178 ('', 'rename', False, _('rename active queue')),
3178 ('', 'delete', False, _('delete reference to queue')),
3179 ('', 'delete', False, _('delete reference to queue')),
3179 ('', 'purge', False, _('delete queue, and remove patch dir')),
3180 ('', 'purge', False, _('delete queue, and remove patch dir')),
3180 ],
3181 ],
3181 _('[OPTION] [QUEUE]'))
3182 _('[OPTION] [QUEUE]'))
3182 def qqueue(ui, repo, name=None, **opts):
3183 def qqueue(ui, repo, name=None, **opts):
3183 '''manage multiple patch queues
3184 '''manage multiple patch queues
3184
3185
3185 Supports switching between different patch queues, as well as creating
3186 Supports switching between different patch queues, as well as creating
3186 new patch queues and deleting existing ones.
3187 new patch queues and deleting existing ones.
3187
3188
3188 Omitting a queue name or specifying -l/--list will show you the registered
3189 Omitting a queue name or specifying -l/--list will show you the registered
3189 queues - by default the "normal" patches queue is registered. The currently
3190 queues - by default the "normal" patches queue is registered. The currently
3190 active queue will be marked with "(active)". Specifying --active will print
3191 active queue will be marked with "(active)". Specifying --active will print
3191 only the name of the active queue.
3192 only the name of the active queue.
3192
3193
3193 To create a new queue, use -c/--create. The queue is automatically made
3194 To create a new queue, use -c/--create. The queue is automatically made
3194 active, except in the case where there are applied patches from the
3195 active, except in the case where there are applied patches from the
3195 currently active queue in the repository. In that case, the queue will only be
3196 currently active queue in the repository. In that case, the queue will only be
3196 created and switching to it will fail.
3197 created and switching to it will fail.
3197
3198
3198 To delete an existing queue, use --delete. You cannot delete the currently
3199 To delete an existing queue, use --delete. You cannot delete the currently
3199 active queue.
3200 active queue.
3200
3201
3201 Returns 0 on success.
3202 Returns 0 on success.
3202 '''
3203 '''
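# Illustrative queue management (hypothetical queue name); note that
# switching queues requires that no patches are currently applied:
#   hg qqueue --create refactor   # create a new queue and make it active
#   hg qqueue --list              # list queues; the active one is marked
#   hg qqueue patches             # switch back to the default queue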
3203 q = repo.mq
3204 q = repo.mq
3204 _defaultqueue = 'patches'
3205 _defaultqueue = 'patches'
3205 _allqueues = 'patches.queues'
3206 _allqueues = 'patches.queues'
3206 _activequeue = 'patches.queue'
3207 _activequeue = 'patches.queue'
3207
3208
3208 def _getcurrent():
3209 def _getcurrent():
3209 cur = os.path.basename(q.path)
3210 cur = os.path.basename(q.path)
3210 if cur.startswith('patches-'):
3211 if cur.startswith('patches-'):
3211 cur = cur[8:]
3212 cur = cur[8:]
3212 return cur
3213 return cur
3213
3214
3214 def _noqueues():
3215 def _noqueues():
3215 try:
3216 try:
3216 fh = repo.opener(_allqueues, 'r')
3217 fh = repo.opener(_allqueues, 'r')
3217 fh.close()
3218 fh.close()
3218 except IOError:
3219 except IOError:
3219 return True
3220 return True
3220
3221
3221 return False
3222 return False
3222
3223
3223 def _getqueues():
3224 def _getqueues():
3224 current = _getcurrent()
3225 current = _getcurrent()
3225
3226
3226 try:
3227 try:
3227 fh = repo.opener(_allqueues, 'r')
3228 fh = repo.opener(_allqueues, 'r')
3228 queues = [queue.strip() for queue in fh if queue.strip()]
3229 queues = [queue.strip() for queue in fh if queue.strip()]
3229 fh.close()
3230 fh.close()
3230 if current not in queues:
3231 if current not in queues:
3231 queues.append(current)
3232 queues.append(current)
3232 except IOError:
3233 except IOError:
3233 queues = [_defaultqueue]
3234 queues = [_defaultqueue]
3234
3235
3235 return sorted(queues)
3236 return sorted(queues)
3236
3237
3237 def _setactive(name):
3238 def _setactive(name):
3238 if q.applied:
3239 if q.applied:
3239 raise util.Abort(_('patches applied - cannot set new queue active'))
3240 raise util.Abort(_('patches applied - cannot set new queue active'))
3240 _setactivenocheck(name)
3241 _setactivenocheck(name)
3241
3242
3242 def _setactivenocheck(name):
3243 def _setactivenocheck(name):
3243 fh = repo.opener(_activequeue, 'w')
3244 fh = repo.opener(_activequeue, 'w')
3244 if name != 'patches':
3245 if name != 'patches':
3245 fh.write(name)
3246 fh.write(name)
3246 fh.close()
3247 fh.close()
3247
3248
3248 def _addqueue(name):
3249 def _addqueue(name):
3249 fh = repo.opener(_allqueues, 'a')
3250 fh = repo.opener(_allqueues, 'a')
3250 fh.write('%s\n' % (name,))
3251 fh.write('%s\n' % (name,))
3251 fh.close()
3252 fh.close()
3252
3253
3253 def _queuedir(name):
3254 def _queuedir(name):
3254 if name == 'patches':
3255 if name == 'patches':
3255 return repo.join('patches')
3256 return repo.join('patches')
3256 else:
3257 else:
3257 return repo.join('patches-' + name)
3258 return repo.join('patches-' + name)
3258
3259
3259 def _validname(name):
3260 def _validname(name):
3260 for n in name:
3261 for n in name:
3261 if n in ':\\/.':
3262 if n in ':\\/.':
3262 return False
3263 return False
3263 return True
3264 return True
3264
3265
3265 def _delete(name):
3266 def _delete(name):
3266 if name not in existing:
3267 if name not in existing:
3267 raise util.Abort(_('cannot delete queue that does not exist'))
3268 raise util.Abort(_('cannot delete queue that does not exist'))
3268
3269
3269 current = _getcurrent()
3270 current = _getcurrent()
3270
3271
3271 if name == current:
3272 if name == current:
3272 raise util.Abort(_('cannot delete currently active queue'))
3273 raise util.Abort(_('cannot delete currently active queue'))
3273
3274
3274 fh = repo.opener('patches.queues.new', 'w')
3275 fh = repo.opener('patches.queues.new', 'w')
3275 for queue in existing:
3276 for queue in existing:
3276 if queue == name:
3277 if queue == name:
3277 continue
3278 continue
3278 fh.write('%s\n' % (queue,))
3279 fh.write('%s\n' % (queue,))
3279 fh.close()
3280 fh.close()
3280 util.rename(repo.join('patches.queues.new'), repo.join(_allqueues))
3281 util.rename(repo.join('patches.queues.new'), repo.join(_allqueues))
3281
3282
3282 if not name or opts.get('list') or opts.get('active'):
3283 if not name or opts.get('list') or opts.get('active'):
3283 current = _getcurrent()
3284 current = _getcurrent()
3284 if opts.get('active'):
3285 if opts.get('active'):
3285 ui.write('%s\n' % (current,))
3286 ui.write('%s\n' % (current,))
3286 return
3287 return
3287 for queue in _getqueues():
3288 for queue in _getqueues():
3288 ui.write('%s' % (queue,))
3289 ui.write('%s' % (queue,))
3289 if queue == current and not ui.quiet:
3290 if queue == current and not ui.quiet:
3290 ui.write(_(' (active)\n'))
3291 ui.write(_(' (active)\n'))
3291 else:
3292 else:
3292 ui.write('\n')
3293 ui.write('\n')
3293 return
3294 return
3294
3295
3295 if not _validname(name):
3296 if not _validname(name):
3296 raise util.Abort(
3297 raise util.Abort(
3297 _('invalid queue name, may not contain the characters ":\\/."'))
3298 _('invalid queue name, may not contain the characters ":\\/."'))
3298
3299
3299 existing = _getqueues()
3300 existing = _getqueues()
3300
3301
3301 if opts.get('create'):
3302 if opts.get('create'):
3302 if name in existing:
3303 if name in existing:
3303 raise util.Abort(_('queue "%s" already exists') % name)
3304 raise util.Abort(_('queue "%s" already exists') % name)
3304 if _noqueues():
3305 if _noqueues():
3305 _addqueue(_defaultqueue)
3306 _addqueue(_defaultqueue)
3306 _addqueue(name)
3307 _addqueue(name)
3307 _setactive(name)
3308 _setactive(name)
3308 elif opts.get('rename'):
3309 elif opts.get('rename'):
3309 current = _getcurrent()
3310 current = _getcurrent()
3310 if name == current:
3311 if name == current:
3311 raise util.Abort(_('can\'t rename "%s" to its current name') % name)
3312 raise util.Abort(_('can\'t rename "%s" to its current name') % name)
3312 if name in existing:
3313 if name in existing:
3313 raise util.Abort(_('queue "%s" already exists') % name)
3314 raise util.Abort(_('queue "%s" already exists') % name)
3314
3315
3315 olddir = _queuedir(current)
3316 olddir = _queuedir(current)
3316 newdir = _queuedir(name)
3317 newdir = _queuedir(name)
3317
3318
3318 if os.path.exists(newdir):
3319 if os.path.exists(newdir):
3319 raise util.Abort(_('non-queue directory "%s" already exists') %
3320 raise util.Abort(_('non-queue directory "%s" already exists') %
3320 newdir)
3321 newdir)
3321
3322
3322 fh = repo.opener('patches.queues.new', 'w')
3323 fh = repo.opener('patches.queues.new', 'w')
3323 for queue in existing:
3324 for queue in existing:
3324 if queue == current:
3325 if queue == current:
3325 fh.write('%s\n' % (name,))
3326 fh.write('%s\n' % (name,))
3326 if os.path.exists(olddir):
3327 if os.path.exists(olddir):
3327 util.rename(olddir, newdir)
3328 util.rename(olddir, newdir)
3328 else:
3329 else:
3329 fh.write('%s\n' % (queue,))
3330 fh.write('%s\n' % (queue,))
3330 fh.close()
3331 fh.close()
3331 util.rename(repo.join('patches.queues.new'), repo.join(_allqueues))
3332 util.rename(repo.join('patches.queues.new'), repo.join(_allqueues))
3332 _setactivenocheck(name)
3333 _setactivenocheck(name)
3333 elif opts.get('delete'):
3334 elif opts.get('delete'):
3334 _delete(name)
3335 _delete(name)
3335 elif opts.get('purge'):
3336 elif opts.get('purge'):
3336 if name in existing:
3337 if name in existing:
3337 _delete(name)
3338 _delete(name)
3338 qdir = _queuedir(name)
3339 qdir = _queuedir(name)
3339 if os.path.exists(qdir):
3340 if os.path.exists(qdir):
3340 shutil.rmtree(qdir)
3341 shutil.rmtree(qdir)
3341 else:
3342 else:
3342 if name not in existing:
3343 if name not in existing:
3343 raise util.Abort(_('use --create to create a new queue'))
3344 raise util.Abort(_('use --create to create a new queue'))
3344 _setactive(name)
3345 _setactive(name)
3345
3346
3346 def mqphasedefaults(repo, roots):
3347 def mqphasedefaults(repo, roots):
3347 """callback used to set mq changeset as secret when no phase data exists"""
3348 """callback used to set mq changeset as secret when no phase data exists"""
3348 if repo.mq.applied:
3349 if repo.mq.applied:
3349 if repo.ui.configbool('mq', 'secret', False):
3350 if repo.ui.configbool('mq', 'secret', False):
3350 mqphase = phases.secret
3351 mqphase = phases.secret
3351 else:
3352 else:
3352 mqphase = phases.draft
3353 mqphase = phases.draft
3353 qbase = repo[repo.mq.applied[0].node]
3354 qbase = repo[repo.mq.applied[0].node]
3354 roots[mqphase].add(qbase.node())
3355 roots[mqphase].add(qbase.node())
3355 return roots
3356 return roots
3356
3357
3357 def reposetup(ui, repo):
3358 def reposetup(ui, repo):
3358 class mqrepo(repo.__class__):
3359 class mqrepo(repo.__class__):
3359 @util.propertycache
3360 @util.propertycache
3360 def mq(self):
3361 def mq(self):
3361 return queue(self.ui, self.path)
3362 return queue(self.ui, self.path)
3362
3363
3363 def abortifwdirpatched(self, errmsg, force=False):
3364 def abortifwdirpatched(self, errmsg, force=False):
3364 if self.mq.applied and not force:
3365 if self.mq.applied and not force:
3365 parents = self.dirstate.parents()
3366 parents = self.dirstate.parents()
3366 patches = [s.node for s in self.mq.applied]
3367 patches = [s.node for s in self.mq.applied]
3367 if parents[0] in patches or parents[1] in patches:
3368 if parents[0] in patches or parents[1] in patches:
3368 raise util.Abort(errmsg)
3369 raise util.Abort(errmsg)
3369
3370
3370 def commit(self, text="", user=None, date=None, match=None,
3371 def commit(self, text="", user=None, date=None, match=None,
3371 force=False, editor=False, extra={}):
3372 force=False, editor=False, extra={}):
3372 self.abortifwdirpatched(
3373 self.abortifwdirpatched(
3373 _('cannot commit over an applied mq patch'),
3374 _('cannot commit over an applied mq patch'),
3374 force)
3375 force)
3375
3376
3376 return super(mqrepo, self).commit(text, user, date, match, force,
3377 return super(mqrepo, self).commit(text, user, date, match, force,
3377 editor, extra)
3378 editor, extra)
3378
3379
3379 def checkpush(self, force, revs):
3380 def checkpush(self, force, revs):
3380 if self.mq.applied and not force:
3381 if self.mq.applied and not force:
3381 outapplied = [e.node for e in self.mq.applied]
3382 outapplied = [e.node for e in self.mq.applied]
3382 if revs:
3383 if revs:
3383 # Assume applied patches have no non-patch descendants and
3384 # Assume applied patches have no non-patch descendants and
3384 # are not on remote already. Filter out any changesets that are not
3385 # are not on remote already. Filter out any changesets that are not
3385 # being pushed.
3386 # being pushed.
3386 heads = set(revs)
3387 heads = set(revs)
3387 for node in reversed(outapplied):
3388 for node in reversed(outapplied):
3388 if node in heads:
3389 if node in heads:
3389 break
3390 break
3390 else:
3391 else:
3391 outapplied.pop()
3392 outapplied.pop()
3392 # looking for pushed and shared changeset
3393 # looking for pushed and shared changeset
3393 for node in outapplied:
3394 for node in outapplied:
3394 if repo[node].phase() < phases.secret:
3395 if repo[node].phase() < phases.secret:
3395 raise util.Abort(_('source has mq patches applied'))
3396 raise util.Abort(_('source has mq patches applied'))
3396 # no non-secret patches pushed
3397 # no non-secret patches pushed
3397 super(mqrepo, self).checkpush(force, revs)
3398 super(mqrepo, self).checkpush(force, revs)
3398
3399
3399 def _findtags(self):
3400 def _findtags(self):
3400 '''augment tags from base class with patch tags'''
3401 '''augment tags from base class with patch tags'''
3401 result = super(mqrepo, self)._findtags()
3402 result = super(mqrepo, self)._findtags()
3402
3403
3403 q = self.mq
3404 q = self.mq
3404 if not q.applied:
3405 if not q.applied:
3405 return result
3406 return result
3406
3407
3407 mqtags = [(patch.node, patch.name) for patch in q.applied]
3408 mqtags = [(patch.node, patch.name) for patch in q.applied]
3408
3409
3409 try:
3410 try:
3410 self.changelog.rev(mqtags[-1][0])
3411 self.changelog.rev(mqtags[-1][0])
3411 except error.LookupError:
3412 except error.LookupError:
3412 self.ui.warn(_('mq status file refers to unknown node %s\n')
3413 self.ui.warn(_('mq status file refers to unknown node %s\n')
3413 % short(mqtags[-1][0]))
3414 % short(mqtags[-1][0]))
3414 return result
3415 return result
3415
3416
3416 mqtags.append((mqtags[-1][0], 'qtip'))
3417 mqtags.append((mqtags[-1][0], 'qtip'))
3417 mqtags.append((mqtags[0][0], 'qbase'))
3418 mqtags.append((mqtags[0][0], 'qbase'))
3418 mqtags.append((self.changelog.parents(mqtags[0][0])[0], 'qparent'))
3419 mqtags.append((self.changelog.parents(mqtags[0][0])[0], 'qparent'))
3419 tags = result[0]
3420 tags = result[0]
3420 for patch in mqtags:
3421 for patch in mqtags:
3421 if patch[1] in tags:
3422 if patch[1] in tags:
3422 self.ui.warn(_('Tag %s overrides mq patch of the same '
3423 self.ui.warn(_('Tag %s overrides mq patch of the same '
3423 'name\n') % patch[1])
3424 'name\n') % patch[1])
3424 else:
3425 else:
3425 tags[patch[1]] = patch[0]
3426 tags[patch[1]] = patch[0]
3426
3427
3427 return result
3428 return result
3428
3429
3429 def _branchtags(self, partial, lrev):
3430 def _branchtags(self, partial, lrev):
3430 q = self.mq
3431 q = self.mq
3431 cl = self.changelog
3432 cl = self.changelog
3432 qbase = None
3433 qbase = None
3433 if not q.applied:
3434 if not q.applied:
3434 if getattr(self, '_committingpatch', False):
3435 if getattr(self, '_committingpatch', False):
3435 # Committing a new patch, must be tip
3436 # Committing a new patch, must be tip
3436 qbase = len(cl) - 1
3437 qbase = len(cl) - 1
3437 else:
3438 else:
3438 qbasenode = q.applied[0].node
3439 qbasenode = q.applied[0].node
3439 try:
3440 try:
3440 qbase = cl.rev(qbasenode)
3441 qbase = cl.rev(qbasenode)
3441 except error.LookupError:
3442 except error.LookupError:
3442 self.ui.warn(_('mq status file refers to unknown node %s\n')
3443 self.ui.warn(_('mq status file refers to unknown node %s\n')
3443 % short(qbasenode))
3444 % short(qbasenode))
3444 if qbase is None:
3445 if qbase is None:
3445 return super(mqrepo, self)._branchtags(partial, lrev)
3446 return super(mqrepo, self)._branchtags(partial, lrev)
3446
3447
3447 start = lrev + 1
3448 start = lrev + 1
3448 if start < qbase:
3449 if start < qbase:
3449 # update the cache (excluding the patches) and save it
3450 # update the cache (excluding the patches) and save it
3450 ctxgen = (self[r] for r in xrange(lrev + 1, qbase))
3451 ctxgen = (self[r] for r in xrange(lrev + 1, qbase))
3451 self._updatebranchcache(partial, ctxgen)
3452 self._updatebranchcache(partial, ctxgen)
3452 self._writebranchcache(partial, cl.node(qbase - 1), qbase - 1)
3453 self._writebranchcache(partial, cl.node(qbase - 1), qbase - 1)
3453 start = qbase
3454 start = qbase
3454 # if start = qbase, the cache is as updated as it should be.
3455 # if start = qbase, the cache is as updated as it should be.
3455 # if start > qbase, the cache includes (part of) the patches.
3456 # if start > qbase, the cache includes (part of) the patches.
3456 # we might as well use it, but we won't save it.
3457 # we might as well use it, but we won't save it.
3457
3458
3458 # update the cache up to the tip
3459 # update the cache up to the tip
3459 ctxgen = (self[r] for r in xrange(start, len(cl)))
3460 ctxgen = (self[r] for r in xrange(start, len(cl)))
3460 self._updatebranchcache(partial, ctxgen)
3461 self._updatebranchcache(partial, ctxgen)
3461
3462
3462 return partial
3463 return partial
3463
3464
3464 if repo.local():
3465 if repo.local():
3465 repo.__class__ = mqrepo
3466 repo.__class__ = mqrepo
3466
3467
3467 repo._phasedefaults.append(mqphasedefaults)
3468 repo._phasedefaults.append(mqphasedefaults)
3468
3469
3469 def mqimport(orig, ui, repo, *args, **kwargs):
3470 def mqimport(orig, ui, repo, *args, **kwargs):
3470 if (util.safehasattr(repo, 'abortifwdirpatched')
3471 if (util.safehasattr(repo, 'abortifwdirpatched')
3471 and not kwargs.get('no_commit', False)):
3472 and not kwargs.get('no_commit', False)):
3472 repo.abortifwdirpatched(_('cannot import over an applied patch'),
3473 repo.abortifwdirpatched(_('cannot import over an applied patch'),
3473 kwargs.get('force'))
3474 kwargs.get('force'))
3474 return orig(ui, repo, *args, **kwargs)
3475 return orig(ui, repo, *args, **kwargs)
3475
3476
3476 def mqinit(orig, ui, *args, **kwargs):
3477 def mqinit(orig, ui, *args, **kwargs):
3477 mq = kwargs.pop('mq', None)
3478 mq = kwargs.pop('mq', None)
3478
3479
3479 if not mq:
3480 if not mq:
3480 return orig(ui, *args, **kwargs)
3481 return orig(ui, *args, **kwargs)
3481
3482
3482 if args:
3483 if args:
3483 repopath = args[0]
3484 repopath = args[0]
3484 if not hg.islocal(repopath):
3485 if not hg.islocal(repopath):
3485 raise util.Abort(_('only a local queue repository '
3486 raise util.Abort(_('only a local queue repository '
3486 'may be initialized'))
3487 'may be initialized'))
3487 else:
3488 else:
3488 repopath = cmdutil.findrepo(os.getcwd())
3489 repopath = cmdutil.findrepo(os.getcwd())
3489 if not repopath:
3490 if not repopath:
3490 raise util.Abort(_('there is no Mercurial repository here '
3491 raise util.Abort(_('there is no Mercurial repository here '
3491 '(.hg not found)'))
3492 '(.hg not found)'))
3492 repo = hg.repository(ui, repopath)
3493 repo = hg.repository(ui, repopath)
3493 return qinit(ui, repo, True)
3494 return qinit(ui, repo, True)
3494
3495
3495 def mqcommand(orig, ui, repo, *args, **kwargs):
3496 def mqcommand(orig, ui, repo, *args, **kwargs):
3496 """Add --mq option to operate on patch repository instead of main"""
3497 """Add --mq option to operate on patch repository instead of main"""
3497
3498
3498 # some commands do not like getting unknown options
3499 # some commands do not like getting unknown options
3499 mq = kwargs.pop('mq', None)
3500 mq = kwargs.pop('mq', None)
3500
3501
3501 if not mq:
3502 if not mq:
3502 return orig(ui, repo, *args, **kwargs)
3503 return orig(ui, repo, *args, **kwargs)
3503
3504
3504 q = repo.mq
3505 q = repo.mq
3505 r = q.qrepo()
3506 r = q.qrepo()
3506 if not r:
3507 if not r:
3507 raise util.Abort(_('no queue repository'))
3508 raise util.Abort(_('no queue repository'))
3508 return orig(r.ui, r, *args, **kwargs)
3509 return orig(r.ui, r, *args, **kwargs)
3509
3510
3510 def summary(orig, ui, repo, *args, **kwargs):
3511 def summary(orig, ui, repo, *args, **kwargs):
3511 r = orig(ui, repo, *args, **kwargs)
3512 r = orig(ui, repo, *args, **kwargs)
3512 q = repo.mq
3513 q = repo.mq
3513 m = []
3514 m = []
3514 a, u = len(q.applied), len(q.unapplied(repo))
3515 a, u = len(q.applied), len(q.unapplied(repo))
3515 if a:
3516 if a:
3516 m.append(ui.label(_("%d applied"), 'qseries.applied') % a)
3517 m.append(ui.label(_("%d applied"), 'qseries.applied') % a)
3517 if u:
3518 if u:
3518 m.append(ui.label(_("%d unapplied"), 'qseries.unapplied') % u)
3519 m.append(ui.label(_("%d unapplied"), 'qseries.unapplied') % u)
3519 if m:
3520 if m:
3520 ui.write("mq: %s\n" % ', '.join(m))
3521 ui.write("mq: %s\n" % ', '.join(m))
3521 else:
3522 else:
3522 ui.note(_("mq: (empty queue)\n"))
3523 ui.note(_("mq: (empty queue)\n"))
3523 return r
3524 return r
3524
3525
3525 def revsetmq(repo, subset, x):
3526 def revsetmq(repo, subset, x):
3526 """``mq()``
3527 """``mq()``
3527 Changesets managed by MQ.
3528 Changesets managed by MQ.
3528 """
3529 """
3529 revset.getargs(x, 0, 0, _("mq takes no arguments"))
3530 revset.getargs(x, 0, 0, _("mq takes no arguments"))
3530 applied = set([repo[r.node].rev() for r in repo.mq.applied])
3531 applied = set([repo[r.node].rev() for r in repo.mq.applied])
3531 return [r for r in subset if r in applied]
3532 return [r for r in subset if r in applied]
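# e.g. "hg log -r 'mq()'" lists the changesets currently managed by mq.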
3532
3533
3533 def extsetup(ui):
3534 def extsetup(ui):
3534 revset.symbols['mq'] = revsetmq
3535 revset.symbols['mq'] = revsetmq
3535
3536
3536 # tell hggettext to extract docstrings from these functions:
3537 # tell hggettext to extract docstrings from these functions:
3537 i18nfunctions = [revsetmq]
3538 i18nfunctions = [revsetmq]
3538
3539
3539 def uisetup(ui):
3540 def uisetup(ui):
3540 mqopt = [('', 'mq', None, _("operate on patch repository"))]
3541 mqopt = [('', 'mq', None, _("operate on patch repository"))]
3541
3542
3542 extensions.wrapcommand(commands.table, 'import', mqimport)
3543 extensions.wrapcommand(commands.table, 'import', mqimport)
3543 extensions.wrapcommand(commands.table, 'summary', summary)
3544 extensions.wrapcommand(commands.table, 'summary', summary)
3544
3545
3545 entry = extensions.wrapcommand(commands.table, 'init', mqinit)
3546 entry = extensions.wrapcommand(commands.table, 'init', mqinit)
3546 entry[1].extend(mqopt)
3547 entry[1].extend(mqopt)
3547
3548
3548 nowrap = set(commands.norepo.split(" "))
3549 nowrap = set(commands.norepo.split(" "))
3549
3550
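# Add the --mq flag to every repository-aware command, both from core
# Mercurial and from any other loaded extension (see the loop below).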
3550 def dotable(cmdtable):
3551 def dotable(cmdtable):
3551 for cmd in cmdtable.keys():
3552 for cmd in cmdtable.keys():
3552 cmd = cmdutil.parsealiases(cmd)[0]
3553 cmd = cmdutil.parsealiases(cmd)[0]
3553 if cmd in nowrap:
3554 if cmd in nowrap:
3554 continue
3555 continue
3555 entry = extensions.wrapcommand(cmdtable, cmd, mqcommand)
3556 entry = extensions.wrapcommand(cmdtable, cmd, mqcommand)
3556 entry[1].extend(mqopt)
3557 entry[1].extend(mqopt)
3557
3558
3558 dotable(commands.table)
3559 dotable(commands.table)
3559
3560
3560 for extname, extmodule in extensions.extensions():
3561 for extname, extmodule in extensions.extensions():
3561 if extmodule.__file__ != __file__:
3562 if extmodule.__file__ != __file__:
3562 dotable(getattr(extmodule, 'cmdtable', {}))
3563 dotable(getattr(extmodule, 'cmdtable', {}))
3563
3564
3564
3565
3565 colortable = {'qguard.negative': 'red',
3566 colortable = {'qguard.negative': 'red',
3566 'qguard.positive': 'yellow',
3567 'qguard.positive': 'yellow',
3567 'qguard.unguarded': 'green',
3568 'qguard.unguarded': 'green',
3568 'qseries.applied': 'blue bold underline',
3569 'qseries.applied': 'blue bold underline',
3569 'qseries.guarded': 'black bold',
3570 'qseries.guarded': 'black bold',
3570 'qseries.missing': 'red bold',
3571 'qseries.missing': 'red bold',
3571 'qseries.unapplied': 'black bold'}
3572 'qseries.unapplied': 'black bold'}
@@ -1,376 +1,378 b''
1 # notify.py - email notifications for mercurial
1 # notify.py - email notifications for mercurial
2 #
2 #
3 # Copyright 2006 Vadim Gelfer <vadim.gelfer@gmail.com>
3 # Copyright 2006 Vadim Gelfer <vadim.gelfer@gmail.com>
4 #
4 #
5 # This software may be used and distributed according to the terms of the
5 # This software may be used and distributed according to the terms of the
6 # GNU General Public License version 2 or any later version.
6 # GNU General Public License version 2 or any later version.
7
7
8 '''hooks for sending email push notifications
8 '''hooks for sending email push notifications
9
9
10 This extension lets you run hooks that send email notifications when
10 This extension lets you run hooks that send email notifications when
11 changesets are being pushed, from the sending or receiving side.
11 changesets are being pushed, from the sending or receiving side.
12
12
13 First, enable the extension as explained in :hg:`help extensions`, and
13 First, enable the extension as explained in :hg:`help extensions`, and
14 register the hook you want to run. ``incoming`` and ``changegroup`` hooks
14 register the hook you want to run. ``incoming`` and ``changegroup`` hooks
15 are run on the receiving side, while the ``outgoing`` one is for
15 are run on the receiving side, while the ``outgoing`` one is for
16 the sender::
16 the sender::
17
17
18 [hooks]
18 [hooks]
19 # one email for each incoming changeset
19 # one email for each incoming changeset
20 incoming.notify = python:hgext.notify.hook
20 incoming.notify = python:hgext.notify.hook
21 # one email for all incoming changesets
21 # one email for all incoming changesets
22 changegroup.notify = python:hgext.notify.hook
22 changegroup.notify = python:hgext.notify.hook
23
23
24 # one email for all outgoing changesets
24 # one email for all outgoing changesets
25 outgoing.notify = python:hgext.notify.hook
25 outgoing.notify = python:hgext.notify.hook
26
26
27 Now that the hooks are registered, subscribers must be assigned to
27 Now that the hooks are registered, subscribers must be assigned to
28 repositories. Use the ``[usersubs]`` section to map repositories to a
28 repositories. Use the ``[usersubs]`` section to map repositories to a
29 given email or the ``[reposubs]`` section to map emails to a single
29 given email or the ``[reposubs]`` section to map emails to a single
30 repository::
30 repository::
31
31
32 [usersubs]
32 [usersubs]
33 # key is subscriber email, value is a comma-separated list of glob
33 # key is subscriber email, value is a comma-separated list of glob
34 # patterns
34 # patterns
35 user@host = pattern
35 user@host = pattern
36
36
37 [reposubs]
37 [reposubs]
38 # key is glob pattern, value is a comma-separated list of subscriber
38 # key is glob pattern, value is a comma-separated list of subscriber
39 # emails
39 # emails
40 pattern = user@host
40 pattern = user@host
41
41
42 Glob patterns are matched against the absolute path to the repository
42 Glob patterns are matched against the absolute path to the repository
43 root. The subscriptions can be defined in their own file and
43 root. The subscriptions can be defined in their own file and
44 referenced with::
44 referenced with::
45
45
46 [notify]
46 [notify]
47 config = /path/to/subscriptionsfile
47 config = /path/to/subscriptionsfile
48
48
49 Alternatively, they can be added to Mercurial configuration files by
49 Alternatively, they can be added to Mercurial configuration files by
50 setting the previous entry to an empty value.
50 setting the previous entry to an empty value.
51
51
52 At this point, notifications should be generated but will not be sent until you
52 At this point, notifications should be generated but will not be sent until you
53 set the ``notify.test`` entry to ``False``.
53 set the ``notify.test`` entry to ``False``.
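For example, to start actually delivering mail once the output looks right::

[notify]
test = False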
54
54
55 Notification content can be tweaked with the following configuration entries:
55 Notification content can be tweaked with the following configuration entries:
56
56
57 notify.test
57 notify.test
58 If ``True``, print messages to stdout instead of sending them. Default: True.
58 If ``True``, print messages to stdout instead of sending them. Default: True.
59
59
60 notify.sources
60 notify.sources
61 Space-separated list of change sources. Notifications are sent only
61 Space-separated list of change sources. Notifications are sent only
62 if the source of the incoming or outgoing changes is listed here. Incoming
62 if the source of the incoming or outgoing changes is listed here. Incoming
63 sources can be ``serve`` for changes coming from http or ssh,
63 sources can be ``serve`` for changes coming from http or ssh,
64 ``pull`` for pulled changes, ``unbundle`` for changes added by
64 ``pull`` for pulled changes, ``unbundle`` for changes added by
65 :hg:`unbundle` or ``push`` for changes being pushed
65 :hg:`unbundle` or ``push`` for changes being pushed
66 locally. Outgoing sources are the same except for ``unbundle`` which
66 locally. Outgoing sources are the same except for ``unbundle`` which
67 is replaced by ``bundle``. Default: serve.
67 is replaced by ``bundle``. Default: serve.
68
68
69 notify.strip
69 notify.strip
70 Number of leading slashes to strip from URL paths. By default, notifications
70 Number of leading slashes to strip from URL paths. By default, notifications
71 reference repositories by their absolute path. ``notify.strip`` lets you
71 reference repositories by their absolute path. ``notify.strip`` lets you
72 turn them into relative paths. For example, ``notify.strip=3`` will change
72 turn them into relative paths. For example, ``notify.strip=3`` will change
73 ``/long/path/repository`` into ``repository``. Default: 0.
73 ``/long/path/repository`` into ``repository``. Default: 0.
74
74
75 notify.domain
75 notify.domain
76 If subscriber emails or the from email have no domain set, complete them
76 If subscriber emails or the from email have no domain set, complete them
77 with this value.
77 with this value.
78
78
79 notify.style
79 notify.style
80 Style file to use when formatting emails.
80 Style file to use when formatting emails.
81
81
82 notify.template
82 notify.template
83 Template to use when formatting emails.
83 Template to use when formatting emails.
84
84
85 notify.incoming
85 notify.incoming
86 Template to use when run as an incoming hook, overriding ``notify.template``.
86 Template to use when run as an incoming hook, overriding ``notify.template``.
87
87
88 notify.outgoing
88 notify.outgoing
89 Template to use when run as an outgoing hook, overriding ``notify.template``.
89 Template to use when run as an outgoing hook, overriding ``notify.template``.
90
90
91 notify.changegroup
91 notify.changegroup
92 Template to use when running as a changegroup hook, overriding
92 Template to use when running as a changegroup hook, overriding
93 ``notify.template``.
93 ``notify.template``.
94
94
95 notify.maxdiff
95 notify.maxdiff
96 Maximum number of diff lines to include in notification email. Set to 0
96 Maximum number of diff lines to include in notification email. Set to 0
97 to disable the diff, -1 to include all of it. Default: 300.
97 to disable the diff, -1 to include all of it. Default: 300.
98
98
99 notify.maxsubject
99 notify.maxsubject
100 Maximum number of characters in the email subject line. Default: 67.
100 Maximum number of characters in the email subject line. Default: 67.
101
101
102 notify.diffstat
102 notify.diffstat
103 Set to True to include a diffstat before diff content. Default: True.
103 Set to True to include a diffstat before diff content. Default: True.
104
104
105 notify.merge
105 notify.merge
106 If True, send notifications for merge changesets. Default: True.
106 If True, send notifications for merge changesets. Default: True.
107
107
108 notify.mbox
108 notify.mbox
109 If set, append mails to this mbox file instead of sending. Default: None.
109 If set, append mails to this mbox file instead of sending. Default: None.
110
110
111 notify.fromauthor
111 notify.fromauthor
112 If set, use the first committer of the changegroup for the "From" field of
112 If set, use the first committer of the changegroup for the "From" field of
113 the notification mail. If not set, take the user from the pushing repo.
113 the notification mail. If not set, take the user from the pushing repo.
114 Default: False.
114 Default: False.
115
115
116 If set, the following entries will also be used to customize the notifications:
116 If set, the following entries will also be used to customize the notifications:
117
117
118 email.from
118 email.from
119 Email ``From`` address to use if none can be found in generated email content.
119 Email ``From`` address to use if none can be found in generated email content.
120
120
121 web.baseurl
121 web.baseurl
122 Root repository browsing URL to combine with repository paths when making
122 Root repository browsing URL to combine with repository paths when making
123 references. See also ``notify.strip``.
123 references. See also ``notify.strip``.
124
124
125 '''
125 '''
126
126
127 from mercurial.i18n import _
127 from mercurial.i18n import _
128 from mercurial import patch, cmdutil, templater, util, mail
128 from mercurial import patch, cmdutil, templater, util, mail
129 import email.Parser, email.Errors, fnmatch, socket, time
129 import email.Parser, email.Errors, fnmatch, socket, time
130
130
131 testedwith = 'internal'
132
131 # template for single changeset can include email headers.
133 # template for single changeset can include email headers.
132 single_template = '''
134 single_template = '''
133 Subject: changeset in {webroot}: {desc|firstline|strip}
135 Subject: changeset in {webroot}: {desc|firstline|strip}
134 From: {author}
136 From: {author}
135
137
136 changeset {node|short} in {root}
138 changeset {node|short} in {root}
137 details: {baseurl}{webroot}?cmd=changeset;node={node|short}
139 details: {baseurl}{webroot}?cmd=changeset;node={node|short}
138 description:
140 description:
139 \t{desc|tabindent|strip}
141 \t{desc|tabindent|strip}
140 '''.lstrip()
142 '''.lstrip()
141
143
142 # template for multiple changesets should not contain email headers,
144 # template for multiple changesets should not contain email headers,
143 # because only first set of headers will be used and result will look
145 # because only first set of headers will be used and result will look
144 # strange.
146 # strange.
145 multiple_template = '''
147 multiple_template = '''
146 changeset {node|short} in {root}
148 changeset {node|short} in {root}
147 details: {baseurl}{webroot}?cmd=changeset;node={node|short}
149 details: {baseurl}{webroot}?cmd=changeset;node={node|short}
148 summary: {desc|firstline}
150 summary: {desc|firstline}
149 '''
151 '''
150
152
151 deftemplates = {
153 deftemplates = {
152 'changegroup': multiple_template,
154 'changegroup': multiple_template,
153 }
155 }
154
156
155 class notifier(object):
157 class notifier(object):
156 '''email notification class.'''
158 '''email notification class.'''
157
159
158 def __init__(self, ui, repo, hooktype):
160 def __init__(self, ui, repo, hooktype):
159 self.ui = ui
161 self.ui = ui
160 cfg = self.ui.config('notify', 'config')
162 cfg = self.ui.config('notify', 'config')
161 if cfg:
163 if cfg:
162 self.ui.readconfig(cfg, sections=['usersubs', 'reposubs'])
164 self.ui.readconfig(cfg, sections=['usersubs', 'reposubs'])
163 self.repo = repo
165 self.repo = repo
164 self.stripcount = int(self.ui.config('notify', 'strip', 0))
166 self.stripcount = int(self.ui.config('notify', 'strip', 0))
165 self.root = self.strip(self.repo.root)
167 self.root = self.strip(self.repo.root)
166 self.domain = self.ui.config('notify', 'domain')
168 self.domain = self.ui.config('notify', 'domain')
167 self.mbox = self.ui.config('notify', 'mbox')
169 self.mbox = self.ui.config('notify', 'mbox')
168 self.test = self.ui.configbool('notify', 'test', True)
170 self.test = self.ui.configbool('notify', 'test', True)
169 self.charsets = mail._charsets(self.ui)
171 self.charsets = mail._charsets(self.ui)
170 self.subs = self.subscribers()
172 self.subs = self.subscribers()
171 self.merge = self.ui.configbool('notify', 'merge', True)
173 self.merge = self.ui.configbool('notify', 'merge', True)
172
174
173 mapfile = self.ui.config('notify', 'style')
175 mapfile = self.ui.config('notify', 'style')
174 template = (self.ui.config('notify', hooktype) or
176 template = (self.ui.config('notify', hooktype) or
175 self.ui.config('notify', 'template'))
177 self.ui.config('notify', 'template'))
176 self.t = cmdutil.changeset_templater(self.ui, self.repo,
178 self.t = cmdutil.changeset_templater(self.ui, self.repo,
177 False, None, mapfile, False)
179 False, None, mapfile, False)
178 if not mapfile and not template:
180 if not mapfile and not template:
179 template = deftemplates.get(hooktype) or single_template
181 template = deftemplates.get(hooktype) or single_template
180 if template:
182 if template:
181 template = templater.parsestring(template, quoted=False)
183 template = templater.parsestring(template, quoted=False)
182 self.t.use_template(template)
184 self.t.use_template(template)
183
185
184 def strip(self, path):
186 def strip(self, path):
185 '''strip leading slashes from local path, turn into web-safe path.'''
187 '''strip leading slashes from local path, turn into web-safe path.'''
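# e.g. with notify.strip=2 a repository at '/srv/hg/project' (hypothetical
# path) is reported as 'hg/project'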
186
188
187 path = util.pconvert(path)
189 path = util.pconvert(path)
188 count = self.stripcount
190 count = self.stripcount
189 while count > 0:
191 while count > 0:
190 c = path.find('/')
192 c = path.find('/')
191 if c == -1:
193 if c == -1:
192 break
194 break
193 path = path[c + 1:]
195 path = path[c + 1:]
194 count -= 1
196 count -= 1
195 return path
197 return path
196
198
197 def fixmail(self, addr):
199 def fixmail(self, addr):
198 '''try to clean up email addresses.'''
200 '''try to clean up email addresses.'''
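# e.g. 'Jane Doe <jane@localhost>' with notify.domain=example.com
# (hypothetical values) becomes 'jane@example.com'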
199
201
200 addr = util.email(addr.strip())
202 addr = util.email(addr.strip())
201 if self.domain:
203 if self.domain:
202 a = addr.find('@localhost')
204 a = addr.find('@localhost')
203 if a != -1:
205 if a != -1:
204 addr = addr[:a]
206 addr = addr[:a]
205 if '@' not in addr:
207 if '@' not in addr:
206 return addr + '@' + self.domain
208 return addr + '@' + self.domain
207 return addr
209 return addr
208
210
209 def subscribers(self):
211 def subscribers(self):
210 '''return list of email addresses of subscribers to this repo.'''
212 '''return list of email addresses of subscribers to this repo.'''
211 subs = set()
213 subs = set()
212 for user, pats in self.ui.configitems('usersubs'):
214 for user, pats in self.ui.configitems('usersubs'):
213 for pat in pats.split(','):
215 for pat in pats.split(','):
214 if fnmatch.fnmatch(self.repo.root, pat.strip()):
216 if fnmatch.fnmatch(self.repo.root, pat.strip()):
215 subs.add(self.fixmail(user))
217 subs.add(self.fixmail(user))
216 for pat, users in self.ui.configitems('reposubs'):
218 for pat, users in self.ui.configitems('reposubs'):
217 if fnmatch.fnmatch(self.repo.root, pat):
219 if fnmatch.fnmatch(self.repo.root, pat):
218 for user in users.split(','):
220 for user in users.split(','):
219 subs.add(self.fixmail(user))
221 subs.add(self.fixmail(user))
220 return [mail.addressencode(self.ui, s, self.charsets, self.test)
222 return [mail.addressencode(self.ui, s, self.charsets, self.test)
221 for s in sorted(subs)]
223 for s in sorted(subs)]
222
224
223 def node(self, ctx, **props):
225 def node(self, ctx, **props):
224 '''format one changeset, unless it is a suppressed merge.'''
226 '''format one changeset, unless it is a suppressed merge.'''
225 if not self.merge and len(ctx.parents()) > 1:
227 if not self.merge and len(ctx.parents()) > 1:
226 return False
228 return False
227 self.t.show(ctx, changes=ctx.changeset(),
229 self.t.show(ctx, changes=ctx.changeset(),
228 baseurl=self.ui.config('web', 'baseurl'),
230 baseurl=self.ui.config('web', 'baseurl'),
229 root=self.repo.root, webroot=self.root, **props)
231 root=self.repo.root, webroot=self.root, **props)
230 return True
232 return True
231
233
232 def skipsource(self, source):
234 def skipsource(self, source):
233 '''true if incoming changes from this source should be skipped.'''
235 '''true if incoming changes from this source should be skipped.'''
234 ok_sources = self.ui.config('notify', 'sources', 'serve').split()
236 ok_sources = self.ui.config('notify', 'sources', 'serve').split()
235 return source not in ok_sources
237 return source not in ok_sources
236
238
237 def send(self, ctx, count, data):
239 def send(self, ctx, count, data):
238 '''send message.'''
240 '''send message.'''
239
241
240 p = email.Parser.Parser()
242 p = email.Parser.Parser()
241 try:
243 try:
242 msg = p.parsestr(data)
244 msg = p.parsestr(data)
243 except email.Errors.MessageParseError, inst:
245 except email.Errors.MessageParseError, inst:
244 raise util.Abort(inst)
246 raise util.Abort(inst)
245
247
246 # store sender and subject
248 # store sender and subject
247 sender, subject = msg['From'], msg['Subject']
249 sender, subject = msg['From'], msg['Subject']
248 del msg['From'], msg['Subject']
250 del msg['From'], msg['Subject']
249
251
250 if not msg.is_multipart():
252 if not msg.is_multipart():
251 # create fresh mime message from scratch
253 # create fresh mime message from scratch
252 # (multipart templates must take care of this themselves)
254 # (multipart templates must take care of this themselves)
253 headers = msg.items()
255 headers = msg.items()
254 payload = msg.get_payload()
256 payload = msg.get_payload()
255 # for notification prefer readability over data precision
257 # for notification prefer readability over data precision
256 msg = mail.mimeencode(self.ui, payload, self.charsets, self.test)
258 msg = mail.mimeencode(self.ui, payload, self.charsets, self.test)
257 # reinstate custom headers
259 # reinstate custom headers
258 for k, v in headers:
260 for k, v in headers:
259 msg[k] = v
261 msg[k] = v
260
262
261 msg['Date'] = util.datestr(format="%a, %d %b %Y %H:%M:%S %1%2")
263 msg['Date'] = util.datestr(format="%a, %d %b %Y %H:%M:%S %1%2")
262
264
263 # try to make subject line exist and be useful
265 # try to make subject line exist and be useful
264 if not subject:
266 if not subject:
265 if count > 1:
267 if count > 1:
266 subject = _('%s: %d new changesets') % (self.root, count)
268 subject = _('%s: %d new changesets') % (self.root, count)
267 else:
269 else:
268 s = ctx.description().lstrip().split('\n', 1)[0].rstrip()
270 s = ctx.description().lstrip().split('\n', 1)[0].rstrip()
269 subject = '%s: %s' % (self.root, s)
271 subject = '%s: %s' % (self.root, s)
270 maxsubject = int(self.ui.config('notify', 'maxsubject', 67))
272 maxsubject = int(self.ui.config('notify', 'maxsubject', 67))
271 if maxsubject:
273 if maxsubject:
272 subject = util.ellipsis(subject, maxsubject)
274 subject = util.ellipsis(subject, maxsubject)
273 msg['Subject'] = mail.headencode(self.ui, subject,
275 msg['Subject'] = mail.headencode(self.ui, subject,
274 self.charsets, self.test)
276 self.charsets, self.test)
275
277
276 # try to make message have proper sender
278 # try to make message have proper sender
277 if not sender:
279 if not sender:
278 sender = self.ui.config('email', 'from') or self.ui.username()
280 sender = self.ui.config('email', 'from') or self.ui.username()
279 if '@' not in sender or '@localhost' in sender:
281 if '@' not in sender or '@localhost' in sender:
280 sender = self.fixmail(sender)
282 sender = self.fixmail(sender)
281 msg['From'] = mail.addressencode(self.ui, sender,
283 msg['From'] = mail.addressencode(self.ui, sender,
282 self.charsets, self.test)
284 self.charsets, self.test)
283
285
284 msg['X-Hg-Notification'] = 'changeset %s' % ctx
286 msg['X-Hg-Notification'] = 'changeset %s' % ctx
285 if not msg['Message-Id']:
287 if not msg['Message-Id']:
286 msg['Message-Id'] = ('<hg.%s.%s.%s@%s>' %
288 msg['Message-Id'] = ('<hg.%s.%s.%s@%s>' %
287 (ctx, int(time.time()),
289 (ctx, int(time.time()),
288 hash(self.repo.root), socket.getfqdn()))
290 hash(self.repo.root), socket.getfqdn()))
289 msg['To'] = ', '.join(self.subs)
291 msg['To'] = ', '.join(self.subs)
290
292
291 msgtext = msg.as_string()
293 msgtext = msg.as_string()
292 if self.test:
294 if self.test:
293 self.ui.write(msgtext)
295 self.ui.write(msgtext)
294 if not msgtext.endswith('\n'):
296 if not msgtext.endswith('\n'):
295 self.ui.write('\n')
297 self.ui.write('\n')
296 else:
298 else:
297 self.ui.status(_('notify: sending %d subscribers %d changes\n') %
299 self.ui.status(_('notify: sending %d subscribers %d changes\n') %
298 (len(self.subs), count))
300 (len(self.subs), count))
299 mail.sendmail(self.ui, util.email(msg['From']),
301 mail.sendmail(self.ui, util.email(msg['From']),
300 self.subs, msgtext, mbox=self.mbox)
302 self.subs, msgtext, mbox=self.mbox)
301
303
302 def diff(self, ctx, ref=None):
304 def diff(self, ctx, ref=None):
303
305
304 maxdiff = int(self.ui.config('notify', 'maxdiff', 300))
306 maxdiff = int(self.ui.config('notify', 'maxdiff', 300))
305 prev = ctx.p1().node()
307 prev = ctx.p1().node()
306 ref = ref and ref.node() or ctx.node()
308 ref = ref and ref.node() or ctx.node()
307 chunks = patch.diff(self.repo, prev, ref, opts=patch.diffopts(self.ui))
309 chunks = patch.diff(self.repo, prev, ref, opts=patch.diffopts(self.ui))
308 difflines = ''.join(chunks).splitlines()
310 difflines = ''.join(chunks).splitlines()
309
311
310 if self.ui.configbool('notify', 'diffstat', True):
312 if self.ui.configbool('notify', 'diffstat', True):
311 s = patch.diffstat(difflines)
313 s = patch.diffstat(difflines)
312 # s may be empty; don't include the header if it is
314 # s may be empty; don't include the header if it is
313 if s:
315 if s:
314 self.ui.write('\ndiffstat:\n\n%s' % s)
316 self.ui.write('\ndiffstat:\n\n%s' % s)
315
317
316 if maxdiff == 0:
318 if maxdiff == 0:
317 return
319 return
318 elif maxdiff > 0 and len(difflines) > maxdiff:
320 elif maxdiff > 0 and len(difflines) > maxdiff:
319 msg = _('\ndiffs (truncated from %d to %d lines):\n\n')
321 msg = _('\ndiffs (truncated from %d to %d lines):\n\n')
320 self.ui.write(msg % (len(difflines), maxdiff))
322 self.ui.write(msg % (len(difflines), maxdiff))
321 difflines = difflines[:maxdiff]
323 difflines = difflines[:maxdiff]
322 elif difflines:
324 elif difflines:
323 self.ui.write(_('\ndiffs (%d lines):\n\n') % len(difflines))
325 self.ui.write(_('\ndiffs (%d lines):\n\n') % len(difflines))
324
326
325 self.ui.write("\n".join(difflines))
327 self.ui.write("\n".join(difflines))
326
328
327 def hook(ui, repo, hooktype, node=None, source=None, **kwargs):
329 def hook(ui, repo, hooktype, node=None, source=None, **kwargs):
328 '''send email notifications to interested subscribers.
330 '''send email notifications to interested subscribers.
329
331
330 If used as a changegroup hook, send one email for all changesets in the
332 If used as a changegroup hook, send one email for all changesets in the
331 changegroup; otherwise send one email per changeset.'''
333 changegroup; otherwise send one email per changeset.'''
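# A minimal hgrc sketch for wiring this hook up, assuming the entry points
# documented by the notify extension; the choice of hook (incoming vs.
# changegroup) decides between per-changeset and per-push mail:
#
#   [extensions]
#   notify =
#
#   [hooks]
#   incoming.notify = python:hgext.notify.hook      # one email per changeset
#   changegroup.notify = python:hgext.notify.hook   # one email per changegroup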
332
334
333 n = notifier(ui, repo, hooktype)
335 n = notifier(ui, repo, hooktype)
334 ctx = repo[node]
336 ctx = repo[node]
335
337
336 if not n.subs:
338 if not n.subs:
337 ui.debug('notify: no subscribers to repository %s\n' % n.root)
339 ui.debug('notify: no subscribers to repository %s\n' % n.root)
338 return
340 return
339 if n.skipsource(source):
341 if n.skipsource(source):
340 ui.debug('notify: changes have source "%s" - skipping\n' % source)
342 ui.debug('notify: changes have source "%s" - skipping\n' % source)
341 return
343 return
342
344
343 ui.pushbuffer()
345 ui.pushbuffer()
344 data = ''
346 data = ''
345 count = 0
347 count = 0
346 author = ''
348 author = ''
347 if hooktype == 'changegroup' or hooktype == 'outgoing':
349 if hooktype == 'changegroup' or hooktype == 'outgoing':
348 start, end = ctx.rev(), len(repo)
350 start, end = ctx.rev(), len(repo)
349 for rev in xrange(start, end):
351 for rev in xrange(start, end):
350 if n.node(repo[rev]):
352 if n.node(repo[rev]):
351 count += 1
353 count += 1
352 if not author:
354 if not author:
353 author = repo[rev].user()
355 author = repo[rev].user()
354 else:
356 else:
355 data += ui.popbuffer()
357 data += ui.popbuffer()
356 ui.note(_('notify: suppressing notification for merge %d:%s\n')
358 ui.note(_('notify: suppressing notification for merge %d:%s\n')
357 % (rev, repo[rev].hex()[:12]))
359 % (rev, repo[rev].hex()[:12]))
358 ui.pushbuffer()
360 ui.pushbuffer()
359 if count:
361 if count:
360 n.diff(ctx, repo['tip'])
362 n.diff(ctx, repo['tip'])
361 else:
363 else:
362 if not n.node(ctx):
364 if not n.node(ctx):
363 ui.popbuffer()
365 ui.popbuffer()
364 ui.note(_('notify: suppressing notification for merge %d:%s\n') %
366 ui.note(_('notify: suppressing notification for merge %d:%s\n') %
365 (ctx.rev(), ctx.hex()[:12]))
367 (ctx.rev(), ctx.hex()[:12]))
366 return
368 return
367 count += 1
369 count += 1
368 n.diff(ctx)
370 n.diff(ctx)
369
371
370 data += ui.popbuffer()
372 data += ui.popbuffer()
371 fromauthor = ui.config('notify', 'fromauthor')
373 fromauthor = ui.config('notify', 'fromauthor')
372 if author and fromauthor:
374 if author and fromauthor:
373 data = '\n'.join(['From: %s' % author, data])
375 data = '\n'.join(['From: %s' % author, data])
374
376
375 if count:
377 if count:
376 n.send(ctx, count, data)
378 n.send(ctx, count, data)
@@ -1,101 +1,103 @@
1 # pager.py - display output using a pager
1 # pager.py - display output using a pager
2 #
2 #
3 # Copyright 2008 David Soria Parra <dsp@php.net>
3 # Copyright 2008 David Soria Parra <dsp@php.net>
4 #
4 #
5 # This software may be used and distributed according to the terms of the
5 # This software may be used and distributed according to the terms of the
6 # GNU General Public License version 2 or any later version.
6 # GNU General Public License version 2 or any later version.
7 #
7 #
8 # To load the extension, add it to your configuration file:
8 # To load the extension, add it to your configuration file:
9 #
9 #
10 # [extensions]
10 # [extensions]
11 # pager =
11 # pager =
12 #
12 #
13 # Run "hg help pager" to get info on configuration.
13 # Run "hg help pager" to get info on configuration.
14
14
15 '''browse command output with an external pager
15 '''browse command output with an external pager
16
16
17 To set the pager that should be used, set it in your configuration file::
17 To set the pager that should be used, set it in your configuration file::
18
18
19 [pager]
19 [pager]
20 pager = less -FRSX
20 pager = less -FRSX
21
21
22 If no pager is set, the pager extension uses the environment variable
22 If no pager is set, the pager extension uses the environment variable
23 $PAGER. If neither pager.pager nor $PAGER is set, no pager is used.
23 $PAGER. If neither pager.pager nor $PAGER is set, no pager is used.
24
24
25 You can disable the pager for certain commands by adding them to the
25 You can disable the pager for certain commands by adding them to the
26 pager.ignore list::
26 pager.ignore list::
27
27
28 [pager]
28 [pager]
29 ignore = version, help, update
29 ignore = version, help, update
30
30
31 You can also enable the pager only for certain commands using
31 You can also enable the pager only for certain commands using
32 pager.attend. Below is the default list of commands to be paged::
32 pager.attend. Below is the default list of commands to be paged::
33
33
34 [pager]
34 [pager]
35 attend = annotate, cat, diff, export, glog, log, qdiff
35 attend = annotate, cat, diff, export, glog, log, qdiff
36
36
37 Setting pager.attend to an empty value will cause all commands to be
37 Setting pager.attend to an empty value will cause all commands to be
38 paged.
38 paged.
39
39
40 If pager.attend is present, pager.ignore will be ignored.
40 If pager.attend is present, pager.ignore will be ignored.
41
41
42 To ignore global commands like :hg:`version` or :hg:`help`, you have
42 To ignore global commands like :hg:`version` or :hg:`help`, you have
43 to specify them in your user configuration file.
43 to specify them in your user configuration file.
44
44
45 The --pager=... option can also be used to control when the pager is
45 The --pager=... option can also be used to control when the pager is
46 used. Use a boolean value such as yes, no, on, or off, or use auto for
46 used. Use a boolean value such as yes, no, on, or off, or use auto for
47 the normal behavior; a short usage sketch follows this help text.
47 the normal behavior; a short usage sketch follows this help text.
48 '''
48 '''
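# A short usage sketch, assuming the extension is enabled; the values follow
# the --pager option registered in extsetup() below:
#
#   hg log --pager=never      # suppress the pager for this invocation
#   hg status --pager=always  # force the pager for an unattended command
#   hg diff --pager=auto      # default behavior driven by attend/ignore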
49
49
50 import atexit, sys, os, signal, subprocess
50 import atexit, sys, os, signal, subprocess
51 from mercurial import commands, dispatch, util, extensions
51 from mercurial import commands, dispatch, util, extensions
52 from mercurial.i18n import _
52 from mercurial.i18n import _
53
53
54 testedwith = 'internal'
55
54 def _runpager(p):
56 def _runpager(p):
55 pager = subprocess.Popen(p, shell=True, bufsize=-1,
57 pager = subprocess.Popen(p, shell=True, bufsize=-1,
56 close_fds=util.closefds, stdin=subprocess.PIPE,
58 close_fds=util.closefds, stdin=subprocess.PIPE,
57 stdout=sys.stdout, stderr=sys.stderr)
59 stdout=sys.stdout, stderr=sys.stderr)
58
60
59 stdout = os.dup(sys.stdout.fileno())
61 stdout = os.dup(sys.stdout.fileno())
60 stderr = os.dup(sys.stderr.fileno())
62 stderr = os.dup(sys.stderr.fileno())
61 os.dup2(pager.stdin.fileno(), sys.stdout.fileno())
63 os.dup2(pager.stdin.fileno(), sys.stdout.fileno())
62 if util.isatty(sys.stderr):
64 if util.isatty(sys.stderr):
63 os.dup2(pager.stdin.fileno(), sys.stderr.fileno())
65 os.dup2(pager.stdin.fileno(), sys.stderr.fileno())
64
66
65 @atexit.register
67 @atexit.register
66 def killpager():
68 def killpager():
67 pager.stdin.close()
69 pager.stdin.close()
68 os.dup2(stdout, sys.stdout.fileno())
70 os.dup2(stdout, sys.stdout.fileno())
69 os.dup2(stderr, sys.stderr.fileno())
71 os.dup2(stderr, sys.stderr.fileno())
70 pager.wait()
72 pager.wait()
71
73
72 def uisetup(ui):
74 def uisetup(ui):
73 if ui.plain() or '--debugger' in sys.argv or not util.isatty(sys.stdout):
75 if ui.plain() or '--debugger' in sys.argv or not util.isatty(sys.stdout):
74 return
76 return
75
77
76 def pagecmd(orig, ui, options, cmd, cmdfunc):
78 def pagecmd(orig, ui, options, cmd, cmdfunc):
77 p = ui.config("pager", "pager", os.environ.get("PAGER"))
79 p = ui.config("pager", "pager", os.environ.get("PAGER"))
78
80
79 if p:
81 if p:
80 attend = ui.configlist('pager', 'attend', attended)
82 attend = ui.configlist('pager', 'attend', attended)
81 auto = options['pager'] == 'auto'
83 auto = options['pager'] == 'auto'
82 always = util.parsebool(options['pager'])
84 always = util.parsebool(options['pager'])
83 if (always or auto and
85 if (always or auto and
84 (cmd in attend or
86 (cmd in attend or
85 (cmd not in ui.configlist('pager', 'ignore') and not attend))):
87 (cmd not in ui.configlist('pager', 'ignore') and not attend))):
86 ui.setconfig('ui', 'formatted', ui.formatted())
88 ui.setconfig('ui', 'formatted', ui.formatted())
87 ui.setconfig('ui', 'interactive', False)
89 ui.setconfig('ui', 'interactive', False)
88 if util.safehasattr(signal, "SIGPIPE"):
90 if util.safehasattr(signal, "SIGPIPE"):
89 signal.signal(signal.SIGPIPE, signal.SIG_DFL)
91 signal.signal(signal.SIGPIPE, signal.SIG_DFL)
90 _runpager(p)
92 _runpager(p)
91 return orig(ui, options, cmd, cmdfunc)
93 return orig(ui, options, cmd, cmdfunc)
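# Decision summary: --pager=always/never wins outright; in auto mode the
# pager starts for commands in pager.attend, or, when attend is empty, for
# any command not listed in pager.ignore.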
92
94
93 extensions.wrapfunction(dispatch, '_runcommand', pagecmd)
95 extensions.wrapfunction(dispatch, '_runcommand', pagecmd)
94
96
95 def extsetup(ui):
97 def extsetup(ui):
96 commands.globalopts.append(
98 commands.globalopts.append(
97 ('', 'pager', 'auto',
99 ('', 'pager', 'auto',
98 _("when to paginate (boolean, always, auto, or never)"),
100 _("when to paginate (boolean, always, auto, or never)"),
99 _('TYPE')))
101 _('TYPE')))
100
102
101 attended = ['annotate', 'cat', 'diff', 'export', 'glog', 'log', 'qdiff']
103 attended = ['annotate', 'cat', 'diff', 'export', 'glog', 'log', 'qdiff']
@@ -1,557 +1,558 @@
1 # patchbomb.py - sending Mercurial changesets as patch emails
1 # patchbomb.py - sending Mercurial changesets as patch emails
2 #
2 #
3 # Copyright 2005-2009 Matt Mackall <mpm@selenic.com> and others
3 # Copyright 2005-2009 Matt Mackall <mpm@selenic.com> and others
4 #
4 #
5 # This software may be used and distributed according to the terms of the
5 # This software may be used and distributed according to the terms of the
6 # GNU General Public License version 2 or any later version.
6 # GNU General Public License version 2 or any later version.
7
7
8 '''command to send changesets as (a series of) patch emails
8 '''command to send changesets as (a series of) patch emails
9
9
10 The series is started off with a "[PATCH 0 of N]" introduction, which
10 The series is started off with a "[PATCH 0 of N]" introduction, which
11 describes the series as a whole.
11 describes the series as a whole.
12
12
13 Each patch email has a Subject line of "[PATCH M of N] ...", using the
13 Each patch email has a Subject line of "[PATCH M of N] ...", using the
14 first line of the changeset description as the subject text. The
14 first line of the changeset description as the subject text. The
15 message contains two or three body parts:
15 message contains two or three body parts:
16
16
17 - The changeset description.
17 - The changeset description.
18 - [Optional] The result of running diffstat on the patch.
18 - [Optional] The result of running diffstat on the patch.
19 - The patch itself, as generated by :hg:`export`.
19 - The patch itself, as generated by :hg:`export`.
20
20
21 Each message refers to the first in the series using the In-Reply-To
21 Each message refers to the first in the series using the In-Reply-To
22 and References headers, so they will show up as a sequence in threaded
22 and References headers, so they will show up as a sequence in threaded
23 mail and news readers, and in mail archives.
23 mail and news readers, and in mail archives.
24
24
25 To configure other defaults, add a section like this to your
25 To configure other defaults, add a section like this to your
26 configuration file::
26 configuration file::
27
27
28 [email]
28 [email]
29 from = My Name <my@email>
29 from = My Name <my@email>
30 to = recipient1, recipient2, ...
30 to = recipient1, recipient2, ...
31 cc = cc1, cc2, ...
31 cc = cc1, cc2, ...
32 bcc = bcc1, bcc2, ...
32 bcc = bcc1, bcc2, ...
33 reply-to = address1, address2, ...
33 reply-to = address1, address2, ...
34
34
35 Use ``[patchbomb]`` as configuration section name if you need to
35 Use ``[patchbomb]`` as configuration section name if you need to
36 override global ``[email]`` address settings.
36 override global ``[email]`` address settings.
37
37
38 Then you can use the :hg:`email` command to mail a series of
38 Then you can use the :hg:`email` command to mail a series of
39 changesets as a patchbomb.
39 changesets as a patchbomb.
40
40
41 You can also either configure the method option in the email section
41 You can also either configure the method option in the email section
42 to be a sendmail-compatible mailer or fill out the [smtp] section so
42 to be a sendmail-compatible mailer or fill out the [smtp] section so
43 that the patchbomb extension can automatically send patchbombs
43 that the patchbomb extension can automatically send patchbombs
44 directly from the command line. See the [email] and [smtp] sections in
44 directly from the command line. See the [email] and [smtp] sections in
45 hgrc(5) for details.
45 hgrc(5) for details.
46 '''
46 '''
47
47
48 import os, errno, socket, tempfile, cStringIO
48 import os, errno, socket, tempfile, cStringIO
49 import email.MIMEMultipart, email.MIMEBase
49 import email.MIMEMultipart, email.MIMEBase
50 import email.Utils, email.Encoders, email.Generator
50 import email.Utils, email.Encoders, email.Generator
51 from mercurial import cmdutil, commands, hg, mail, patch, util, discovery
51 from mercurial import cmdutil, commands, hg, mail, patch, util, discovery
52 from mercurial import scmutil
52 from mercurial import scmutil
53 from mercurial.i18n import _
53 from mercurial.i18n import _
54 from mercurial.node import bin
54 from mercurial.node import bin
55
55
56 cmdtable = {}
56 cmdtable = {}
57 command = cmdutil.command(cmdtable)
57 command = cmdutil.command(cmdtable)
58 testedwith = 'internal'
58
59
59 def prompt(ui, prompt, default=None, rest=':'):
60 def prompt(ui, prompt, default=None, rest=':'):
60 if default:
61 if default:
61 prompt += ' [%s]' % default
62 prompt += ' [%s]' % default
62 return ui.prompt(prompt + rest, default)
63 return ui.prompt(prompt + rest, default)
63
64
64 def introwanted(opts, number):
65 def introwanted(opts, number):
65 '''is an introductory message apparently wanted?'''
66 '''is an introductory message apparently wanted?'''
66 return number > 1 or opts.get('intro') or opts.get('desc')
67 return number > 1 or opts.get('intro') or opts.get('desc')
67
68
68 def makepatch(ui, repo, patchlines, opts, _charsets, idx, total, numbered,
69 def makepatch(ui, repo, patchlines, opts, _charsets, idx, total, numbered,
69 patchname=None):
70 patchname=None):
70
71
71 desc = []
72 desc = []
72 node = None
73 node = None
73 body = ''
74 body = ''
74
75
75 for line in patchlines:
76 for line in patchlines:
76 if line.startswith('#'):
77 if line.startswith('#'):
77 if line.startswith('# Node ID'):
78 if line.startswith('# Node ID'):
78 node = line.split()[-1]
79 node = line.split()[-1]
79 continue
80 continue
80 if line.startswith('diff -r') or line.startswith('diff --git'):
81 if line.startswith('diff -r') or line.startswith('diff --git'):
81 break
82 break
82 desc.append(line)
83 desc.append(line)
83
84
84 if not patchname and not node:
85 if not patchname and not node:
85 raise ValueError
86 raise ValueError
86
87
87 if opts.get('attach') and not opts.get('body'):
88 if opts.get('attach') and not opts.get('body'):
88 body = ('\n'.join(desc[1:]).strip() or
89 body = ('\n'.join(desc[1:]).strip() or
89 'Patch subject is complete summary.')
90 'Patch subject is complete summary.')
90 body += '\n\n\n'
91 body += '\n\n\n'
91
92
92 if opts.get('plain'):
93 if opts.get('plain'):
93 while patchlines and patchlines[0].startswith('# '):
94 while patchlines and patchlines[0].startswith('# '):
94 patchlines.pop(0)
95 patchlines.pop(0)
95 if patchlines:
96 if patchlines:
96 patchlines.pop(0)
97 patchlines.pop(0)
97 while patchlines and not patchlines[0].strip():
98 while patchlines and not patchlines[0].strip():
98 patchlines.pop(0)
99 patchlines.pop(0)
99
100
100 ds = patch.diffstat(patchlines, git=opts.get('git'))
101 ds = patch.diffstat(patchlines, git=opts.get('git'))
101 if opts.get('diffstat'):
102 if opts.get('diffstat'):
102 body += ds + '\n\n'
103 body += ds + '\n\n'
103
104
104 addattachment = opts.get('attach') or opts.get('inline')
105 addattachment = opts.get('attach') or opts.get('inline')
105 if not addattachment or opts.get('body'):
106 if not addattachment or opts.get('body'):
106 body += '\n'.join(patchlines)
107 body += '\n'.join(patchlines)
107
108
108 if addattachment:
109 if addattachment:
109 msg = email.MIMEMultipart.MIMEMultipart()
110 msg = email.MIMEMultipart.MIMEMultipart()
110 if body:
111 if body:
111 msg.attach(mail.mimeencode(ui, body, _charsets, opts.get('test')))
112 msg.attach(mail.mimeencode(ui, body, _charsets, opts.get('test')))
112 p = mail.mimetextpatch('\n'.join(patchlines), 'x-patch',
113 p = mail.mimetextpatch('\n'.join(patchlines), 'x-patch',
113 opts.get('test'))
114 opts.get('test'))
114 binnode = bin(node)
115 binnode = bin(node)
115 # if node is mq patch, it will have the patch file's name as a tag
116 # if node is mq patch, it will have the patch file's name as a tag
116 if not patchname:
117 if not patchname:
117 patchtags = [t for t in repo.nodetags(binnode)
118 patchtags = [t for t in repo.nodetags(binnode)
118 if t.endswith('.patch') or t.endswith('.diff')]
119 if t.endswith('.patch') or t.endswith('.diff')]
119 if patchtags:
120 if patchtags:
120 patchname = patchtags[0]
121 patchname = patchtags[0]
121 elif total > 1:
122 elif total > 1:
122 patchname = cmdutil.makefilename(repo, '%b-%n.patch',
123 patchname = cmdutil.makefilename(repo, '%b-%n.patch',
123 binnode, seqno=idx,
124 binnode, seqno=idx,
124 total=total)
125 total=total)
125 else:
126 else:
126 patchname = cmdutil.makefilename(repo, '%b.patch', binnode)
127 patchname = cmdutil.makefilename(repo, '%b.patch', binnode)
127 disposition = 'inline'
128 disposition = 'inline'
128 if opts.get('attach'):
129 if opts.get('attach'):
129 disposition = 'attachment'
130 disposition = 'attachment'
130 p['Content-Disposition'] = disposition + '; filename=' + patchname
131 p['Content-Disposition'] = disposition + '; filename=' + patchname
131 msg.attach(p)
132 msg.attach(p)
132 else:
133 else:
133 msg = mail.mimetextpatch(body, display=opts.get('test'))
134 msg = mail.mimetextpatch(body, display=opts.get('test'))
134
135
135 flag = ' '.join(opts.get('flag'))
136 flag = ' '.join(opts.get('flag'))
136 if flag:
137 if flag:
137 flag = ' ' + flag
138 flag = ' ' + flag
138
139
139 subj = desc[0].strip().rstrip('. ')
140 subj = desc[0].strip().rstrip('. ')
140 if not numbered:
141 if not numbered:
141 subj = '[PATCH%s] %s' % (flag, opts.get('subject') or subj)
142 subj = '[PATCH%s] %s' % (flag, opts.get('subject') or subj)
142 else:
143 else:
143 tlen = len(str(total))
144 tlen = len(str(total))
144 subj = '[PATCH %0*d of %d%s] %s' % (tlen, idx, total, flag, subj)
145 subj = '[PATCH %0*d of %d%s] %s' % (tlen, idx, total, flag, subj)
145 msg['Subject'] = mail.headencode(ui, subj, _charsets, opts.get('test'))
146 msg['Subject'] = mail.headencode(ui, subj, _charsets, opts.get('test'))
146 msg['X-Mercurial-Node'] = node
147 msg['X-Mercurial-Node'] = node
147 return msg, subj, ds
148 return msg, subj, ds
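# For illustration, with hypothetical descriptions and flags the subject
# built above comes out as:
#   numbered series:  [PATCH 3 of 7] fix frobnication in widget.py
#   single patch:     [PATCH] fix frobnication in widget.py
#   with --flag V2:   [PATCH 3 of 7 V2] fix frobnication in widget.py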
148
149
149 emailopts = [
150 emailopts = [
150 ('', 'body', None, _('send patches as inline message text (default)')),
151 ('', 'body', None, _('send patches as inline message text (default)')),
151 ('a', 'attach', None, _('send patches as attachments')),
152 ('a', 'attach', None, _('send patches as attachments')),
152 ('i', 'inline', None, _('send patches as inline attachments')),
153 ('i', 'inline', None, _('send patches as inline attachments')),
153 ('', 'bcc', [], _('email addresses of blind carbon copy recipients')),
154 ('', 'bcc', [], _('email addresses of blind carbon copy recipients')),
154 ('c', 'cc', [], _('email addresses of copy recipients')),
155 ('c', 'cc', [], _('email addresses of copy recipients')),
155 ('', 'confirm', None, _('ask for confirmation before sending')),
156 ('', 'confirm', None, _('ask for confirmation before sending')),
156 ('d', 'diffstat', None, _('add diffstat output to messages')),
157 ('d', 'diffstat', None, _('add diffstat output to messages')),
157 ('', 'date', '', _('use the given date as the sending date')),
158 ('', 'date', '', _('use the given date as the sending date')),
158 ('', 'desc', '', _('use the given file as the series description')),
159 ('', 'desc', '', _('use the given file as the series description')),
159 ('f', 'from', '', _('email address of sender')),
160 ('f', 'from', '', _('email address of sender')),
160 ('n', 'test', None, _('print messages that would be sent')),
161 ('n', 'test', None, _('print messages that would be sent')),
161 ('m', 'mbox', '', _('write messages to mbox file instead of sending them')),
162 ('m', 'mbox', '', _('write messages to mbox file instead of sending them')),
162 ('', 'reply-to', [], _('email addresses replies should be sent to')),
163 ('', 'reply-to', [], _('email addresses replies should be sent to')),
163 ('s', 'subject', '', _('subject of first message (intro or single patch)')),
164 ('s', 'subject', '', _('subject of first message (intro or single patch)')),
164 ('', 'in-reply-to', '', _('message identifier to reply to')),
165 ('', 'in-reply-to', '', _('message identifier to reply to')),
165 ('', 'flag', [], _('flags to add in subject prefixes')),
166 ('', 'flag', [], _('flags to add in subject prefixes')),
166 ('t', 'to', [], _('email addresses of recipients'))]
167 ('t', 'to', [], _('email addresses of recipients'))]
167
168
168 @command('email',
169 @command('email',
169 [('g', 'git', None, _('use git extended diff format')),
170 [('g', 'git', None, _('use git extended diff format')),
170 ('', 'plain', None, _('omit hg patch header')),
171 ('', 'plain', None, _('omit hg patch header')),
171 ('o', 'outgoing', None,
172 ('o', 'outgoing', None,
172 _('send changes not found in the target repository')),
173 _('send changes not found in the target repository')),
173 ('b', 'bundle', None, _('send changes not in target as a binary bundle')),
174 ('b', 'bundle', None, _('send changes not in target as a binary bundle')),
174 ('', 'bundlename', 'bundle',
175 ('', 'bundlename', 'bundle',
175 _('name of the bundle attachment file'), _('NAME')),
176 _('name of the bundle attachment file'), _('NAME')),
176 ('r', 'rev', [], _('a revision to send'), _('REV')),
177 ('r', 'rev', [], _('a revision to send'), _('REV')),
177 ('', 'force', None, _('run even when remote repository is unrelated '
178 ('', 'force', None, _('run even when remote repository is unrelated '
178 '(with -b/--bundle)')),
179 '(with -b/--bundle)')),
179 ('', 'base', [], _('a base changeset to specify instead of a destination '
180 ('', 'base', [], _('a base changeset to specify instead of a destination '
180 '(with -b/--bundle)'), _('REV')),
181 '(with -b/--bundle)'), _('REV')),
181 ('', 'intro', None, _('send an introduction email for a single patch')),
182 ('', 'intro', None, _('send an introduction email for a single patch')),
182 ] + emailopts + commands.remoteopts,
183 ] + emailopts + commands.remoteopts,
183 _('hg email [OPTION]... [DEST]...'))
184 _('hg email [OPTION]... [DEST]...'))
184 def patchbomb(ui, repo, *revs, **opts):
185 def patchbomb(ui, repo, *revs, **opts):
185 '''send changesets by email
186 '''send changesets by email
186
187
187 By default, diffs are sent in the format generated by
188 By default, diffs are sent in the format generated by
188 :hg:`export`, one per message. The series starts with a "[PATCH 0
189 :hg:`export`, one per message. The series starts with a "[PATCH 0
189 of N]" introduction, which describes the series as a whole.
190 of N]" introduction, which describes the series as a whole.
190
191
191 Each patch email has a Subject line of "[PATCH M of N] ...", using
192 Each patch email has a Subject line of "[PATCH M of N] ...", using
192 the first line of the changeset description as the subject text.
193 the first line of the changeset description as the subject text.
193 The message contains two or three parts. First, the changeset
194 The message contains two or three parts. First, the changeset
194 description.
195 description.
195
196
196 With the -d/--diffstat option, if the diffstat program is
197 With the -d/--diffstat option, if the diffstat program is
197 installed, the result of running diffstat on the patch is inserted.
198 installed, the result of running diffstat on the patch is inserted.
198
199
199 Finally, the patch itself, as generated by :hg:`export`.
200 Finally, the patch itself, as generated by :hg:`export`.
200
201
201 With the -d/--diffstat or -c/--confirm options, you will be presented
202 With the -d/--diffstat or -c/--confirm options, you will be presented
202 with a final summary of all messages and asked for confirmation before
203 with a final summary of all messages and asked for confirmation before
203 the messages are sent.
204 the messages are sent.
204
205
205 By default the patch is included as text in the email body for
206 By default the patch is included as text in the email body for
206 easy reviewing. Using the -a/--attach option will instead create
207 easy reviewing. Using the -a/--attach option will instead create
207 an attachment for the patch. With -i/--inline an inline attachment
208 an attachment for the patch. With -i/--inline an inline attachment
208 will be created. You can include a patch both as text in the email
209 will be created. You can include a patch both as text in the email
209 body and as a regular or an inline attachment by combining the
210 body and as a regular or an inline attachment by combining the
210 -a/--attach or -i/--inline with the --body option.
211 -a/--attach or -i/--inline with the --body option.
211
212
212 With -o/--outgoing, emails will be generated for patches not found
213 With -o/--outgoing, emails will be generated for patches not found
213 in the destination repository (or only those which are ancestors
214 in the destination repository (or only those which are ancestors
214 of the specified revisions, if any are provided).
215 of the specified revisions, if any are provided).
215
216
216 With -b/--bundle, changesets are selected as for --outgoing, but a
217 With -b/--bundle, changesets are selected as for --outgoing, but a
217 single email containing a binary Mercurial bundle as an attachment
218 single email containing a binary Mercurial bundle as an attachment
218 will be sent.
219 will be sent.
219
220
220 With -m/--mbox, instead of previewing each patchbomb message in a
221 With -m/--mbox, instead of previewing each patchbomb message in a
221 pager or sending the messages directly, it will create a UNIX
222 pager or sending the messages directly, it will create a UNIX
222 mailbox file with the patch emails. This mailbox file can be
223 mailbox file with the patch emails. This mailbox file can be
223 previewed with any mail user agent which supports UNIX mbox
224 previewed with any mail user agent which supports UNIX mbox
224 files.
225 files.
225
226
226 With -n/--test, all steps will run, but mail will not be sent.
227 With -n/--test, all steps will run, but mail will not be sent.
227 You will be prompted for an email recipient address, a subject and
228 You will be prompted for an email recipient address, a subject and
228 an introductory message describing the patches of your patchbomb.
229 an introductory message describing the patches of your patchbomb.
229 Then when all is done, patchbomb messages are displayed. If the
230 Then when all is done, patchbomb messages are displayed. If the
230 PAGER environment variable is set, your pager will be fired up once
231 PAGER environment variable is set, your pager will be fired up once
231 for each patchbomb message, so you can verify everything is all right.
232 for each patchbomb message, so you can verify everything is all right.
232
233
233 In case email sending fails, you will find a backup of your series
234 In case email sending fails, you will find a backup of your series
234 introductory message in ``.hg/last-email.txt``.
235 introductory message in ``.hg/last-email.txt``.
235
236
236 Examples::
237 Examples::
237
238
238 hg email -r 3000 # send patch 3000 only
239 hg email -r 3000 # send patch 3000 only
239 hg email -r 3000 -r 3001 # send patches 3000 and 3001
240 hg email -r 3000 -r 3001 # send patches 3000 and 3001
240 hg email -r 3000:3005 # send patches 3000 through 3005
241 hg email -r 3000:3005 # send patches 3000 through 3005
241 hg email 3000 # send patch 3000 (deprecated)
242 hg email 3000 # send patch 3000 (deprecated)
242
243
243 hg email -o # send all patches not in default
244 hg email -o # send all patches not in default
244 hg email -o DEST # send all patches not in DEST
245 hg email -o DEST # send all patches not in DEST
245 hg email -o -r 3000 # send all ancestors of 3000 not in default
246 hg email -o -r 3000 # send all ancestors of 3000 not in default
246 hg email -o -r 3000 DEST # send all ancestors of 3000 not in DEST
247 hg email -o -r 3000 DEST # send all ancestors of 3000 not in DEST
247
248
248 hg email -b # send bundle of all patches not in default
249 hg email -b # send bundle of all patches not in default
249 hg email -b DEST # send bundle of all patches not in DEST
250 hg email -b DEST # send bundle of all patches not in DEST
250 hg email -b -r 3000 # bundle of all ancestors of 3000 not in default
251 hg email -b -r 3000 # bundle of all ancestors of 3000 not in default
251 hg email -b -r 3000 DEST # bundle of all ancestors of 3000 not in DEST
252 hg email -b -r 3000 DEST # bundle of all ancestors of 3000 not in DEST
252
253
253 hg email -o -m mbox && # generate an mbox file...
254 hg email -o -m mbox && # generate an mbox file...
254 mutt -R -f mbox # ... and view it with mutt
255 mutt -R -f mbox # ... and view it with mutt
255 hg email -o -m mbox && # generate an mbox file ...
256 hg email -o -m mbox && # generate an mbox file ...
256 formail -s sendmail \\ # ... and use formail to send from the mbox
257 formail -s sendmail \\ # ... and use formail to send from the mbox
257 -bm -t < mbox # ... using sendmail
258 -bm -t < mbox # ... using sendmail
258
259
259 Before using this command, you will need to enable email in your
260 Before using this command, you will need to enable email in your
260 hgrc. See the [email] section in hgrc(5) for details.
261 hgrc. See the [email] section in hgrc(5) for details.
261 '''
262 '''
262
263
263 _charsets = mail._charsets(ui)
264 _charsets = mail._charsets(ui)
264
265
265 bundle = opts.get('bundle')
266 bundle = opts.get('bundle')
266 date = opts.get('date')
267 date = opts.get('date')
267 mbox = opts.get('mbox')
268 mbox = opts.get('mbox')
268 outgoing = opts.get('outgoing')
269 outgoing = opts.get('outgoing')
269 rev = opts.get('rev')
270 rev = opts.get('rev')
270 # internal option used by pbranches
271 # internal option used by pbranches
271 patches = opts.get('patches')
272 patches = opts.get('patches')
272
273
273 def getoutgoing(dest, revs):
274 def getoutgoing(dest, revs):
274 '''Return the revisions present locally but not in dest'''
275 '''Return the revisions present locally but not in dest'''
275 dest = ui.expandpath(dest or 'default-push', dest or 'default')
276 dest = ui.expandpath(dest or 'default-push', dest or 'default')
276 dest, branches = hg.parseurl(dest)
277 dest, branches = hg.parseurl(dest)
277 revs, checkout = hg.addbranchrevs(repo, repo, branches, revs)
278 revs, checkout = hg.addbranchrevs(repo, repo, branches, revs)
278 other = hg.peer(repo, opts, dest)
279 other = hg.peer(repo, opts, dest)
279 ui.status(_('comparing with %s\n') % util.hidepassword(dest))
280 ui.status(_('comparing with %s\n') % util.hidepassword(dest))
280 common, _anyinc, _heads = discovery.findcommonincoming(repo, other)
281 common, _anyinc, _heads = discovery.findcommonincoming(repo, other)
281 nodes = revs and map(repo.lookup, revs) or revs
282 nodes = revs and map(repo.lookup, revs) or revs
282 o = repo.changelog.findmissing(common, heads=nodes)
283 o = repo.changelog.findmissing(common, heads=nodes)
283 if not o:
284 if not o:
284 ui.status(_("no changes found\n"))
285 ui.status(_("no changes found\n"))
285 return []
286 return []
286 return [str(repo.changelog.rev(r)) for r in o]
287 return [str(repo.changelog.rev(r)) for r in o]
287
288
288 def getpatches(revs):
289 def getpatches(revs):
289 for r in scmutil.revrange(repo, revs):
290 for r in scmutil.revrange(repo, revs):
290 output = cStringIO.StringIO()
291 output = cStringIO.StringIO()
291 cmdutil.export(repo, [r], fp=output,
292 cmdutil.export(repo, [r], fp=output,
292 opts=patch.diffopts(ui, opts))
293 opts=patch.diffopts(ui, opts))
293 yield output.getvalue().split('\n')
294 yield output.getvalue().split('\n')
294
295
295 def getbundle(dest):
296 def getbundle(dest):
296 tmpdir = tempfile.mkdtemp(prefix='hg-email-bundle-')
297 tmpdir = tempfile.mkdtemp(prefix='hg-email-bundle-')
297 tmpfn = os.path.join(tmpdir, 'bundle')
298 tmpfn = os.path.join(tmpdir, 'bundle')
298 try:
299 try:
299 commands.bundle(ui, repo, tmpfn, dest, **opts)
300 commands.bundle(ui, repo, tmpfn, dest, **opts)
300 fp = open(tmpfn, 'rb')
301 fp = open(tmpfn, 'rb')
301 data = fp.read()
302 data = fp.read()
302 fp.close()
303 fp.close()
303 return data
304 return data
304 finally:
305 finally:
305 try:
306 try:
306 os.unlink(tmpfn)
307 os.unlink(tmpfn)
307 except OSError:
308 except OSError:
308 pass
309 pass
309 os.rmdir(tmpdir)
310 os.rmdir(tmpdir)
310
311
311 if not (opts.get('test') or mbox):
312 if not (opts.get('test') or mbox):
312 # really sending
313 # really sending
313 mail.validateconfig(ui)
314 mail.validateconfig(ui)
314
315
315 if not (revs or rev or outgoing or bundle or patches):
316 if not (revs or rev or outgoing or bundle or patches):
316 raise util.Abort(_('specify at least one changeset with -r or -o'))
317 raise util.Abort(_('specify at least one changeset with -r or -o'))
317
318
318 if outgoing and bundle:
319 if outgoing and bundle:
319 raise util.Abort(_("--outgoing mode always on with --bundle;"
320 raise util.Abort(_("--outgoing mode always on with --bundle;"
320 " do not re-specify --outgoing"))
321 " do not re-specify --outgoing"))
321
322
322 if outgoing or bundle:
323 if outgoing or bundle:
323 if len(revs) > 1:
324 if len(revs) > 1:
324 raise util.Abort(_("too many destinations"))
325 raise util.Abort(_("too many destinations"))
325 dest = revs and revs[0] or None
326 dest = revs and revs[0] or None
326 revs = []
327 revs = []
327
328
328 if rev:
329 if rev:
329 if revs:
330 if revs:
330 raise util.Abort(_('use only one form to specify the revision'))
331 raise util.Abort(_('use only one form to specify the revision'))
331 revs = rev
332 revs = rev
332
333
333 if outgoing:
334 if outgoing:
334 revs = getoutgoing(dest, rev)
335 revs = getoutgoing(dest, rev)
335 if bundle:
336 if bundle:
336 opts['revs'] = revs
337 opts['revs'] = revs
337
338
338 # start
339 # start
339 if date:
340 if date:
340 start_time = util.parsedate(date)
341 start_time = util.parsedate(date)
341 else:
342 else:
342 start_time = util.makedate()
343 start_time = util.makedate()
343
344
344 def genmsgid(id):
345 def genmsgid(id):
345 return '<%s.%s@%s>' % (id[:20], int(start_time[0]), socket.getfqdn())
346 return '<%s.%s@%s>' % (id[:20], int(start_time[0]), socket.getfqdn())
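# For illustration, with a hypothetical node and host the identifier looks
# like <7c2fd3b9020c8a562382.1339439245@build.example.org>: the first 20 hex
# digits of the node, the send time in epoch seconds, and the local FQDN.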
346
347
347 def getdescription(body, sender):
348 def getdescription(body, sender):
348 if opts.get('desc'):
349 if opts.get('desc'):
349 body = open(opts.get('desc')).read()
350 body = open(opts.get('desc')).read()
350 else:
351 else:
351 ui.write(_('\nWrite the introductory message for the '
352 ui.write(_('\nWrite the introductory message for the '
352 'patch series.\n\n'))
353 'patch series.\n\n'))
353 body = ui.edit(body, sender)
354 body = ui.edit(body, sender)
354 # Save series description in case sendmail fails
355 # Save series description in case sendmail fails
355 msgfile = repo.opener('last-email.txt', 'wb')
356 msgfile = repo.opener('last-email.txt', 'wb')
356 msgfile.write(body)
357 msgfile.write(body)
357 msgfile.close()
358 msgfile.close()
358 return body
359 return body
359
360
360 def getpatchmsgs(patches, patchnames=None):
361 def getpatchmsgs(patches, patchnames=None):
361 msgs = []
362 msgs = []
362
363
363 ui.write(_('This patch series consists of %d patches.\n\n')
364 ui.write(_('This patch series consists of %d patches.\n\n')
364 % len(patches))
365 % len(patches))
365
366
366 # build the intro message, or skip it if the user declines
367 # build the intro message, or skip it if the user declines
367 if introwanted(opts, len(patches)):
368 if introwanted(opts, len(patches)):
368 msg = makeintro(patches)
369 msg = makeintro(patches)
369 if msg:
370 if msg:
370 msgs.append(msg)
371 msgs.append(msg)
371
372
372 # are we going to send more than one message?
373 # are we going to send more than one message?
373 numbered = len(msgs) + len(patches) > 1
374 numbered = len(msgs) + len(patches) > 1
374
375
375 # now generate the actual patch messages
376 # now generate the actual patch messages
376 name = None
377 name = None
377 for i, p in enumerate(patches):
378 for i, p in enumerate(patches):
378 if patchnames:
379 if patchnames:
379 name = patchnames[i]
380 name = patchnames[i]
380 msg = makepatch(ui, repo, p, opts, _charsets, i + 1,
381 msg = makepatch(ui, repo, p, opts, _charsets, i + 1,
381 len(patches), numbered, name)
382 len(patches), numbered, name)
382 msgs.append(msg)
383 msgs.append(msg)
383
384
384 return msgs
385 return msgs
385
386
386 def makeintro(patches):
387 def makeintro(patches):
387 tlen = len(str(len(patches)))
388 tlen = len(str(len(patches)))
388
389
389 flag = opts.get('flag') or ''
390 flag = opts.get('flag') or ''
390 if flag:
391 if flag:
391 flag = ' ' + ' '.join(flag)
392 flag = ' ' + ' '.join(flag)
392 prefix = '[PATCH %0*d of %d%s]' % (tlen, 0, len(patches), flag)
393 prefix = '[PATCH %0*d of %d%s]' % (tlen, 0, len(patches), flag)
393
394
394 subj = (opts.get('subject') or
395 subj = (opts.get('subject') or
395 prompt(ui, '(optional) Subject: ', rest=prefix, default=''))
396 prompt(ui, '(optional) Subject: ', rest=prefix, default=''))
396 if not subj:
397 if not subj:
397 return None # skip intro if the user doesn't bother
398 return None # skip intro if the user doesn't bother
398
399
399 subj = prefix + ' ' + subj
400 subj = prefix + ' ' + subj
400
401
401 body = ''
402 body = ''
402 if opts.get('diffstat'):
403 if opts.get('diffstat'):
403 # generate a cumulative diffstat of the whole patch series
404 # generate a cumulative diffstat of the whole patch series
404 diffstat = patch.diffstat(sum(patches, []))
405 diffstat = patch.diffstat(sum(patches, []))
405 body = '\n' + diffstat
406 body = '\n' + diffstat
406 else:
407 else:
407 diffstat = None
408 diffstat = None
408
409
409 body = getdescription(body, sender)
410 body = getdescription(body, sender)
410 msg = mail.mimeencode(ui, body, _charsets, opts.get('test'))
411 msg = mail.mimeencode(ui, body, _charsets, opts.get('test'))
411 msg['Subject'] = mail.headencode(ui, subj, _charsets,
412 msg['Subject'] = mail.headencode(ui, subj, _charsets,
412 opts.get('test'))
413 opts.get('test'))
413 return (msg, subj, diffstat)
414 return (msg, subj, diffstat)
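# For illustration, the intro prefix built above is numbered zero and padded
# to the width of the series total: '[PATCH 0 of 5]' for a five-patch series,
# '[PATCH 00 of 12]' once two digits are needed.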
414
415
415 def getbundlemsgs(bundle):
416 def getbundlemsgs(bundle):
416 subj = (opts.get('subject')
417 subj = (opts.get('subject')
417 or prompt(ui, 'Subject:', 'A bundle for your repository'))
418 or prompt(ui, 'Subject:', 'A bundle for your repository'))
418
419
419 body = getdescription('', sender)
420 body = getdescription('', sender)
420 msg = email.MIMEMultipart.MIMEMultipart()
421 msg = email.MIMEMultipart.MIMEMultipart()
421 if body:
422 if body:
422 msg.attach(mail.mimeencode(ui, body, _charsets, opts.get('test')))
423 msg.attach(mail.mimeencode(ui, body, _charsets, opts.get('test')))
423 datapart = email.MIMEBase.MIMEBase('application', 'x-mercurial-bundle')
424 datapart = email.MIMEBase.MIMEBase('application', 'x-mercurial-bundle')
424 datapart.set_payload(bundle)
425 datapart.set_payload(bundle)
425 bundlename = '%s.hg' % opts.get('bundlename', 'bundle')
426 bundlename = '%s.hg' % opts.get('bundlename', 'bundle')
426 datapart.add_header('Content-Disposition', 'attachment',
427 datapart.add_header('Content-Disposition', 'attachment',
427 filename=bundlename)
428 filename=bundlename)
428 email.Encoders.encode_base64(datapart)
429 email.Encoders.encode_base64(datapart)
429 msg.attach(datapart)
430 msg.attach(datapart)
430 msg['Subject'] = mail.headencode(ui, subj, _charsets, opts.get('test'))
431 msg['Subject'] = mail.headencode(ui, subj, _charsets, opts.get('test'))
431 return [(msg, subj, None)]
432 return [(msg, subj, None)]
432
433
433 sender = (opts.get('from') or ui.config('email', 'from') or
434 sender = (opts.get('from') or ui.config('email', 'from') or
434 ui.config('patchbomb', 'from') or
435 ui.config('patchbomb', 'from') or
435 prompt(ui, 'From', ui.username()))
436 prompt(ui, 'From', ui.username()))
436
437
437 if patches:
438 if patches:
438 msgs = getpatchmsgs(patches, opts.get('patchnames'))
439 msgs = getpatchmsgs(patches, opts.get('patchnames'))
439 elif bundle:
440 elif bundle:
440 msgs = getbundlemsgs(getbundle(dest))
441 msgs = getbundlemsgs(getbundle(dest))
441 else:
442 else:
442 msgs = getpatchmsgs(list(getpatches(revs)))
443 msgs = getpatchmsgs(list(getpatches(revs)))
443
444
444 showaddrs = []
445 showaddrs = []
445
446
446 def getaddrs(header, ask=False, default=None):
447 def getaddrs(header, ask=False, default=None):
447 configkey = header.lower()
448 configkey = header.lower()
448 opt = header.replace('-', '_').lower()
449 opt = header.replace('-', '_').lower()
449 addrs = opts.get(opt)
450 addrs = opts.get(opt)
450 if addrs:
451 if addrs:
451 showaddrs.append('%s: %s' % (header, ', '.join(addrs)))
452 showaddrs.append('%s: %s' % (header, ', '.join(addrs)))
452 return mail.addrlistencode(ui, addrs, _charsets, opts.get('test'))
453 return mail.addrlistencode(ui, addrs, _charsets, opts.get('test'))
453
454
454 # not on the command line: fallback to config and then maybe ask
455 # not on the command line: fallback to config and then maybe ask
455 addr = (ui.config('email', configkey) or
456 addr = (ui.config('email', configkey) or
456 ui.config('patchbomb', configkey) or
457 ui.config('patchbomb', configkey) or
457 '')
458 '')
458 if not addr and ask:
459 if not addr and ask:
459 addr = prompt(ui, header, default=default)
460 addr = prompt(ui, header, default=default)
460 if addr:
461 if addr:
461 showaddrs.append('%s: %s' % (header, addr))
462 showaddrs.append('%s: %s' % (header, addr))
462 return mail.addrlistencode(ui, [addr], _charsets, opts.get('test'))
463 return mail.addrlistencode(ui, [addr], _charsets, opts.get('test'))
463 else:
464 else:
464 return default
465 return default
465
466
466 to = getaddrs('To', ask=True)
467 to = getaddrs('To', ask=True)
467 if not to:
468 if not to:
468 # we can get here in non-interactive mode
469 # we can get here in non-interactive mode
469 raise util.Abort(_('no recipient addresses provided'))
470 raise util.Abort(_('no recipient addresses provided'))
470 cc = getaddrs('Cc', ask=True, default='') or []
471 cc = getaddrs('Cc', ask=True, default='') or []
471 bcc = getaddrs('Bcc') or []
472 bcc = getaddrs('Bcc') or []
472 replyto = getaddrs('Reply-To')
473 replyto = getaddrs('Reply-To')
473
474
474 if opts.get('diffstat') or opts.get('confirm'):
475 if opts.get('diffstat') or opts.get('confirm'):
475 ui.write(_('\nFinal summary:\n\n'))
476 ui.write(_('\nFinal summary:\n\n'))
476 ui.write('From: %s\n' % sender)
477 ui.write('From: %s\n' % sender)
477 for addr in showaddrs:
478 for addr in showaddrs:
478 ui.write('%s\n' % addr)
479 ui.write('%s\n' % addr)
479 for m, subj, ds in msgs:
480 for m, subj, ds in msgs:
480 ui.write('Subject: %s\n' % subj)
481 ui.write('Subject: %s\n' % subj)
481 if ds:
482 if ds:
482 ui.write(ds)
483 ui.write(ds)
483 ui.write('\n')
484 ui.write('\n')
484 if ui.promptchoice(_('are you sure you want to send (yn)?'),
485 if ui.promptchoice(_('are you sure you want to send (yn)?'),
485 (_('&Yes'), _('&No'))):
486 (_('&Yes'), _('&No'))):
486 raise util.Abort(_('patchbomb canceled'))
487 raise util.Abort(_('patchbomb canceled'))
487
488
488 ui.write('\n')
489 ui.write('\n')
489
490
490 parent = opts.get('in_reply_to') or None
491 parent = opts.get('in_reply_to') or None
491 # angle brackets may be omitted, they're not semantically part of the msg-id
492 # angle brackets may be omitted, they're not semantically part of the msg-id
492 if parent is not None:
493 if parent is not None:
493 if not parent.startswith('<'):
494 if not parent.startswith('<'):
494 parent = '<' + parent
495 parent = '<' + parent
495 if not parent.endswith('>'):
496 if not parent.endswith('>'):
496 parent += '>'
497 parent += '>'
497
498
498 first = True
499 first = True
499
500
500 sender_addr = email.Utils.parseaddr(sender)[1]
501 sender_addr = email.Utils.parseaddr(sender)[1]
501 sender = mail.addressencode(ui, sender, _charsets, opts.get('test'))
502 sender = mail.addressencode(ui, sender, _charsets, opts.get('test'))
502 sendmail = None
503 sendmail = None
503 for i, (m, subj, ds) in enumerate(msgs):
504 for i, (m, subj, ds) in enumerate(msgs):
504 try:
505 try:
505 m['Message-Id'] = genmsgid(m['X-Mercurial-Node'])
506 m['Message-Id'] = genmsgid(m['X-Mercurial-Node'])
506 except TypeError:
507 except TypeError:
507 m['Message-Id'] = genmsgid('patchbomb')
508 m['Message-Id'] = genmsgid('patchbomb')
508 if parent:
509 if parent:
509 m['In-Reply-To'] = parent
510 m['In-Reply-To'] = parent
510 m['References'] = parent
511 m['References'] = parent
511 if first:
512 if first:
512 parent = m['Message-Id']
513 parent = m['Message-Id']
513 first = False
514 first = False
514
515
515 m['User-Agent'] = 'Mercurial-patchbomb/%s' % util.version()
516 m['User-Agent'] = 'Mercurial-patchbomb/%s' % util.version()
516 m['Date'] = email.Utils.formatdate(start_time[0], localtime=True)
517 m['Date'] = email.Utils.formatdate(start_time[0], localtime=True)
517
518
518 start_time = (start_time[0] + 1, start_time[1])
519 start_time = (start_time[0] + 1, start_time[1])
519 m['From'] = sender
520 m['From'] = sender
520 m['To'] = ', '.join(to)
521 m['To'] = ', '.join(to)
521 if cc:
522 if cc:
522 m['Cc'] = ', '.join(cc)
523 m['Cc'] = ', '.join(cc)
523 if bcc:
524 if bcc:
524 m['Bcc'] = ', '.join(bcc)
525 m['Bcc'] = ', '.join(bcc)
525 if replyto:
526 if replyto:
526 m['Reply-To'] = ', '.join(replyto)
527 m['Reply-To'] = ', '.join(replyto)
527 if opts.get('test'):
528 if opts.get('test'):
528 ui.status(_('Displaying '), subj, ' ...\n')
529 ui.status(_('Displaying '), subj, ' ...\n')
529 ui.flush()
530 ui.flush()
530 if 'PAGER' in os.environ and not ui.plain():
531 if 'PAGER' in os.environ and not ui.plain():
531 fp = util.popen(os.environ['PAGER'], 'w')
532 fp = util.popen(os.environ['PAGER'], 'w')
532 else:
533 else:
533 fp = ui
534 fp = ui
534 generator = email.Generator.Generator(fp, mangle_from_=False)
535 generator = email.Generator.Generator(fp, mangle_from_=False)
535 try:
536 try:
536 generator.flatten(m, 0)
537 generator.flatten(m, 0)
537 fp.write('\n')
538 fp.write('\n')
538 except IOError, inst:
539 except IOError, inst:
539 if inst.errno != errno.EPIPE:
540 if inst.errno != errno.EPIPE:
540 raise
541 raise
541 if fp is not ui:
542 if fp is not ui:
542 fp.close()
543 fp.close()
543 else:
544 else:
544 if not sendmail:
545 if not sendmail:
545 sendmail = mail.connect(ui, mbox=mbox)
546 sendmail = mail.connect(ui, mbox=mbox)
546 ui.status(_('Sending '), subj, ' ...\n')
547 ui.status(_('Sending '), subj, ' ...\n')
547 ui.progress(_('sending'), i, item=subj, total=len(msgs))
548 ui.progress(_('sending'), i, item=subj, total=len(msgs))
548 if not mbox:
549 if not mbox:
549 # Exim does not remove the Bcc field
550 # Exim does not remove the Bcc field
550 del m['Bcc']
551 del m['Bcc']
551 fp = cStringIO.StringIO()
552 fp = cStringIO.StringIO()
552 generator = email.Generator.Generator(fp, mangle_from_=False)
553 generator = email.Generator.Generator(fp, mangle_from_=False)
553 generator.flatten(m, 0)
554 generator.flatten(m, 0)
554 sendmail(sender_addr, to + bcc + cc, fp.getvalue())
555 sendmail(sender_addr, to + bcc + cc, fp.getvalue())
555
556
556 ui.progress(_('writing'), None)
557 ui.progress(_('writing'), None)
557 ui.progress(_('sending'), None)
558 ui.progress(_('sending'), None)
@@ -1,295 +1,296 @@
1 # progress.py show progress bars for some actions
1 # progress.py show progress bars for some actions
2 #
2 #
3 # Copyright (C) 2010 Augie Fackler <durin42@gmail.com>
3 # Copyright (C) 2010 Augie Fackler <durin42@gmail.com>
4 #
4 #
5 # This software may be used and distributed according to the terms of the
5 # This software may be used and distributed according to the terms of the
6 # GNU General Public License version 2 or any later version.
6 # GNU General Public License version 2 or any later version.
7
7
8 """show progress bars for some actions
8 """show progress bars for some actions
9
9
10 This extension uses the progress information logged by hg commands
10 This extension uses the progress information logged by hg commands
11 to draw progress bars that are as informative as possible. Some progress
11 to draw progress bars that are as informative as possible. Some progress
12 bars only offer indeterminate information, while others have a definite
12 bars only offer indeterminate information, while others have a definite
13 end point.
13 end point.
14
14
15 The following settings are available::
15 The following settings are available::
16
16
17 [progress]
17 [progress]
18 delay = 3 # number of seconds (float) before showing the progress bar
18 delay = 3 # number of seconds (float) before showing the progress bar
19 changedelay = 1 # changedelay: minimum delay before showing a new topic.
19 changedelay = 1 # changedelay: minimum delay before showing a new topic.
20 # If set to less than 3 * refresh, that value will
20 # If set to less than 3 * refresh, that value will
21 # be used instead.
21 # be used instead.
22 refresh = 0.1 # time in seconds between refreshes of the progress bar
22 refresh = 0.1 # time in seconds between refreshes of the progress bar
23 format = topic bar number estimate # format of the progress bar
23 format = topic bar number estimate # format of the progress bar
24 width = <none> # if set, the maximum width of the progress information
24 width = <none> # if set, the maximum width of the progress information
25 # (that is, min(width, term width) will be used)
25 # (that is, min(width, term width) will be used)
26 clear-complete = True # clear the progress bar after it's done
26 clear-complete = True # clear the progress bar after it's done
27 disable = False # if true, don't show a progress bar
27 disable = False # if true, don't show a progress bar
28 assume-tty = False # if true, ALWAYS show a progress bar, unless
28 assume-tty = False # if true, ALWAYS show a progress bar, unless
29 # disable is given
29 # disable is given
30
30
31 Valid entries for the format field are topic, bar, number, unit,
31 Valid entries for the format field are topic, bar, number, unit,
32 estimate, speed, and item. item defaults to the last 20 characters of
32 estimate, speed, and item. item defaults to the last 20 characters of
33 the item, but this can be changed by adding either ``-<num>`` which
33 the item, but this can be changed by adding either ``-<num>`` which
34 would take the last num characters, or ``+<num>`` for the first num
34 would take the last num characters, or ``+<num>`` for the first num
35 characters. A small configuration sketch follows this help text.
35 characters. A small configuration sketch follows this help text.
36 """
36 """
37
37
38 import sys
38 import sys
39 import time
39 import time
40
40
41 from mercurial import util
41 from mercurial import util
42 from mercurial.i18n import _
42 from mercurial.i18n import _
43 testedwith = 'internal'
43
44
44 def spacejoin(*args):
45 def spacejoin(*args):
45 return ' '.join(s for s in args if s)
46 return ' '.join(s for s in args if s)
46
47
47 def shouldprint(ui):
48 def shouldprint(ui):
48 return util.isatty(sys.stderr) or ui.configbool('progress', 'assume-tty')
49 return util.isatty(sys.stderr) or ui.configbool('progress', 'assume-tty')
49
50
50 def fmtremaining(seconds):
51 def fmtremaining(seconds):
51 if seconds < 60:
52 if seconds < 60:
52 # i18n: format XX seconds as "XXs"
53 # i18n: format XX seconds as "XXs"
53 return _("%02ds") % (seconds)
54 return _("%02ds") % (seconds)
54 minutes = seconds // 60
55 minutes = seconds // 60
55 if minutes < 60:
56 if minutes < 60:
56 seconds -= minutes * 60
57 seconds -= minutes * 60
57 # i18n: format X minutes and YY seconds as "XmYYs"
58 # i18n: format X minutes and YY seconds as "XmYYs"
58 return _("%dm%02ds") % (minutes, seconds)
59 return _("%dm%02ds") % (minutes, seconds)
59 # we're going to ignore seconds in this case
60 # we're going to ignore seconds in this case
60 minutes += 1
61 minutes += 1
61 hours = minutes // 60
62 hours = minutes // 60
62 minutes -= hours * 60
63 minutes -= hours * 60
63 if hours < 30:
64 if hours < 30:
64 # i18n: format X hours and YY minutes as "XhYYm"
65 # i18n: format X hours and YY minutes as "XhYYm"
65 return _("%dh%02dm") % (hours, minutes)
66 return _("%dh%02dm") % (hours, minutes)
66 # we're going to ignore minutes in this case
67 # we're going to ignore minutes in this case
67 hours += 1
68 hours += 1
68 days = hours // 24
69 days = hours // 24
69 hours -= days * 24
70 hours -= days * 24
70 if days < 15:
71 if days < 15:
71 # i18n: format X days and YY hours as "XdYYh"
72 # i18n: format X days and YY hours as "XdYYh"
72 return _("%dd%02dh") % (days, hours)
73 return _("%dd%02dh") % (days, hours)
73 # we're going to ignore hours in this case
74 # we're going to ignore hours in this case
74 days += 1
75 days += 1
75 weeks = days // 7
76 weeks = days // 7
76 days -= weeks * 7
77 days -= weeks * 7
77 if weeks < 55:
78 if weeks < 55:
78 # i18n: format X weeks and YY days as "XwYYd"
79 # i18n: format X weeks and YY days as "XwYYd"
79 return _("%dw%02dd") % (weeks, days)
80 return _("%dw%02dd") % (weeks, days)
80 # we're going to ignore days and treat a year as 52 weeks
81 # we're going to ignore days and treat a year as 52 weeks
81 weeks += 1
82 weeks += 1
82 years = weeks // 52
83 years = weeks // 52
83 weeks -= years * 52
84 weeks -= years * 52
84 # i18n: format X years and YY weeks as "XyYYw"
85 # i18n: format X years and YY weeks as "XyYYw"
85 return _("%dy%02dw") % (years, weeks)
86 return _("%dy%02dw") % (years, weeks)
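# For illustration, a few sample values as rendered by the function above:
#   fmtremaining(45)   -> '45s'
#   fmtremaining(75)   -> '1m15s'
#   fmtremaining(3700) -> '1h02m'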
86
87
87 class progbar(object):
88 class progbar(object):
88 def __init__(self, ui):
89 def __init__(self, ui):
89 self.ui = ui
90 self.ui = ui
90 self.resetstate()
91 self.resetstate()
91
92
92 def resetstate(self):
93 def resetstate(self):
93 self.topics = []
94 self.topics = []
94 self.topicstates = {}
95 self.topicstates = {}
95 self.starttimes = {}
96 self.starttimes = {}
96 self.startvals = {}
97 self.startvals = {}
97 self.printed = False
98 self.printed = False
98 self.lastprint = time.time() + float(self.ui.config(
99 self.lastprint = time.time() + float(self.ui.config(
99 'progress', 'delay', default=3))
100 'progress', 'delay', default=3))
100 self.lasttopic = None
101 self.lasttopic = None
101 self.indetcount = 0
102 self.indetcount = 0
102 self.refresh = float(self.ui.config(
103 self.refresh = float(self.ui.config(
103 'progress', 'refresh', default=0.1))
104 'progress', 'refresh', default=0.1))
104 self.changedelay = max(3 * self.refresh,
105 self.changedelay = max(3 * self.refresh,
105 float(self.ui.config(
106 float(self.ui.config(
106 'progress', 'changedelay', default=1)))
107 'progress', 'changedelay', default=1)))
107 self.order = self.ui.configlist(
108 self.order = self.ui.configlist(
108 'progress', 'format',
109 'progress', 'format',
109 default=['topic', 'bar', 'number', 'estimate'])
110 default=['topic', 'bar', 'number', 'estimate'])
110
111
111 def show(self, now, topic, pos, item, unit, total):
112 def show(self, now, topic, pos, item, unit, total):
112 if not shouldprint(self.ui):
113 if not shouldprint(self.ui):
113 return
114 return
114 termwidth = self.width()
115 termwidth = self.width()
115 self.printed = True
116 self.printed = True
116 head = ''
117 head = ''
117 needprogress = False
118 needprogress = False
118 tail = ''
119 tail = ''
119 for indicator in self.order:
120 for indicator in self.order:
120 add = ''
121 add = ''
121 if indicator == 'topic':
122 if indicator == 'topic':
122 add = topic
123 add = topic
123 elif indicator == 'number':
124 elif indicator == 'number':
124 if total:
125 if total:
125 add = ('% ' + str(len(str(total))) +
126 add = ('% ' + str(len(str(total))) +
126 's/%s') % (pos, total)
127 's/%s') % (pos, total)
127 else:
128 else:
128 add = str(pos)
129 add = str(pos)
129 elif indicator.startswith('item') and item:
130 elif indicator.startswith('item') and item:
130 slice = 'end'
131 slice = 'end'
131 if '-' in indicator:
132 if '-' in indicator:
132 wid = int(indicator.split('-')[1])
133 wid = int(indicator.split('-')[1])
133 elif '+' in indicator:
134 elif '+' in indicator:
134 slice = 'beginning'
135 slice = 'beginning'
135 wid = int(indicator.split('+')[1])
136 wid = int(indicator.split('+')[1])
136 else:
137 else:
137 wid = 20
138 wid = 20
138 if slice == 'end':
139 if slice == 'end':
139 add = item[-wid:]
140 add = item[-wid:]
140 else:
141 else:
141 add = item[:wid]
142 add = item[:wid]
142 add += (wid - len(add)) * ' '
143 add += (wid - len(add)) * ' '
143 elif indicator == 'bar':
144 elif indicator == 'bar':
144 add = ''
145 add = ''
145 needprogress = True
146 needprogress = True
146 elif indicator == 'unit' and unit:
147 elif indicator == 'unit' and unit:
147 add = unit
148 add = unit
148 elif indicator == 'estimate':
149 elif indicator == 'estimate':
149 add = self.estimate(topic, pos, total, now)
150 add = self.estimate(topic, pos, total, now)
150 elif indicator == 'speed':
151 elif indicator == 'speed':
151 add = self.speed(topic, pos, unit, now)
152 add = self.speed(topic, pos, unit, now)
152 if not needprogress:
153 if not needprogress:
153 head = spacejoin(head, add)
154 head = spacejoin(head, add)
154 else:
155 else:
155 tail = spacejoin(tail, add)
156 tail = spacejoin(tail, add)
156 if needprogress:
157 if needprogress:
157 used = 0
158 used = 0
158 if head:
159 if head:
159 used += len(head) + 1
160 used += len(head) + 1
160 if tail:
161 if tail:
161 used += len(tail) + 1
162 used += len(tail) + 1
162 progwidth = termwidth - used - 3
163 progwidth = termwidth - used - 3
163 if total and pos <= total:
164 if total and pos <= total:
164 amt = pos * progwidth // total
165 amt = pos * progwidth // total
165 bar = '=' * (amt - 1)
166 bar = '=' * (amt - 1)
166 if amt > 0:
167 if amt > 0:
167 bar += '>'
168 bar += '>'
168 bar += ' ' * (progwidth - amt)
169 bar += ' ' * (progwidth - amt)
169 else:
170 else:
170 progwidth -= 3
171 progwidth -= 3
171 self.indetcount += 1
172 self.indetcount += 1
172 # mod the count by twice the width so we can make the
173 # mod the count by twice the width so we can make the
173 # cursor bounce between the right and left sides
174 # cursor bounce between the right and left sides
174 amt = self.indetcount % (2 * progwidth)
175 amt = self.indetcount % (2 * progwidth)
175 amt -= progwidth
176 amt -= progwidth
176 bar = (' ' * int(progwidth - abs(amt)) + '<=>' +
177 bar = (' ' * int(progwidth - abs(amt)) + '<=>' +
177 ' ' * int(abs(amt)))
178 ' ' * int(abs(amt)))
178 prog = ''.join(('[', bar , ']'))
179 prog = ''.join(('[', bar , ']'))
179 out = spacejoin(head, prog, tail)
180 out = spacejoin(head, prog, tail)
180 else:
181 else:
181 out = spacejoin(head, tail)
182 out = spacejoin(head, tail)
182 sys.stderr.write('\r' + out[:termwidth])
183 sys.stderr.write('\r' + out[:termwidth])
183 self.lasttopic = topic
184 self.lasttopic = topic
184 sys.stderr.flush()
185 sys.stderr.flush()
185
186
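# --- editorial sketch (not part of this changeset) ---
# With an unknown total, show() above bounces the '<=>' marker: the counter
# is folded into the range [-progwidth, progwidth) and abs() turns that into
# a back-and-forth sweep.  A standalone rendering with a made-up width:

def _demo_bounce(progwidth=10, steps=40):
    for indetcount in range(1, steps + 1):
        amt = indetcount % (2 * progwidth) - progwidth
        print(' ' * (progwidth - abs(amt)) + '<=>' + ' ' * abs(amt))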
186 def clear(self):
187 def clear(self):
187 if not shouldprint(self.ui):
188 if not shouldprint(self.ui):
188 return
189 return
189 sys.stderr.write('\r%s\r' % (' ' * self.width()))
190 sys.stderr.write('\r%s\r' % (' ' * self.width()))
190
191
191 def complete(self):
192 def complete(self):
192 if not shouldprint(self.ui):
193 if not shouldprint(self.ui):
193 return
194 return
194 if self.ui.configbool('progress', 'clear-complete', default=True):
195 if self.ui.configbool('progress', 'clear-complete', default=True):
195 self.clear()
196 self.clear()
196 else:
197 else:
197 sys.stderr.write('\n')
198 sys.stderr.write('\n')
198 sys.stderr.flush()
199 sys.stderr.flush()
199
200
200 def width(self):
201 def width(self):
201 tw = self.ui.termwidth()
202 tw = self.ui.termwidth()
202 return min(int(self.ui.config('progress', 'width', default=tw)), tw)
203 return min(int(self.ui.config('progress', 'width', default=tw)), tw)
203
204
204 def estimate(self, topic, pos, total, now):
205 def estimate(self, topic, pos, total, now):
205 if total is None:
206 if total is None:
206 return ''
207 return ''
207 initialpos = self.startvals[topic]
208 initialpos = self.startvals[topic]
208 target = total - initialpos
209 target = total - initialpos
209 delta = pos - initialpos
210 delta = pos - initialpos
210 if delta > 0:
211 if delta > 0:
211 elapsed = now - self.starttimes[topic]
212 elapsed = now - self.starttimes[topic]
212 if elapsed > float(
213 if elapsed > float(
213 self.ui.config('progress', 'estimate', default=2)):
214 self.ui.config('progress', 'estimate', default=2)):
214 seconds = (elapsed * (target - delta)) // delta + 1
215 seconds = (elapsed * (target - delta)) // delta + 1
215 return fmtremaining(seconds)
216 return fmtremaining(seconds)
216 return ''
217 return ''
217
218
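# --- editorial note (not part of this changeset) ---
# The ETA above is simple linear extrapolation: with initialpos 0, finishing
# 30 of 120 units after 10 seconds gives
#   seconds = (10 * (120 - 30)) // 30 + 1 = 31
# which fmtremaining() renders as '31s'.  The field stays empty until more
# than ``progress.estimate`` seconds (default 2) have elapsed.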
218 def speed(self, topic, pos, unit, now):
219 def speed(self, topic, pos, unit, now):
219 initialpos = self.startvals[topic]
220 initialpos = self.startvals[topic]
220 delta = pos - initialpos
221 delta = pos - initialpos
221 elapsed = now - self.starttimes[topic]
222 elapsed = now - self.starttimes[topic]
222 if elapsed > float(
223 if elapsed > float(
223 self.ui.config('progress', 'estimate', default=2)):
224 self.ui.config('progress', 'estimate', default=2)):
224 return _('%d %s/sec') % (delta / elapsed, unit)
225 return _('%d %s/sec') % (delta / elapsed, unit)
225 return ''
226 return ''
226
227
227 def progress(self, topic, pos, item='', unit='', total=None):
228 def progress(self, topic, pos, item='', unit='', total=None):
228 now = time.time()
229 now = time.time()
229 if pos is None:
230 if pos is None:
230 self.starttimes.pop(topic, None)
231 self.starttimes.pop(topic, None)
231 self.startvals.pop(topic, None)
232 self.startvals.pop(topic, None)
232 self.topicstates.pop(topic, None)
233 self.topicstates.pop(topic, None)
233 # reset the progress bar if this is the outermost topic
234 # reset the progress bar if this is the outermost topic
234 if self.topics and self.topics[0] == topic and self.printed:
235 if self.topics and self.topics[0] == topic and self.printed:
235 self.complete()
236 self.complete()
236 self.resetstate()
237 self.resetstate()
237 # truncate the list of topics assuming all topics within
238 # truncate the list of topics assuming all topics within
238 # this one are also closed
239 # this one are also closed
239 if topic in self.topics:
240 if topic in self.topics:
240 self.topics = self.topics[:self.topics.index(topic)]
241 self.topics = self.topics[:self.topics.index(topic)]
241 else:
242 else:
242 if topic not in self.topics:
243 if topic not in self.topics:
243 self.starttimes[topic] = now
244 self.starttimes[topic] = now
244 self.startvals[topic] = pos
245 self.startvals[topic] = pos
245 self.topics.append(topic)
246 self.topics.append(topic)
246 self.topicstates[topic] = pos, item, unit, total
247 self.topicstates[topic] = pos, item, unit, total
247 if now - self.lastprint >= self.refresh and self.topics:
248 if now - self.lastprint >= self.refresh and self.topics:
248 if (self.lasttopic is None # first time we printed
249 if (self.lasttopic is None # first time we printed
249 # not a topic change
250 # not a topic change
250 or topic == self.lasttopic
251 or topic == self.lasttopic
251 # it's been long enough we should print anyway
252 # it's been long enough we should print anyway
252 or now - self.lastprint >= self.changedelay):
253 or now - self.lastprint >= self.changedelay):
253 self.lastprint = now
254 self.lastprint = now
254 self.show(now, topic, *self.topicstates[topic])
255 self.show(now, topic, *self.topicstates[topic])
255
256
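# --- editorial sketch (not part of this changeset) ---
# Topics are tracked as a stack: a ``pos=None`` call closes a topic and any
# topics nested inside it, and closing the outermost topic resets the bar.
# A hypothetical caller, given a ui wrapped by this extension:
#
#   for i, rev in enumerate(revs):
#       ui.progress('rebasing', i, item=str(rev), unit='changesets',
#                   total=len(revs))
#       ...
#   ui.progress('rebasing', None)   # close the topic and clear the bar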
256 _singleton = None
257 _singleton = None
257
258
258 def uisetup(ui):
259 def uisetup(ui):
259 global _singleton
260 global _singleton
260 class progressui(ui.__class__):
261 class progressui(ui.__class__):
261 _progbar = None
262 _progbar = None
262
263
263 def _quiet(self):
264 def _quiet(self):
264 return self.debugflag or self.quiet
265 return self.debugflag or self.quiet
265
266
266 def progress(self, *args, **opts):
267 def progress(self, *args, **opts):
267 if not self._quiet():
268 if not self._quiet():
268 self._progbar.progress(*args, **opts)
269 self._progbar.progress(*args, **opts)
269 return super(progressui, self).progress(*args, **opts)
270 return super(progressui, self).progress(*args, **opts)
270
271
271 def write(self, *args, **opts):
272 def write(self, *args, **opts):
272 if not self._quiet() and self._progbar.printed:
273 if not self._quiet() and self._progbar.printed:
273 self._progbar.clear()
274 self._progbar.clear()
274 return super(progressui, self).write(*args, **opts)
275 return super(progressui, self).write(*args, **opts)
275
276
276 def write_err(self, *args, **opts):
277 def write_err(self, *args, **opts):
277 if not self._quiet() and self._progbar.printed:
278 if not self._quiet() and self._progbar.printed:
278 self._progbar.clear()
279 self._progbar.clear()
279 return super(progressui, self).write_err(*args, **opts)
280 return super(progressui, self).write_err(*args, **opts)
280
281
281 # Apps that derive a class from ui.ui() can use
282 # Apps that derive a class from ui.ui() can use
282 # setconfig('progress', 'disable', 'True') to disable this extension
283 # setconfig('progress', 'disable', 'True') to disable this extension
283 if ui.configbool('progress', 'disable'):
284 if ui.configbool('progress', 'disable'):
284 return
285 return
285 if shouldprint(ui) and not ui.debugflag and not ui.quiet:
286 if shouldprint(ui) and not ui.debugflag and not ui.quiet:
286 ui.__class__ = progressui
287 ui.__class__ = progressui
287 # we instantiate one globally shared progress bar to avoid
288 # we instantiate one globally shared progress bar to avoid
288 # competing progress bars when multiple UI objects get created
289 # competing progress bars when multiple UI objects get created
289 if not progressui._progbar:
290 if not progressui._progbar:
290 if _singleton is None:
291 if _singleton is None:
291 _singleton = progbar(ui)
292 _singleton = progbar(ui)
292 progressui._progbar = _singleton
293 progressui._progbar = _singleton
293
294
294 def reposetup(ui, repo):
295 def reposetup(ui, repo):
295 uisetup(repo.ui)
296 uisetup(repo.ui)
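# --- editorial note (not part of this changeset) ---
# An application embedding Mercurial can opt out before any ui is wrapped,
# as the comment in uisetup() above describes (hypothetical caller):
#
#   myui.setconfig('progress', 'disable', 'True')
#
# which makes uisetup() return before swapping in the progressui class.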
@@ -1,109 +1,110 b''
1 # Copyright (C) 2006 - Marco Barisione <marco@barisione.org>
1 # Copyright (C) 2006 - Marco Barisione <marco@barisione.org>
2 #
2 #
3 # This is a small extension for Mercurial (http://mercurial.selenic.com/)
3 # This is a small extension for Mercurial (http://mercurial.selenic.com/)
4 # that removes files not known to mercurial
4 # that removes files not known to mercurial
5 #
5 #
6 # This program was inspired by the "cvspurge" script contained in CVS
6 # This program was inspired by the "cvspurge" script contained in CVS
7 # utilities (http://www.red-bean.com/cvsutils/).
7 # utilities (http://www.red-bean.com/cvsutils/).
8 #
8 #
9 # For help on the usage of "hg purge" use:
9 # For help on the usage of "hg purge" use:
10 # hg help purge
10 # hg help purge
11 #
11 #
12 # This program is free software; you can redistribute it and/or modify
12 # This program is free software; you can redistribute it and/or modify
13 # it under the terms of the GNU General Public License as published by
13 # it under the terms of the GNU General Public License as published by
14 # the Free Software Foundation; either version 2 of the License, or
14 # the Free Software Foundation; either version 2 of the License, or
15 # (at your option) any later version.
15 # (at your option) any later version.
16 #
16 #
17 # This program is distributed in the hope that it will be useful,
17 # This program is distributed in the hope that it will be useful,
18 # but WITHOUT ANY WARRANTY; without even the implied warranty of
18 # but WITHOUT ANY WARRANTY; without even the implied warranty of
19 # MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
19 # MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
20 # GNU General Public License for more details.
20 # GNU General Public License for more details.
21 #
21 #
22 # You should have received a copy of the GNU General Public License
22 # You should have received a copy of the GNU General Public License
23 # along with this program; if not, see <http://www.gnu.org/licenses/>.
23 # along with this program; if not, see <http://www.gnu.org/licenses/>.
24
24
25 '''command to delete untracked files from the working directory'''
25 '''command to delete untracked files from the working directory'''
26
26
27 from mercurial import util, commands, cmdutil, scmutil
27 from mercurial import util, commands, cmdutil, scmutil
28 from mercurial.i18n import _
28 from mercurial.i18n import _
29 import os, stat
29 import os, stat
30
30
31 cmdtable = {}
31 cmdtable = {}
32 command = cmdutil.command(cmdtable)
32 command = cmdutil.command(cmdtable)
33 testedwith = 'internal'
33
34
34 @command('purge|clean',
35 @command('purge|clean',
35 [('a', 'abort-on-err', None, _('abort if an error occurs')),
36 [('a', 'abort-on-err', None, _('abort if an error occurs')),
36 ('', 'all', None, _('purge ignored files too')),
37 ('', 'all', None, _('purge ignored files too')),
37 ('p', 'print', None, _('print filenames instead of deleting them')),
38 ('p', 'print', None, _('print filenames instead of deleting them')),
38 ('0', 'print0', None, _('end filenames with NUL, for use with xargs'
39 ('0', 'print0', None, _('end filenames with NUL, for use with xargs'
39 ' (implies -p/--print)')),
40 ' (implies -p/--print)')),
40 ] + commands.walkopts,
41 ] + commands.walkopts,
41 _('hg purge [OPTION]... [DIR]...'))
42 _('hg purge [OPTION]... [DIR]...'))
42 def purge(ui, repo, *dirs, **opts):
43 def purge(ui, repo, *dirs, **opts):
43 '''removes files not tracked by Mercurial
44 '''removes files not tracked by Mercurial
44
45
45 Delete files not known to Mercurial. This is useful to test local
46 Delete files not known to Mercurial. This is useful to test local
46 and uncommitted changes in an otherwise-clean source tree.
47 and uncommitted changes in an otherwise-clean source tree.
47
48
48 This means that purge will delete:
49 This means that purge will delete:
49
50
50 - Unknown files: files marked with "?" by :hg:`status`
51 - Unknown files: files marked with "?" by :hg:`status`
51 - Empty directories: in fact Mercurial ignores directories unless
52 - Empty directories: in fact Mercurial ignores directories unless
52 they contain files under source control management
53 they contain files under source control management
53
54
54 But it will leave untouched:
55 But it will leave untouched:
55
56
56 - Modified and unmodified tracked files
57 - Modified and unmodified tracked files
57 - Ignored files (unless --all is specified)
58 - Ignored files (unless --all is specified)
58 - New files added to the repository (with :hg:`add`)
59 - New files added to the repository (with :hg:`add`)
59
60
60 If directories are given on the command line, only files in these
61 If directories are given on the command line, only files in these
61 directories are considered.
62 directories are considered.
62
63
63 Be careful with purge, as you could irreversibly delete some files
64 Be careful with purge, as you could irreversibly delete some files
64 you forgot to add to the repository. If you only want to print the
65 you forgot to add to the repository. If you only want to print the
65 list of files that this program would delete, use the --print
66 list of files that this program would delete, use the --print
66 option.
67 option.
67 '''
68 '''
68 act = not opts['print']
69 act = not opts['print']
69 eol = '\n'
70 eol = '\n'
70 if opts['print0']:
71 if opts['print0']:
71 eol = '\0'
72 eol = '\0'
72 act = False # --print0 implies --print
73 act = False # --print0 implies --print
73
74
74 def remove(remove_func, name):
75 def remove(remove_func, name):
75 if act:
76 if act:
76 try:
77 try:
77 remove_func(repo.wjoin(name))
78 remove_func(repo.wjoin(name))
78 except OSError:
79 except OSError:
79 m = _('%s cannot be removed') % name
80 m = _('%s cannot be removed') % name
80 if opts['abort_on_err']:
81 if opts['abort_on_err']:
81 raise util.Abort(m)
82 raise util.Abort(m)
82 ui.warn(_('warning: %s\n') % m)
83 ui.warn(_('warning: %s\n') % m)
83 else:
84 else:
84 ui.write('%s%s' % (name, eol))
85 ui.write('%s%s' % (name, eol))
85
86
86 def removefile(path):
87 def removefile(path):
87 try:
88 try:
88 os.remove(path)
89 os.remove(path)
89 except OSError:
90 except OSError:
90 # read-only files cannot be unlinked under Windows
91 # read-only files cannot be unlinked under Windows
91 s = os.stat(path)
92 s = os.stat(path)
92 if (s.st_mode & stat.S_IWRITE) != 0:
93 if (s.st_mode & stat.S_IWRITE) != 0:
93 raise
94 raise
94 os.chmod(path, stat.S_IMODE(s.st_mode) | stat.S_IWRITE)
95 os.chmod(path, stat.S_IMODE(s.st_mode) | stat.S_IWRITE)
95 os.remove(path)
96 os.remove(path)
96
97
97 directories = []
98 directories = []
98 match = scmutil.match(repo[None], dirs, opts)
99 match = scmutil.match(repo[None], dirs, opts)
99 match.dir = directories.append
100 match.dir = directories.append
100 status = repo.status(match=match, ignored=opts['all'], unknown=True)
101 status = repo.status(match=match, ignored=opts['all'], unknown=True)
101
102
102 for f in sorted(status[4] + status[5]):
103 for f in sorted(status[4] + status[5]):
103 ui.note(_('Removing file %s\n') % f)
104 ui.note(_('Removing file %s\n') % f)
104 remove(removefile, f)
105 remove(removefile, f)
105
106
106 for f in sorted(directories, reverse=True):
107 for f in sorted(directories, reverse=True):
107 if match(f) and not os.listdir(repo.wjoin(f)):
108 if match(f) and not os.listdir(repo.wjoin(f)):
108 ui.note(_('Removing directory %s\n') % f)
109 ui.note(_('Removing directory %s\n') % f)
109 remove(os.rmdir, f)
110 remove(os.rmdir, f)
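# --- editorial note (not part of this changeset) ---
# Typical invocations of the command defined above, all covered by its
# docstring (directory names here are made up):
#
#   hg purge --print          # list what would be deleted, delete nothing
#   hg purge --all            # also delete ignored files
#   hg purge build dist       # restrict the purge to the given directories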
@@ -1,676 +1,677 b''
1 # rebase.py - rebasing feature for mercurial
1 # rebase.py - rebasing feature for mercurial
2 #
2 #
3 # Copyright 2008 Stefano Tortarolo <stefano.tortarolo at gmail dot com>
3 # Copyright 2008 Stefano Tortarolo <stefano.tortarolo at gmail dot com>
4 #
4 #
5 # This software may be used and distributed according to the terms of the
5 # This software may be used and distributed according to the terms of the
6 # GNU General Public License version 2 or any later version.
6 # GNU General Public License version 2 or any later version.
7
7
8 '''command to move sets of revisions to a different ancestor
8 '''command to move sets of revisions to a different ancestor
9
9
10 This extension lets you rebase changesets in an existing Mercurial
10 This extension lets you rebase changesets in an existing Mercurial
11 repository.
11 repository.
12
12
13 For more information:
13 For more information:
14 http://mercurial.selenic.com/wiki/RebaseExtension
14 http://mercurial.selenic.com/wiki/RebaseExtension
15 '''
15 '''
16
16
17 from mercurial import hg, util, repair, merge, cmdutil, commands, bookmarks
17 from mercurial import hg, util, repair, merge, cmdutil, commands, bookmarks
18 from mercurial import extensions, patch, scmutil, phases
18 from mercurial import extensions, patch, scmutil, phases
19 from mercurial.commands import templateopts
19 from mercurial.commands import templateopts
20 from mercurial.node import nullrev
20 from mercurial.node import nullrev
21 from mercurial.lock import release
21 from mercurial.lock import release
22 from mercurial.i18n import _
22 from mercurial.i18n import _
23 import os, errno
23 import os, errno
24
24
25 nullmerge = -2
25 nullmerge = -2
26
26
27 cmdtable = {}
27 cmdtable = {}
28 command = cmdutil.command(cmdtable)
28 command = cmdutil.command(cmdtable)
29 testedwith = 'internal'
29
30
30 @command('rebase',
31 @command('rebase',
31 [('s', 'source', '',
32 [('s', 'source', '',
32 _('rebase from the specified changeset'), _('REV')),
33 _('rebase from the specified changeset'), _('REV')),
33 ('b', 'base', '',
34 ('b', 'base', '',
34 _('rebase from the base of the specified changeset '
35 _('rebase from the base of the specified changeset '
35 '(up to greatest common ancestor of base and dest)'),
36 '(up to greatest common ancestor of base and dest)'),
36 _('REV')),
37 _('REV')),
37 ('r', 'rev', [],
38 ('r', 'rev', [],
38 _('rebase these revisions'),
39 _('rebase these revisions'),
39 _('REV')),
40 _('REV')),
40 ('d', 'dest', '',
41 ('d', 'dest', '',
41 _('rebase onto the specified changeset'), _('REV')),
42 _('rebase onto the specified changeset'), _('REV')),
42 ('', 'collapse', False, _('collapse the rebased changesets')),
43 ('', 'collapse', False, _('collapse the rebased changesets')),
43 ('m', 'message', '',
44 ('m', 'message', '',
44 _('use text as collapse commit message'), _('TEXT')),
45 _('use text as collapse commit message'), _('TEXT')),
45 ('e', 'edit', False, _('invoke editor on commit messages')),
46 ('e', 'edit', False, _('invoke editor on commit messages')),
46 ('l', 'logfile', '',
47 ('l', 'logfile', '',
47 _('read collapse commit message from file'), _('FILE')),
48 _('read collapse commit message from file'), _('FILE')),
48 ('', 'keep', False, _('keep original changesets')),
49 ('', 'keep', False, _('keep original changesets')),
49 ('', 'keepbranches', False, _('keep original branch names')),
50 ('', 'keepbranches', False, _('keep original branch names')),
50 ('D', 'detach', False, _('force detaching of source from its original '
51 ('D', 'detach', False, _('force detaching of source from its original '
51 'branch')),
52 'branch')),
52 ('t', 'tool', '', _('specify merge tool')),
53 ('t', 'tool', '', _('specify merge tool')),
53 ('c', 'continue', False, _('continue an interrupted rebase')),
54 ('c', 'continue', False, _('continue an interrupted rebase')),
54 ('a', 'abort', False, _('abort an interrupted rebase'))] +
55 ('a', 'abort', False, _('abort an interrupted rebase'))] +
55 templateopts,
56 templateopts,
56 _('hg rebase [-s REV | -b REV] [-d REV] [options]\n'
57 _('hg rebase [-s REV | -b REV] [-d REV] [options]\n'
57 'hg rebase {-a|-c}'))
58 'hg rebase {-a|-c}'))
58 def rebase(ui, repo, **opts):
59 def rebase(ui, repo, **opts):
59 """move changeset (and descendants) to a different branch
60 """move changeset (and descendants) to a different branch
60
61
61 Rebase uses repeated merging to graft changesets from one part of
62 Rebase uses repeated merging to graft changesets from one part of
62 history (the source) onto another (the destination). This can be
63 history (the source) onto another (the destination). This can be
63 useful for linearizing *local* changes relative to a master
64 useful for linearizing *local* changes relative to a master
64 development tree.
65 development tree.
65
66
66 You should not rebase changesets that have already been shared
67 You should not rebase changesets that have already been shared
67 with others. Doing so will force everybody else to perform the
68 with others. Doing so will force everybody else to perform the
68 same rebase or they will end up with duplicated changesets after
69 same rebase or they will end up with duplicated changesets after
69 pulling in your rebased changesets.
70 pulling in your rebased changesets.
70
71
71 If you don't specify a destination changeset (``-d/--dest``),
72 If you don't specify a destination changeset (``-d/--dest``),
72 rebase uses the tipmost head of the current named branch as the
73 rebase uses the tipmost head of the current named branch as the
73 destination. (The destination changeset is not modified by
74 destination. (The destination changeset is not modified by
74 rebasing, but new changesets are added as its descendants.)
75 rebasing, but new changesets are added as its descendants.)
75
76
76 You can specify which changesets to rebase in two ways: as a
77 You can specify which changesets to rebase in two ways: as a
77 "source" changeset or as a "base" changeset. Both are shorthand
78 "source" changeset or as a "base" changeset. Both are shorthand
78 for a topologically related set of changesets (the "source
79 for a topologically related set of changesets (the "source
79 branch"). If you specify source (``-s/--source``), rebase will
80 branch"). If you specify source (``-s/--source``), rebase will
80 rebase that changeset and all of its descendants onto dest. If you
81 rebase that changeset and all of its descendants onto dest. If you
81 specify base (``-b/--base``), rebase will select ancestors of base
82 specify base (``-b/--base``), rebase will select ancestors of base
82 back to but not including the common ancestor with dest. Thus,
83 back to but not including the common ancestor with dest. Thus,
83 ``-b`` is less precise but more convenient than ``-s``: you can
84 ``-b`` is less precise but more convenient than ``-s``: you can
84 specify any changeset in the source branch, and rebase will select
85 specify any changeset in the source branch, and rebase will select
85 the whole branch. If you specify neither ``-s`` nor ``-b``, rebase
86 the whole branch. If you specify neither ``-s`` nor ``-b``, rebase
86 uses the parent of the working directory as the base.
87 uses the parent of the working directory as the base.
87
88
88 By default, rebase recreates the changesets in the source branch
89 By default, rebase recreates the changesets in the source branch
89 as descendants of dest and then destroys the originals. Use
90 as descendants of dest and then destroys the originals. Use
90 ``--keep`` to preserve the original source changesets. Some
91 ``--keep`` to preserve the original source changesets. Some
91 changesets in the source branch (e.g. merges from the destination
92 changesets in the source branch (e.g. merges from the destination
92 branch) may be dropped if they no longer contribute any change.
93 branch) may be dropped if they no longer contribute any change.
93
94
94 One result of the rules for selecting the destination changeset
95 One result of the rules for selecting the destination changeset
95 and source branch is that, unlike ``merge``, rebase will do
96 and source branch is that, unlike ``merge``, rebase will do
96 nothing if you are at the latest (tipmost) head of a named branch
97 nothing if you are at the latest (tipmost) head of a named branch
97 with two heads. You need to explicitly specify source and/or
98 with two heads. You need to explicitly specify source and/or
98 destination (or ``update`` to the other head, if it's the head of
99 destination (or ``update`` to the other head, if it's the head of
99 the intended source branch).
100 the intended source branch).
100
101
101 If a rebase is interrupted to manually resolve a merge, it can be
102 If a rebase is interrupted to manually resolve a merge, it can be
102 continued with --continue/-c or aborted with --abort/-a.
103 continued with --continue/-c or aborted with --abort/-a.
103
104
104 Returns 0 on success, 1 if nothing to rebase.
105 Returns 0 on success, 1 if nothing to rebase.
105 """
106 """
106 originalwd = target = None
107 originalwd = target = None
107 external = nullrev
108 external = nullrev
108 state = {}
109 state = {}
109 skipped = set()
110 skipped = set()
110 targetancestors = set()
111 targetancestors = set()
111
112
112 editor = None
113 editor = None
113 if opts.get('edit'):
114 if opts.get('edit'):
114 editor = cmdutil.commitforceeditor
115 editor = cmdutil.commitforceeditor
115
116
116 lock = wlock = None
117 lock = wlock = None
117 try:
118 try:
118 wlock = repo.wlock()
119 wlock = repo.wlock()
119 lock = repo.lock()
120 lock = repo.lock()
120
121
121 # Validate input and define rebasing points
122 # Validate input and define rebasing points
122 destf = opts.get('dest', None)
123 destf = opts.get('dest', None)
123 srcf = opts.get('source', None)
124 srcf = opts.get('source', None)
124 basef = opts.get('base', None)
125 basef = opts.get('base', None)
125 revf = opts.get('rev', [])
126 revf = opts.get('rev', [])
126 contf = opts.get('continue')
127 contf = opts.get('continue')
127 abortf = opts.get('abort')
128 abortf = opts.get('abort')
128 collapsef = opts.get('collapse', False)
129 collapsef = opts.get('collapse', False)
129 collapsemsg = cmdutil.logmessage(ui, opts)
130 collapsemsg = cmdutil.logmessage(ui, opts)
130 extrafn = opts.get('extrafn') # internal, used by e.g. hgsubversion
131 extrafn = opts.get('extrafn') # internal, used by e.g. hgsubversion
131 keepf = opts.get('keep', False)
132 keepf = opts.get('keep', False)
132 keepbranchesf = opts.get('keepbranches', False)
133 keepbranchesf = opts.get('keepbranches', False)
133 detachf = opts.get('detach', False)
134 detachf = opts.get('detach', False)
134 # keepopen is not meant for use on the command line, but by
135 # keepopen is not meant for use on the command line, but by
135 # other extensions
136 # other extensions
136 keepopen = opts.get('keepopen', False)
137 keepopen = opts.get('keepopen', False)
137
138
138 if collapsemsg and not collapsef:
139 if collapsemsg and not collapsef:
139 raise util.Abort(
140 raise util.Abort(
140 _('message can only be specified with collapse'))
141 _('message can only be specified with collapse'))
141
142
142 if contf or abortf:
143 if contf or abortf:
143 if contf and abortf:
144 if contf and abortf:
144 raise util.Abort(_('cannot use both abort and continue'))
145 raise util.Abort(_('cannot use both abort and continue'))
145 if collapsef:
146 if collapsef:
146 raise util.Abort(
147 raise util.Abort(
147 _('cannot use collapse with continue or abort'))
148 _('cannot use collapse with continue or abort'))
148 if detachf:
149 if detachf:
149 raise util.Abort(_('cannot use detach with continue or abort'))
150 raise util.Abort(_('cannot use detach with continue or abort'))
150 if srcf or basef or destf:
151 if srcf or basef or destf:
151 raise util.Abort(
152 raise util.Abort(
152 _('abort and continue do not allow specifying revisions'))
153 _('abort and continue do not allow specifying revisions'))
153 if opts.get('tool', False):
154 if opts.get('tool', False):
154 ui.warn(_('tool option will be ignored\n'))
155 ui.warn(_('tool option will be ignored\n'))
155
156
156 (originalwd, target, state, skipped, collapsef, keepf,
157 (originalwd, target, state, skipped, collapsef, keepf,
157 keepbranchesf, external) = restorestatus(repo)
158 keepbranchesf, external) = restorestatus(repo)
158 if abortf:
159 if abortf:
159 return abort(repo, originalwd, target, state)
160 return abort(repo, originalwd, target, state)
160 else:
161 else:
161 if srcf and basef:
162 if srcf and basef:
162 raise util.Abort(_('cannot specify both a '
163 raise util.Abort(_('cannot specify both a '
163 'source and a base'))
164 'source and a base'))
164 if revf and basef:
165 if revf and basef:
165 raise util.Abort(_('cannot specify both a '
166 raise util.Abort(_('cannot specify both a '
166 'revision and a base'))
167 'revision and a base'))
167 if revf and srcf:
168 if revf and srcf:
168 raise util.Abort(_('cannot specify both a '
169 raise util.Abort(_('cannot specify both a '
169 'revision and a source'))
170 'revision and a source'))
170 if detachf:
171 if detachf:
171 if not (srcf or revf):
172 if not (srcf or revf):
172 raise util.Abort(
173 raise util.Abort(
173 _('detach requires a revision to be specified'))
174 _('detach requires a revision to be specified'))
174 if basef:
175 if basef:
175 raise util.Abort(_('cannot specify a base with detach'))
176 raise util.Abort(_('cannot specify a base with detach'))
176
177
177 cmdutil.bailifchanged(repo)
178 cmdutil.bailifchanged(repo)
178
179
179 if not destf:
180 if not destf:
180 # Destination defaults to the latest revision in the
181 # Destination defaults to the latest revision in the
181 # current branch
182 # current branch
182 branch = repo[None].branch()
183 branch = repo[None].branch()
183 dest = repo[branch]
184 dest = repo[branch]
184 else:
185 else:
185 dest = scmutil.revsingle(repo, destf)
186 dest = scmutil.revsingle(repo, destf)
186
187
187 if revf:
188 if revf:
188 rebaseset = repo.revs('%lr', revf)
189 rebaseset = repo.revs('%lr', revf)
189 elif srcf:
190 elif srcf:
190 src = scmutil.revrange(repo, [srcf])
191 src = scmutil.revrange(repo, [srcf])
191 rebaseset = repo.revs('(%ld)::', src)
192 rebaseset = repo.revs('(%ld)::', src)
192 else:
193 else:
193 base = scmutil.revrange(repo, [basef or '.'])
194 base = scmutil.revrange(repo, [basef or '.'])
194 rebaseset = repo.revs(
195 rebaseset = repo.revs(
195 '(children(ancestor(%ld, %d)) and ::(%ld))::',
196 '(children(ancestor(%ld, %d)) and ::(%ld))::',
196 base, dest, base)
197 base, dest, base)
197
198
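# --- editorial note (not part of this changeset) ---
# The three branches above mirror the command-line forms described in the
# docstring (revision numbers are made up):
#
#   hg rebase -r '5::7' -d 10   # exactly the revisions given
#   hg rebase -s 5 -d 10        # revset (5):: -- the source and its
#                               # descendants
#   hg rebase -b 7 -d 10        # ancestors of 7 back to, but not including,
#                               # the common ancestor with 10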
198 if rebaseset:
199 if rebaseset:
199 root = min(rebaseset)
200 root = min(rebaseset)
200 else:
201 else:
201 root = None
202 root = None
202
203
203 if not rebaseset:
204 if not rebaseset:
204 repo.ui.debug('base is ancestor of destination\n')
205 repo.ui.debug('base is ancestor of destination\n')
205 result = None
206 result = None
206 elif not keepf and list(repo.revs('first(children(%ld) - %ld)',
207 elif not keepf and list(repo.revs('first(children(%ld) - %ld)',
207 rebaseset, rebaseset)):
208 rebaseset, rebaseset)):
208 raise util.Abort(
209 raise util.Abort(
209 _("can't remove original changesets with"
210 _("can't remove original changesets with"
210 " unrebased descendants"),
211 " unrebased descendants"),
211 hint=_('use --keep to keep original changesets'))
212 hint=_('use --keep to keep original changesets'))
212 elif not keepf and not repo[root].mutable():
213 elif not keepf and not repo[root].mutable():
213 raise util.Abort(_("can't rebase immutable changeset %s")
214 raise util.Abort(_("can't rebase immutable changeset %s")
214 % repo[root],
215 % repo[root],
215 hint=_('see hg help phases for details'))
216 hint=_('see hg help phases for details'))
216 else:
217 else:
217 result = buildstate(repo, dest, rebaseset, detachf, collapsef)
218 result = buildstate(repo, dest, rebaseset, detachf, collapsef)
218
219
219 if not result:
220 if not result:
220 # Empty state built, nothing to rebase
221 # Empty state built, nothing to rebase
221 ui.status(_('nothing to rebase\n'))
222 ui.status(_('nothing to rebase\n'))
222 return 1
223 return 1
223 else:
224 else:
224 originalwd, target, state = result
225 originalwd, target, state = result
225 if collapsef:
226 if collapsef:
226 targetancestors = set(repo.changelog.ancestors(target))
227 targetancestors = set(repo.changelog.ancestors(target))
227 targetancestors.add(target)
228 targetancestors.add(target)
228 external = checkexternal(repo, state, targetancestors)
229 external = checkexternal(repo, state, targetancestors)
229
230
230 if keepbranchesf:
231 if keepbranchesf:
231 assert not extrafn, 'cannot use both keepbranches and extrafn'
232 assert not extrafn, 'cannot use both keepbranches and extrafn'
232 def extrafn(ctx, extra):
233 def extrafn(ctx, extra):
233 extra['branch'] = ctx.branch()
234 extra['branch'] = ctx.branch()
234 if collapsef:
235 if collapsef:
235 branches = set()
236 branches = set()
236 for rev in state:
237 for rev in state:
237 branches.add(repo[rev].branch())
238 branches.add(repo[rev].branch())
238 if len(branches) > 1:
239 if len(branches) > 1:
239 raise util.Abort(_('cannot collapse multiple named '
240 raise util.Abort(_('cannot collapse multiple named '
240 'branches'))
241 'branches'))
241
242
242
243
243 # Rebase
244 # Rebase
244 if not targetancestors:
245 if not targetancestors:
245 targetancestors = set(repo.changelog.ancestors(target))
246 targetancestors = set(repo.changelog.ancestors(target))
246 targetancestors.add(target)
247 targetancestors.add(target)
247
248
248 # Keep track of the current bookmarks in order to reset them later
249 # Keep track of the current bookmarks in order to reset them later
249 currentbookmarks = repo._bookmarks.copy()
250 currentbookmarks = repo._bookmarks.copy()
250
251
251 sortedstate = sorted(state)
252 sortedstate = sorted(state)
252 total = len(sortedstate)
253 total = len(sortedstate)
253 pos = 0
254 pos = 0
254 for rev in sortedstate:
255 for rev in sortedstate:
255 pos += 1
256 pos += 1
256 if state[rev] == -1:
257 if state[rev] == -1:
257 ui.progress(_("rebasing"), pos, ("%d:%s" % (rev, repo[rev])),
258 ui.progress(_("rebasing"), pos, ("%d:%s" % (rev, repo[rev])),
258 _('changesets'), total)
259 _('changesets'), total)
259 storestatus(repo, originalwd, target, state, collapsef, keepf,
260 storestatus(repo, originalwd, target, state, collapsef, keepf,
260 keepbranchesf, external)
261 keepbranchesf, external)
261 p1, p2 = defineparents(repo, rev, target, state,
262 p1, p2 = defineparents(repo, rev, target, state,
262 targetancestors)
263 targetancestors)
263 if len(repo.parents()) == 2:
264 if len(repo.parents()) == 2:
264 repo.ui.debug('resuming interrupted rebase\n')
265 repo.ui.debug('resuming interrupted rebase\n')
265 else:
266 else:
266 try:
267 try:
267 ui.setconfig('ui', 'forcemerge', opts.get('tool', ''))
268 ui.setconfig('ui', 'forcemerge', opts.get('tool', ''))
268 stats = rebasenode(repo, rev, p1, state, collapsef)
269 stats = rebasenode(repo, rev, p1, state, collapsef)
269 if stats and stats[3] > 0:
270 if stats and stats[3] > 0:
270 raise util.Abort(_('unresolved conflicts (see hg '
271 raise util.Abort(_('unresolved conflicts (see hg '
271 'resolve, then hg rebase --continue)'))
272 'resolve, then hg rebase --continue)'))
272 finally:
273 finally:
273 ui.setconfig('ui', 'forcemerge', '')
274 ui.setconfig('ui', 'forcemerge', '')
274 cmdutil.duplicatecopies(repo, rev, target)
275 cmdutil.duplicatecopies(repo, rev, target)
275 if not collapsef:
276 if not collapsef:
276 newrev = concludenode(repo, rev, p1, p2, extrafn=extrafn,
277 newrev = concludenode(repo, rev, p1, p2, extrafn=extrafn,
277 editor=editor)
278 editor=editor)
278 else:
279 else:
279 # Skip commit if we are collapsing
280 # Skip commit if we are collapsing
280 repo.setparents(repo[p1].node())
281 repo.setparents(repo[p1].node())
281 newrev = None
282 newrev = None
282 # Update the state
283 # Update the state
283 if newrev is not None:
284 if newrev is not None:
284 state[rev] = repo[newrev].rev()
285 state[rev] = repo[newrev].rev()
285 else:
286 else:
286 if not collapsef:
287 if not collapsef:
287 ui.note(_('no changes, revision %d skipped\n') % rev)
288 ui.note(_('no changes, revision %d skipped\n') % rev)
288 ui.debug('next revision set to %s\n' % p1)
289 ui.debug('next revision set to %s\n' % p1)
289 skipped.add(rev)
290 skipped.add(rev)
290 state[rev] = p1
291 state[rev] = p1
291
292
292 ui.progress(_('rebasing'), None)
293 ui.progress(_('rebasing'), None)
293 ui.note(_('rebase merging completed\n'))
294 ui.note(_('rebase merging completed\n'))
294
295
295 if collapsef and not keepopen:
296 if collapsef and not keepopen:
296 p1, p2 = defineparents(repo, min(state), target,
297 p1, p2 = defineparents(repo, min(state), target,
297 state, targetancestors)
298 state, targetancestors)
298 if collapsemsg:
299 if collapsemsg:
299 commitmsg = collapsemsg
300 commitmsg = collapsemsg
300 else:
301 else:
301 commitmsg = 'Collapsed revision'
302 commitmsg = 'Collapsed revision'
302 for rebased in state:
303 for rebased in state:
303 if rebased not in skipped and state[rebased] != nullmerge:
304 if rebased not in skipped and state[rebased] != nullmerge:
304 commitmsg += '\n* %s' % repo[rebased].description()
305 commitmsg += '\n* %s' % repo[rebased].description()
305 commitmsg = ui.edit(commitmsg, repo.ui.username())
306 commitmsg = ui.edit(commitmsg, repo.ui.username())
306 newrev = concludenode(repo, rev, p1, external, commitmsg=commitmsg,
307 newrev = concludenode(repo, rev, p1, external, commitmsg=commitmsg,
307 extrafn=extrafn, editor=editor)
308 extrafn=extrafn, editor=editor)
308
309
309 if 'qtip' in repo.tags():
310 if 'qtip' in repo.tags():
310 updatemq(repo, state, skipped, **opts)
311 updatemq(repo, state, skipped, **opts)
311
312
312 if currentbookmarks:
313 if currentbookmarks:
313 # Nodeids are needed to reset bookmarks
314 # Nodeids are needed to reset bookmarks
314 nstate = {}
315 nstate = {}
315 for k, v in state.iteritems():
316 for k, v in state.iteritems():
316 if v != nullmerge:
317 if v != nullmerge:
317 nstate[repo[k].node()] = repo[v].node()
318 nstate[repo[k].node()] = repo[v].node()
318
319
319 if not keepf:
320 if not keepf:
320 # Remove revisions that are no longer useful
321 # Remove revisions that are no longer useful
321 rebased = [rev for rev in state if state[rev] != nullmerge]
322 rebased = [rev for rev in state if state[rev] != nullmerge]
322 if rebased:
323 if rebased:
323 if set(repo.changelog.descendants(min(rebased))) - set(state):
324 if set(repo.changelog.descendants(min(rebased))) - set(state):
324 ui.warn(_("warning: new changesets detected "
325 ui.warn(_("warning: new changesets detected "
325 "on source branch, not stripping\n"))
326 "on source branch, not stripping\n"))
326 else:
327 else:
327 # backup the old csets by default
328 # backup the old csets by default
328 repair.strip(ui, repo, repo[min(rebased)].node(), "all")
329 repair.strip(ui, repo, repo[min(rebased)].node(), "all")
329
330
330 if currentbookmarks:
331 if currentbookmarks:
331 updatebookmarks(repo, nstate, currentbookmarks, **opts)
332 updatebookmarks(repo, nstate, currentbookmarks, **opts)
332
333
333 clearstatus(repo)
334 clearstatus(repo)
334 ui.note(_("rebase completed\n"))
335 ui.note(_("rebase completed\n"))
335 if os.path.exists(repo.sjoin('undo')):
336 if os.path.exists(repo.sjoin('undo')):
336 util.unlinkpath(repo.sjoin('undo'))
337 util.unlinkpath(repo.sjoin('undo'))
337 if skipped:
338 if skipped:
338 ui.note(_("%d revisions have been skipped\n") % len(skipped))
339 ui.note(_("%d revisions have been skipped\n") % len(skipped))
339 finally:
340 finally:
340 release(lock, wlock)
341 release(lock, wlock)
341
342
342 def checkexternal(repo, state, targetancestors):
343 def checkexternal(repo, state, targetancestors):
343 """Check whether one or more external revisions need to be taken in
344 """Check whether one or more external revisions need to be taken in
344 consideration. In the latter case, abort.
345 consideration. In the latter case, abort.
345 """
346 """
346 external = nullrev
347 external = nullrev
347 source = min(state)
348 source = min(state)
348 for rev in state:
349 for rev in state:
349 if rev == source:
350 if rev == source:
350 continue
351 continue
351 # Check externals and fail if there is more than one
352 # Check externals and fail if there is more than one
352 for p in repo[rev].parents():
353 for p in repo[rev].parents():
353 if (p.rev() not in state
354 if (p.rev() not in state
354 and p.rev() not in targetancestors):
355 and p.rev() not in targetancestors):
355 if external != nullrev:
356 if external != nullrev:
356 raise util.Abort(_('unable to collapse, there is more '
357 raise util.Abort(_('unable to collapse, there is more '
357 'than one external parent'))
358 'than one external parent'))
358 external = p.rev()
359 external = p.rev()
359 return external
360 return external
360
361
361 def concludenode(repo, rev, p1, p2, commitmsg=None, editor=None, extrafn=None):
362 def concludenode(repo, rev, p1, p2, commitmsg=None, editor=None, extrafn=None):
362 'Commit the changes and store useful information in extra'
363 'Commit the changes and store useful information in extra'
363 try:
364 try:
364 repo.setparents(repo[p1].node(), repo[p2].node())
365 repo.setparents(repo[p1].node(), repo[p2].node())
365 ctx = repo[rev]
366 ctx = repo[rev]
366 if commitmsg is None:
367 if commitmsg is None:
367 commitmsg = ctx.description()
368 commitmsg = ctx.description()
368 extra = {'rebase_source': ctx.hex()}
369 extra = {'rebase_source': ctx.hex()}
369 if extrafn:
370 if extrafn:
370 extrafn(ctx, extra)
371 extrafn(ctx, extra)
371 # Commit might fail if unresolved files exist
372 # Commit might fail if unresolved files exist
372 newrev = repo.commit(text=commitmsg, user=ctx.user(),
373 newrev = repo.commit(text=commitmsg, user=ctx.user(),
373 date=ctx.date(), extra=extra, editor=editor)
374 date=ctx.date(), extra=extra, editor=editor)
374 repo.dirstate.setbranch(repo[newrev].branch())
375 repo.dirstate.setbranch(repo[newrev].branch())
375 targetphase = max(ctx.phase(), phases.draft)
376 targetphase = max(ctx.phase(), phases.draft)
376 # retractboundary doesn't overwrite upper phase inherited from parent
377 # retractboundary doesn't overwrite upper phase inherited from parent
377 newnode = repo[newrev].node()
378 newnode = repo[newrev].node()
378 if newnode:
379 if newnode:
379 phases.retractboundary(repo, targetphase, [newnode])
380 phases.retractboundary(repo, targetphase, [newnode])
380 return newrev
381 return newrev
381 except util.Abort:
382 except util.Abort:
382 # Invalidate the previous setparents
383 # Invalidate the previous setparents
383 repo.dirstate.invalidate()
384 repo.dirstate.invalidate()
384 raise
385 raise
385
386
386 def rebasenode(repo, rev, p1, state, collapse):
387 def rebasenode(repo, rev, p1, state, collapse):
387 'Rebase a single revision'
388 'Rebase a single revision'
388 # Merge phase
389 # Merge phase
389 # Update to target and merge it with local
390 # Update to target and merge it with local
390 if repo['.'].rev() != repo[p1].rev():
391 if repo['.'].rev() != repo[p1].rev():
391 repo.ui.debug(" update to %d:%s\n" % (repo[p1].rev(), repo[p1]))
392 repo.ui.debug(" update to %d:%s\n" % (repo[p1].rev(), repo[p1]))
392 merge.update(repo, p1, False, True, False)
393 merge.update(repo, p1, False, True, False)
393 else:
394 else:
394 repo.ui.debug(" already in target\n")
395 repo.ui.debug(" already in target\n")
395 repo.dirstate.write()
396 repo.dirstate.write()
396 repo.ui.debug(" merge against %d:%s\n" % (repo[rev].rev(), repo[rev]))
397 repo.ui.debug(" merge against %d:%s\n" % (repo[rev].rev(), repo[rev]))
397 base = None
398 base = None
398 if repo[rev].rev() != repo[min(state)].rev():
399 if repo[rev].rev() != repo[min(state)].rev():
399 base = repo[rev].p1().node()
400 base = repo[rev].p1().node()
400 # When collapsing in-place, the parent is the common ancestor, so we
401 # When collapsing in-place, the parent is the common ancestor, so we
401 # have to allow merging with it.
402 # have to allow merging with it.
402 return merge.update(repo, rev, True, True, False, base, collapse)
403 return merge.update(repo, rev, True, True, False, base, collapse)
403
404
404 def defineparents(repo, rev, target, state, targetancestors):
405 def defineparents(repo, rev, target, state, targetancestors):
405 'Return the new parent relationship of the revision that will be rebased'
406 'Return the new parent relationship of the revision that will be rebased'
406 parents = repo[rev].parents()
407 parents = repo[rev].parents()
407 p1 = p2 = nullrev
408 p1 = p2 = nullrev
408
409
409 P1n = parents[0].rev()
410 P1n = parents[0].rev()
410 if P1n in targetancestors:
411 if P1n in targetancestors:
411 p1 = target
412 p1 = target
412 elif P1n in state:
413 elif P1n in state:
413 if state[P1n] == nullmerge:
414 if state[P1n] == nullmerge:
414 p1 = target
415 p1 = target
415 else:
416 else:
416 p1 = state[P1n]
417 p1 = state[P1n]
417 else: # P1n external
418 else: # P1n external
418 p1 = target
419 p1 = target
419 p2 = P1n
420 p2 = P1n
420
421
421 if len(parents) == 2 and parents[1].rev() not in targetancestors:
422 if len(parents) == 2 and parents[1].rev() not in targetancestors:
422 P2n = parents[1].rev()
423 P2n = parents[1].rev()
423 # interesting second parent
424 # interesting second parent
424 if P2n in state:
425 if P2n in state:
425 if p1 == target: # P1n in targetancestors or external
426 if p1 == target: # P1n in targetancestors or external
426 p1 = state[P2n]
427 p1 = state[P2n]
427 else:
428 else:
428 p2 = state[P2n]
429 p2 = state[P2n]
429 else: # P2n external
430 else: # P2n external
430 if p2 != nullrev: # P1n external too => rev is a merged revision
431 if p2 != nullrev: # P1n external too => rev is a merged revision
431 raise util.Abort(_('cannot use revision %d as base, result '
432 raise util.Abort(_('cannot use revision %d as base, result '
432 'would have 3 parents') % rev)
433 'would have 3 parents') % rev)
433 p2 = P2n
434 p2 = P2n
434 repo.ui.debug(" future parents are %d and %d\n" %
435 repo.ui.debug(" future parents are %d and %d\n" %
435 (repo[p1].rev(), repo[p2].rev()))
436 (repo[p1].rev(), repo[p2].rev()))
436 return p1, p2
437 return p1, p2
437
438
438 def isagitpatch(repo, patchname):
439 def isagitpatch(repo, patchname):
439 'Return true if the given patch is in git format'
440 'Return true if the given patch is in git format'
440 mqpatch = os.path.join(repo.mq.path, patchname)
441 mqpatch = os.path.join(repo.mq.path, patchname)
441 for line in patch.linereader(file(mqpatch, 'rb')):
442 for line in patch.linereader(file(mqpatch, 'rb')):
442 if line.startswith('diff --git'):
443 if line.startswith('diff --git'):
443 return True
444 return True
444 return False
445 return False
445
446
446 def updatemq(repo, state, skipped, **opts):
447 def updatemq(repo, state, skipped, **opts):
447 'Update rebased mq patches - finalize and then import them'
448 'Update rebased mq patches - finalize and then import them'
448 mqrebase = {}
449 mqrebase = {}
449 mq = repo.mq
450 mq = repo.mq
450 original_series = mq.fullseries[:]
451 original_series = mq.fullseries[:]
451 skippedpatches = set()
452 skippedpatches = set()
452
453
453 for p in mq.applied:
454 for p in mq.applied:
454 rev = repo[p.node].rev()
455 rev = repo[p.node].rev()
455 if rev in state:
456 if rev in state:
456 repo.ui.debug('revision %d is an mq patch (%s), finalize it.\n' %
457 repo.ui.debug('revision %d is an mq patch (%s), finalize it.\n' %
457 (rev, p.name))
458 (rev, p.name))
458 mqrebase[rev] = (p.name, isagitpatch(repo, p.name))
459 mqrebase[rev] = (p.name, isagitpatch(repo, p.name))
459 else:
460 else:
460 # Applied but not rebased, not sure this should happen
461 # Applied but not rebased, not sure this should happen
461 skippedpatches.add(p.name)
462 skippedpatches.add(p.name)
462
463
463 if mqrebase:
464 if mqrebase:
464 mq.finish(repo, mqrebase.keys())
465 mq.finish(repo, mqrebase.keys())
465
466
466 # We must start import from the newest revision
467 # We must start import from the newest revision
467 for rev in sorted(mqrebase, reverse=True):
468 for rev in sorted(mqrebase, reverse=True):
468 if rev not in skipped:
469 if rev not in skipped:
469 name, isgit = mqrebase[rev]
470 name, isgit = mqrebase[rev]
470 repo.ui.debug('import mq patch %d (%s)\n' % (state[rev], name))
471 repo.ui.debug('import mq patch %d (%s)\n' % (state[rev], name))
471 mq.qimport(repo, (), patchname=name, git=isgit,
472 mq.qimport(repo, (), patchname=name, git=isgit,
472 rev=[str(state[rev])])
473 rev=[str(state[rev])])
473 else:
474 else:
474 # Rebased and skipped
475 # Rebased and skipped
475 skippedpatches.add(mqrebase[rev][0])
476 skippedpatches.add(mqrebase[rev][0])
476
477
477 # Patches were either applied and rebased and imported in
478 # Patches were either applied and rebased and imported in
478 # order, applied and removed or unapplied. Discard the removed
479 # order, applied and removed or unapplied. Discard the removed
479 # ones while preserving the original series order and guards.
480 # ones while preserving the original series order and guards.
480 newseries = [s for s in original_series
481 newseries = [s for s in original_series
481 if mq.guard_re.split(s, 1)[0] not in skippedpatches]
482 if mq.guard_re.split(s, 1)[0] not in skippedpatches]
482 mq.fullseries[:] = newseries
483 mq.fullseries[:] = newseries
483 mq.seriesdirty = True
484 mq.seriesdirty = True
484 mq.savedirty()
485 mq.savedirty()
485
486
486 def updatebookmarks(repo, nstate, originalbookmarks, **opts):
487 def updatebookmarks(repo, nstate, originalbookmarks, **opts):
487 'Move bookmarks to their correct changesets'
488 'Move bookmarks to their correct changesets'
488 current = repo._bookmarkcurrent
489 current = repo._bookmarkcurrent
489 for k, v in originalbookmarks.iteritems():
490 for k, v in originalbookmarks.iteritems():
490 if v in nstate:
491 if v in nstate:
491 if nstate[v] != nullmerge:
492 if nstate[v] != nullmerge:
492 # reset the pointer if the bookmark was moved incorrectly
493 # reset the pointer if the bookmark was moved incorrectly
493 if k != current:
494 if k != current:
494 repo._bookmarks[k] = nstate[v]
495 repo._bookmarks[k] = nstate[v]
495
496
496 bookmarks.write(repo)
497 bookmarks.write(repo)
497
498
498 def storestatus(repo, originalwd, target, state, collapse, keep, keepbranches,
499 def storestatus(repo, originalwd, target, state, collapse, keep, keepbranches,
499 external):
500 external):
500 'Store the current status to allow recovery'
501 'Store the current status to allow recovery'
501 f = repo.opener("rebasestate", "w")
502 f = repo.opener("rebasestate", "w")
502 f.write(repo[originalwd].hex() + '\n')
503 f.write(repo[originalwd].hex() + '\n')
503 f.write(repo[target].hex() + '\n')
504 f.write(repo[target].hex() + '\n')
504 f.write(repo[external].hex() + '\n')
505 f.write(repo[external].hex() + '\n')
505 f.write('%d\n' % int(collapse))
506 f.write('%d\n' % int(collapse))
506 f.write('%d\n' % int(keep))
507 f.write('%d\n' % int(keep))
507 f.write('%d\n' % int(keepbranches))
508 f.write('%d\n' % int(keepbranches))
508 for d, v in state.iteritems():
509 for d, v in state.iteritems():
509 oldrev = repo[d].hex()
510 oldrev = repo[d].hex()
510 if v != nullmerge:
511 if v != nullmerge:
511 newrev = repo[v].hex()
512 newrev = repo[v].hex()
512 else:
513 else:
513 newrev = v
514 newrev = v
514 f.write("%s:%s\n" % (oldrev, newrev))
515 f.write("%s:%s\n" % (oldrev, newrev))
515 f.close()
516 f.close()
516 repo.ui.debug('rebase status stored\n')
517 repo.ui.debug('rebase status stored\n')
517
518
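# --- editorial note (not part of this changeset) ---
# storestatus() above gives .hg/rebasestate a simple line-oriented layout,
# which restorestatus() below reads back positionally:
#
#   line 1: hex node of the original working directory parent
#   line 2: hex node of the rebase target
#   line 3: hex node of the external parent (the all-zero null node if none)
#   lines 4-6: the collapse, keep and keepbranches flags as 0/1
#   remaining lines: <old hex node>:<new hex node, or -2 for nullmerge>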
518 def clearstatus(repo):
519 def clearstatus(repo):
519 'Remove the status files'
520 'Remove the status files'
520 if os.path.exists(repo.join("rebasestate")):
521 if os.path.exists(repo.join("rebasestate")):
521 util.unlinkpath(repo.join("rebasestate"))
522 util.unlinkpath(repo.join("rebasestate"))
522
523
def restorestatus(repo):
    'Restore a previously stored status'
    try:
        target = None
        collapse = False
        external = nullrev
        state = {}
        f = repo.opener("rebasestate")
        for i, l in enumerate(f.read().splitlines()):
            if i == 0:
                originalwd = repo[l].rev()
            elif i == 1:
                target = repo[l].rev()
            elif i == 2:
                external = repo[l].rev()
            elif i == 3:
                collapse = bool(int(l))
            elif i == 4:
                keep = bool(int(l))
            elif i == 5:
                keepbranches = bool(int(l))
            else:
                oldrev, newrev = l.split(':')
                if newrev != str(nullmerge):
                    state[repo[oldrev].rev()] = repo[newrev].rev()
                else:
                    state[repo[oldrev].rev()] = int(newrev)
        skipped = set()
        # recompute the set of skipped revs
        if not collapse:
            seen = set([target])
            for old, new in sorted(state.items()):
                if new != nullrev and new in seen:
                    skipped.add(old)
                seen.add(new)
        repo.ui.debug('computed skipped revs: %s\n' % skipped)
        repo.ui.debug('rebase status resumed\n')
        return (originalwd, target, state, skipped,
                collapse, keep, keepbranches, external)
    except IOError, err:
        if err.errno != errno.ENOENT:
            raise
        raise util.Abort(_('no rebase in progress'))

def abort(repo, originalwd, target, state):
    'Restore the repository to its original state'
    dstates = [s for s in state.values() if s != nullrev]
    if [d for d in dstates if not repo[d].mutable()]:
        repo.ui.warn(_("warning: immutable rebased changeset detected, "
                       "can't abort\n"))
        return -1

    descendants = set()
    if dstates:
        descendants = set(repo.changelog.descendants(*dstates))
    if descendants - set(dstates):
        repo.ui.warn(_("warning: new changesets detected on target branch, "
                       "can't abort\n"))
        return -1
    else:
        # Strip from the first rebased revision
        merge.update(repo, repo[originalwd].rev(), False, True, False)
        rebased = filter(lambda x: x > -1 and x != target, state.values())
        if rebased:
            strippoint = min(rebased)
            # no backup of rebased cset versions needed
            repair.strip(repo.ui, repo, repo[strippoint].node())
        clearstatus(repo)
        repo.ui.warn(_('rebase aborted\n'))
        return 0

def buildstate(repo, dest, rebaseset, detach, collapse):
    '''Define which revisions are going to be rebased and where

    repo: repo
    dest: context
    rebaseset: set of rev
    detach: boolean'''

    # This check isn't strictly necessary, since mq detects commits over an
    # applied patch. But it prevents messing up the working directory when
    # a partially completed rebase is blocked by mq.
    if 'qtip' in repo.tags() and (dest.node() in
                                  [s.node for s in repo.mq.applied]):
        raise util.Abort(_('cannot rebase onto an applied mq patch'))

    detachset = set()
    roots = list(repo.set('roots(%ld)', rebaseset))
    if not roots:
        raise util.Abort(_('no matching revisions'))
    if len(roots) > 1:
        raise util.Abort(_("can't rebase multiple roots"))
    root = roots[0]

    commonbase = root.ancestor(dest)
    if commonbase == root:
        raise util.Abort(_('source is ancestor of destination'))
    if commonbase == dest:
        samebranch = root.branch() == dest.branch()
        if not collapse and samebranch and root in dest.children():
            repo.ui.debug('source is a child of destination\n')
            return None
        # rebase on ancestor, force detach
        detach = True
    if detach:
        detachset = repo.revs('::%d - ::%d - %d', root, commonbase, root)

    repo.ui.debug('rebase onto %d starting from %d\n' % (dest, root))
    state = dict.fromkeys(rebaseset, nullrev)
    state.update(dict.fromkeys(detachset, nullmerge))
    return repo['.'].rev(), dest.rev(), state

def pullrebase(orig, ui, repo, *args, **opts):
    'Call rebase after pull if the latter has been invoked with --rebase'
    if opts.get('rebase'):
        if opts.get('update'):
            del opts['update']
            ui.debug('--update and --rebase are not compatible, ignoring '
                     'the update flag\n')

        movemarkfrom = repo['.'].node()
        cmdutil.bailifchanged(repo)
        revsprepull = len(repo)
        origpostincoming = commands.postincoming
        def _dummy(*args, **kwargs):
            pass
        commands.postincoming = _dummy
        try:
            orig(ui, repo, *args, **opts)
        finally:
            commands.postincoming = origpostincoming
        revspostpull = len(repo)
        if revspostpull > revsprepull:
            rebase(ui, repo, **opts)
            branch = repo[None].branch()
            dest = repo[branch].rev()
            if dest != repo['.'].rev():
                # there was nothing to rebase, so we force an update
                hg.update(repo, dest)
                if bookmarks.update(repo, [movemarkfrom], repo['.'].node()):
                    ui.status(_("updating bookmark %s\n")
                              % repo._bookmarkcurrent)
    else:
        if opts.get('tool'):
            raise util.Abort(_('--tool can only be used with --rebase'))
        orig(ui, repo, *args, **opts)

def uisetup(ui):
    'Replace pull with a decorator to provide --rebase option'
    entry = extensions.wrapcommand(commands.table, 'pull', pullrebase)
    entry[1].append(('', 'rebase', None,
                     _("rebase working directory to branch head")))
    entry[1].append(('t', 'tool', '',
                     _("specify merge tool for rebase")))
@@ -1,665 +1,666 @@
# record.py
#
# Copyright 2007 Bryan O'Sullivan <bos@serpentine.com>
#
# This software may be used and distributed according to the terms of the
# GNU General Public License version 2 or any later version.

'''commands to interactively select changes for commit/qrefresh'''

from mercurial.i18n import gettext, _
from mercurial import cmdutil, commands, extensions, hg, mdiff, patch
from mercurial import util
import copy, cStringIO, errno, os, re, shutil, tempfile

cmdtable = {}
command = cmdutil.command(cmdtable)
testedwith = 'internal'

lines_re = re.compile(r'@@ -(\d+),(\d+) \+(\d+),(\d+) @@\s*(.*)')

diffopts = [
    ('w', 'ignore-all-space', False,
     _('ignore white space when comparing lines')),
    ('b', 'ignore-space-change', None,
     _('ignore changes in the amount of white space')),
    ('B', 'ignore-blank-lines', None,
     _('ignore changes whose lines are all blank')),
]

def scanpatch(fp):
    """like patch.iterhunks, but yield different events

    - ('file', [header_lines + fromfile + tofile])
    - ('context', [context_lines])
    - ('hunk', [hunk_lines])
    - ('range', (-start,len, +start,len, diffp))
    """
    lr = patch.linereader(fp)

    def scanwhile(first, p):
        """scan lr while predicate holds"""
        lines = [first]
        while True:
            line = lr.readline()
            if not line:
                break
            if p(line):
                lines.append(line)
            else:
                lr.push(line)
                break
        return lines

    while True:
        line = lr.readline()
        if not line:
            break
        if line.startswith('diff --git a/') or line.startswith('diff -r '):
            def notheader(line):
                s = line.split(None, 1)
                return not s or s[0] not in ('---', 'diff')
            header = scanwhile(line, notheader)
            fromfile = lr.readline()
            if fromfile.startswith('---'):
                tofile = lr.readline()
                header += [fromfile, tofile]
            else:
                lr.push(fromfile)
            yield 'file', header
        elif line[0] == ' ':
            yield 'context', scanwhile(line, lambda l: l[0] in ' \\')
        elif line[0] in '-+':
            yield 'hunk', scanwhile(line, lambda l: l[0] in '-+\\')
        else:
            m = lines_re.match(line)
            if m:
                yield 'range', m.groups()
            else:
                raise patch.PatchError('unknown patch content: %r' % line)

class header(object):
    """patch header

    XXX shouldn't we move this to mercurial/patch.py ?
    """
    diffgit_re = re.compile('diff --git a/(.*) b/(.*)$')
    diff_re = re.compile('diff -r .* (.*)$')
    allhunks_re = re.compile('(?:index|new file|deleted file) ')
    pretty_re = re.compile('(?:new file|deleted file) ')
    special_re = re.compile('(?:index|new|deleted|copy|rename) ')

    def __init__(self, header):
        self.header = header
        self.hunks = []

    def binary(self):
        return util.any(h.startswith('index ') for h in self.header)

    def pretty(self, fp):
        for h in self.header:
            if h.startswith('index '):
                fp.write(_('this modifies a binary file (all or nothing)\n'))
                break
            if self.pretty_re.match(h):
                fp.write(h)
                if self.binary():
                    fp.write(_('this is a binary file\n'))
                break
            if h.startswith('---'):
                fp.write(_('%d hunks, %d lines changed\n') %
                         (len(self.hunks),
                          sum([max(h.added, h.removed) for h in self.hunks])))
                break
            fp.write(h)

    def write(self, fp):
        fp.write(''.join(self.header))

    def allhunks(self):
        return util.any(self.allhunks_re.match(h) for h in self.header)

    def files(self):
        match = self.diffgit_re.match(self.header[0])
        if match:
            fromfile, tofile = match.groups()
            if fromfile == tofile:
                return [fromfile]
            return [fromfile, tofile]
        else:
            return self.diff_re.match(self.header[0]).groups()

    def filename(self):
        return self.files()[-1]

    def __repr__(self):
        return '<header %s>' % (' '.join(map(repr, self.files())))

    def special(self):
        return util.any(self.special_re.match(h) for h in self.header)

def countchanges(hunk):
    """hunk -> (n+,n-)"""
    add = len([h for h in hunk if h[0] == '+'])
    rem = len([h for h in hunk if h[0] == '-'])
    return add, rem

class hunk(object):
    """patch hunk

    XXX shouldn't we merge this with patch.hunk ?
    """
    maxcontext = 3

    def __init__(self, header, fromline, toline, proc, before, hunk, after):
        def trimcontext(number, lines):
            delta = len(lines) - self.maxcontext
            if False and delta > 0:
                return number + delta, lines[:self.maxcontext]
            return number, lines

        self.header = header
        self.fromline, self.before = trimcontext(fromline, before)
        self.toline, self.after = trimcontext(toline, after)
        self.proc = proc
        self.hunk = hunk
        self.added, self.removed = countchanges(self.hunk)

    def write(self, fp):
        delta = len(self.before) + len(self.after)
        if self.after and self.after[-1] == '\\ No newline at end of file\n':
            delta -= 1
        fromlen = delta + self.removed
        tolen = delta + self.added
        fp.write('@@ -%d,%d +%d,%d @@%s\n' %
                 (self.fromline, fromlen, self.toline, tolen,
                  self.proc and (' ' + self.proc)))
        fp.write(''.join(self.before + self.hunk + self.after))

    pretty = write

    def filename(self):
        return self.header.filename()

    def __repr__(self):
        return '<hunk %r@%d>' % (self.filename(), self.fromline)

def parsepatch(fp):
    """patch -> [] of headers -> [] of hunks """
    class parser(object):
        """patch parsing state machine"""
        def __init__(self):
            self.fromline = 0
            self.toline = 0
            self.proc = ''
            self.header = None
            self.context = []
            self.before = []
            self.hunk = []
            self.headers = []

        def addrange(self, limits):
            fromstart, fromend, tostart, toend, proc = limits
            self.fromline = int(fromstart)
            self.toline = int(tostart)
            self.proc = proc

        def addcontext(self, context):
            if self.hunk:
                h = hunk(self.header, self.fromline, self.toline, self.proc,
                         self.before, self.hunk, context)
                self.header.hunks.append(h)
                self.fromline += len(self.before) + h.removed
                self.toline += len(self.before) + h.added
                self.before = []
                self.hunk = []
                self.proc = ''
            self.context = context

        def addhunk(self, hunk):
            if self.context:
                self.before = self.context
                self.context = []
            self.hunk = hunk

        def newfile(self, hdr):
            self.addcontext([])
            h = header(hdr)
            self.headers.append(h)
            self.header = h

        def finished(self):
            self.addcontext([])
            return self.headers

        transitions = {
            'file': {'context': addcontext,
                     'file': newfile,
                     'hunk': addhunk,
                     'range': addrange},
            'context': {'file': newfile,
                        'hunk': addhunk,
                        'range': addrange},
            'hunk': {'context': addcontext,
                     'file': newfile,
                     'range': addrange},
            'range': {'context': addcontext,
                      'hunk': addhunk},
            }

    p = parser()

    state = 'context'
    for newstate, data in scanpatch(fp):
        try:
            p.transitions[state][newstate](p, data)
        except KeyError:
            raise patch.PatchError('unhandled transition: %s -> %s' %
                                   (state, newstate))
        state = newstate
    return p.finished()

def filterpatch(ui, headers):
    """Interactively filter patch chunks into applied-only chunks"""

    def prompt(skipfile, skipall, query, chunk):
        """prompt query, and process base inputs

        - y/n for the rest of file
        - y/n for the rest
        - ? (help)
        - q (quit)

        Return True/False and possibly updated skipfile and skipall.
        """
        newpatches = None
        if skipall is not None:
            return skipall, skipfile, skipall, newpatches
        if skipfile is not None:
            return skipfile, skipfile, skipall, newpatches
        while True:
            resps = _('[Ynesfdaq?]')
            choices = (_('&Yes, record this change'),
                       _('&No, skip this change'),
                       _('&Edit the change manually'),
                       _('&Skip remaining changes to this file'),
                       _('Record remaining changes to this &file'),
                       _('&Done, skip remaining changes and files'),
                       _('Record &all changes to all remaining files'),
                       _('&Quit, recording no changes'),
                       _('&?'))
            r = ui.promptchoice("%s %s" % (query, resps), choices)
            ui.write("\n")
            if r == 8: # ?
                doc = gettext(record.__doc__)
                c = doc.find('::') + 2
                for l in doc[c:].splitlines():
                    if l.startswith(' '):
                        ui.write(l.strip(), '\n')
                continue
            elif r == 0: # yes
                ret = True
            elif r == 1: # no
                ret = False
            elif r == 2: # Edit patch
                if chunk is None:
                    ui.write(_('cannot edit patch for whole file'))
                    ui.write("\n")
                    continue
                if chunk.header.binary():
                    ui.write(_('cannot edit patch for binary file'))
                    ui.write("\n")
                    continue
                # Patch comment based on the Git one (based on comment at end of
                # http://mercurial.selenic.com/wiki/RecordExtension)
                phelp = '---' + _("""
To remove '-' lines, make them ' ' lines (context).
To remove '+' lines, delete them.
Lines starting with # will be removed from the patch.

If the patch applies cleanly, the edited hunk will immediately be
added to the record list. If it does not apply cleanly, a rejects
file will be generated: you can use that when you try again. If
all lines of the hunk are removed, then the edit is aborted and
the hunk is left unchanged.
""")
                (patchfd, patchfn) = tempfile.mkstemp(prefix="hg-editor-",
                        suffix=".diff", text=True)
                ncpatchfp = None
                try:
                    # Write the initial patch
                    f = os.fdopen(patchfd, "w")
                    chunk.header.write(f)
                    chunk.write(f)
                    f.write('\n'.join(['# ' + i for i in phelp.splitlines()]))
                    f.close()
                    # Start the editor and wait for it to complete
                    editor = ui.geteditor()
                    util.system("%s \"%s\"" % (editor, patchfn),
                                environ={'HGUSER': ui.username()},
                                onerr=util.Abort, errprefix=_("edit failed"),
                                out=ui.fout)
                    # Remove comment lines
                    patchfp = open(patchfn)
                    ncpatchfp = cStringIO.StringIO()
                    for line in patchfp:
                        if not line.startswith('#'):
                            ncpatchfp.write(line)
                    patchfp.close()
                    ncpatchfp.seek(0)
                    newpatches = parsepatch(ncpatchfp)
                finally:
                    os.unlink(patchfn)
                    del ncpatchfp
                # Signal that the chunk shouldn't be applied as-is, but
                # provide the new patch to be used instead.
                ret = False
            elif r == 3: # Skip
                ret = skipfile = False
            elif r == 4: # file (Record remaining)
                ret = skipfile = True
            elif r == 5: # done, skip remaining
                ret = skipall = False
            elif r == 6: # all
                ret = skipall = True
            elif r == 7: # quit
                raise util.Abort(_('user quit'))
            return ret, skipfile, skipall, newpatches

    seen = set()
    applied = {} # 'filename' -> [] of chunks
    skipfile, skipall = None, None
    pos, total = 1, sum(len(h.hunks) for h in headers)
    for h in headers:
        pos += len(h.hunks)
        skipfile = None
        fixoffset = 0
        hdr = ''.join(h.header)
        if hdr in seen:
            continue
        seen.add(hdr)
        if skipall is None:
            h.pretty(ui)
        msg = (_('examine changes to %s?') %
               _(' and ').join(map(repr, h.files())))
        r, skipfile, skipall, np = prompt(skipfile, skipall, msg, None)
        if not r:
            continue
        applied[h.filename()] = [h]
        if h.allhunks():
            applied[h.filename()] += h.hunks
            continue
        for i, chunk in enumerate(h.hunks):
            if skipfile is None and skipall is None:
                chunk.pretty(ui)
            if total == 1:
                msg = _('record this change to %r?') % chunk.filename()
            else:
                idx = pos - len(h.hunks) + i
                msg = _('record change %d/%d to %r?') % (idx, total,
                                                         chunk.filename())
            r, skipfile, skipall, newpatches = prompt(skipfile,
                    skipall, msg, chunk)
            if r:
                if fixoffset:
                    chunk = copy.copy(chunk)
                    chunk.toline += fixoffset
                applied[chunk.filename()].append(chunk)
            elif newpatches is not None:
                for newpatch in newpatches:
                    for newhunk in newpatch.hunks:
                        if fixoffset:
                            newhunk.toline += fixoffset
                        applied[newhunk.filename()].append(newhunk)
            else:
                fixoffset += chunk.removed - chunk.added
    return sum([h for h in applied.itervalues()
               if h[0].special() or len(h) > 1], [])

418 @command("record",
419 @command("record",
419 # same options as commit + white space diff options
420 # same options as commit + white space diff options
420 commands.table['^commit|ci'][1][:] + diffopts,
421 commands.table['^commit|ci'][1][:] + diffopts,
421 _('hg record [OPTION]... [FILE]...'))
422 _('hg record [OPTION]... [FILE]...'))
422 def record(ui, repo, *pats, **opts):
423 def record(ui, repo, *pats, **opts):
423 '''interactively select changes to commit
424 '''interactively select changes to commit
424
425
425 If a list of files is omitted, all changes reported by :hg:`status`
426 If a list of files is omitted, all changes reported by :hg:`status`
426 will be candidates for recording.
427 will be candidates for recording.
427
428
428 See :hg:`help dates` for a list of formats valid for -d/--date.
429 See :hg:`help dates` for a list of formats valid for -d/--date.
429
430
430 You will be prompted for whether to record changes to each
431 You will be prompted for whether to record changes to each
431 modified file, and for files with multiple changes, for each
432 modified file, and for files with multiple changes, for each
432 change to use. For each query, the following responses are
433 change to use. For each query, the following responses are
433 possible::
434 possible::
434
435
435 y - record this change
436 y - record this change
436 n - skip this change
437 n - skip this change
437 e - edit this change manually
438 e - edit this change manually
438
439
439 s - skip remaining changes to this file
440 s - skip remaining changes to this file
440 f - record remaining changes to this file
441 f - record remaining changes to this file
441
442
442 d - done, skip remaining changes and files
443 d - done, skip remaining changes and files
443 a - record all changes to all remaining files
444 a - record all changes to all remaining files
444 q - quit, recording no changes
445 q - quit, recording no changes
445
446
446 ? - display help
447 ? - display help
447
448
448 This command is not available when committing a merge.'''
449 This command is not available when committing a merge.'''
449
450
450 dorecord(ui, repo, commands.commit, 'commit', False, *pats, **opts)
451 dorecord(ui, repo, commands.commit, 'commit', False, *pats, **opts)
451
452
def qrefresh(origfn, ui, repo, *pats, **opts):
    if not opts['interactive']:
        return origfn(ui, repo, *pats, **opts)

    mq = extensions.find('mq')

    def committomq(ui, repo, *pats, **opts):
        # At this point the working copy contains only changes that
        # were accepted. All other changes were reverted.
        # We can't pass *pats here since qrefresh will undo all other
        # changed files in the patch that aren't in pats.
        mq.refresh(ui, repo, **opts)

    # backup all changed files
    dorecord(ui, repo, committomq, 'qrefresh', True, *pats, **opts)

def qrecord(ui, repo, patch, *pats, **opts):
    '''interactively record a new patch

    See :hg:`help qnew` & :hg:`help record` for more information and
    usage.
    '''

    try:
        mq = extensions.find('mq')
    except KeyError:
        raise util.Abort(_("'mq' extension not loaded"))

    repo.mq.checkpatchname(patch)

    def committomq(ui, repo, *pats, **opts):
        opts['checkname'] = False
        mq.new(ui, repo, patch, *pats, **opts)

    dorecord(ui, repo, committomq, 'qnew', False, *pats, **opts)

def qnew(origfn, ui, repo, patch, *args, **opts):
    if opts['interactive']:
        return qrecord(ui, repo, patch, *args, **opts)
    return origfn(ui, repo, patch, *args, **opts)

def dorecord(ui, repo, commitfunc, cmdsuggest, backupall, *pats, **opts):
    if not ui.interactive():
        raise util.Abort(_('running non-interactively, use %s instead') %
                         cmdsuggest)

    def recordfunc(ui, repo, message, match, opts):
        """This is the generic record driver.

        Its job is to interactively filter local changes, and
        accordingly prepare the working directory into a state in which the
        job can be delegated to a non-interactive commit command such as
        'commit' or 'qrefresh'.

        After the actual job is done by the non-interactive command, the
        working directory is restored to its original state.

        In the end we'll record interesting changes, and everything else
        will be left in place, so the user can continue working.
        """

        merge = len(repo[None].parents()) > 1
        if merge:
            raise util.Abort(_('cannot partially commit a merge '
                               '(use "hg commit" instead)'))

        changes = repo.status(match=match)[:3]
        diffopts = mdiff.diffopts(
            git=True, nodates=True,
            ignorews=opts.get('ignore_all_space'),
            ignorewsamount=opts.get('ignore_space_change'),
            ignoreblanklines=opts.get('ignore_blank_lines'))
        chunks = patch.diff(repo, changes=changes, opts=diffopts)
        fp = cStringIO.StringIO()
        fp.write(''.join(chunks))
        fp.seek(0)

        # 1. filter patch, so we have the intending-to-apply subset of it
        chunks = filterpatch(ui, parsepatch(fp))
        del fp

        contenders = set()
        for h in chunks:
            try:
                contenders.update(set(h.files()))
            except AttributeError:
                pass

        changed = changes[0] + changes[1] + changes[2]
        newfiles = [f for f in changed if f in contenders]
        if not newfiles:
            ui.status(_('no changes to record\n'))
            return 0

        modified = set(changes[0])

        # 2. backup changed files, so we can restore them in the end
        if backupall:
            tobackup = changed
        else:
            tobackup = [f for f in newfiles if f in modified]

        backups = {}
        if tobackup:
            backupdir = repo.join('record-backups')
            try:
                os.mkdir(backupdir)
            except OSError, err:
                if err.errno != errno.EEXIST:
                    raise
        try:
            # backup continues
            for f in tobackup:
                fd, tmpname = tempfile.mkstemp(prefix=f.replace('/', '_')+'.',
                                               dir=backupdir)
                os.close(fd)
                ui.debug('backup %r as %r\n' % (f, tmpname))
                util.copyfile(repo.wjoin(f), tmpname)
                shutil.copystat(repo.wjoin(f), tmpname)
                backups[f] = tmpname

            fp = cStringIO.StringIO()
            for c in chunks:
                if c.filename() in backups:
                    c.write(fp)
            dopatch = fp.tell()
            fp.seek(0)

            # 3a. apply filtered patch to clean repo (clean)
            if backups:
                hg.revert(repo, repo.dirstate.p1(),
                          lambda key: key in backups)

            # 3b. (apply)
            if dopatch:
                try:
                    ui.debug('applying patch\n')
                    ui.debug(fp.getvalue())
                    patch.internalpatch(ui, repo, fp, 1, eolmode=None)
                except patch.PatchError, err:
                    raise util.Abort(str(err))
            del fp

            # 4. We prepared working directory according to filtered
            # patch. Now is the time to delegate the job to
            # commit/qrefresh or the like!

            # it is important to first chdir to repo root -- we'll call
            # a highlevel command with list of pathnames relative to
            # repo root
            cwd = os.getcwd()
            os.chdir(repo.root)
            try:
                commitfunc(ui, repo, *newfiles, **opts)
            finally:
                os.chdir(cwd)

            return 0
        finally:
            # 5. finally restore backed-up files
            try:
                for realname, tmpname in backups.iteritems():
                    ui.debug('restoring %r to %r\n' % (tmpname, realname))
                    util.copyfile(tmpname, repo.wjoin(realname))
                    # Our calls to copystat() here and above are a
                    # hack to trick any editors that have f open that
                    # we haven't modified them.
                    #
                    # Also note that this is racy as an editor could
                    # notice the file's mtime before we've finished
                    # writing it.
                    shutil.copystat(tmpname, repo.wjoin(realname))
                    os.unlink(tmpname)
                if tobackup:
                    os.rmdir(backupdir)
            except OSError:
                pass

    # wrap ui.write so diff output can be labeled/colorized
    def wrapwrite(orig, *args, **kw):
        label = kw.pop('label', '')
        for chunk, l in patch.difflabel(lambda: args):
            orig(chunk, label=label + l)
    oldwrite = ui.write
    extensions.wrapfunction(ui, 'write', wrapwrite)
    try:
        return cmdutil.commit(ui, repo, recordfunc, pats, opts)
    finally:
        ui.write = oldwrite

cmdtable["qrecord"] = \
    (qrecord, [], # placeholder until mq is available
     _('hg qrecord [OPTION]... PATCH [FILE]...'))

def uisetup(ui):
    try:
        mq = extensions.find('mq')
    except KeyError:
        return

    cmdtable["qrecord"] = \
        (qrecord,
         # same options as qnew, but copy them so we don't get
         # -i/--interactive for qrecord and add white space diff options
         mq.cmdtable['^qnew'][1][:] + diffopts,
         _('hg qrecord [OPTION]... PATCH [FILE]...'))

    _wrapcmd('qnew', mq.cmdtable, qnew, _("interactively record a new patch"))
    _wrapcmd('qrefresh', mq.cmdtable, qrefresh,
             _("interactively select changes to refresh"))

def _wrapcmd(cmd, table, wrapfn, msg):
    entry = extensions.wrapcommand(table, cmd, wrapfn)
    entry[1].append(('i', 'interactive', None, msg))
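As a rough illustration of how scanpatch() and parsepatch() above fit together, the sketch below feeds a tiny git-style diff through parsepatch() and inspects the resulting header and hunk objects. It is an editor's example, not part of record.py; it assumes a Mercurial installation so that the module is importable as hgext.record, and the file name and diff content are made up.

import cStringIO
from hgext import record   # assumes Mercurial is on sys.path

sample = ('diff --git a/a.txt b/a.txt\n'
          '--- a/a.txt\n'
          '+++ b/a.txt\n'
          '@@ -1,3 +1,3 @@\n'
          ' one\n'
          '-two\n'
          '+deux\n'
          ' three\n')

headers = record.parsepatch(cStringIO.StringIO(sample))
for h in headers:
    # each header knows its file(s) and carries the hunks parsed for it
    print h.filename(), len(h.hunks)          # a.txt 1
    for hk in h.hunks:
        print hk.added, hk.removed            # 1 1
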
@@ -1,184 +1,186 @@
# Mercurial extension to provide 'hg relink' command
#
# Copyright (C) 2007 Brendan Cully <brendan@kublai.com>
#
# This software may be used and distributed according to the terms of the
# GNU General Public License version 2 or any later version.

"""recreates hardlinks between repository clones"""

from mercurial import hg, util
from mercurial.i18n import _
import os, stat

testedwith = 'internal'

def relink(ui, repo, origin=None, **opts):
    """recreate hardlinks between two repositories

    When repositories are cloned locally, their data files will be
    hardlinked so that they only use the space of a single repository.

    Unfortunately, subsequent pulls into either repository will break
    hardlinks for any files touched by the new changesets, even if
    both repositories end up pulling the same changes.

    Similarly, passing --rev to "hg clone" will fail to use any
    hardlinks, falling back to a complete copy of the source
    repository.

    This command lets you recreate those hardlinks and reclaim that
    wasted space.

    This repository will be relinked to share space with ORIGIN, which
    must be on the same local disk. If ORIGIN is omitted, looks for
    "default-relink", then "default", in [paths].

    Do not attempt any read operations on this repository while the
    command is running. (Both repositories will be locked against
    writes.)
    """
    if (not util.safehasattr(util, 'samefile') or
        not util.safehasattr(util, 'samedevice')):
        raise util.Abort(_('hardlinks are not supported on this system'))
    src = hg.repository(ui, ui.expandpath(origin or 'default-relink',
                                          origin or 'default'))
    if not src.local():
        raise util.Abort(_('must specify local origin repository'))
    ui.status(_('relinking %s to %s\n') % (src.store.path, repo.store.path))
    if repo.root == src.root:
        ui.status(_('there is nothing to relink\n'))
        return

    locallock = repo.lock()
    try:
        remotelock = src.lock()
        try:
            candidates = sorted(collect(src, ui))
            targets = prune(candidates, src.store.path, repo.store.path, ui)
            do_relink(src.store.path, repo.store.path, targets, ui)
        finally:
            remotelock.release()
    finally:
        locallock.release()

def collect(src, ui):
    seplen = len(os.path.sep)
    candidates = []
    live = len(src['tip'].manifest())
    # Your average repository has some files which were deleted before
    # the tip revision. We account for that by assuming that there are
    # 3 tracked files for every 2 live files as of the tip version of
    # the repository.
    #
    # mozilla-central as of 2010-06-10 had a ratio of just over 7:5.
    total = live * 3 // 2
    src = src.store.path
    pos = 0
    ui.status(_("tip has %d files, estimated total number of files: %s\n")
              % (live, total))
    for dirpath, dirnames, filenames in os.walk(src):
        dirnames.sort()
        relpath = dirpath[len(src) + seplen:]
        for filename in sorted(filenames):
            if filename[-2:] not in ('.d', '.i'):
                continue
            st = os.stat(os.path.join(dirpath, filename))
            if not stat.S_ISREG(st.st_mode):
                continue
            pos += 1
            candidates.append((os.path.join(relpath, filename), st))
            ui.progress(_('collecting'), pos, filename, _('files'), total)

    ui.progress(_('collecting'), None)
    ui.status(_('collected %d candidate storage files\n') % len(candidates))
    return candidates

95 def prune(candidates, src, dst, ui):
97 def prune(candidates, src, dst, ui):
96 def linkfilter(src, dst, st):
98 def linkfilter(src, dst, st):
97 try:
99 try:
98 ts = os.stat(dst)
100 ts = os.stat(dst)
99 except OSError:
101 except OSError:
100 # Destination doesn't have this file?
102 # Destination doesn't have this file?
101 return False
103 return False
102 if util.samefile(src, dst):
104 if util.samefile(src, dst):
103 return False
105 return False
104 if not util.samedevice(src, dst):
106 if not util.samedevice(src, dst):
105 # No point in continuing
107 # No point in continuing
106 raise util.Abort(
108 raise util.Abort(
107 _('source and destination are on different devices'))
109 _('source and destination are on different devices'))
108 if st.st_size != ts.st_size:
110 if st.st_size != ts.st_size:
109 return False
111 return False
110 return st
112 return st
111
113
112 targets = []
114 targets = []
113 total = len(candidates)
115 total = len(candidates)
114 pos = 0
116 pos = 0
115 for fn, st in candidates:
117 for fn, st in candidates:
116 pos += 1
118 pos += 1
117 srcpath = os.path.join(src, fn)
119 srcpath = os.path.join(src, fn)
118 tgt = os.path.join(dst, fn)
120 tgt = os.path.join(dst, fn)
119 ts = linkfilter(srcpath, tgt, st)
121 ts = linkfilter(srcpath, tgt, st)
120 if not ts:
122 if not ts:
121 ui.debug('not linkable: %s\n' % fn)
123 ui.debug('not linkable: %s\n' % fn)
122 continue
124 continue
123 targets.append((fn, ts.st_size))
125 targets.append((fn, ts.st_size))
124 ui.progress(_('pruning'), pos, fn, _('files'), total)
126 ui.progress(_('pruning'), pos, fn, _('files'), total)
125
127
126 ui.progress(_('pruning'), None)
128 ui.progress(_('pruning'), None)
127 ui.status(_('pruned down to %d probably relinkable files\n') % len(targets))
129 ui.status(_('pruned down to %d probably relinkable files\n') % len(targets))
128 return targets
130 return targets
129
131
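A minimal standalone sketch of the same checks that linkfilter() performs,
using the standard library in place of Mercurial's util.samefile and
util.samedevice (function and variable names here are illustrative only):

    import os

    def is_relink_candidate(srcpath, dstpath, srcstat):
        try:
            dststat = os.stat(dstpath)
        except OSError:
            return False                      # destination lacks this file
        if os.path.samefile(srcpath, dstpath):
            return False                      # already the same inode
        if srcstat.st_dev != dststat.st_dev:
            raise RuntimeError('source and destination are on different devices')
        if srcstat.st_size != dststat.st_size:
            return False                      # contents certainly differ
        return True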
130 def do_relink(src, dst, files, ui):
132 def do_relink(src, dst, files, ui):
131 def relinkfile(src, dst):
133 def relinkfile(src, dst):
132 bak = dst + '.bak'
134 bak = dst + '.bak'
133 os.rename(dst, bak)
135 os.rename(dst, bak)
134 try:
136 try:
135 util.oslink(src, dst)
137 util.oslink(src, dst)
136 except OSError:
138 except OSError:
137 os.rename(bak, dst)
139 os.rename(bak, dst)
138 raise
140 raise
139 os.remove(bak)
141 os.remove(bak)
140
142
141 CHUNKLEN = 65536
143 CHUNKLEN = 65536
142 relinked = 0
144 relinked = 0
143 savedbytes = 0
145 savedbytes = 0
144
146
145 pos = 0
147 pos = 0
146 total = len(files)
148 total = len(files)
147 for f, sz in files:
149 for f, sz in files:
148 pos += 1
150 pos += 1
149 source = os.path.join(src, f)
151 source = os.path.join(src, f)
150 tgt = os.path.join(dst, f)
152 tgt = os.path.join(dst, f)
151 # Binary mode, so that read() works correctly, especially on Windows
153 # Binary mode, so that read() works correctly, especially on Windows
152 sfp = file(source, 'rb')
154 sfp = file(source, 'rb')
153 dfp = file(tgt, 'rb')
155 dfp = file(tgt, 'rb')
154 sin = sfp.read(CHUNKLEN)
156 sin = sfp.read(CHUNKLEN)
155 while sin:
157 while sin:
156 din = dfp.read(CHUNKLEN)
158 din = dfp.read(CHUNKLEN)
157 if sin != din:
159 if sin != din:
158 break
160 break
159 sin = sfp.read(CHUNKLEN)
161 sin = sfp.read(CHUNKLEN)
160 sfp.close()
162 sfp.close()
161 dfp.close()
163 dfp.close()
162 if sin:
164 if sin:
163 ui.debug('not linkable: %s\n' % f)
165 ui.debug('not linkable: %s\n' % f)
164 continue
166 continue
165 try:
167 try:
166 relinkfile(source, tgt)
168 relinkfile(source, tgt)
167 ui.progress(_('relinking'), pos, f, _('files'), total)
169 ui.progress(_('relinking'), pos, f, _('files'), total)
168 relinked += 1
170 relinked += 1
169 savedbytes += sz
171 savedbytes += sz
170 except OSError, inst:
172 except OSError, inst:
171 ui.warn('%s: %s\n' % (tgt, str(inst)))
173 ui.warn('%s: %s\n' % (tgt, str(inst)))
172
174
173 ui.progress(_('relinking'), None)
175 ui.progress(_('relinking'), None)
174
176
175 ui.status(_('relinked %d files (%s reclaimed)\n') %
177 ui.status(_('relinked %d files (%s reclaimed)\n') %
176 (relinked, util.bytecount(savedbytes)))
178 (relinked, util.bytecount(savedbytes)))
177
179
178 cmdtable = {
180 cmdtable = {
179 'relink': (
181 'relink': (
180 relink,
182 relink,
181 [],
183 [],
182 _('[ORIGIN]')
184 _('[ORIGIN]')
183 )
185 )
184 }
186 }
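The heart of do_relink() is relinkfile(): rename the target to a backup,
hard-link the source over it, and restore the backup if linking fails. A
minimal standalone version of that pattern, assuming a filesystem where
os.link is available (the function name is illustrative):

    import os

    def replace_with_hardlink(src, dst):
        """Replace dst with a hard link to src, restoring dst on failure."""
        bak = dst + '.bak'
        os.rename(dst, bak)          # keep the old file until the link succeeds
        try:
            os.link(src, dst)        # both names now share a single inode
        except OSError:
            os.rename(bak, dst)      # put the original back
            raise
        os.remove(bak)               # success: drop the backup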
@@ -1,99 +1,101 b''
1 # Copyright 2009, Alexander Solovyov <piranha@piranha.org.ua>
1 # Copyright 2009, Alexander Solovyov <piranha@piranha.org.ua>
2 #
2 #
3 # This software may be used and distributed according to the terms of the
3 # This software may be used and distributed according to the terms of the
4 # GNU General Public License version 2 or any later version.
4 # GNU General Public License version 2 or any later version.
5
5
6 """extend schemes with shortcuts to repository swarms
6 """extend schemes with shortcuts to repository swarms
7
7
8 This extension allows you to specify shortcuts for parent URLs that host a
8 This extension allows you to specify shortcuts for parent URLs that host a
9 lot of repositories, so that they act like a scheme, for example::
9 lot of repositories, so that they act like a scheme, for example::
10
10
11 [schemes]
11 [schemes]
12 py = http://code.python.org/hg/
12 py = http://code.python.org/hg/
13
13
14 After that you can use it like::
14 After that you can use it like::
15
15
16 hg clone py://trunk/
16 hg clone py://trunk/
17
17
18 Additionally there is support for some more complex schemes, as
18 Additionally there is support for some more complex schemes, as
19 used for example by Google Code::
19 used for example by Google Code::
20
20
21 [schemes]
21 [schemes]
22 gcode = http://{1}.googlecode.com/hg/
22 gcode = http://{1}.googlecode.com/hg/
23
23
24 The syntax is taken from Mercurial templates, and you can use an
24 The syntax is taken from Mercurial templates, and you can use an
25 unlimited number of variables, starting with ``{1}`` and continuing with
25 unlimited number of variables, starting with ``{1}`` and continuing with
26 ``{2}``, ``{3}`` and so on. These variables receive parts of the URL
26 ``{2}``, ``{3}`` and so on. These variables receive parts of the URL
27 supplied, split by ``/``. Anything not specified as ``{part}`` is simply
27 supplied, split by ``/``. Anything not specified as ``{part}`` is simply
28 appended to the URL.
28 appended to the URL.
29
29
30 For convenience, the extension adds these schemes by default::
30 For convenience, the extension adds these schemes by default::
31
31
32 [schemes]
32 [schemes]
33 py = http://hg.python.org/
33 py = http://hg.python.org/
34 bb = https://bitbucket.org/
34 bb = https://bitbucket.org/
35 bb+ssh = ssh://hg@bitbucket.org/
35 bb+ssh = ssh://hg@bitbucket.org/
36 gcode = https://{1}.googlecode.com/hg/
36 gcode = https://{1}.googlecode.com/hg/
37 kiln = https://{1}.kilnhg.com/Repo/
37 kiln = https://{1}.kilnhg.com/Repo/
38
38
39 You can override a predefined scheme by defining a new scheme with the
39 You can override a predefined scheme by defining a new scheme with the
40 same name.
40 same name.
41 """
41 """
42
42
43 import os, re
43 import os, re
44 from mercurial import extensions, hg, templater, util
44 from mercurial import extensions, hg, templater, util
45 from mercurial.i18n import _
45 from mercurial.i18n import _
46
46
47 testedwith = 'internal'
48
47
49
48 class ShortRepository(object):
50 class ShortRepository(object):
49 def __init__(self, url, scheme, templater):
51 def __init__(self, url, scheme, templater):
50 self.scheme = scheme
52 self.scheme = scheme
51 self.templater = templater
53 self.templater = templater
52 self.url = url
54 self.url = url
53 try:
55 try:
54 self.parts = max(map(int, re.findall(r'\{(\d+)\}', self.url)))
56 self.parts = max(map(int, re.findall(r'\{(\d+)\}', self.url)))
55 except ValueError:
57 except ValueError:
56 self.parts = 0
58 self.parts = 0
57
59
58 def __repr__(self):
60 def __repr__(self):
59 return '<ShortRepository: %s>' % self.scheme
61 return '<ShortRepository: %s>' % self.scheme
60
62
61 def instance(self, ui, url, create):
63 def instance(self, ui, url, create):
62 # Should this use urlmod.url(), or is manual parsing better?
64 # Should this use urlmod.url(), or is manual parsing better?
63 url = url.split('://', 1)[1]
65 url = url.split('://', 1)[1]
64 parts = url.split('/', self.parts)
66 parts = url.split('/', self.parts)
65 if len(parts) > self.parts:
67 if len(parts) > self.parts:
66 tail = parts[-1]
68 tail = parts[-1]
67 parts = parts[:-1]
69 parts = parts[:-1]
68 else:
70 else:
69 tail = ''
71 tail = ''
70 context = dict((str(i + 1), v) for i, v in enumerate(parts))
72 context = dict((str(i + 1), v) for i, v in enumerate(parts))
71 url = ''.join(self.templater.process(self.url, context)) + tail
73 url = ''.join(self.templater.process(self.url, context)) + tail
72 return hg._peerlookup(url).instance(ui, url, create)
74 return hg._peerlookup(url).instance(ui, url, create)
73
75
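To see what instance() does with a templated scheme, here is a hedged,
simplified rendition that substitutes the numbered variables with plain
string operations instead of Mercurial's templater (the scheme and URL
below are only examples):

    import re

    def expand_scheme(template, shorturl):
        # e.g. template='https://{1}.googlecode.com/hg/', shorturl='gcode://proj/trunk'
        rest = shorturl.split('://', 1)[1]
        nparts = max(map(int, re.findall(r'\{(\d+)\}', template) or ['0']))
        parts = rest.split('/', nparts)
        tail = parts.pop() if len(parts) > nparts else ''
        expanded = template
        for i, value in enumerate(parts):
            expanded = expanded.replace('{%d}' % (i + 1), value)
        return expanded + tail

    # expand_scheme('https://{1}.googlecode.com/hg/', 'gcode://proj/trunk')
    # -> 'https://proj.googlecode.com/hg/trunk'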
74 def hasdriveletter(orig, path):
76 def hasdriveletter(orig, path):
75 if path:
77 if path:
76 for scheme in schemes:
78 for scheme in schemes:
77 if path.startswith(scheme + ':'):
79 if path.startswith(scheme + ':'):
78 return False
80 return False
79 return orig(path)
81 return orig(path)
80
82
81 schemes = {
83 schemes = {
82 'py': 'http://hg.python.org/',
84 'py': 'http://hg.python.org/',
83 'bb': 'https://bitbucket.org/',
85 'bb': 'https://bitbucket.org/',
84 'bb+ssh': 'ssh://hg@bitbucket.org/',
86 'bb+ssh': 'ssh://hg@bitbucket.org/',
85 'gcode': 'https://{1}.googlecode.com/hg/',
87 'gcode': 'https://{1}.googlecode.com/hg/',
86 'kiln': 'https://{1}.kilnhg.com/Repo/'
88 'kiln': 'https://{1}.kilnhg.com/Repo/'
87 }
89 }
88
90
89 def extsetup(ui):
91 def extsetup(ui):
90 schemes.update(dict(ui.configitems('schemes')))
92 schemes.update(dict(ui.configitems('schemes')))
91 t = templater.engine(lambda x: x)
93 t = templater.engine(lambda x: x)
92 for scheme, url in schemes.items():
94 for scheme, url in schemes.items():
93 if (os.name == 'nt' and len(scheme) == 1 and scheme.isalpha()
95 if (os.name == 'nt' and len(scheme) == 1 and scheme.isalpha()
94 and os.path.exists('%s:\\' % scheme)):
96 and os.path.exists('%s:\\' % scheme)):
95 raise util.Abort(_('custom scheme %s:// conflicts with drive '
97 raise util.Abort(_('custom scheme %s:// conflicts with drive '
96 'letter %s:\\\n') % (scheme, scheme.upper()))
98 'letter %s:\\\n') % (scheme, scheme.upper()))
97 hg.schemes[scheme] = ShortRepository(url, scheme, t)
99 hg.schemes[scheme] = ShortRepository(url, scheme, t)
98
100
99 extensions.wrapfunction(util, 'hasdriveletter', hasdriveletter)
101 extensions.wrapfunction(util, 'hasdriveletter', hasdriveletter)
@@ -1,73 +1,75 b''
1 # Copyright 2006, 2007 Matt Mackall <mpm@selenic.com>
1 # Copyright 2006, 2007 Matt Mackall <mpm@selenic.com>
2 #
2 #
3 # This software may be used and distributed according to the terms of the
3 # This software may be used and distributed according to the terms of the
4 # GNU General Public License version 2 or any later version.
4 # GNU General Public License version 2 or any later version.
5
5
6 '''share a common history between several working directories'''
6 '''share a common history between several working directories'''
7
7
8 from mercurial.i18n import _
8 from mercurial.i18n import _
9 from mercurial import hg, commands, util
9 from mercurial import hg, commands, util
10
10
11 testedwith = 'internal'
12
11 def share(ui, source, dest=None, noupdate=False):
13 def share(ui, source, dest=None, noupdate=False):
12 """create a new shared repository
14 """create a new shared repository
13
15
14 Initialize a new repository and working directory that shares its
16 Initialize a new repository and working directory that shares its
15 history with another repository.
17 history with another repository.
16
18
17 .. note::
19 .. note::
18 using rollback or extensions that destroy/modify history (mq,
20 using rollback or extensions that destroy/modify history (mq,
19 rebase, etc.) can cause considerable confusion with shared
21 rebase, etc.) can cause considerable confusion with shared
20 clones. In particular, if two shared clones are both updated to
22 clones. In particular, if two shared clones are both updated to
21 the same changeset, and one of them destroys that changeset
23 the same changeset, and one of them destroys that changeset
22 with rollback, the other clone will suddenly stop working: all
24 with rollback, the other clone will suddenly stop working: all
23 operations will fail with "abort: working directory has unknown
25 operations will fail with "abort: working directory has unknown
24 parent". The only known workaround is to use debugsetparents on
26 parent". The only known workaround is to use debugsetparents on
25 the broken clone to reset it to a changeset that still exists
27 the broken clone to reset it to a changeset that still exists
26 (e.g. tip).
28 (e.g. tip).
27 """
29 """
28
30
29 return hg.share(ui, source, dest, not noupdate)
31 return hg.share(ui, source, dest, not noupdate)
30
32
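A shared clone keeps almost nothing of its own: its .hg/sharedpath file
simply records the .hg directory whose store it borrows. A small hedged
sketch (the helper name is illustrative) for checking whether a repository
on disk is shared:

    import os

    def shared_source(repo_root):
        """Return the path a shared repo points at, or None if not shared."""
        marker = os.path.join(repo_root, '.hg', 'sharedpath')
        if not os.path.exists(marker):
            return None
        with open(marker) as fp:
            return fp.read().rstrip('\n')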
31 def unshare(ui, repo):
33 def unshare(ui, repo):
32 """convert a shared repository to a normal one
34 """convert a shared repository to a normal one
33
35
34 Copy the store data to the repo and remove the sharedpath data.
36 Copy the store data to the repo and remove the sharedpath data.
35 """
37 """
36
38
37 if repo.sharedpath == repo.path:
39 if repo.sharedpath == repo.path:
38 raise util.Abort(_("this is not a shared repo"))
40 raise util.Abort(_("this is not a shared repo"))
39
41
40 destlock = lock = None
42 destlock = lock = None
41 lock = repo.lock()
43 lock = repo.lock()
42 try:
44 try:
43 # we use locks here because if we race with commit, we
45 # we use locks here because if we race with commit, we
44 # can end up with extra data in the cloned revlogs that's
46 # can end up with extra data in the cloned revlogs that's
45 # not pointed to by changesets, thus causing verify to
47 # not pointed to by changesets, thus causing verify to
46 # fail
48 # fail
47
49
48 destlock = hg.copystore(ui, repo, repo.path)
50 destlock = hg.copystore(ui, repo, repo.path)
49
51
50 sharefile = repo.join('sharedpath')
52 sharefile = repo.join('sharedpath')
51 util.rename(sharefile, sharefile + '.old')
53 util.rename(sharefile, sharefile + '.old')
52
54
53 repo.requirements.discard('sharedpath')
55 repo.requirements.discard('sharedpath')
54 repo._writerequirements()
56 repo._writerequirements()
55 finally:
57 finally:
56 destlock and destlock.release()
58 destlock and destlock.release()
57 lock and lock.release()
59 lock and lock.release()
58
60
59 # update store, spath, sopener and sjoin of repo
61 # update store, spath, sopener and sjoin of repo
60 repo.__init__(ui, repo.root)
62 repo.__init__(ui, repo.root)
61
63
62 cmdtable = {
64 cmdtable = {
63 "share":
65 "share":
64 (share,
66 (share,
65 [('U', 'noupdate', None, _('do not create a working copy'))],
67 [('U', 'noupdate', None, _('do not create a working copy'))],
66 _('[-U] SOURCE [DEST]')),
68 _('[-U] SOURCE [DEST]')),
67 "unshare":
69 "unshare":
68 (unshare,
70 (unshare,
69 [],
71 [],
70 ''),
72 ''),
71 }
73 }
72
74
73 commands.norepo += " share"
75 commands.norepo += " share"
@@ -1,679 +1,680 b''
1 # Patch transplanting extension for Mercurial
1 # Patch transplanting extension for Mercurial
2 #
2 #
3 # Copyright 2006, 2007 Brendan Cully <brendan@kublai.com>
3 # Copyright 2006, 2007 Brendan Cully <brendan@kublai.com>
4 #
4 #
5 # This software may be used and distributed according to the terms of the
5 # This software may be used and distributed according to the terms of the
6 # GNU General Public License version 2 or any later version.
6 # GNU General Public License version 2 or any later version.
7
7
8 '''command to transplant changesets from another branch
8 '''command to transplant changesets from another branch
9
9
10 This extension allows you to transplant patches from another branch.
10 This extension allows you to transplant patches from another branch.
11
11
12 Transplanted patches are recorded in .hg/transplant/transplants, as a
12 Transplanted patches are recorded in .hg/transplant/transplants, as a
13 map from a changeset hash to its hash in the source repository.
13 map from a changeset hash to its hash in the source repository.
14 '''
14 '''
15
15
16 from mercurial.i18n import _
16 from mercurial.i18n import _
17 import os, tempfile
17 import os, tempfile
18 from mercurial.node import short
18 from mercurial.node import short
19 from mercurial import bundlerepo, hg, merge, match
19 from mercurial import bundlerepo, hg, merge, match
20 from mercurial import patch, revlog, scmutil, util, error, cmdutil
20 from mercurial import patch, revlog, scmutil, util, error, cmdutil
21 from mercurial import revset, templatekw
21 from mercurial import revset, templatekw
22
22
23 class TransplantError(error.Abort):
23 class TransplantError(error.Abort):
24 pass
24 pass
25
25
26 cmdtable = {}
26 cmdtable = {}
27 command = cmdutil.command(cmdtable)
27 command = cmdutil.command(cmdtable)
28 testedwith = 'internal'
28
29
29 class transplantentry(object):
30 class transplantentry(object):
30 def __init__(self, lnode, rnode):
31 def __init__(self, lnode, rnode):
31 self.lnode = lnode
32 self.lnode = lnode
32 self.rnode = rnode
33 self.rnode = rnode
33
34
34 class transplants(object):
35 class transplants(object):
35 def __init__(self, path=None, transplantfile=None, opener=None):
36 def __init__(self, path=None, transplantfile=None, opener=None):
36 self.path = path
37 self.path = path
37 self.transplantfile = transplantfile
38 self.transplantfile = transplantfile
38 self.opener = opener
39 self.opener = opener
39
40
40 if not opener:
41 if not opener:
41 self.opener = scmutil.opener(self.path)
42 self.opener = scmutil.opener(self.path)
42 self.transplants = {}
43 self.transplants = {}
43 self.dirty = False
44 self.dirty = False
44 self.read()
45 self.read()
45
46
46 def read(self):
47 def read(self):
47 abspath = os.path.join(self.path, self.transplantfile)
48 abspath = os.path.join(self.path, self.transplantfile)
48 if self.transplantfile and os.path.exists(abspath):
49 if self.transplantfile and os.path.exists(abspath):
49 for line in self.opener.read(self.transplantfile).splitlines():
50 for line in self.opener.read(self.transplantfile).splitlines():
50 lnode, rnode = map(revlog.bin, line.split(':'))
51 lnode, rnode = map(revlog.bin, line.split(':'))
51 list = self.transplants.setdefault(rnode, [])
52 list = self.transplants.setdefault(rnode, [])
52 list.append(transplantentry(lnode, rnode))
53 list.append(transplantentry(lnode, rnode))
53
54
54 def write(self):
55 def write(self):
55 if self.dirty and self.transplantfile:
56 if self.dirty and self.transplantfile:
56 if not os.path.isdir(self.path):
57 if not os.path.isdir(self.path):
57 os.mkdir(self.path)
58 os.mkdir(self.path)
58 fp = self.opener(self.transplantfile, 'w')
59 fp = self.opener(self.transplantfile, 'w')
59 for list in self.transplants.itervalues():
60 for list in self.transplants.itervalues():
60 for t in list:
61 for t in list:
61 l, r = map(revlog.hex, (t.lnode, t.rnode))
62 l, r = map(revlog.hex, (t.lnode, t.rnode))
62 fp.write(l + ':' + r + '\n')
63 fp.write(l + ':' + r + '\n')
63 fp.close()
64 fp.close()
64 self.dirty = False
65 self.dirty = False
65
66
66 def get(self, rnode):
67 def get(self, rnode):
67 return self.transplants.get(rnode) or []
68 return self.transplants.get(rnode) or []
68
69
69 def set(self, lnode, rnode):
70 def set(self, lnode, rnode):
70 list = self.transplants.setdefault(rnode, [])
71 list = self.transplants.setdefault(rnode, [])
71 list.append(transplantentry(lnode, rnode))
72 list.append(transplantentry(lnode, rnode))
72 self.dirty = True
73 self.dirty = True
73
74
74 def remove(self, transplant):
75 def remove(self, transplant):
75 list = self.transplants.get(transplant.rnode)
76 list = self.transplants.get(transplant.rnode)
76 if list:
77 if list:
77 del list[list.index(transplant)]
78 del list[list.index(transplant)]
78 self.dirty = True
79 self.dirty = True
79
80
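The transplants file maintained by the class above is a plain text map:
one "localnode:sourcenode" pair of hex hashes per line. A hedged
standalone parser of that format (path handling and dictionary shape are
illustrative; the extension itself keeps transplantentry objects):

    def read_transplant_map(path):
        """Parse a transplants file into {source_hex: [local_hex, ...]}."""
        mapping = {}
        with open(path) as fp:
            for line in fp:
                line = line.strip()
                if not line:
                    continue
                lhex, rhex = line.split(':')
                mapping.setdefault(rhex, []).append(lhex)
        return mapping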
80 class transplanter(object):
81 class transplanter(object):
81 def __init__(self, ui, repo):
82 def __init__(self, ui, repo):
82 self.ui = ui
83 self.ui = ui
83 self.path = repo.join('transplant')
84 self.path = repo.join('transplant')
84 self.opener = scmutil.opener(self.path)
85 self.opener = scmutil.opener(self.path)
85 self.transplants = transplants(self.path, 'transplants',
86 self.transplants = transplants(self.path, 'transplants',
86 opener=self.opener)
87 opener=self.opener)
87 self.editor = None
88 self.editor = None
88
89
89 def applied(self, repo, node, parent):
90 def applied(self, repo, node, parent):
90 '''returns True if a node is already an ancestor of parent
91 '''returns True if a node is already an ancestor of parent
91 or has already been transplanted'''
92 or has already been transplanted'''
92 if hasnode(repo, node):
93 if hasnode(repo, node):
93 if node in repo.changelog.reachable(parent, stop=node):
94 if node in repo.changelog.reachable(parent, stop=node):
94 return True
95 return True
95 for t in self.transplants.get(node):
96 for t in self.transplants.get(node):
96 # it might have been stripped
97 # it might have been stripped
97 if not hasnode(repo, t.lnode):
98 if not hasnode(repo, t.lnode):
98 self.transplants.remove(t)
99 self.transplants.remove(t)
99 return False
100 return False
100 if t.lnode in repo.changelog.reachable(parent, stop=t.lnode):
101 if t.lnode in repo.changelog.reachable(parent, stop=t.lnode):
101 return True
102 return True
102 return False
103 return False
103
104
104 def apply(self, repo, source, revmap, merges, opts={}):
105 def apply(self, repo, source, revmap, merges, opts={}):
105 '''apply the revisions in revmap one by one in revision order'''
106 '''apply the revisions in revmap one by one in revision order'''
106 revs = sorted(revmap)
107 revs = sorted(revmap)
107 p1, p2 = repo.dirstate.parents()
108 p1, p2 = repo.dirstate.parents()
108 pulls = []
109 pulls = []
109 diffopts = patch.diffopts(self.ui, opts)
110 diffopts = patch.diffopts(self.ui, opts)
110 diffopts.git = True
111 diffopts.git = True
111
112
112 lock = wlock = tr = None
113 lock = wlock = tr = None
113 try:
114 try:
114 wlock = repo.wlock()
115 wlock = repo.wlock()
115 lock = repo.lock()
116 lock = repo.lock()
116 tr = repo.transaction('transplant')
117 tr = repo.transaction('transplant')
117 for rev in revs:
118 for rev in revs:
118 node = revmap[rev]
119 node = revmap[rev]
119 revstr = '%s:%s' % (rev, short(node))
120 revstr = '%s:%s' % (rev, short(node))
120
121
121 if self.applied(repo, node, p1):
122 if self.applied(repo, node, p1):
122 self.ui.warn(_('skipping already applied revision %s\n') %
123 self.ui.warn(_('skipping already applied revision %s\n') %
123 revstr)
124 revstr)
124 continue
125 continue
125
126
126 parents = source.changelog.parents(node)
127 parents = source.changelog.parents(node)
127 if not (opts.get('filter') or opts.get('log')):
128 if not (opts.get('filter') or opts.get('log')):
128 # If the changeset parent is the same as the
129 # If the changeset parent is the same as the
129 # wdir's parent, just pull it.
130 # wdir's parent, just pull it.
130 if parents[0] == p1:
131 if parents[0] == p1:
131 pulls.append(node)
132 pulls.append(node)
132 p1 = node
133 p1 = node
133 continue
134 continue
134 if pulls:
135 if pulls:
135 if source != repo:
136 if source != repo:
136 repo.pull(source, heads=pulls)
137 repo.pull(source, heads=pulls)
137 merge.update(repo, pulls[-1], False, False, None)
138 merge.update(repo, pulls[-1], False, False, None)
138 p1, p2 = repo.dirstate.parents()
139 p1, p2 = repo.dirstate.parents()
139 pulls = []
140 pulls = []
140
141
141 domerge = False
142 domerge = False
142 if node in merges:
143 if node in merges:
143 # pulling all the merge revs at once would mean we
144 # pulling all the merge revs at once would mean we
144 # couldn't transplant after the latest even if
145 # couldn't transplant after the latest even if
145 # transplants before them fail.
146 # transplants before them fail.
146 domerge = True
147 domerge = True
147 if not hasnode(repo, node):
148 if not hasnode(repo, node):
148 repo.pull(source, heads=[node])
149 repo.pull(source, heads=[node])
149
150
150 skipmerge = False
151 skipmerge = False
151 if parents[1] != revlog.nullid:
152 if parents[1] != revlog.nullid:
152 if not opts.get('parent'):
153 if not opts.get('parent'):
153 self.ui.note(_('skipping merge changeset %s:%s\n')
154 self.ui.note(_('skipping merge changeset %s:%s\n')
154 % (rev, short(node)))
155 % (rev, short(node)))
155 skipmerge = True
156 skipmerge = True
156 else:
157 else:
157 parent = source.lookup(opts['parent'])
158 parent = source.lookup(opts['parent'])
158 if parent not in parents:
159 if parent not in parents:
159 raise util.Abort(_('%s is not a parent of %s') %
160 raise util.Abort(_('%s is not a parent of %s') %
160 (short(parent), short(node)))
161 (short(parent), short(node)))
161 else:
162 else:
162 parent = parents[0]
163 parent = parents[0]
163
164
164 if skipmerge:
165 if skipmerge:
165 patchfile = None
166 patchfile = None
166 else:
167 else:
167 fd, patchfile = tempfile.mkstemp(prefix='hg-transplant-')
168 fd, patchfile = tempfile.mkstemp(prefix='hg-transplant-')
168 fp = os.fdopen(fd, 'w')
169 fp = os.fdopen(fd, 'w')
169 gen = patch.diff(source, parent, node, opts=diffopts)
170 gen = patch.diff(source, parent, node, opts=diffopts)
170 for chunk in gen:
171 for chunk in gen:
171 fp.write(chunk)
172 fp.write(chunk)
172 fp.close()
173 fp.close()
173
174
174 del revmap[rev]
175 del revmap[rev]
175 if patchfile or domerge:
176 if patchfile or domerge:
176 try:
177 try:
177 try:
178 try:
178 n = self.applyone(repo, node,
179 n = self.applyone(repo, node,
179 source.changelog.read(node),
180 source.changelog.read(node),
180 patchfile, merge=domerge,
181 patchfile, merge=domerge,
181 log=opts.get('log'),
182 log=opts.get('log'),
182 filter=opts.get('filter'))
183 filter=opts.get('filter'))
183 except TransplantError:
184 except TransplantError:
184 # Do not rollback, it is up to the user to
185 # Do not rollback, it is up to the user to
185 # fix the merge or cancel everything
186 # fix the merge or cancel everything
186 tr.close()
187 tr.close()
187 raise
188 raise
188 if n and domerge:
189 if n and domerge:
189 self.ui.status(_('%s merged at %s\n') % (revstr,
190 self.ui.status(_('%s merged at %s\n') % (revstr,
190 short(n)))
191 short(n)))
191 elif n:
192 elif n:
192 self.ui.status(_('%s transplanted to %s\n')
193 self.ui.status(_('%s transplanted to %s\n')
193 % (short(node),
194 % (short(node),
194 short(n)))
195 short(n)))
195 finally:
196 finally:
196 if patchfile:
197 if patchfile:
197 os.unlink(patchfile)
198 os.unlink(patchfile)
198 tr.close()
199 tr.close()
199 if pulls:
200 if pulls:
200 repo.pull(source, heads=pulls)
201 repo.pull(source, heads=pulls)
201 merge.update(repo, pulls[-1], False, False, None)
202 merge.update(repo, pulls[-1], False, False, None)
202 finally:
203 finally:
203 self.saveseries(revmap, merges)
204 self.saveseries(revmap, merges)
204 self.transplants.write()
205 self.transplants.write()
205 if tr:
206 if tr:
206 tr.release()
207 tr.release()
207 lock.release()
208 lock.release()
208 wlock.release()
209 wlock.release()
209
210
210 def filter(self, filter, node, changelog, patchfile):
211 def filter(self, filter, node, changelog, patchfile):
211 '''arbitrarily rewrite changeset before applying it'''
212 '''arbitrarily rewrite changeset before applying it'''
212
213
213 self.ui.status(_('filtering %s\n') % patchfile)
214 self.ui.status(_('filtering %s\n') % patchfile)
214 user, date, msg = (changelog[1], changelog[2], changelog[4])
215 user, date, msg = (changelog[1], changelog[2], changelog[4])
215 fd, headerfile = tempfile.mkstemp(prefix='hg-transplant-')
216 fd, headerfile = tempfile.mkstemp(prefix='hg-transplant-')
216 fp = os.fdopen(fd, 'w')
217 fp = os.fdopen(fd, 'w')
217 fp.write("# HG changeset patch\n")
218 fp.write("# HG changeset patch\n")
218 fp.write("# User %s\n" % user)
219 fp.write("# User %s\n" % user)
219 fp.write("# Date %d %d\n" % date)
220 fp.write("# Date %d %d\n" % date)
220 fp.write(msg + '\n')
221 fp.write(msg + '\n')
221 fp.close()
222 fp.close()
222
223
223 try:
224 try:
224 util.system('%s %s %s' % (filter, util.shellquote(headerfile),
225 util.system('%s %s %s' % (filter, util.shellquote(headerfile),
225 util.shellquote(patchfile)),
226 util.shellquote(patchfile)),
226 environ={'HGUSER': changelog[1],
227 environ={'HGUSER': changelog[1],
227 'HGREVISION': revlog.hex(node),
228 'HGREVISION': revlog.hex(node),
228 },
229 },
229 onerr=util.Abort, errprefix=_('filter failed'),
230 onerr=util.Abort, errprefix=_('filter failed'),
230 out=self.ui.fout)
231 out=self.ui.fout)
231 user, date, msg = self.parselog(file(headerfile))[1:4]
232 user, date, msg = self.parselog(file(headerfile))[1:4]
232 finally:
233 finally:
233 os.unlink(headerfile)
234 os.unlink(headerfile)
234
235
235 return (user, date, msg)
236 return (user, date, msg)
236
237
237 def applyone(self, repo, node, cl, patchfile, merge=False, log=False,
238 def applyone(self, repo, node, cl, patchfile, merge=False, log=False,
238 filter=None):
239 filter=None):
239 '''apply the patch in patchfile to the repository as a transplant'''
240 '''apply the patch in patchfile to the repository as a transplant'''
240 (manifest, user, (time, timezone), files, message) = cl[:5]
241 (manifest, user, (time, timezone), files, message) = cl[:5]
241 date = "%d %d" % (time, timezone)
242 date = "%d %d" % (time, timezone)
242 extra = {'transplant_source': node}
243 extra = {'transplant_source': node}
243 if filter:
244 if filter:
244 (user, date, message) = self.filter(filter, node, cl, patchfile)
245 (user, date, message) = self.filter(filter, node, cl, patchfile)
245
246
246 if log:
247 if log:
247 # we don't translate messages inserted into commits
248 # we don't translate messages inserted into commits
248 message += '\n(transplanted from %s)' % revlog.hex(node)
249 message += '\n(transplanted from %s)' % revlog.hex(node)
249
250
250 self.ui.status(_('applying %s\n') % short(node))
251 self.ui.status(_('applying %s\n') % short(node))
251 self.ui.note('%s %s\n%s\n' % (user, date, message))
252 self.ui.note('%s %s\n%s\n' % (user, date, message))
252
253
253 if not patchfile and not merge:
254 if not patchfile and not merge:
254 raise util.Abort(_('can only omit patchfile if merging'))
255 raise util.Abort(_('can only omit patchfile if merging'))
255 if patchfile:
256 if patchfile:
256 try:
257 try:
257 files = set()
258 files = set()
258 patch.patch(self.ui, repo, patchfile, files=files, eolmode=None)
259 patch.patch(self.ui, repo, patchfile, files=files, eolmode=None)
259 files = list(files)
260 files = list(files)
260 if not files:
261 if not files:
261 self.ui.warn(_('%s: empty changeset') % revlog.hex(node))
262 self.ui.warn(_('%s: empty changeset') % revlog.hex(node))
262 return None
263 return None
263 except Exception, inst:
264 except Exception, inst:
264 seriespath = os.path.join(self.path, 'series')
265 seriespath = os.path.join(self.path, 'series')
265 if os.path.exists(seriespath):
266 if os.path.exists(seriespath):
266 os.unlink(seriespath)
267 os.unlink(seriespath)
267 p1 = repo.dirstate.p1()
268 p1 = repo.dirstate.p1()
268 p2 = node
269 p2 = node
269 self.log(user, date, message, p1, p2, merge=merge)
270 self.log(user, date, message, p1, p2, merge=merge)
270 self.ui.write(str(inst) + '\n')
271 self.ui.write(str(inst) + '\n')
271 raise TransplantError(_('fix up the merge and run '
272 raise TransplantError(_('fix up the merge and run '
272 'hg transplant --continue'))
273 'hg transplant --continue'))
273 else:
274 else:
274 files = None
275 files = None
275 if merge:
276 if merge:
276 p1, p2 = repo.dirstate.parents()
277 p1, p2 = repo.dirstate.parents()
277 repo.setparents(p1, node)
278 repo.setparents(p1, node)
278 m = match.always(repo.root, '')
279 m = match.always(repo.root, '')
279 else:
280 else:
280 m = match.exact(repo.root, '', files)
281 m = match.exact(repo.root, '', files)
281
282
282 n = repo.commit(message, user, date, extra=extra, match=m,
283 n = repo.commit(message, user, date, extra=extra, match=m,
283 editor=self.editor)
284 editor=self.editor)
284 if not n:
285 if not n:
285 # Crash here to prevent an unclear crash later, in
286 # Crash here to prevent an unclear crash later, in
286 # transplants.write(). This can happen if patch.patch()
287 # transplants.write(). This can happen if patch.patch()
287 # does nothing but claims success or if repo.status() fails
288 # does nothing but claims success or if repo.status() fails
288 # to report changes done by patch.patch(). These both
289 # to report changes done by patch.patch(). These both
289 # appear to be bugs in other parts of Mercurial, but dying
290 # appear to be bugs in other parts of Mercurial, but dying
290 # here, as soon as we can detect the problem, is preferable
291 # here, as soon as we can detect the problem, is preferable
291 # to silently dropping changesets on the floor.
292 # to silently dropping changesets on the floor.
292 raise RuntimeError('nothing committed after transplant')
293 raise RuntimeError('nothing committed after transplant')
293 if not merge:
294 if not merge:
294 self.transplants.set(n, node)
295 self.transplants.set(n, node)
295
296
296 return n
297 return n
297
298
298 def resume(self, repo, source, opts=None):
299 def resume(self, repo, source, opts=None):
299 '''recover last transaction and apply remaining changesets'''
300 '''recover last transaction and apply remaining changesets'''
300 if os.path.exists(os.path.join(self.path, 'journal')):
301 if os.path.exists(os.path.join(self.path, 'journal')):
301 n, node = self.recover(repo, source, opts)
302 n, node = self.recover(repo, source, opts)
302 self.ui.status(_('%s transplanted as %s\n') % (short(node),
303 self.ui.status(_('%s transplanted as %s\n') % (short(node),
303 short(n)))
304 short(n)))
304 seriespath = os.path.join(self.path, 'series')
305 seriespath = os.path.join(self.path, 'series')
305 if not os.path.exists(seriespath):
306 if not os.path.exists(seriespath):
306 self.transplants.write()
307 self.transplants.write()
307 return
308 return
308 nodes, merges = self.readseries()
309 nodes, merges = self.readseries()
309 revmap = {}
310 revmap = {}
310 for n in nodes:
311 for n in nodes:
311 revmap[source.changelog.rev(n)] = n
312 revmap[source.changelog.rev(n)] = n
312 os.unlink(seriespath)
313 os.unlink(seriespath)
313
314
314 self.apply(repo, source, revmap, merges, opts)
315 self.apply(repo, source, revmap, merges, opts)
315
316
316 def recover(self, repo, source, opts=None):
317 def recover(self, repo, source, opts=None):
317 '''commit working directory using journal metadata'''
318 '''commit working directory using journal metadata'''
318 node, user, date, message, parents = self.readlog()
319 node, user, date, message, parents = self.readlog()
319 merge = False
320 merge = False
320
321
321 if not user or not date or not message or not parents[0]:
322 if not user or not date or not message or not parents[0]:
322 raise util.Abort(_('transplant log file is corrupt'))
323 raise util.Abort(_('transplant log file is corrupt'))
323
324
324 parent = parents[0]
325 parent = parents[0]
325 if len(parents) > 1:
326 if len(parents) > 1:
326 if opts.get('parent'):
327 if opts.get('parent'):
327 parent = source.lookup(opts['parent'])
328 parent = source.lookup(opts['parent'])
328 if parent not in parents:
329 if parent not in parents:
329 raise util.Abort(_('%s is not a parent of %s') %
330 raise util.Abort(_('%s is not a parent of %s') %
330 (short(parent), short(node)))
331 (short(parent), short(node)))
331 else:
332 else:
332 merge = True
333 merge = True
333
334
334 extra = {'transplant_source': node}
335 extra = {'transplant_source': node}
335 wlock = repo.wlock()
336 wlock = repo.wlock()
336 try:
337 try:
337 p1, p2 = repo.dirstate.parents()
338 p1, p2 = repo.dirstate.parents()
338 if p1 != parent:
339 if p1 != parent:
339 raise util.Abort(
340 raise util.Abort(
340 _('working dir not at transplant parent %s') %
341 _('working dir not at transplant parent %s') %
341 revlog.hex(parent))
342 revlog.hex(parent))
342 if merge:
343 if merge:
343 repo.setparents(p1, parents[1])
344 repo.setparents(p1, parents[1])
344 n = repo.commit(message, user, date, extra=extra,
345 n = repo.commit(message, user, date, extra=extra,
345 editor=self.editor)
346 editor=self.editor)
346 if not n:
347 if not n:
347 raise util.Abort(_('commit failed'))
348 raise util.Abort(_('commit failed'))
348 if not merge:
349 if not merge:
349 self.transplants.set(n, node)
350 self.transplants.set(n, node)
350 self.unlog()
351 self.unlog()
351
352
352 return n, node
353 return n, node
353 finally:
354 finally:
354 wlock.release()
355 wlock.release()
355
356
356 def readseries(self):
357 def readseries(self):
357 nodes = []
358 nodes = []
358 merges = []
359 merges = []
359 cur = nodes
360 cur = nodes
360 for line in self.opener.read('series').splitlines():
361 for line in self.opener.read('series').splitlines():
361 if line.startswith('# Merges'):
362 if line.startswith('# Merges'):
362 cur = merges
363 cur = merges
363 continue
364 continue
364 cur.append(revlog.bin(line))
365 cur.append(revlog.bin(line))
365
366
366 return (nodes, merges)
367 return (nodes, merges)
367
368
368 def saveseries(self, revmap, merges):
369 def saveseries(self, revmap, merges):
369 if not revmap:
370 if not revmap:
370 return
371 return
371
372
372 if not os.path.isdir(self.path):
373 if not os.path.isdir(self.path):
373 os.mkdir(self.path)
374 os.mkdir(self.path)
374 series = self.opener('series', 'w')
375 series = self.opener('series', 'w')
375 for rev in sorted(revmap):
376 for rev in sorted(revmap):
376 series.write(revlog.hex(revmap[rev]) + '\n')
377 series.write(revlog.hex(revmap[rev]) + '\n')
377 if merges:
378 if merges:
378 series.write('# Merges\n')
379 series.write('# Merges\n')
379 for m in merges:
380 for m in merges:
380 series.write(revlog.hex(m) + '\n')
381 series.write(revlog.hex(m) + '\n')
381 series.close()
382 series.close()
382
383
383 def parselog(self, fp):
384 def parselog(self, fp):
384 parents = []
385 parents = []
385 message = []
386 message = []
386 node = revlog.nullid
387 node = revlog.nullid
387 inmsg = False
388 inmsg = False
388 user = None
389 user = None
389 date = None
390 date = None
390 for line in fp.read().splitlines():
391 for line in fp.read().splitlines():
391 if inmsg:
392 if inmsg:
392 message.append(line)
393 message.append(line)
393 elif line.startswith('# User '):
394 elif line.startswith('# User '):
394 user = line[7:]
395 user = line[7:]
395 elif line.startswith('# Date '):
396 elif line.startswith('# Date '):
396 date = line[7:]
397 date = line[7:]
397 elif line.startswith('# Node ID '):
398 elif line.startswith('# Node ID '):
398 node = revlog.bin(line[10:])
399 node = revlog.bin(line[10:])
399 elif line.startswith('# Parent '):
400 elif line.startswith('# Parent '):
400 parents.append(revlog.bin(line[9:]))
401 parents.append(revlog.bin(line[9:]))
401 elif not line.startswith('# '):
402 elif not line.startswith('# '):
402 inmsg = True
403 inmsg = True
403 message.append(line)
404 message.append(line)
404 if None in (user, date):
405 if None in (user, date):
405 raise util.Abort(_("filter corrupted changeset (no user or date)"))
406 raise util.Abort(_("filter corrupted changeset (no user or date)"))
406 return (node, user, date, '\n'.join(message), parents)
407 return (node, user, date, '\n'.join(message), parents)
407
408
408 def log(self, user, date, message, p1, p2, merge=False):
409 def log(self, user, date, message, p1, p2, merge=False):
409 '''journal changelog metadata for later recover'''
410 '''journal changelog metadata for later recover'''
410
411
411 if not os.path.isdir(self.path):
412 if not os.path.isdir(self.path):
412 os.mkdir(self.path)
413 os.mkdir(self.path)
413 fp = self.opener('journal', 'w')
414 fp = self.opener('journal', 'w')
414 fp.write('# User %s\n' % user)
415 fp.write('# User %s\n' % user)
415 fp.write('# Date %s\n' % date)
416 fp.write('# Date %s\n' % date)
416 fp.write('# Node ID %s\n' % revlog.hex(p2))
417 fp.write('# Node ID %s\n' % revlog.hex(p2))
417 fp.write('# Parent ' + revlog.hex(p1) + '\n')
418 fp.write('# Parent ' + revlog.hex(p1) + '\n')
418 if merge:
419 if merge:
419 fp.write('# Parent ' + revlog.hex(p2) + '\n')
420 fp.write('# Parent ' + revlog.hex(p2) + '\n')
420 fp.write(message.rstrip() + '\n')
421 fp.write(message.rstrip() + '\n')
421 fp.close()
422 fp.close()
422
423
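log() and parselog() share a tiny journal format: a patch-style header
("# User", "# Date", "# Node ID", one or two "# Parent" lines) followed by
the commit message. A hedged standalone parser of that text, keeping the
hex strings as-is instead of converting them with revlog.bin:

    def parse_journal(text):
        user = date = node = None
        parents, message = [], []
        inmsg = False
        for line in text.splitlines():
            if inmsg:
                message.append(line)
            elif line.startswith('# User '):
                user = line[7:]
            elif line.startswith('# Date '):
                date = line[7:]
            elif line.startswith('# Node ID '):
                node = line[10:]
            elif line.startswith('# Parent '):
                parents.append(line[9:])
            elif not line.startswith('# '):
                inmsg = True
                message.append(line)
        return node, user, date, '\n'.join(message), parents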
423 def readlog(self):
424 def readlog(self):
424 return self.parselog(self.opener('journal'))
425 return self.parselog(self.opener('journal'))
425
426
426 def unlog(self):
427 def unlog(self):
427 '''remove changelog journal'''
428 '''remove changelog journal'''
428 absdst = os.path.join(self.path, 'journal')
429 absdst = os.path.join(self.path, 'journal')
429 if os.path.exists(absdst):
430 if os.path.exists(absdst):
430 os.unlink(absdst)
431 os.unlink(absdst)
431
432
432 def transplantfilter(self, repo, source, root):
433 def transplantfilter(self, repo, source, root):
433 def matchfn(node):
434 def matchfn(node):
434 if self.applied(repo, node, root):
435 if self.applied(repo, node, root):
435 return False
436 return False
436 if source.changelog.parents(node)[1] != revlog.nullid:
437 if source.changelog.parents(node)[1] != revlog.nullid:
437 return False
438 return False
438 extra = source.changelog.read(node)[5]
439 extra = source.changelog.read(node)[5]
439 cnode = extra.get('transplant_source')
440 cnode = extra.get('transplant_source')
440 if cnode and self.applied(repo, cnode, root):
441 if cnode and self.applied(repo, cnode, root):
441 return False
442 return False
442 return True
443 return True
443
444
444 return matchfn
445 return matchfn
445
446
446 def hasnode(repo, node):
447 def hasnode(repo, node):
447 try:
448 try:
448 return repo.changelog.rev(node) is not None
449 return repo.changelog.rev(node) is not None
449 except error.RevlogError:
450 except error.RevlogError:
450 return False
451 return False
451
452
452 def browserevs(ui, repo, nodes, opts):
453 def browserevs(ui, repo, nodes, opts):
453 '''interactively transplant changesets'''
454 '''interactively transplant changesets'''
454 def browsehelp(ui):
455 def browsehelp(ui):
455 ui.write(_('y: transplant this changeset\n'
456 ui.write(_('y: transplant this changeset\n'
456 'n: skip this changeset\n'
457 'n: skip this changeset\n'
457 'm: merge at this changeset\n'
458 'm: merge at this changeset\n'
458 'p: show patch\n'
459 'p: show patch\n'
459 'c: commit selected changesets\n'
460 'c: commit selected changesets\n'
460 'q: cancel transplant\n'
461 'q: cancel transplant\n'
461 '?: show this help\n'))
462 '?: show this help\n'))
462
463
463 displayer = cmdutil.show_changeset(ui, repo, opts)
464 displayer = cmdutil.show_changeset(ui, repo, opts)
464 transplants = []
465 transplants = []
465 merges = []
466 merges = []
466 for node in nodes:
467 for node in nodes:
467 displayer.show(repo[node])
468 displayer.show(repo[node])
468 action = None
469 action = None
469 while not action:
470 while not action:
470 action = ui.prompt(_('apply changeset? [ynmpcq?]:'))
471 action = ui.prompt(_('apply changeset? [ynmpcq?]:'))
471 if action == '?':
472 if action == '?':
472 browsehelp(ui)
473 browsehelp(ui)
473 action = None
474 action = None
474 elif action == 'p':
475 elif action == 'p':
475 parent = repo.changelog.parents(node)[0]
476 parent = repo.changelog.parents(node)[0]
476 for chunk in patch.diff(repo, parent, node):
477 for chunk in patch.diff(repo, parent, node):
477 ui.write(chunk)
478 ui.write(chunk)
478 action = None
479 action = None
479 elif action not in ('y', 'n', 'm', 'c', 'q'):
480 elif action not in ('y', 'n', 'm', 'c', 'q'):
480 ui.write(_('no such option\n'))
481 ui.write(_('no such option\n'))
481 action = None
482 action = None
482 if action == 'y':
483 if action == 'y':
483 transplants.append(node)
484 transplants.append(node)
484 elif action == 'm':
485 elif action == 'm':
485 merges.append(node)
486 merges.append(node)
486 elif action == 'c':
487 elif action == 'c':
487 break
488 break
488 elif action == 'q':
489 elif action == 'q':
489 transplants = ()
490 transplants = ()
490 merges = ()
491 merges = ()
491 break
492 break
492 displayer.close()
493 displayer.close()
493 return (transplants, merges)
494 return (transplants, merges)
494
495
495 @command('transplant',
496 @command('transplant',
496 [('s', 'source', '', _('pull patches from REPO'), _('REPO')),
497 [('s', 'source', '', _('pull patches from REPO'), _('REPO')),
497 ('b', 'branch', [],
498 ('b', 'branch', [],
498 _('pull patches from branch BRANCH'), _('BRANCH')),
499 _('pull patches from branch BRANCH'), _('BRANCH')),
499 ('a', 'all', None, _('pull all changesets up to BRANCH')),
500 ('a', 'all', None, _('pull all changesets up to BRANCH')),
500 ('p', 'prune', [], _('skip over REV'), _('REV')),
501 ('p', 'prune', [], _('skip over REV'), _('REV')),
501 ('m', 'merge', [], _('merge at REV'), _('REV')),
502 ('m', 'merge', [], _('merge at REV'), _('REV')),
502 ('', 'parent', '',
503 ('', 'parent', '',
503 _('parent to choose when transplanting merge'), _('REV')),
504 _('parent to choose when transplanting merge'), _('REV')),
504 ('e', 'edit', False, _('invoke editor on commit messages')),
505 ('e', 'edit', False, _('invoke editor on commit messages')),
505 ('', 'log', None, _('append transplant info to log message')),
506 ('', 'log', None, _('append transplant info to log message')),
506 ('c', 'continue', None, _('continue last transplant session '
507 ('c', 'continue', None, _('continue last transplant session '
507 'after repair')),
508 'after repair')),
508 ('', 'filter', '',
509 ('', 'filter', '',
509 _('filter changesets through command'), _('CMD'))],
510 _('filter changesets through command'), _('CMD'))],
510 _('hg transplant [-s REPO] [-b BRANCH [-a]] [-p REV] '
511 _('hg transplant [-s REPO] [-b BRANCH [-a]] [-p REV] '
511 '[-m REV] [REV]...'))
512 '[-m REV] [REV]...'))
512 def transplant(ui, repo, *revs, **opts):
513 def transplant(ui, repo, *revs, **opts):
513 '''transplant changesets from another branch
514 '''transplant changesets from another branch
514
515
515 Selected changesets will be applied on top of the current working
516 Selected changesets will be applied on top of the current working
516 directory with the log of the original changeset. The changesets
517 directory with the log of the original changeset. The changesets
517 are copied and will thus appear twice in the history. Use the
518 are copied and will thus appear twice in the history. Use the
518 rebase extension instead if you want to move a whole branch of
519 rebase extension instead if you want to move a whole branch of
519 unpublished changesets.
520 unpublished changesets.
520
521
521 If --log is specified, log messages will have a comment appended
522 If --log is specified, log messages will have a comment appended
522 of the form::
523 of the form::
523
524
524 (transplanted from CHANGESETHASH)
525 (transplanted from CHANGESETHASH)
525
526
526 You can rewrite the changelog message with the --filter option.
527 You can rewrite the changelog message with the --filter option.
527 Its argument will be invoked with the current changelog message as
528 Its argument will be invoked with the current changelog message as
528 $1 and the patch as $2.
529 $1 and the patch as $2.
529
530
530 If --source/-s is specified, selects changesets from the named
531 If --source/-s is specified, selects changesets from the named
531 repository. If --branch/-b is specified, selects changesets from
532 repository. If --branch/-b is specified, selects changesets from
532 the branch holding the named revision, up to that revision. If
533 the branch holding the named revision, up to that revision. If
533 --all/-a is specified, all changesets on the branch will be
534 --all/-a is specified, all changesets on the branch will be
534 transplanted, otherwise you will be prompted to select the
535 transplanted, otherwise you will be prompted to select the
535 changesets you want.
536 changesets you want.
536
537
537 :hg:`transplant --branch REVISION --all` will transplant the
538 :hg:`transplant --branch REVISION --all` will transplant the
538 selected branch (up to the named revision) onto your current
539 selected branch (up to the named revision) onto your current
539 working directory.
540 working directory.
540
541
541 You can optionally mark selected transplanted changesets as merge
542 You can optionally mark selected transplanted changesets as merge
542 changesets. You will not be prompted to transplant any ancestors
543 changesets. You will not be prompted to transplant any ancestors
543 of a merged transplant, and you can merge descendants of them
544 of a merged transplant, and you can merge descendants of them
544 normally instead of transplanting them.
545 normally instead of transplanting them.
545
546
546 Merge changesets may be transplanted directly by specifying the
547 Merge changesets may be transplanted directly by specifying the
547 proper parent changeset with :hg:`transplant --parent`.
548 proper parent changeset with :hg:`transplant --parent`.
548
549
549 If no merges or revisions are provided, :hg:`transplant` will
550 If no merges or revisions are provided, :hg:`transplant` will
550 start an interactive changeset browser.
551 start an interactive changeset browser.
551
552
552 If a changeset application fails, you can fix the merge by hand
553 If a changeset application fails, you can fix the merge by hand
553 and then resume where you left off by calling :hg:`transplant
554 and then resume where you left off by calling :hg:`transplant
554 --continue/-c`.
555 --continue/-c`.
555 '''
556 '''
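As a concrete but hypothetical example of the --filter hook described in
the docstring above, the configured command is run with the header/message
file as its first argument and the patch file as its second, and may
rewrite either in place. A minimal Python filter script (the file name and
marker text are made up) could look like:

    #!/usr/bin/env python
    # transplant --filter script: append a marker to every commit message.
    # Invoked as:  filter-script MSGFILE PATCHFILE
    import sys

    msgfile = sys.argv[1]          # "# HG changeset patch" header plus message
    # sys.argv[2] is the patch file; it is left untouched here

    with open(msgfile) as fp:
        text = fp.read()
    with open(msgfile, 'w') as fp:
        fp.write(text.rstrip('\n') + '\n\n[transplanted by filter script]\n')

Such a script would be passed via --filter or the transplant.filter
configuration option read further down in this function.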
556 def incwalk(repo, csets, match=util.always):
557 def incwalk(repo, csets, match=util.always):
557 for node in csets:
558 for node in csets:
558 if match(node):
559 if match(node):
559 yield node
560 yield node
560
561
561 def transplantwalk(repo, root, branches, match=util.always):
562 def transplantwalk(repo, root, branches, match=util.always):
562 if not branches:
563 if not branches:
563 branches = repo.heads()
564 branches = repo.heads()
564 ancestors = []
565 ancestors = []
565 for branch in branches:
566 for branch in branches:
566 ancestors.append(repo.changelog.ancestor(root, branch))
567 ancestors.append(repo.changelog.ancestor(root, branch))
567 for node in repo.changelog.nodesbetween(ancestors, branches)[0]:
568 for node in repo.changelog.nodesbetween(ancestors, branches)[0]:
568 if match(node):
569 if match(node):
569 yield node
570 yield node
570
571
571 def checkopts(opts, revs):
572 def checkopts(opts, revs):
572 if opts.get('continue'):
573 if opts.get('continue'):
573 if opts.get('branch') or opts.get('all') or opts.get('merge'):
574 if opts.get('branch') or opts.get('all') or opts.get('merge'):
574 raise util.Abort(_('--continue is incompatible with '
575 raise util.Abort(_('--continue is incompatible with '
575 'branch, all or merge'))
576 'branch, all or merge'))
576 return
577 return
577 if not (opts.get('source') or revs or
578 if not (opts.get('source') or revs or
578 opts.get('merge') or opts.get('branch')):
579 opts.get('merge') or opts.get('branch')):
579 raise util.Abort(_('no source URL, branch tag or revision '
580 raise util.Abort(_('no source URL, branch tag or revision '
580 'list provided'))
581 'list provided'))
581 if opts.get('all'):
582 if opts.get('all'):
582 if not opts.get('branch'):
583 if not opts.get('branch'):
583 raise util.Abort(_('--all requires a branch revision'))
584 raise util.Abort(_('--all requires a branch revision'))
584 if revs:
585 if revs:
585 raise util.Abort(_('--all is incompatible with a '
586 raise util.Abort(_('--all is incompatible with a '
586 'revision list'))
587 'revision list'))
587
588
588 checkopts(opts, revs)
589 checkopts(opts, revs)
589
590
590 if not opts.get('log'):
591 if not opts.get('log'):
591 opts['log'] = ui.config('transplant', 'log')
592 opts['log'] = ui.config('transplant', 'log')
592 if not opts.get('filter'):
593 if not opts.get('filter'):
593 opts['filter'] = ui.config('transplant', 'filter')
594 opts['filter'] = ui.config('transplant', 'filter')
594
595
595 tp = transplanter(ui, repo)
596 tp = transplanter(ui, repo)
596 if opts.get('edit'):
597 if opts.get('edit'):
597 tp.editor = cmdutil.commitforceeditor
598 tp.editor = cmdutil.commitforceeditor
598
599
599 p1, p2 = repo.dirstate.parents()
600 p1, p2 = repo.dirstate.parents()
600 if len(repo) > 0 and p1 == revlog.nullid:
601 if len(repo) > 0 and p1 == revlog.nullid:
601 raise util.Abort(_('no revision checked out'))
602 raise util.Abort(_('no revision checked out'))
602 if not opts.get('continue'):
603 if not opts.get('continue'):
603 if p2 != revlog.nullid:
604 if p2 != revlog.nullid:
604 raise util.Abort(_('outstanding uncommitted merges'))
605 raise util.Abort(_('outstanding uncommitted merges'))
605 m, a, r, d = repo.status()[:4]
606 m, a, r, d = repo.status()[:4]
606 if m or a or r or d:
607 if m or a or r or d:
607 raise util.Abort(_('outstanding local changes'))
608 raise util.Abort(_('outstanding local changes'))
608
609
609 sourcerepo = opts.get('source')
610 sourcerepo = opts.get('source')
610 if sourcerepo:
611 if sourcerepo:
611 source = hg.peer(ui, opts, ui.expandpath(sourcerepo))
612 source = hg.peer(ui, opts, ui.expandpath(sourcerepo))
612 branches = map(source.lookup, opts.get('branch', ()))
613 branches = map(source.lookup, opts.get('branch', ()))
613 source, csets, cleanupfn = bundlerepo.getremotechanges(ui, repo, source,
614 source, csets, cleanupfn = bundlerepo.getremotechanges(ui, repo, source,
614 onlyheads=branches, force=True)
615 onlyheads=branches, force=True)
615 else:
616 else:
616 source = repo
617 source = repo
617 branches = map(source.lookup, opts.get('branch', ()))
618 branches = map(source.lookup, opts.get('branch', ()))
618 cleanupfn = None
619 cleanupfn = None
619
620
620 try:
621 try:
621 if opts.get('continue'):
622 if opts.get('continue'):
622 tp.resume(repo, source, opts)
623 tp.resume(repo, source, opts)
623 return
624 return
624
625
625 tf = tp.transplantfilter(repo, source, p1)
626 tf = tp.transplantfilter(repo, source, p1)
626 if opts.get('prune'):
627 if opts.get('prune'):
627 prune = [source.lookup(r)
628 prune = [source.lookup(r)
628 for r in scmutil.revrange(source, opts.get('prune'))]
629 for r in scmutil.revrange(source, opts.get('prune'))]
629 matchfn = lambda x: tf(x) and x not in prune
630 matchfn = lambda x: tf(x) and x not in prune
630 else:
631 else:
631 matchfn = tf
632 matchfn = tf
632 merges = map(source.lookup, opts.get('merge', ()))
633 merges = map(source.lookup, opts.get('merge', ()))
633 revmap = {}
634 revmap = {}
634 if revs:
635 if revs:
635 for r in scmutil.revrange(source, revs):
636 for r in scmutil.revrange(source, revs):
636 revmap[int(r)] = source.lookup(r)
637 revmap[int(r)] = source.lookup(r)
637 elif opts.get('all') or not merges:
638 elif opts.get('all') or not merges:
638 if source != repo:
639 if source != repo:
639 alltransplants = incwalk(source, csets, match=matchfn)
640 alltransplants = incwalk(source, csets, match=matchfn)
640 else:
641 else:
641 alltransplants = transplantwalk(source, p1, branches,
642 alltransplants = transplantwalk(source, p1, branches,
642 match=matchfn)
643 match=matchfn)
643 if opts.get('all'):
644 if opts.get('all'):
644 revs = alltransplants
645 revs = alltransplants
645 else:
646 else:
646 revs, newmerges = browserevs(ui, source, alltransplants, opts)
647 revs, newmerges = browserevs(ui, source, alltransplants, opts)
647 merges.extend(newmerges)
648 merges.extend(newmerges)
648 for r in revs:
649 for r in revs:
649 revmap[source.changelog.rev(r)] = r
650 revmap[source.changelog.rev(r)] = r
650 for r in merges:
651 for r in merges:
651 revmap[source.changelog.rev(r)] = r
652 revmap[source.changelog.rev(r)] = r
652
653
653 tp.apply(repo, source, revmap, merges, opts)
654 tp.apply(repo, source, revmap, merges, opts)
654 finally:
655 finally:
655 if cleanupfn:
656 if cleanupfn:
656 cleanupfn()
657 cleanupfn()
657
658
658 def revsettransplanted(repo, subset, x):
659 def revsettransplanted(repo, subset, x):
659 """``transplanted([set])``
660 """``transplanted([set])``
660 Transplanted changesets in set, or all transplanted changesets.
661 Transplanted changesets in set, or all transplanted changesets.
661 """
662 """
662 if x:
663 if x:
663 s = revset.getset(repo, subset, x)
664 s = revset.getset(repo, subset, x)
664 else:
665 else:
665 s = subset
666 s = subset
666 return [r for r in s if repo[r].extra().get('transplant_source')]
667 return [r for r in s if repo[r].extra().get('transplant_source')]
667
668
668 def kwtransplanted(repo, ctx, **args):
669 def kwtransplanted(repo, ctx, **args):
669 """:transplanted: String. The node identifier of the transplanted
670 """:transplanted: String. The node identifier of the transplanted
670 changeset if any."""
671 changeset if any."""
671 n = ctx.extra().get('transplant_source')
672 n = ctx.extra().get('transplant_source')
672 return n and revlog.hex(n) or ''
673 return n and revlog.hex(n) or ''
673
674
674 def extsetup(ui):
675 def extsetup(ui):
675 revset.symbols['transplanted'] = revsettransplanted
676 revset.symbols['transplanted'] = revsettransplanted
676 templatekw.keywords['transplanted'] = kwtransplanted
677 templatekw.keywords['transplanted'] = kwtransplanted
677
678
678 # tell hggettext to extract docstrings from these functions:
679 # tell hggettext to extract docstrings from these functions:
679 i18nfunctions = [revsettransplanted, kwtransplanted]
680 i18nfunctions = [revsettransplanted, kwtransplanted]
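Reviewer note (not part of the change): the two registrations above are what expose transplant metadata to ordinary queries once the transplant extension is enabled. For example::

    $ hg log -r "transplanted()"
    $ hg log -r . --template "{rev}: {transplanted}\n"

The first lists every transplanted changeset; the second prints the source node recorded for the working directory's parent, or an empty string if it was not transplanted.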
@@ -1,167 +1,167 b''
1 # win32mbcs.py -- MBCS filename support for Mercurial
1 # win32mbcs.py -- MBCS filename support for Mercurial
2 #
2 #
3 # Copyright (c) 2008 Shun-ichi Goto <shunichi.goto@gmail.com>
3 # Copyright (c) 2008 Shun-ichi Goto <shunichi.goto@gmail.com>
4 #
4 #
5 # Version: 0.3
5 # Version: 0.3
6 # Author: Shun-ichi Goto <shunichi.goto@gmail.com>
6 # Author: Shun-ichi Goto <shunichi.goto@gmail.com>
7 #
7 #
8 # This software may be used and distributed according to the terms of the
8 # This software may be used and distributed according to the terms of the
9 # GNU General Public License version 2 or any later version.
9 # GNU General Public License version 2 or any later version.
10 #
10 #
11
11
12 '''allow the use of MBCS paths with problematic encodings
12 '''allow the use of MBCS paths with problematic encodings
13
13
14 Some MBCS encodings do not behave well in some path operations (e.g.
14 Some MBCS encodings do not behave well in some path operations (e.g.
15 splitting paths, case conversion, etc.) on their encoded bytes. We call
15 splitting paths, case conversion, etc.) on their encoded bytes. We call
16 such an encoding (e.g. shift_jis or big5) a "problematic encoding".
16 such an encoding (e.g. shift_jis or big5) a "problematic encoding".
17 This extension works around the issue with those encodings by
17 This extension works around the issue with those encodings by
18 wrapping some functions so that they convert paths to Unicode strings
18 wrapping some functions so that they convert paths to Unicode strings
19 before operating on them.
19 before operating on them.
20
20
21 This extension is useful for:
21 This extension is useful for:
22
22
23 - Japanese Windows users using shift_jis encoding.
23 - Japanese Windows users using shift_jis encoding.
24 - Chinese Windows users using big5 encoding.
24 - Chinese Windows users using big5 encoding.
25 - All users who use a repository with one of the problematic encodings on
25 - All users who use a repository with one of the problematic encodings on
26 a case-insensitive file system.
26 a case-insensitive file system.
27
27
28 This extension is not needed for:
28 This extension is not needed for:
29
29
30 - Any user who uses only ASCII characters in paths.
30 - Any user who uses only ASCII characters in paths.
31 - Any user who does not use any of the problematic encodings.
31 - Any user who does not use any of the problematic encodings.
32
32
33 Note that there are some limitations on using this extension:
33 Note that there are some limitations on using this extension:
34
34
35 - You should use a single encoding in one repository.
35 - You should use a single encoding in one repository.
36 - If the repository path ends with 0x5c, .hg/hgrc cannot be read.
36 - If the repository path ends with 0x5c, .hg/hgrc cannot be read.
37 - win32mbcs is not compatible with fixutf8 extension.
37 - win32mbcs is not compatible with fixutf8 extension.
38
38
39 By default, win32mbcs uses encoding.encoding decided by Mercurial.
39 By default, win32mbcs uses encoding.encoding decided by Mercurial.
40 You can specify the encoding by config option::
40 You can specify the encoding by config option::
41
41
42 [win32mbcs]
42 [win32mbcs]
43 encoding = sjis
43 encoding = sjis
44
44
45 This is useful for users who want to commit with UTF-8 log messages.
45 This is useful for users who want to commit with UTF-8 log messages.
46 '''
46 '''
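# Reviewer note (illustration only, not part of the change): a complete hgrc
# snippet enabling the extension together with the encoding option documented
# above -- the encoding value here is just a sample:
#
#   [extensions]
#   win32mbcs =
#   [win32mbcs]
#   encoding = cp932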
47
47
48 import os, sys
48 import os, sys
49 from mercurial.i18n import _
49 from mercurial.i18n import _
50 from mercurial import util, encoding
50 from mercurial import util, encoding
51 testedwith = 'internal'
51
52
52 _encoding = None # see extsetup
53 _encoding = None # see extsetup
53
54
54 def decode(arg):
55 def decode(arg):
55 if isinstance(arg, str):
56 if isinstance(arg, str):
56 uarg = arg.decode(_encoding)
57 uarg = arg.decode(_encoding)
57 if arg == uarg.encode(_encoding):
58 if arg == uarg.encode(_encoding):
58 return uarg
59 return uarg
59 raise UnicodeError("Not local encoding")
60 raise UnicodeError("Not local encoding")
60 elif isinstance(arg, tuple):
61 elif isinstance(arg, tuple):
61 return tuple(map(decode, arg))
62 return tuple(map(decode, arg))
62 elif isinstance(arg, list):
63 elif isinstance(arg, list):
63 return map(decode, arg)
64 return map(decode, arg)
64 elif isinstance(arg, dict):
65 elif isinstance(arg, dict):
65 for k, v in arg.items():
66 for k, v in arg.items():
66 arg[k] = decode(v)
67 arg[k] = decode(v)
67 return arg
68 return arg
68
69
69 def encode(arg):
70 def encode(arg):
70 if isinstance(arg, unicode):
71 if isinstance(arg, unicode):
71 return arg.encode(_encoding)
72 return arg.encode(_encoding)
72 elif isinstance(arg, tuple):
73 elif isinstance(arg, tuple):
73 return tuple(map(encode, arg))
74 return tuple(map(encode, arg))
74 elif isinstance(arg, list):
75 elif isinstance(arg, list):
75 return map(encode, arg)
76 return map(encode, arg)
76 elif isinstance(arg, dict):
77 elif isinstance(arg, dict):
77 for k, v in arg.items():
78 for k, v in arg.items():
78 arg[k] = encode(v)
79 arg[k] = encode(v)
79 return arg
80 return arg
80
81
81 def appendsep(s):
82 def appendsep(s):
82 # ensure the path ends with os.sep, appending it if necessary.
83 # ensure the path ends with os.sep, appending it if necessary.
83 try:
84 try:
84 us = decode(s)
85 us = decode(s)
85 except UnicodeError:
86 except UnicodeError:
86 us = s
87 us = s
87 if us and us[-1] not in ':/\\':
88 if us and us[-1] not in ':/\\':
88 s += os.sep
89 s += os.sep
89 return s
90 return s
90
91
91 def wrapper(func, args, kwds):
92 def wrapper(func, args, kwds):
92 # check argument is unicode, then call original
93 # check argument is unicode, then call original
93 for arg in args:
94 for arg in args:
94 if isinstance(arg, unicode):
95 if isinstance(arg, unicode):
95 return func(*args, **kwds)
96 return func(*args, **kwds)
96
97
97 try:
98 try:
98 # convert arguments to unicode, call func, then convert back
99 # convert arguments to unicode, call func, then convert back
99 return encode(func(*decode(args), **decode(kwds)))
100 return encode(func(*decode(args), **decode(kwds)))
100 except UnicodeError:
101 except UnicodeError:
101 raise util.Abort(_("[win32mbcs] filename conversion failed with"
102 raise util.Abort(_("[win32mbcs] filename conversion failed with"
102 " %s encoding\n") % (_encoding))
103 " %s encoding\n") % (_encoding))
103
104
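# Reviewer sketch (not part of the change): what the decode/encode round trip
# used by wrapper() does for a shift_jis path whose second byte is 0x5c.
# The sample bytes are hypothetical test data:
#
#   _encoding = 'shift_jis'
#   path = '\x95\x5c'              # one character whose trailing byte is 0x5c ('\')
#   upath = decode(path)           # round-trips cleanly, so a unicode object comes back
#   assert encode(upath) == path
#
# Splitting the raw bytes with os.path.split() on Windows would treat the
# trailing 0x5c as a separator; splitting the unicode form keeps the
# character intact, which is exactly what wrapper() arranges.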
104 def wrapperforlistdir(func, args, kwds):
105 def wrapperforlistdir(func, args, kwds):
105 # Ensure the 'path' argument ends with os.sep, to avoid
106 # Ensure the 'path' argument ends with os.sep, to avoid
106 # misinterpreting the trailing 0x5c of an MBCS 2nd byte as a path separator.
107 # misinterpreting the trailing 0x5c of an MBCS 2nd byte as a path separator.
107 if args:
108 if args:
108 args = list(args)
109 args = list(args)
109 args[0] = appendsep(args[0])
110 args[0] = appendsep(args[0])
110 if 'path' in kwds:
111 if 'path' in kwds:
111 kwds['path'] = appendsep(kwds['path'])
112 kwds['path'] = appendsep(kwds['path'])
112 return func(*args, **kwds)
113 return func(*args, **kwds)
113
114
114 def wrapname(name, wrapper):
115 def wrapname(name, wrapper):
115 module, name = name.rsplit('.', 1)
116 module, name = name.rsplit('.', 1)
116 module = sys.modules[module]
117 module = sys.modules[module]
117 func = getattr(module, name)
118 func = getattr(module, name)
118 def f(*args, **kwds):
119 def f(*args, **kwds):
119 return wrapper(func, args, kwds)
120 return wrapper(func, args, kwds)
120 try:
121 try:
121 f.__name__ = func.__name__ # fail with python23
122 f.__name__ = func.__name__ # fail with python23
122 except Exception:
123 except Exception:
123 pass
124 pass
124 setattr(module, name, f)
125 setattr(module, name, f)
125
126
126 # List of functions to be wrapped.
127 # List of functions to be wrapped.
127 # NOTE: os.path.dirname() and os.path.basename() are safe because
128 # NOTE: os.path.dirname() and os.path.basename() are safe because
128 # they use result of os.path.split()
129 # they use result of os.path.split()
129 funcs = '''os.path.join os.path.split os.path.splitext
130 funcs = '''os.path.join os.path.split os.path.splitext
130 os.path.normpath os.makedirs
131 os.path.normpath os.makedirs
131 mercurial.util.endswithsep mercurial.util.splitpath mercurial.util.checkcase
132 mercurial.util.endswithsep mercurial.util.splitpath mercurial.util.checkcase
132 mercurial.util.fspath mercurial.util.pconvert mercurial.util.normpath
133 mercurial.util.fspath mercurial.util.pconvert mercurial.util.normpath
133 mercurial.util.checkwinfilename mercurial.util.checkosfilename'''
134 mercurial.util.checkwinfilename mercurial.util.checkosfilename'''
134
135
135 # List of Windows specific functions to be wrapped.
136 # List of Windows specific functions to be wrapped.
136 winfuncs = '''os.path.splitunc'''
137 winfuncs = '''os.path.splitunc'''
137
138
138 # codec and alias names of sjis and big5 to be faked.
139 # codec and alias names of sjis and big5 to be faked.
139 problematic_encodings = '''big5 big5-tw csbig5 big5hkscs big5-hkscs
140 problematic_encodings = '''big5 big5-tw csbig5 big5hkscs big5-hkscs
140 hkscs cp932 932 ms932 mskanji ms-kanji shift_jis csshiftjis shiftjis
141 hkscs cp932 932 ms932 mskanji ms-kanji shift_jis csshiftjis shiftjis
141 sjis s_jis shift_jis_2004 shiftjis2004 sjis_2004 sjis2004
142 sjis s_jis shift_jis_2004 shiftjis2004 sjis_2004 sjis2004
142 shift_jisx0213 shiftjisx0213 sjisx0213 s_jisx0213 950 cp950 ms950 '''
143 shift_jisx0213 shiftjisx0213 sjisx0213 s_jisx0213 950 cp950 ms950 '''
143
144
144 def extsetup(ui):
145 def extsetup(ui):
145 # TODO: decide use of config section for this extension
146 # TODO: decide use of config section for this extension
146 if ((not os.path.supports_unicode_filenames) and
147 if ((not os.path.supports_unicode_filenames) and
147 (sys.platform != 'cygwin')):
148 (sys.platform != 'cygwin')):
148 ui.warn(_("[win32mbcs] cannot activate on this platform.\n"))
149 ui.warn(_("[win32mbcs] cannot activate on this platform.\n"))
149 return
150 return
150 # determine encoding for filename
151 # determine encoding for filename
151 global _encoding
152 global _encoding
152 _encoding = ui.config('win32mbcs', 'encoding', encoding.encoding)
153 _encoding = ui.config('win32mbcs', 'encoding', encoding.encoding)
153 # only fake (wrap) the functions when the configured encoding is problematic.
154 # only fake (wrap) the functions when the configured encoding is problematic.
154 if _encoding.lower() in problematic_encodings.split():
155 if _encoding.lower() in problematic_encodings.split():
155 for f in funcs.split():
156 for f in funcs.split():
156 wrapname(f, wrapper)
157 wrapname(f, wrapper)
157 if os.name == 'nt':
158 if os.name == 'nt':
158 for f in winfuncs.split():
159 for f in winfuncs.split():
159 wrapname(f, wrapper)
160 wrapname(f, wrapper)
160 wrapname("mercurial.osutil.listdir", wrapperforlistdir)
161 wrapname("mercurial.osutil.listdir", wrapperforlistdir)
161 # Check sys.argv manually instead of using ui.debug() because
162 # Check sys.argv manually instead of using ui.debug() because
162 # command line options are not yet applied when
163 # command line options are not yet applied when
163 # extensions.loadall() is called.
164 # extensions.loadall() is called.
164 if '--debug' in sys.argv:
165 if '--debug' in sys.argv:
165 ui.write("[win32mbcs] activated with encoding: %s\n"
166 ui.write("[win32mbcs] activated with encoding: %s\n"
166 % _encoding)
167 % _encoding)
167
@@ -1,170 +1,172 b''
1 # win32text.py - LF <-> CRLF/CR translation utilities for Windows/Mac users
1 # win32text.py - LF <-> CRLF/CR translation utilities for Windows/Mac users
2 #
2 #
3 # Copyright 2005, 2007-2009 Matt Mackall <mpm@selenic.com> and others
3 # Copyright 2005, 2007-2009 Matt Mackall <mpm@selenic.com> and others
4 #
4 #
5 # This software may be used and distributed according to the terms of the
5 # This software may be used and distributed according to the terms of the
6 # GNU General Public License version 2 or any later version.
6 # GNU General Public License version 2 or any later version.
7
7
8 '''perform automatic newline conversion
8 '''perform automatic newline conversion
9
9
10 Deprecation: The win32text extension requires each user to configure
10 Deprecation: The win32text extension requires each user to configure
11 the extension again and again for each clone since the configuration
11 the extension again and again for each clone since the configuration
12 is not copied when cloning.
12 is not copied when cloning.
13
13
14 We have therefore made the ``eol`` extension as an alternative. The ``eol``
14 We have therefore made the ``eol`` extension as an alternative. The ``eol``
15 extension uses a version controlled file for its configuration and each clone
15 extension uses a version controlled file for its configuration and each clone
16 will therefore use the right settings from the start.
16 will therefore use the right settings from the start.
17
17
18 To perform automatic newline conversion, use::
18 To perform automatic newline conversion, use::
19
19
20 [extensions]
20 [extensions]
21 win32text =
21 win32text =
22 [encode]
22 [encode]
23 ** = cleverencode:
23 ** = cleverencode:
24 # or ** = macencode:
24 # or ** = macencode:
25
25
26 [decode]
26 [decode]
27 ** = cleverdecode:
27 ** = cleverdecode:
28 # or ** = macdecode:
28 # or ** = macdecode:
29
29
30 If you are not doing conversion, to make sure you do not commit CRLF/CR by accident, use::
30 If you are not doing conversion, to make sure you do not commit CRLF/CR by accident, use::
31
31
32 [hooks]
32 [hooks]
33 pretxncommit.crlf = python:hgext.win32text.forbidcrlf
33 pretxncommit.crlf = python:hgext.win32text.forbidcrlf
34 # or pretxncommit.cr = python:hgext.win32text.forbidcr
34 # or pretxncommit.cr = python:hgext.win32text.forbidcr
35
35
36 To do the same check on a server to prevent CRLF/CR from being
36 To do the same check on a server to prevent CRLF/CR from being
37 pushed or pulled::
37 pushed or pulled::
38
38
39 [hooks]
39 [hooks]
40 pretxnchangegroup.crlf = python:hgext.win32text.forbidcrlf
40 pretxnchangegroup.crlf = python:hgext.win32text.forbidcrlf
41 # or pretxnchangegroup.cr = python:hgext.win32text.forbidcr
41 # or pretxnchangegroup.cr = python:hgext.win32text.forbidcr
42 '''
42 '''
43
43
44 from mercurial.i18n import _
44 from mercurial.i18n import _
45 from mercurial.node import short
45 from mercurial.node import short
46 from mercurial import util
46 from mercurial import util
47 import re
47 import re
48
48
49 testedwith = 'internal'
50
49 # regexp for single LF without CR preceding.
51 # regexp for single LF without CR preceding.
50 re_single_lf = re.compile('(^|[^\r])\n', re.MULTILINE)
52 re_single_lf = re.compile('(^|[^\r])\n', re.MULTILINE)
51
53
52 newlinestr = {'\r\n': 'CRLF', '\r': 'CR'}
54 newlinestr = {'\r\n': 'CRLF', '\r': 'CR'}
53 filterstr = {'\r\n': 'clever', '\r': 'mac'}
55 filterstr = {'\r\n': 'clever', '\r': 'mac'}
54
56
55 def checknewline(s, newline, ui=None, repo=None, filename=None):
57 def checknewline(s, newline, ui=None, repo=None, filename=None):
56 # warn if already has 'newline' in repository.
58 # warn if already has 'newline' in repository.
57 # it might cause unexpected eol conversion.
59 # it might cause unexpected eol conversion.
58 # see issue 302:
60 # see issue 302:
59 # http://mercurial.selenic.com/bts/issue302
61 # http://mercurial.selenic.com/bts/issue302
60 if newline in s and ui and filename and repo:
62 if newline in s and ui and filename and repo:
61 ui.warn(_('WARNING: %s already has %s line endings\n'
63 ui.warn(_('WARNING: %s already has %s line endings\n'
62 'and does not need EOL conversion by the win32text plugin.\n'
64 'and does not need EOL conversion by the win32text plugin.\n'
63 'Before your next commit, please reconsider your '
65 'Before your next commit, please reconsider your '
64 'encode/decode settings in \nMercurial.ini or %s.\n') %
66 'encode/decode settings in \nMercurial.ini or %s.\n') %
65 (filename, newlinestr[newline], repo.join('hgrc')))
67 (filename, newlinestr[newline], repo.join('hgrc')))
66
68
67 def dumbdecode(s, cmd, **kwargs):
69 def dumbdecode(s, cmd, **kwargs):
68 checknewline(s, '\r\n', **kwargs)
70 checknewline(s, '\r\n', **kwargs)
69 # replace single LF to CRLF
71 # replace single LF to CRLF
70 return re_single_lf.sub('\\1\r\n', s)
72 return re_single_lf.sub('\\1\r\n', s)
71
73
72 def dumbencode(s, cmd):
74 def dumbencode(s, cmd):
73 return s.replace('\r\n', '\n')
75 return s.replace('\r\n', '\n')
74
76
75 def macdumbdecode(s, cmd, **kwargs):
77 def macdumbdecode(s, cmd, **kwargs):
76 checknewline(s, '\r', **kwargs)
78 checknewline(s, '\r', **kwargs)
77 return s.replace('\n', '\r')
79 return s.replace('\n', '\r')
78
80
79 def macdumbencode(s, cmd):
81 def macdumbencode(s, cmd):
80 return s.replace('\r', '\n')
82 return s.replace('\r', '\n')
81
83
82 def cleverdecode(s, cmd, **kwargs):
84 def cleverdecode(s, cmd, **kwargs):
83 if not util.binary(s):
85 if not util.binary(s):
84 return dumbdecode(s, cmd, **kwargs)
86 return dumbdecode(s, cmd, **kwargs)
85 return s
87 return s
86
88
87 def cleverencode(s, cmd):
89 def cleverencode(s, cmd):
88 if not util.binary(s):
90 if not util.binary(s):
89 return dumbencode(s, cmd)
91 return dumbencode(s, cmd)
90 return s
92 return s
91
93
92 def macdecode(s, cmd, **kwargs):
94 def macdecode(s, cmd, **kwargs):
93 if not util.binary(s):
95 if not util.binary(s):
94 return macdumbdecode(s, cmd, **kwargs)
96 return macdumbdecode(s, cmd, **kwargs)
95 return s
97 return s
96
98
97 def macencode(s, cmd):
99 def macencode(s, cmd):
98 if not util.binary(s):
100 if not util.binary(s):
99 return macdumbencode(s, cmd)
101 return macdumbencode(s, cmd)
100 return s
102 return s
101
103
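# Reviewer sketch (not part of the change): the effect of the filters above on
# a small, hypothetical sample (the second argument is the unused filter cmd):
#
#   sample = 'mixed\nline\r\nendings\n'
#   dumbdecode(sample, '')       == 'mixed\r\nline\r\nendings\r\n'   # lone LFs become CRLF
#   dumbencode(sample, '')       == 'mixed\nline\nendings\n'         # CRLFs become LF
#   cleverencode('\0binary', '') == '\0binary'                       # binary data is left alone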
102 _filters = {
104 _filters = {
103 'dumbdecode:': dumbdecode,
105 'dumbdecode:': dumbdecode,
104 'dumbencode:': dumbencode,
106 'dumbencode:': dumbencode,
105 'cleverdecode:': cleverdecode,
107 'cleverdecode:': cleverdecode,
106 'cleverencode:': cleverencode,
108 'cleverencode:': cleverencode,
107 'macdumbdecode:': macdumbdecode,
109 'macdumbdecode:': macdumbdecode,
108 'macdumbencode:': macdumbencode,
110 'macdumbencode:': macdumbencode,
109 'macdecode:': macdecode,
111 'macdecode:': macdecode,
110 'macencode:': macencode,
112 'macencode:': macencode,
111 }
113 }
112
114
113 def forbidnewline(ui, repo, hooktype, node, newline, **kwargs):
115 def forbidnewline(ui, repo, hooktype, node, newline, **kwargs):
114 halt = False
116 halt = False
115 seen = set()
117 seen = set()
116 # we try to walk changesets in reverse order from newest to
118 # we try to walk changesets in reverse order from newest to
117 # oldest, so that if we see a file multiple times, we take the
119 # oldest, so that if we see a file multiple times, we take the
118 # newest version as canonical. this prevents us from blocking a
120 # newest version as canonical. this prevents us from blocking a
119 # changegroup that contains an unacceptable commit followed later
121 # changegroup that contains an unacceptable commit followed later
120 # by a commit that fixes the problem.
122 # by a commit that fixes the problem.
121 tip = repo['tip']
123 tip = repo['tip']
122 for rev in xrange(len(repo)-1, repo[node].rev()-1, -1):
124 for rev in xrange(len(repo)-1, repo[node].rev()-1, -1):
123 c = repo[rev]
125 c = repo[rev]
124 for f in c.files():
126 for f in c.files():
125 if f in seen or f not in tip or f not in c:
127 if f in seen or f not in tip or f not in c:
126 continue
128 continue
127 seen.add(f)
129 seen.add(f)
128 data = c[f].data()
130 data = c[f].data()
129 if not util.binary(data) and newline in data:
131 if not util.binary(data) and newline in data:
130 if not halt:
132 if not halt:
131 ui.warn(_('Attempt to commit or push text file(s) '
133 ui.warn(_('Attempt to commit or push text file(s) '
132 'using %s line endings\n') %
134 'using %s line endings\n') %
133 newlinestr[newline])
135 newlinestr[newline])
134 ui.warn(_('in %s: %s\n') % (short(c.node()), f))
136 ui.warn(_('in %s: %s\n') % (short(c.node()), f))
135 halt = True
137 halt = True
136 if halt and hooktype == 'pretxnchangegroup':
138 if halt and hooktype == 'pretxnchangegroup':
137 crlf = newlinestr[newline].lower()
139 crlf = newlinestr[newline].lower()
138 filter = filterstr[newline]
140 filter = filterstr[newline]
139 ui.warn(_('\nTo prevent this mistake in your local repository,\n'
141 ui.warn(_('\nTo prevent this mistake in your local repository,\n'
140 'add to Mercurial.ini or .hg/hgrc:\n'
142 'add to Mercurial.ini or .hg/hgrc:\n'
141 '\n'
143 '\n'
142 '[hooks]\n'
144 '[hooks]\n'
143 'pretxncommit.%s = python:hgext.win32text.forbid%s\n'
145 'pretxncommit.%s = python:hgext.win32text.forbid%s\n'
144 '\n'
146 '\n'
145 'and also consider adding:\n'
147 'and also consider adding:\n'
146 '\n'
148 '\n'
147 '[extensions]\n'
149 '[extensions]\n'
148 'win32text =\n'
150 'win32text =\n'
149 '[encode]\n'
151 '[encode]\n'
150 '** = %sencode:\n'
152 '** = %sencode:\n'
151 '[decode]\n'
153 '[decode]\n'
152 '** = %sdecode:\n') % (crlf, crlf, filter, filter))
154 '** = %sdecode:\n') % (crlf, crlf, filter, filter))
153 return halt
155 return halt
154
156
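# Reviewer sketch (not part of the change): forbidnewline returns ``halt``
# because an in-process (python:) pretxn hook signals failure, and thereby
# aborts the transaction, by returning a true value. A hypothetical hook of
# the same shape, rejecting tabs instead of CR/CRLF:
#
#   def forbidtabs(ui, repo, hooktype, node, **kwargs):
#       bad = False
#       ctx = repo[node]
#       for f in ctx.files():
#           if f in ctx and '\t' in ctx[f].data():
#               ui.warn('tab character found in %s\n' % f)
#               bad = True
#       return bad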
155 def forbidcrlf(ui, repo, hooktype, node, **kwargs):
157 def forbidcrlf(ui, repo, hooktype, node, **kwargs):
156 return forbidnewline(ui, repo, hooktype, node, '\r\n', **kwargs)
158 return forbidnewline(ui, repo, hooktype, node, '\r\n', **kwargs)
157
159
158 def forbidcr(ui, repo, hooktype, node, **kwargs):
160 def forbidcr(ui, repo, hooktype, node, **kwargs):
159 return forbidnewline(ui, repo, hooktype, node, '\r', **kwargs)
161 return forbidnewline(ui, repo, hooktype, node, '\r', **kwargs)
160
162
161 def reposetup(ui, repo):
163 def reposetup(ui, repo):
162 if not repo.local():
164 if not repo.local():
163 return
165 return
164 for name, fn in _filters.iteritems():
166 for name, fn in _filters.iteritems():
165 repo.adddatafilter(name, fn)
167 repo.adddatafilter(name, fn)
166
168
167 def extsetup(ui):
169 def extsetup(ui):
168 if ui.configbool('win32text', 'warn', True):
170 if ui.configbool('win32text', 'warn', True):
169 ui.warn(_("win32text is deprecated: "
171 ui.warn(_("win32text is deprecated: "
170 "http://mercurial.selenic.com/wiki/Win32TextExtension\n"))
172 "http://mercurial.selenic.com/wiki/Win32TextExtension\n"))
@@ -1,186 +1,188 b''
1 # zeroconf.py - zeroconf support for Mercurial
1 # zeroconf.py - zeroconf support for Mercurial
2 #
2 #
3 # Copyright 2005-2007 Matt Mackall <mpm@selenic.com>
3 # Copyright 2005-2007 Matt Mackall <mpm@selenic.com>
4 #
4 #
5 # This software may be used and distributed according to the terms of the
5 # This software may be used and distributed according to the terms of the
6 # GNU General Public License version 2 or any later version.
6 # GNU General Public License version 2 or any later version.
7
7
8 '''discover and advertise repositories on the local network
8 '''discover and advertise repositories on the local network
9
9
10 Zeroconf-enabled repositories will be announced in a network without
10 Zeroconf-enabled repositories will be announced in a network without
11 the need to configure a server or a service. They can be discovered
11 the need to configure a server or a service. They can be discovered
12 without knowing their actual IP address.
12 without knowing their actual IP address.
13
13
14 To allow other people to discover your repository, run
14 To allow other people to discover your repository, run
15 :hg:`serve` in your repository::
15 :hg:`serve` in your repository::
16
16
17 $ cd test
17 $ cd test
18 $ hg serve
18 $ hg serve
19
19
20 You can discover Zeroconf-enabled repositories by running
20 You can discover Zeroconf-enabled repositories by running
21 :hg:`paths`::
21 :hg:`paths`::
22
22
23 $ hg paths
23 $ hg paths
24 zc-test = http://example.com:8000/test
24 zc-test = http://example.com:8000/test
25 '''
25 '''
26
26
27 import socket, time, os
27 import socket, time, os
28
28
29 import Zeroconf
29 import Zeroconf
30 from mercurial import ui, hg, encoding, util, dispatch
30 from mercurial import ui, hg, encoding, util, dispatch
31 from mercurial import extensions
31 from mercurial import extensions
32 from mercurial.hgweb import hgweb_mod
32 from mercurial.hgweb import hgweb_mod
33 from mercurial.hgweb import hgwebdir_mod
33 from mercurial.hgweb import hgwebdir_mod
34
34
35 testedwith = 'internal'
36
35 # publish
37 # publish
36
38
37 server = None
39 server = None
38 localip = None
40 localip = None
39
41
40 def getip():
42 def getip():
41 # finds external-facing interface without sending any packets (Linux)
43 # finds external-facing interface without sending any packets (Linux)
42 try:
44 try:
43 s = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
45 s = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
44 s.connect(('1.0.0.1', 0))
46 s.connect(('1.0.0.1', 0))
45 ip = s.getsockname()[0]
47 ip = s.getsockname()[0]
46 return ip
48 return ip
47 except socket.error:
49 except socket.error:
48 pass
50 pass
49
51
50 # Generic method, sometimes gives useless results
52 # Generic method, sometimes gives useless results
51 try:
53 try:
52 dumbip = socket.gethostbyaddr(socket.gethostname())[2][0]
54 dumbip = socket.gethostbyaddr(socket.gethostname())[2][0]
53 if not dumbip.startswith('127.') and ':' not in dumbip:
55 if not dumbip.startswith('127.') and ':' not in dumbip:
54 return dumbip
56 return dumbip
55 except (socket.gaierror, socket.herror):
57 except (socket.gaierror, socket.herror):
56 dumbip = '127.0.0.1'
58 dumbip = '127.0.0.1'
57
59
58 # works elsewhere, but actually sends a packet
60 # works elsewhere, but actually sends a packet
59 try:
61 try:
60 s = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
62 s = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
61 s.connect(('1.0.0.1', 1))
63 s.connect(('1.0.0.1', 1))
62 ip = s.getsockname()[0]
64 ip = s.getsockname()[0]
63 return ip
65 return ip
64 except socket.error:
66 except socket.error:
65 pass
67 pass
66
68
67 return dumbip
69 return dumbip
68
70
69 def publish(name, desc, path, port):
71 def publish(name, desc, path, port):
70 global server, localip
72 global server, localip
71 if not server:
73 if not server:
72 ip = getip()
74 ip = getip()
73 if ip.startswith('127.'):
75 if ip.startswith('127.'):
74 # if we have no internet connection, this can happen.
76 # if we have no internet connection, this can happen.
75 return
77 return
76 localip = socket.inet_aton(ip)
78 localip = socket.inet_aton(ip)
77 server = Zeroconf.Zeroconf(ip)
79 server = Zeroconf.Zeroconf(ip)
78
80
79 hostname = socket.gethostname().split('.')[0]
81 hostname = socket.gethostname().split('.')[0]
80 host = hostname + ".local"
82 host = hostname + ".local"
81 name = "%s-%s" % (hostname, name)
83 name = "%s-%s" % (hostname, name)
82
84
83 # advertise to browsers
85 # advertise to browsers
84 svc = Zeroconf.ServiceInfo('_http._tcp.local.',
86 svc = Zeroconf.ServiceInfo('_http._tcp.local.',
85 name + '._http._tcp.local.',
87 name + '._http._tcp.local.',
86 server = host,
88 server = host,
87 port = port,
89 port = port,
88 properties = {'description': desc,
90 properties = {'description': desc,
89 'path': "/" + path},
91 'path': "/" + path},
90 address = localip, weight = 0, priority = 0)
92 address = localip, weight = 0, priority = 0)
91 server.registerService(svc)
93 server.registerService(svc)
92
94
93 # advertise to Mercurial clients
95 # advertise to Mercurial clients
94 svc = Zeroconf.ServiceInfo('_hg._tcp.local.',
96 svc = Zeroconf.ServiceInfo('_hg._tcp.local.',
95 name + '._hg._tcp.local.',
97 name + '._hg._tcp.local.',
96 server = host,
98 server = host,
97 port = port,
99 port = port,
98 properties = {'description': desc,
100 properties = {'description': desc,
99 'path': "/" + path},
101 'path': "/" + path},
100 address = localip, weight = 0, priority = 0)
102 address = localip, weight = 0, priority = 0)
101 server.registerService(svc)
103 server.registerService(svc)
102
104
103 class hgwebzc(hgweb_mod.hgweb):
105 class hgwebzc(hgweb_mod.hgweb):
104 def __init__(self, repo, name=None, baseui=None):
106 def __init__(self, repo, name=None, baseui=None):
105 super(hgwebzc, self).__init__(repo, name=name, baseui=baseui)
107 super(hgwebzc, self).__init__(repo, name=name, baseui=baseui)
106 name = self.reponame or os.path.basename(self.repo.root)
108 name = self.reponame or os.path.basename(self.repo.root)
107 path = self.repo.ui.config("web", "prefix", "").strip('/')
109 path = self.repo.ui.config("web", "prefix", "").strip('/')
108 desc = self.repo.ui.config("web", "description", name)
110 desc = self.repo.ui.config("web", "description", name)
109 publish(name, desc, path,
111 publish(name, desc, path,
110 util.getport(self.repo.ui.config("web", "port", 8000)))
112 util.getport(self.repo.ui.config("web", "port", 8000)))
111
113
112 class hgwebdirzc(hgwebdir_mod.hgwebdir):
114 class hgwebdirzc(hgwebdir_mod.hgwebdir):
113 def __init__(self, conf, baseui=None):
115 def __init__(self, conf, baseui=None):
114 super(hgwebdirzc, self).__init__(conf, baseui=baseui)
116 super(hgwebdirzc, self).__init__(conf, baseui=baseui)
115 prefix = self.ui.config("web", "prefix", "").strip('/') + '/'
117 prefix = self.ui.config("web", "prefix", "").strip('/') + '/'
116 for repo, path in self.repos:
118 for repo, path in self.repos:
117 u = self.ui.copy()
119 u = self.ui.copy()
118 u.readconfig(os.path.join(path, '.hg', 'hgrc'))
120 u.readconfig(os.path.join(path, '.hg', 'hgrc'))
119 name = os.path.basename(repo)
121 name = os.path.basename(repo)
120 path = (prefix + repo).strip('/')
122 path = (prefix + repo).strip('/')
121 desc = u.config('web', 'description', name)
123 desc = u.config('web', 'description', name)
122 publish(name, desc, path,
124 publish(name, desc, path,
123 util.getport(u.config("web", "port", 8000)))
125 util.getport(u.config("web", "port", 8000)))
124
126
125 # listen
127 # listen
126
128
127 class listener(object):
129 class listener(object):
128 def __init__(self):
130 def __init__(self):
129 self.found = {}
131 self.found = {}
130 def removeService(self, server, type, name):
132 def removeService(self, server, type, name):
131 if repr(name) in self.found:
133 if repr(name) in self.found:
132 del self.found[repr(name)]
134 del self.found[repr(name)]
133 def addService(self, server, type, name):
135 def addService(self, server, type, name):
134 self.found[repr(name)] = server.getServiceInfo(type, name)
136 self.found[repr(name)] = server.getServiceInfo(type, name)
135
137
136 def getzcpaths():
138 def getzcpaths():
137 ip = getip()
139 ip = getip()
138 if ip.startswith('127.'):
140 if ip.startswith('127.'):
139 return
141 return
140 server = Zeroconf.Zeroconf(ip)
142 server = Zeroconf.Zeroconf(ip)
141 l = listener()
143 l = listener()
142 Zeroconf.ServiceBrowser(server, "_hg._tcp.local.", l)
144 Zeroconf.ServiceBrowser(server, "_hg._tcp.local.", l)
143 time.sleep(1)
145 time.sleep(1)
144 server.close()
146 server.close()
145 for value in l.found.values():
147 for value in l.found.values():
146 name = value.name[:value.name.index('.')]
148 name = value.name[:value.name.index('.')]
147 url = "http://%s:%s%s" % (socket.inet_ntoa(value.address), value.port,
149 url = "http://%s:%s%s" % (socket.inet_ntoa(value.address), value.port,
148 value.properties.get("path", "/"))
150 value.properties.get("path", "/"))
149 yield "zc-" + name, url
151 yield "zc-" + name, url
150
152
151 def config(orig, self, section, key, default=None, untrusted=False):
153 def config(orig, self, section, key, default=None, untrusted=False):
152 if section == "paths" and key.startswith("zc-"):
154 if section == "paths" and key.startswith("zc-"):
153 for name, path in getzcpaths():
155 for name, path in getzcpaths():
154 if name == key:
156 if name == key:
155 return path
157 return path
156 return orig(self, section, key, default, untrusted)
158 return orig(self, section, key, default, untrusted)
157
159
158 def configitems(orig, self, section, untrusted=False):
160 def configitems(orig, self, section, untrusted=False):
159 repos = orig(self, section, untrusted)
161 repos = orig(self, section, untrusted)
160 if section == "paths":
162 if section == "paths":
161 repos += getzcpaths()
163 repos += getzcpaths()
162 return repos
164 return repos
163
165
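# Reviewer note (not part of the change): because config() and configitems()
# above resolve "zc-" names in the [paths] section on the fly, a discovered
# repository can be used like any configured path, e.g. (output is a sample)::
#
#   $ hg paths
#   zc-test = http://192.168.1.5:8000/test
#   $ hg pull zc-test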
164 def defaultdest(orig, source):
166 def defaultdest(orig, source):
165 for name, path in getzcpaths():
167 for name, path in getzcpaths():
166 if path == source:
168 if path == source:
167 return name.encode(encoding.encoding)
169 return name.encode(encoding.encoding)
168 return orig(source)
170 return orig(source)
169
171
170 def cleanupafterdispatch(orig, ui, options, cmd, cmdfunc):
172 def cleanupafterdispatch(orig, ui, options, cmd, cmdfunc):
171 try:
173 try:
172 return orig(ui, options, cmd, cmdfunc)
174 return orig(ui, options, cmd, cmdfunc)
173 finally:
175 finally:
174 # we need to call close() on the server to notify() the various
176 # we need to call close() on the server to notify() the various
175 # threading Conditions and allow the background threads to exit
177 # threading Conditions and allow the background threads to exit
176 global server
178 global server
177 if server:
179 if server:
178 server.close()
180 server.close()
179
181
180 extensions.wrapfunction(dispatch, '_runcommand', cleanupafterdispatch)
182 extensions.wrapfunction(dispatch, '_runcommand', cleanupafterdispatch)
181
183
182 extensions.wrapfunction(ui.ui, 'config', config)
184 extensions.wrapfunction(ui.ui, 'config', config)
183 extensions.wrapfunction(ui.ui, 'configitems', configitems)
185 extensions.wrapfunction(ui.ui, 'configitems', configitems)
184 extensions.wrapfunction(hg, 'defaultdest', defaultdest)
186 extensions.wrapfunction(hg, 'defaultdest', defaultdest)
185 hgweb_mod.hgweb = hgwebzc
187 hgweb_mod.hgweb = hgwebzc
186 hgwebdir_mod.hgwebdir = hgwebdirzc
188 hgwebdir_mod.hgwebdir = hgwebdirzc