templater: remove noop calls of parsestring(s, quoted=False) (API)...
Yuya Nishihara
r24987:fd7287f0 default
@@ -1,912 +1,910 @@
# bugzilla.py - bugzilla integration for mercurial
#
# Copyright 2006 Vadim Gelfer <vadim.gelfer@gmail.com>
# Copyright 2011-4 Jim Hague <jim.hague@acm.org>
#
# This software may be used and distributed according to the terms of the
# GNU General Public License version 2 or any later version.

'''hooks for integrating with the Bugzilla bug tracker

This hook extension adds comments on bugs in Bugzilla when changesets
that refer to bugs by Bugzilla ID are seen. The comment is formatted using
the Mercurial template mechanism.

The bug references can optionally include an update for Bugzilla of the
hours spent working on the bug. Bugs can also be marked fixed.

Three basic modes of access to Bugzilla are provided:

1. Access via the Bugzilla XMLRPC interface. Requires Bugzilla 3.4 or later.

2. Check data via the Bugzilla XMLRPC interface and submit bug change
   via email to Bugzilla email interface. Requires Bugzilla 3.4 or later.

3. Writing directly to the Bugzilla database. Only Bugzilla installations
   using MySQL are supported. Requires Python MySQLdb.

Writing directly to the database is susceptible to schema changes, and
relies on a Bugzilla contrib script to send out bug change
notification emails. This script runs as the user running Mercurial,
must be run on the host with the Bugzilla install, and requires
permission to read Bugzilla configuration details and the necessary
MySQL user and password to have full access rights to the Bugzilla
database. For these reasons this access mode is now considered
deprecated, and will not be updated for new Bugzilla versions going
forward. Only adding comments is supported in this access mode.

Access via XMLRPC needs a Bugzilla username and password to be specified
in the configuration. Comments are added under that username. Since the
configuration must be readable by all Mercurial users, it is recommended
that the rights of that user are restricted in Bugzilla to the minimum
necessary to add comments. Marking bugs fixed requires Bugzilla 4.0 or later.

Access via XMLRPC/email uses XMLRPC to query Bugzilla, but sends
email to the Bugzilla email interface to submit comments to bugs.
The From: address in the email is set to the email address of the Mercurial
user, so the comment appears to come from the Mercurial user. In the event
that the Mercurial user email is not recognized by Bugzilla as a Bugzilla
user, the email associated with the Bugzilla username used to log into
Bugzilla is used instead as the source of the comment. Marking bugs fixed
works on all supported Bugzilla versions.

Configuration items common to all access modes:

bugzilla.version
  The access type to use. Values recognized are:

  :``xmlrpc``: Bugzilla XMLRPC interface.
  :``xmlrpc+email``: Bugzilla XMLRPC and email interfaces.
  :``3.0``: MySQL access, Bugzilla 3.0 and later.
  :``2.18``: MySQL access, Bugzilla 2.18 and up to but not
    including 3.0.
  :``2.16``: MySQL access, Bugzilla 2.16 and up to but not
    including 2.18.

bugzilla.regexp
  Regular expression to match bug IDs for update in changeset commit message.
  It must contain one "()" named group ``<ids>`` containing the bug
  IDs separated by non-digit characters. It may also contain
  a named group ``<hours>`` with a floating-point number giving the
  hours worked on the bug. If no named groups are present, the first
  "()" group is assumed to contain the bug IDs, and work time is not
  updated. The default expression matches ``Bug 1234``, ``Bug no. 1234``,
  ``Bug number 1234``, ``Bugs 1234,5678``, ``Bug 1234 and 5678`` and
  variations thereof, followed by an hours number prefixed by ``h`` or
  ``hours``, e.g. ``hours 1.5``. Matching is case insensitive.
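The named-group contract described above can be sketched in Python. The pattern below is a deliberately simplified illustration of the ``<ids>``/``<hours>`` convention, not the extension's actual default expression:

```python
import re

# Simplified bugzilla.regexp-style pattern: a named group <ids> holding
# bug numbers separated by non-digit characters, and an optional named
# group <hours> with a floating-point work-time figure.
pattern = re.compile(
    r'bugs?\s*(?P<ids>(?:\d+[\s,and]*)+)'
    r'(?:\s*hours?\s*(?P<hours>\d*\.?\d+))?',
    re.IGNORECASE)

m = pattern.search('Fix frobnicator. Bug 1234 and 5678 hours 1.5')
ids = re.split(r'\D+', m.group('ids').strip())  # split ids on non-digits
hours = float(m.group('hours'))
```

Splitting the ``<ids>`` group on non-digit runs is exactly how multiple bug references such as ``Bugs 1234,5678`` collapse to individual IDs.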

bugzilla.fixregexp
  Regular expression to match bug IDs for marking fixed in changeset
  commit message. This must contain a "()" named group ``<ids>`` containing
  the bug IDs separated by non-digit characters. It may also contain
  a named group ``<hours>`` with a floating-point number giving the
  hours worked on the bug. If no named groups are present, the first
  "()" group is assumed to contain the bug IDs, and work time is not
  updated. The default expression matches ``Fixes 1234``, ``Fixes bug 1234``,
  ``Fixes bugs 1234,5678``, ``Fixes 1234 and 5678`` and
  variations thereof, followed by an hours number prefixed by ``h`` or
  ``hours``, e.g. ``hours 1.5``. Matching is case insensitive.

bugzilla.fixstatus
  The status to set a bug to when marking fixed. Default ``RESOLVED``.

bugzilla.fixresolution
  The resolution to set a bug to when marking fixed. Default ``FIXED``.

bugzilla.style
  The style file to use when formatting comments.

bugzilla.template
  Template to use when formatting comments. Overrides style if
  specified. In addition to the usual Mercurial keywords, the
  extension specifies:

  :``{bug}``: The Bugzilla bug ID.
  :``{root}``: The full pathname of the Mercurial repository.
  :``{webroot}``: Stripped pathname of the Mercurial repository.
  :``{hgweb}``: Base URL for browsing Mercurial repositories.

  Default ``changeset {node|short} in repo {root} refers to bug
  {bug}.\\ndetails:\\n\\t{desc|tabindent}``

bugzilla.strip
  The number of path separator characters to strip from the front of
  the Mercurial repository path (``{root}`` in templates) to produce
  ``{webroot}``. For example, a repository with ``{root}``
  ``/var/local/my-project`` with a strip of 2 gives a value for
  ``{webroot}`` of ``my-project``. Default 0.
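The strip computation just described can be sketched with a small hypothetical helper (this is an illustration of the documented behavior, not the extension's own code):

```python
def webroot_of(root, strip):
    # Drop the first `strip` path components, i.e. everything up to and
    # including the strip-th path separator, as bugzilla.strip describes.
    parts = root.split('/')
    return '/'.join(parts[strip + 1:]) if strip else root

# '/var/local/my-project' with strip=2 -> 'my-project'
```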

web.baseurl
  Base URL for browsing Mercurial repositories. Referenced from
  templates as ``{hgweb}``.

Configuration items common to XMLRPC+email and MySQL access modes:

bugzilla.usermap
  Path of file containing Mercurial committer email to Bugzilla user email
  mappings. If specified, the file should contain one mapping per
  line::

    committer = Bugzilla user

  See also the ``[usermap]`` section.

The ``[usermap]`` section is used to specify mappings of Mercurial
committer email to Bugzilla user email. See also ``bugzilla.usermap``.
Contains entries of the form ``committer = Bugzilla user``.

XMLRPC access mode configuration:

bugzilla.bzurl
  The base URL for the Bugzilla installation.
  Default ``http://localhost/bugzilla``.

bugzilla.user
  The username to use to log into Bugzilla via XMLRPC. Default
  ``bugs``.

bugzilla.password
  The password for Bugzilla login.

XMLRPC+email access mode uses the XMLRPC access mode configuration items,
and also:

bugzilla.bzemail
  The Bugzilla email address.

In addition, the Mercurial email settings must be configured. See the
documentation in hgrc(5), sections ``[email]`` and ``[smtp]``.

MySQL access mode configuration:

bugzilla.host
  Hostname of the MySQL server holding the Bugzilla database.
  Default ``localhost``.

bugzilla.db
  Name of the Bugzilla database in MySQL. Default ``bugs``.

bugzilla.user
  Username to use to access MySQL server. Default ``bugs``.

bugzilla.password
  Password to use to access MySQL server.

bugzilla.timeout
  Database connection timeout (seconds). Default 5.

bugzilla.bzuser
  Fallback Bugzilla user name to record comments with, if changeset
  committer cannot be found as a Bugzilla user.

bugzilla.bzdir
  Bugzilla install directory. Used by default notify. Default
  ``/var/www/html/bugzilla``.

bugzilla.notify
  The command to run to get Bugzilla to send bug change notification
  emails. Substitutes from a map with 3 keys, ``bzdir``, ``id`` (bug
  id) and ``user`` (committer bugzilla email). Default depends on
  version; from 2.18 it is "cd %(bzdir)s && perl -T
  contrib/sendbugmail.pl %(id)s %(user)s".
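The ``%(key)s`` substitution described above is ordinary Python mapping interpolation. A quick illustration with hypothetical values:

```python
# Assemble a notify command from the three-key map described above
# (bzdir/id/user values here are made up for illustration).
cmdfmt = "cd %(bzdir)s && perl -T contrib/sendbugmail.pl %(id)s %(user)s"
cmd = cmdfmt % {'bzdir': '/var/www/html/bugzilla',
                'id': 1234,
                'user': 'committer@example.com'}
# cmd: "cd /var/www/html/bugzilla && perl -T contrib/sendbugmail.pl 1234 committer@example.com"
```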

Activating the extension::

    [extensions]
    bugzilla =

    [hooks]
    # run bugzilla hook on every change pulled or pushed in here
    incoming.bugzilla = python:hgext.bugzilla.hook

Example configurations:

XMLRPC example configuration. This uses the Bugzilla at
``http://my-project.org/bugzilla``, logging in as user
``bugmail@my-project.org`` with password ``plugh``. It is used with a
collection of Mercurial repositories in ``/var/local/hg/repos/``,
with a web interface at ``http://my-project.org/hg``. ::

    [bugzilla]
    bzurl=http://my-project.org/bugzilla
    user=bugmail@my-project.org
    password=plugh
    version=xmlrpc
    template=Changeset {node|short} in {root|basename}.
             {hgweb}/{webroot}/rev/{node|short}\\n
             {desc}\\n
    strip=5

    [web]
    baseurl=http://my-project.org/hg

XMLRPC+email example configuration. This uses the Bugzilla at
``http://my-project.org/bugzilla``, logging in as user
``bugmail@my-project.org`` with password ``plugh``. It is used with a
collection of Mercurial repositories in ``/var/local/hg/repos/``,
with a web interface at ``http://my-project.org/hg``. Bug comments
are sent to the Bugzilla email address
``bugzilla@my-project.org``. ::

    [bugzilla]
    bzurl=http://my-project.org/bugzilla
    user=bugmail@my-project.org
    password=plugh
    version=xmlrpc+email
    bzemail=bugzilla@my-project.org
    template=Changeset {node|short} in {root|basename}.
             {hgweb}/{webroot}/rev/{node|short}\\n
             {desc}\\n
    strip=5

    [web]
    baseurl=http://my-project.org/hg

    [usermap]
    user@emaildomain.com=user.name@bugzilladomain.com

MySQL example configuration. This has a local Bugzilla 3.2 installation
in ``/opt/bugzilla-3.2``. The MySQL database is on ``localhost``,
the Bugzilla database name is ``bugs`` and MySQL is
accessed with MySQL username ``bugs`` password ``XYZZY``. It is used
with a collection of Mercurial repositories in ``/var/local/hg/repos/``,
with a web interface at ``http://my-project.org/hg``. ::

    [bugzilla]
    host=localhost
    password=XYZZY
    version=3.0
    bzuser=unknown@domain.com
    bzdir=/opt/bugzilla-3.2
    template=Changeset {node|short} in {root|basename}.
             {hgweb}/{webroot}/rev/{node|short}\\n
             {desc}\\n
    strip=5

    [web]
    baseurl=http://my-project.org/hg

    [usermap]
    user@emaildomain.com=user.name@bugzilladomain.com

All the above add a comment to the Bugzilla bug record of the form::

    Changeset 3b16791d6642 in repository-name.
    http://my-project.org/hg/repository-name/rev/3b16791d6642

    Changeset commit comment. Bug 1234.
'''

from mercurial.i18n import _
from mercurial.node import short
-from mercurial import cmdutil, mail, templater, util
+from mercurial import cmdutil, mail, util
import re, time, urlparse, xmlrpclib

testedwith = 'internal'

class bzaccess(object):
    '''Base class for access to Bugzilla.'''

    def __init__(self, ui):
        self.ui = ui
        usermap = self.ui.config('bugzilla', 'usermap')
        if usermap:
            self.ui.readconfig(usermap, sections=['usermap'])

    def map_committer(self, user):
        '''map name of committer to Bugzilla user name.'''
        for committer, bzuser in self.ui.configitems('usermap'):
            if committer.lower() == user.lower():
                return bzuser
        return user

    # Methods to be implemented by access classes.
    #
    # 'bugs' is a dict keyed on bug id, where values are a dict holding
    # updates to bug state. Recognized dict keys are:
    #
    # 'hours': Value, float containing work hours to be updated.
    # 'fix':   If key present, bug is to be marked fixed. Value ignored.

    def filter_real_bug_ids(self, bugs):
        '''remove bug IDs that do not exist in Bugzilla from bugs.'''
        pass

    def filter_cset_known_bug_ids(self, node, bugs):
        '''remove bug IDs where node occurs in comment text from bugs.'''
        pass

    def updatebug(self, bugid, newstate, text, committer):
        '''update the specified bug. Add comment text and set new states.

        If possible add the comment as being from the committer of
        the changeset. Otherwise use the default Bugzilla user.
        '''
        pass

    def notify(self, bugs, committer):
        '''Force sending of Bugzilla notification emails.

        Only required if the access method does not trigger notification
        emails automatically.
        '''
        pass

# Bugzilla via direct access to MySQL database.
class bzmysql(bzaccess):
    '''Support for direct MySQL access to Bugzilla.

    The earliest Bugzilla version this is tested with is version 2.16.

    If your Bugzilla is version 3.4 or above, you are strongly
    recommended to use the XMLRPC access method instead.
    '''

    @staticmethod
    def sql_buglist(ids):
        '''return SQL-friendly list of bug ids'''
        return '(' + ','.join(map(str, ids)) + ')'
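As a standalone sketch of what the static helper above produces, here is the same one-liner with a sample result (copied out of the class purely for illustration):

```python
def sql_buglist(ids):
    # Join the bug ids into a parenthesized, comma-separated list
    # suitable as the operand of a SQL "IN" clause.
    return '(' + ','.join(map(str, ids)) + ')'

# sql_buglist([1234, 5678]) -> '(1234,5678)'
```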

    _MySQLdb = None

    def __init__(self, ui):
        try:
            import MySQLdb as mysql
            bzmysql._MySQLdb = mysql
        except ImportError, err:
            raise util.Abort(_('python mysql support not available: %s') % err)

        bzaccess.__init__(self, ui)

        host = self.ui.config('bugzilla', 'host', 'localhost')
        user = self.ui.config('bugzilla', 'user', 'bugs')
        passwd = self.ui.config('bugzilla', 'password')
        db = self.ui.config('bugzilla', 'db', 'bugs')
        timeout = int(self.ui.config('bugzilla', 'timeout', 5))
        self.ui.note(_('connecting to %s:%s as %s, password %s\n') %
                     (host, db, user, '*' * len(passwd)))
        self.conn = bzmysql._MySQLdb.connect(host=host,
                                             user=user, passwd=passwd,
                                             db=db,
                                             connect_timeout=timeout)
        self.cursor = self.conn.cursor()
        self.longdesc_id = self.get_longdesc_id()
        self.user_ids = {}
        self.default_notify = "cd %(bzdir)s && ./processmail %(id)s %(user)s"

    def run(self, *args, **kwargs):
        '''run a query.'''
        self.ui.note(_('query: %s %s\n') % (args, kwargs))
        try:
            self.cursor.execute(*args, **kwargs)
        except bzmysql._MySQLdb.MySQLError:
            self.ui.note(_('failed query: %s %s\n') % (args, kwargs))
            raise

    def get_longdesc_id(self):
        '''get identity of longdesc field'''
        self.run('select fieldid from fielddefs where name = "longdesc"')
        ids = self.cursor.fetchall()
        if len(ids) != 1:
            raise util.Abort(_('unknown database schema'))
        return ids[0][0]

    def filter_real_bug_ids(self, bugs):
        '''filter not-existing bugs from set.'''
        self.run('select bug_id from bugs where bug_id in %s' %
                 bzmysql.sql_buglist(bugs.keys()))
        existing = [id for (id,) in self.cursor.fetchall()]
        for id in bugs.keys():
            if id not in existing:
                self.ui.status(_('bug %d does not exist\n') % id)
                del bugs[id]

    def filter_cset_known_bug_ids(self, node, bugs):
        '''filter bug ids that already refer to this changeset from set.'''
        self.run('''select bug_id from longdescs where
                    bug_id in %s and thetext like "%%%s%%"''' %
                 (bzmysql.sql_buglist(bugs.keys()), short(node)))
        for (id,) in self.cursor.fetchall():
            self.ui.status(_('bug %d already knows about changeset %s\n') %
                           (id, short(node)))
            del bugs[id]

    def notify(self, bugs, committer):
        '''tell bugzilla to send mail.'''
        self.ui.status(_('telling bugzilla to send mail:\n'))
        (user, userid) = self.get_bugzilla_user(committer)
        for id in bugs.keys():
            self.ui.status(_('  bug %s\n') % id)
            cmdfmt = self.ui.config('bugzilla', 'notify', self.default_notify)
            bzdir = self.ui.config('bugzilla', 'bzdir',
                                   '/var/www/html/bugzilla')
            try:
                # Backwards-compatible with old notify string, which
                # took one string. This will throw with a new format
                # string.
                cmd = cmdfmt % id
            except TypeError:
                cmd = cmdfmt % {'bzdir': bzdir, 'id': id, 'user': user}
            self.ui.note(_('running notify command %s\n') % cmd)
            fp = util.popen('(%s) 2>&1' % cmd)
            out = fp.read()
            ret = fp.close()
            if ret:
                self.ui.warn(out)
                raise util.Abort(_('bugzilla notify command %s') %
                                 util.explainexit(ret)[0])
        self.ui.status(_('done\n'))
440 def get_user_id(self, user):
440 def get_user_id(self, user):
441 '''look up numeric bugzilla user id.'''
441 '''look up numeric bugzilla user id.'''
442 try:
442 try:
443 return self.user_ids[user]
443 return self.user_ids[user]
444 except KeyError:
444 except KeyError:
445 try:
445 try:
446 userid = int(user)
446 userid = int(user)
447 except ValueError:
447 except ValueError:
448 self.ui.note(_('looking up user %s\n') % user)
448 self.ui.note(_('looking up user %s\n') % user)
449 self.run('''select userid from profiles
449 self.run('''select userid from profiles
450 where login_name like %s''', user)
450 where login_name like %s''', user)
451 all = self.cursor.fetchall()
451 all = self.cursor.fetchall()
452 if len(all) != 1:
452 if len(all) != 1:
453 raise KeyError(user)
453 raise KeyError(user)
454 userid = int(all[0][0])
454 userid = int(all[0][0])
455 self.user_ids[user] = userid
455 self.user_ids[user] = userid
456 return userid
456 return userid
457
457
458 def get_bugzilla_user(self, committer):
458 def get_bugzilla_user(self, committer):
459 '''See if committer is a registered bugzilla user. Return
459 '''See if committer is a registered bugzilla user. Return
460 bugzilla username and userid if so. If not, return default
460 bugzilla username and userid if so. If not, return default
461 bugzilla username and userid.'''
461 bugzilla username and userid.'''
462 user = self.map_committer(committer)
462 user = self.map_committer(committer)
463 try:
463 try:
464 userid = self.get_user_id(user)
464 userid = self.get_user_id(user)
465 except KeyError:
465 except KeyError:
466 try:
466 try:
467 defaultuser = self.ui.config('bugzilla', 'bzuser')
467 defaultuser = self.ui.config('bugzilla', 'bzuser')
468 if not defaultuser:
468 if not defaultuser:
469 raise util.Abort(_('cannot find bugzilla user id for %s') %
469 raise util.Abort(_('cannot find bugzilla user id for %s') %
470 user)
470 user)
471 userid = self.get_user_id(defaultuser)
471 userid = self.get_user_id(defaultuser)
472 user = defaultuser
472 user = defaultuser
473 except KeyError:
473 except KeyError:
474 raise util.Abort(_('cannot find bugzilla user id for %s or %s')
474 raise util.Abort(_('cannot find bugzilla user id for %s or %s')
475 % (user, defaultuser))
475 % (user, defaultuser))
476 return (user, userid)
476 return (user, userid)
477
477
478 def updatebug(self, bugid, newstate, text, committer):
478 def updatebug(self, bugid, newstate, text, committer):
479 '''update bug state with comment text.
479 '''update bug state with comment text.
480
480
481 Try adding comment as committer of changeset, otherwise as
481 Try adding comment as committer of changeset, otherwise as
482 default bugzilla user.'''
482 default bugzilla user.'''
483 if len(newstate) > 0:
483 if len(newstate) > 0:
484 self.ui.warn(_("Bugzilla/MySQL cannot update bug state\n"))
484 self.ui.warn(_("Bugzilla/MySQL cannot update bug state\n"))
485
485
486 (user, userid) = self.get_bugzilla_user(committer)
486 (user, userid) = self.get_bugzilla_user(committer)
487 now = time.strftime('%Y-%m-%d %H:%M:%S')
487 now = time.strftime('%Y-%m-%d %H:%M:%S')
488 self.run('''insert into longdescs
488 self.run('''insert into longdescs
489 (bug_id, who, bug_when, thetext)
489 (bug_id, who, bug_when, thetext)
490 values (%s, %s, %s, %s)''',
490 values (%s, %s, %s, %s)''',
491 (bugid, userid, now, text))
491 (bugid, userid, now, text))
492 self.run('''insert into bugs_activity (bug_id, who, bug_when, fieldid)
492 self.run('''insert into bugs_activity (bug_id, who, bug_when, fieldid)
493 values (%s, %s, %s, %s)''',
493 values (%s, %s, %s, %s)''',
494 (bugid, userid, now, self.longdesc_id))
494 (bugid, userid, now, self.longdesc_id))
495 self.conn.commit()
495 self.conn.commit()
496
496
497 class bzmysql_2_18(bzmysql):
497 class bzmysql_2_18(bzmysql):
498 '''support for bugzilla 2.18 series.'''
498 '''support for bugzilla 2.18 series.'''
499
499
500 def __init__(self, ui):
500 def __init__(self, ui):
501 bzmysql.__init__(self, ui)
501 bzmysql.__init__(self, ui)
502 self.default_notify = \
502 self.default_notify = \
503 "cd %(bzdir)s && perl -T contrib/sendbugmail.pl %(id)s %(user)s"
503 "cd %(bzdir)s && perl -T contrib/sendbugmail.pl %(id)s %(user)s"
504
504
505 class bzmysql_3_0(bzmysql_2_18):
505 class bzmysql_3_0(bzmysql_2_18):
506 '''support for bugzilla 3.0 series.'''
506 '''support for bugzilla 3.0 series.'''
507
507
508 def __init__(self, ui):
508 def __init__(self, ui):
509 bzmysql_2_18.__init__(self, ui)
509 bzmysql_2_18.__init__(self, ui)
510
510
511 def get_longdesc_id(self):
511 def get_longdesc_id(self):
512 '''get identity of longdesc field'''
512 '''get identity of longdesc field'''
513 self.run('select id from fielddefs where name = "longdesc"')
513 self.run('select id from fielddefs where name = "longdesc"')
514 ids = self.cursor.fetchall()
514 ids = self.cursor.fetchall()
515 if len(ids) != 1:
515 if len(ids) != 1:
516 raise util.Abort(_('unknown database schema'))
516 raise util.Abort(_('unknown database schema'))
517 return ids[0][0]
517 return ids[0][0]
518
518
# Bugzilla via XMLRPC interface.

class cookietransportrequest(object):
    """A Transport request method that retains cookies over its lifetime.

    The regular xmlrpclib transports ignore cookies. Which causes
    a bit of a problem when you need a cookie-based login, as with
    the Bugzilla XMLRPC interface prior to 4.4.3.

    So this is a helper for defining a Transport which looks for
    cookies being set in responses and saves them to add to all future
    requests.
    """

    # Inspiration drawn from
    # http://blog.godson.in/2010/09/how-to-make-python-xmlrpclib-client.html
    # http://www.itkovian.net/base/transport-class-for-pythons-xml-rpc-lib/

    cookies = []
    def send_cookies(self, connection):
        if self.cookies:
            for cookie in self.cookies:
                connection.putheader("Cookie", cookie)

    def request(self, host, handler, request_body, verbose=0):
        self.verbose = verbose
        self.accept_gzip_encoding = False

        # issue XML-RPC request
        h = self.make_connection(host)
        if verbose:
            h.set_debuglevel(1)

        self.send_request(h, handler, request_body)
        self.send_host(h, host)
        self.send_cookies(h)
        self.send_user_agent(h)
        self.send_content(h, request_body)

        # Deal with differences between Python 2.4-2.6 and 2.7.
        # In the former h is a HTTP(S). In the latter it's a
        # HTTP(S)Connection. Luckily, the 2.4-2.6 implementation of
        # HTTP(S) has an underlying HTTP(S)Connection, so extract
        # that and use it.
        try:
            response = h.getresponse()
        except AttributeError:
            response = h._conn.getresponse()

        # Add any cookie definitions to our list.
        for header in response.msg.getallmatchingheaders("Set-Cookie"):
            val = header.split(": ", 1)[1]
            cookie = val.split(";", 1)[0]
            self.cookies.append(cookie)

        if response.status != 200:
            raise xmlrpclib.ProtocolError(host + handler, response.status,
                                          response.reason,
                                          response.msg.headers)

        payload = response.read()
        parser, unmarshaller = self.getparser()
        parser.feed(payload)
        parser.close()

        return unmarshaller.close()

# The explicit calls to the underlying xmlrpclib __init__() methods are
# necessary. The xmlrpclib.Transport classes are old-style classes, and
# it turns out their __init__() doesn't get called when doing multiple
# inheritance with a new-style class.
class cookietransport(cookietransportrequest, xmlrpclib.Transport):
    def __init__(self, use_datetime=0):
        if util.safehasattr(xmlrpclib.Transport, "__init__"):
            xmlrpclib.Transport.__init__(self, use_datetime)

class cookiesafetransport(cookietransportrequest, xmlrpclib.SafeTransport):
    def __init__(self, use_datetime=0):
        if util.safehasattr(xmlrpclib.Transport, "__init__"):
            xmlrpclib.SafeTransport.__init__(self, use_datetime)

class bzxmlrpc(bzaccess):
    """Support for access to Bugzilla via the Bugzilla XMLRPC API.

    Requires a minimum Bugzilla version 3.4.
    """

    def __init__(self, ui):
        bzaccess.__init__(self, ui)

        bzweb = self.ui.config('bugzilla', 'bzurl',
                               'http://localhost/bugzilla/')
        bzweb = bzweb.rstrip("/") + "/xmlrpc.cgi"

        user = self.ui.config('bugzilla', 'user', 'bugs')
        passwd = self.ui.config('bugzilla', 'password')

        self.fixstatus = self.ui.config('bugzilla', 'fixstatus', 'RESOLVED')
        self.fixresolution = self.ui.config('bugzilla', 'fixresolution',
                                            'FIXED')

        self.bzproxy = xmlrpclib.ServerProxy(bzweb, self.transport(bzweb))
        ver = self.bzproxy.Bugzilla.version()['version'].split('.')
        self.bzvermajor = int(ver[0])
        self.bzverminor = int(ver[1])
        login = self.bzproxy.User.login({'login': user, 'password': passwd,
                                         'restrict_login': True})
        self.bztoken = login.get('token', '')

    def transport(self, uri):
        if urlparse.urlparse(uri, "http")[0] == "https":
            return cookiesafetransport()
        else:
            return cookietransport()

    def get_bug_comments(self, id):
        """Return a string with all comment text for a bug."""
        c = self.bzproxy.Bug.comments({'ids': [id],
                                       'include_fields': ['text'],
                                       'token': self.bztoken})
        return ''.join([t['text'] for t in c['bugs'][str(id)]['comments']])

    def filter_real_bug_ids(self, bugs):
        probe = self.bzproxy.Bug.get({'ids': sorted(bugs.keys()),
                                      'include_fields': [],
                                      'permissive': True,
                                      'token': self.bztoken,
                                      })
        for badbug in probe['faults']:
            id = badbug['id']
            self.ui.status(_('bug %d does not exist\n') % id)
            del bugs[id]

    def filter_cset_known_bug_ids(self, node, bugs):
        for id in sorted(bugs.keys()):
            if self.get_bug_comments(id).find(short(node)) != -1:
                self.ui.status(_('bug %d already knows about changeset %s\n') %
                               (id, short(node)))
                del bugs[id]

    def updatebug(self, bugid, newstate, text, committer):
        args = {}
        if 'hours' in newstate:
            args['work_time'] = newstate['hours']

        if self.bzvermajor >= 4:
            args['ids'] = [bugid]
            args['comment'] = {'body' : text}
            if 'fix' in newstate:
                args['status'] = self.fixstatus
                args['resolution'] = self.fixresolution
            args['token'] = self.bztoken
            self.bzproxy.Bug.update(args)
        else:
            if 'fix' in newstate:
                self.ui.warn(_("Bugzilla/XMLRPC needs Bugzilla 4.0 or later "
                               "to mark bugs fixed\n"))
            args['id'] = bugid
            args['comment'] = text
            self.bzproxy.Bug.add_comment(args)

class bzxmlrpcemail(bzxmlrpc):
    """Read data from Bugzilla via XMLRPC, send updates via email.

    Advantages of sending updates via email:
      1. Comments can be added as any user, not just logged in user.
      2. Bug statuses or other fields not accessible via XMLRPC can
         potentially be updated.

    There is no XMLRPC function to change bug status before Bugzilla
    4.0, so bugs cannot be marked fixed via XMLRPC before Bugzilla 4.0.
    But bugs can be marked fixed via email from 3.4 onwards.
    """

    # The email interface changes subtly between 3.4 and 3.6. In 3.4,
    # in-email fields are specified as '@<fieldname> = <value>'. In
    # 3.6 this becomes '@<fieldname> <value>'. And fieldname @bug_id
    # in 3.4 becomes @id in 3.6. 3.6 and 4.0 both maintain backwards
    # compatibility, but rather than rely on this use the new format for
    # 4.0 onwards.

    def __init__(self, ui):
        bzxmlrpc.__init__(self, ui)

        self.bzemail = self.ui.config('bugzilla', 'bzemail')
        if not self.bzemail:
            raise util.Abort(_("configuration 'bzemail' missing"))
        mail.validateconfig(self.ui)

    def makecommandline(self, fieldname, value):
        if self.bzvermajor >= 4:
            return "@%s %s" % (fieldname, str(value))
        else:
            if fieldname == "id":
                fieldname = "bug_id"
            return "@%s = %s" % (fieldname, str(value))

    def send_bug_modify_email(self, bugid, commands, comment, committer):
        '''send modification message to Bugzilla bug via email.

        The message format is documented in the Bugzilla email_in.pl
        specification. commands is a list of command lines, comment is the
        comment text.

        To stop users from crafting commit comments with
        Bugzilla commands, specify the bug ID via the message body, rather
        than the subject line, and leave a blank line after it.
        '''
        user = self.map_committer(committer)
        matches = self.bzproxy.User.get({'match': [user],
                                         'token': self.bztoken})
        if not matches['users']:
            user = self.ui.config('bugzilla', 'user', 'bugs')
            matches = self.bzproxy.User.get({'match': [user],
                                             'token': self.bztoken})
            if not matches['users']:
                raise util.Abort(_("default bugzilla user %s email not found")
                                 % user)
        user = matches['users'][0]['email']
        commands.append(self.makecommandline("id", bugid))

        text = "\n".join(commands) + "\n\n" + comment

        _charsets = mail._charsets(self.ui)
        user = mail.addressencode(self.ui, user, _charsets)
        bzemail = mail.addressencode(self.ui, self.bzemail, _charsets)
        msg = mail.mimeencode(self.ui, text, _charsets)
        msg['From'] = user
        msg['To'] = bzemail
        msg['Subject'] = mail.headencode(self.ui, "Bug modification",
                                         _charsets)
        sendmail = mail.connect(self.ui)
        sendmail(user, bzemail, msg.as_string())

    def updatebug(self, bugid, newstate, text, committer):
        cmds = []
        if 'hours' in newstate:
            cmds.append(self.makecommandline("work_time", newstate['hours']))
        if 'fix' in newstate:
            cmds.append(self.makecommandline("bug_status", self.fixstatus))
            cmds.append(self.makecommandline("resolution", self.fixresolution))
        self.send_bug_modify_email(bugid, cmds, text, committer)

760 class bugzilla(object):
760 class bugzilla(object):
761 # supported versions of bugzilla. different versions have
761 # supported versions of bugzilla. different versions have
762 # different schemas.
762 # different schemas.
763 _versions = {
763 _versions = {
764 '2.16': bzmysql,
764 '2.16': bzmysql,
765 '2.18': bzmysql_2_18,
765 '2.18': bzmysql_2_18,
766 '3.0': bzmysql_3_0,
766 '3.0': bzmysql_3_0,
767 'xmlrpc': bzxmlrpc,
767 'xmlrpc': bzxmlrpc,
768 'xmlrpc+email': bzxmlrpcemail
768 'xmlrpc+email': bzxmlrpcemail
769 }
769 }
770
770
771 _default_bug_re = (r'bugs?\s*,?\s*(?:#|nos?\.?|num(?:ber)?s?)?\s*'
771 _default_bug_re = (r'bugs?\s*,?\s*(?:#|nos?\.?|num(?:ber)?s?)?\s*'
772 r'(?P<ids>(?:\d+\s*(?:,?\s*(?:and)?)?\s*)+)'
772 r'(?P<ids>(?:\d+\s*(?:,?\s*(?:and)?)?\s*)+)'
773 r'\.?\s*(?:h(?:ours?)?\s*(?P<hours>\d*(?:\.\d+)?))?')
773 r'\.?\s*(?:h(?:ours?)?\s*(?P<hours>\d*(?:\.\d+)?))?')
774
774
775 _default_fix_re = (r'fix(?:es)?\s*(?:bugs?\s*)?,?\s*'
775 _default_fix_re = (r'fix(?:es)?\s*(?:bugs?\s*)?,?\s*'
776 r'(?:nos?\.?|num(?:ber)?s?)?\s*'
776 r'(?:nos?\.?|num(?:ber)?s?)?\s*'
777 r'(?P<ids>(?:#?\d+\s*(?:,?\s*(?:and)?)?\s*)+)'
777 r'(?P<ids>(?:#?\d+\s*(?:,?\s*(?:and)?)?\s*)+)'
778 r'\.?\s*(?:h(?:ours?)?\s*(?P<hours>\d*(?:\.\d+)?))?')
778 r'\.?\s*(?:h(?:ours?)?\s*(?P<hours>\d*(?:\.\d+)?))?')
779
779
780 def __init__(self, ui, repo):
780 def __init__(self, ui, repo):
781 self.ui = ui
781 self.ui = ui
782 self.repo = repo
782 self.repo = repo
783
783
784 bzversion = self.ui.config('bugzilla', 'version')
784 bzversion = self.ui.config('bugzilla', 'version')
785 try:
785 try:
786 bzclass = bugzilla._versions[bzversion]
786 bzclass = bugzilla._versions[bzversion]
787 except KeyError:
787 except KeyError:
788 raise util.Abort(_('bugzilla version %s not supported') %
788 raise util.Abort(_('bugzilla version %s not supported') %
789 bzversion)
789 bzversion)
790 self.bzdriver = bzclass(self.ui)
790 self.bzdriver = bzclass(self.ui)
791
791
792 self.bug_re = re.compile(
792 self.bug_re = re.compile(
793 self.ui.config('bugzilla', 'regexp',
793 self.ui.config('bugzilla', 'regexp',
794 bugzilla._default_bug_re), re.IGNORECASE)
794 bugzilla._default_bug_re), re.IGNORECASE)
795 self.fix_re = re.compile(
795 self.fix_re = re.compile(
796 self.ui.config('bugzilla', 'fixregexp',
796 self.ui.config('bugzilla', 'fixregexp',
797 bugzilla._default_fix_re), re.IGNORECASE)
797 bugzilla._default_fix_re), re.IGNORECASE)
798 self.split_re = re.compile(r'\D+')
798 self.split_re = re.compile(r'\D+')
799
799
800 def find_bugs(self, ctx):
800 def find_bugs(self, ctx):
801 '''return bugs dictionary created from commit comment.
801 '''return bugs dictionary created from commit comment.
802
802
803 Extract bug info from changeset comments. Filter out any that are
803 Extract bug info from changeset comments. Filter out any that are
804 not known to Bugzilla, and any that already have a reference to
804 not known to Bugzilla, and any that already have a reference to
805 the given changeset in their comments.
805 the given changeset in their comments.
806 '''
806 '''
807 start = 0
807 start = 0
808 hours = 0.0
808 hours = 0.0
809 bugs = {}
809 bugs = {}
810 bugmatch = self.bug_re.search(ctx.description(), start)
810 bugmatch = self.bug_re.search(ctx.description(), start)
811 fixmatch = self.fix_re.search(ctx.description(), start)
811 fixmatch = self.fix_re.search(ctx.description(), start)
812 while True:
812 while True:
813 bugattribs = {}
813 bugattribs = {}
814 if not bugmatch and not fixmatch:
814 if not bugmatch and not fixmatch:
815 break
815 break
816 if not bugmatch:
816 if not bugmatch:
817 m = fixmatch
817 m = fixmatch
818 elif not fixmatch:
818 elif not fixmatch:
819 m = bugmatch
819 m = bugmatch
820 else:
820 else:
821 if bugmatch.start() < fixmatch.start():
821 if bugmatch.start() < fixmatch.start():
822 m = bugmatch
822 m = bugmatch
823 else:
823 else:
824 m = fixmatch
824 m = fixmatch
825 start = m.end()
825 start = m.end()
826 if m is bugmatch:
826 if m is bugmatch:
827 bugmatch = self.bug_re.search(ctx.description(), start)
827 bugmatch = self.bug_re.search(ctx.description(), start)
828 if 'fix' in bugattribs:
828 if 'fix' in bugattribs:
829 del bugattribs['fix']
829 del bugattribs['fix']
830 else:
830 else:
831 fixmatch = self.fix_re.search(ctx.description(), start)
831 fixmatch = self.fix_re.search(ctx.description(), start)
832 bugattribs['fix'] = None
832 bugattribs['fix'] = None
833
833
834 try:
834 try:
835 ids = m.group('ids')
835 ids = m.group('ids')
836 except IndexError:
836 except IndexError:
837 ids = m.group(1)
837 ids = m.group(1)
838 try:
838 try:
839 hours = float(m.group('hours'))
839 hours = float(m.group('hours'))
840 bugattribs['hours'] = hours
840 bugattribs['hours'] = hours
841 except IndexError:
841 except IndexError:
842 pass
842 pass
843 except TypeError:
843 except TypeError:
844 pass
844 pass
845 except ValueError:
845 except ValueError:
846 self.ui.status(_("%s: invalid hours\n") % m.group('hours'))
846 self.ui.status(_("%s: invalid hours\n") % m.group('hours'))
847
847
848 for id in self.split_re.split(ids):
848 for id in self.split_re.split(ids):
849 if not id:
849 if not id:
850 continue
850 continue
851 bugs[int(id)] = bugattribs
851 bugs[int(id)] = bugattribs
852 if bugs:
852 if bugs:
853 self.bzdriver.filter_real_bug_ids(bugs)
853 self.bzdriver.filter_real_bug_ids(bugs)
854 if bugs:
854 if bugs:
855 self.bzdriver.filter_cset_known_bug_ids(ctx.node(), bugs)
855 self.bzdriver.filter_cset_known_bug_ids(ctx.node(), bugs)
856 return bugs
856 return bugs
857
857
858 def update(self, bugid, newstate, ctx):
858 def update(self, bugid, newstate, ctx):
859 '''update bugzilla bug with reference to changeset.'''
859 '''update bugzilla bug with reference to changeset.'''
860
860
861 def webroot(root):
861 def webroot(root):
862 '''strip leading prefix of repo root and turn into
862 '''strip leading prefix of repo root and turn into
863 url-safe path.'''
863 url-safe path.'''
864 count = int(self.ui.config('bugzilla', 'strip', 0))
864 count = int(self.ui.config('bugzilla', 'strip', 0))
865 root = util.pconvert(root)
865 root = util.pconvert(root)
866 while count > 0:
866 while count > 0:
867 c = root.find('/')
867 c = root.find('/')
868 if c == -1:
868 if c == -1:
869 break
869 break
870 root = root[c + 1:]
870 root = root[c + 1:]
871 count -= 1
871 count -= 1
872 return root
872 return root
873
873
874 mapfile = self.ui.config('bugzilla', 'style')
874 mapfile = self.ui.config('bugzilla', 'style')
875 tmpl = self.ui.config('bugzilla', 'template')
         tmpl = self.ui.config('bugzilla', 'template')
         if not mapfile and not tmpl:
             tmpl = _('changeset {node|short} in repo {root} refers '
                      'to bug {bug}.\ndetails:\n\t{desc|tabindent}')
-        if tmpl:
-            tmpl = templater.parsestring(tmpl, quoted=False)
         t = cmdutil.changeset_templater(self.ui, self.repo,
                                         False, None, tmpl, mapfile, False)
         self.ui.pushbuffer()
         t.show(ctx, changes=ctx.changeset(),
                bug=str(bugid),
                hgweb=self.ui.config('web', 'baseurl'),
                root=self.repo.root,
                webroot=webroot(self.repo.root))
         data = self.ui.popbuffer()
         self.bzdriver.updatebug(bugid, newstate, data, util.email(ctx.user()))
 
     def notify(self, bugs, committer):
         '''ensure Bugzilla users are notified of bug change.'''
         self.bzdriver.notify(bugs, committer)
 
 def hook(ui, repo, hooktype, node=None, **kwargs):
     '''add comment to bugzilla for each changeset that refers to a
     bugzilla bug id. only add a comment once per bug, so same change
     seen multiple times does not fill bug with duplicate data.'''
     if node is None:
         raise util.Abort(_('hook type %s does not pass a changeset id') %
                          hooktype)
     try:
         bz = bugzilla(ui, repo)
         ctx = repo[node]
         bugs = bz.find_bugs(ctx)
         if bugs:
             for bug in bugs:
                 bz.update(bug, bugs[bug], ctx)
             bz.notify(bugs, util.email(ctx.user()))
     except Exception, e:
         raise util.Abort(_('Bugzilla error: %s') % e)
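The `templater.parsestring(tmpl, quoted=False)` calls removed in this commit were no-ops: with `quoted=False` the function returned its argument unchanged, and only with `quoted=True` did it strip surrounding quotes. A minimal sketch of that pre-removal behavior (an illustrative approximation, not the exact upstream source) shows why callers could simply drop the call:

```python
def parsestring(s, quoted=True):
    """Sketch of the old templater.parsestring: unwrap surrounding
    quotes when quoted=True; with quoted=False, return s untouched."""
    if quoted:
        # a quoted string must be at least two chars with matching ends
        if len(s) < 2 or s[0] != s[-1]:
            raise SyntaxError('unmatched quotes')
        return s[1:-1]
    return s

# quoted=False: the template string comes back unchanged (the no-op)
assert parsestring('{desc}\n', quoted=False) == '{desc}\n'
# quoted=True: surrounding quotes are stripped
assert parsestring('"{author|email}"') == '{author|email}'
```

Since every call site changed here passed `quoted=False`, deleting the call (and the now-unused `templater` imports) leaves behavior identical.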
@@ -1,202 +1,201 @@
 # churn.py - create a graph of revisions count grouped by template
 #
 # Copyright 2006 Josef "Jeff" Sipek <jeffpc@josefsipek.net>
 # Copyright 2008 Alexander Solovyov <piranha@piranha.org.ua>
 #
 # This software may be used and distributed according to the terms of the
 # GNU General Public License version 2 or any later version.
 
 '''command to display statistics about repository history'''
 
 from mercurial.i18n import _
-from mercurial import patch, cmdutil, scmutil, util, templater, commands
+from mercurial import patch, cmdutil, scmutil, util, commands
 from mercurial import encoding
 import os
 import time, datetime
 
 cmdtable = {}
 command = cmdutil.command(cmdtable)
 testedwith = 'internal'
 
 def maketemplater(ui, repo, tmpl):
-    tmpl = templater.parsestring(tmpl, quoted=False)
     try:
         t = cmdutil.changeset_templater(ui, repo, False, None, tmpl,
                                         None, False)
     except SyntaxError, inst:
         raise util.Abort(inst.args[0])
     return t
 
 def changedlines(ui, repo, ctx1, ctx2, fns):
     added, removed = 0, 0
     fmatch = scmutil.matchfiles(repo, fns)
     diff = ''.join(patch.diff(repo, ctx1.node(), ctx2.node(), fmatch))
     for l in diff.split('\n'):
         if l.startswith("+") and not l.startswith("+++ "):
             added += 1
         elif l.startswith("-") and not l.startswith("--- "):
             removed += 1
     return (added, removed)
 
 def countrate(ui, repo, amap, *pats, **opts):
     """Calculate stats"""
     if opts.get('dateformat'):
         def getkey(ctx):
             t, tz = ctx.date()
             date = datetime.datetime(*time.gmtime(float(t) - tz)[:6])
             return date.strftime(opts['dateformat'])
     else:
         tmpl = opts.get('oldtemplate') or opts.get('template')
         tmpl = maketemplater(ui, repo, tmpl)
         def getkey(ctx):
             ui.pushbuffer()
             tmpl.show(ctx)
             return ui.popbuffer()
 
     state = {'count': 0}
     rate = {}
     df = False
     if opts.get('date'):
         df = util.matchdate(opts['date'])
 
     m = scmutil.match(repo[None], pats, opts)
     def prep(ctx, fns):
         rev = ctx.rev()
         if df and not df(ctx.date()[0]): # doesn't match date format
             return
 
         key = getkey(ctx).strip()
         key = amap.get(key, key) # alias remap
         if opts.get('changesets'):
             rate[key] = (rate.get(key, (0,))[0] + 1, 0)
         else:
             parents = ctx.parents()
             if len(parents) > 1:
                 ui.note(_('revision %d is a merge, ignoring...\n') % (rev,))
                 return
 
             ctx1 = parents[0]
             lines = changedlines(ui, repo, ctx1, ctx, fns)
             rate[key] = [r + l for r, l in zip(rate.get(key, (0, 0)), lines)]
 
         state['count'] += 1
         ui.progress(_('analyzing'), state['count'], total=len(repo))
 
     for ctx in cmdutil.walkchangerevs(repo, m, opts, prep):
         continue
 
     ui.progress(_('analyzing'), None)
 
     return rate
 
 
 @command('churn',
     [('r', 'rev', [],
       _('count rate for the specified revision or revset'), _('REV')),
      ('d', 'date', '',
       _('count rate for revisions matching date spec'), _('DATE')),
      ('t', 'oldtemplate', '',
       _('template to group changesets (DEPRECATED)'), _('TEMPLATE')),
      ('T', 'template', '{author|email}',
       _('template to group changesets'), _('TEMPLATE')),
      ('f', 'dateformat', '',
       _('strftime-compatible format for grouping by date'), _('FORMAT')),
      ('c', 'changesets', False, _('count rate by number of changesets')),
      ('s', 'sort', False, _('sort by key (default: sort by count)')),
      ('', 'diffstat', False, _('display added/removed lines separately')),
      ('', 'aliases', '', _('file with email aliases'), _('FILE')),
     ] + commands.walkopts,
     _("hg churn [-d DATE] [-r REV] [--aliases FILE] [FILE]"),
     inferrepo=True)
 def churn(ui, repo, *pats, **opts):
     '''histogram of changes to the repository
 
     This command will display a histogram representing the number
     of changed lines or revisions, grouped according to the given
     template. The default template will group changes by author.
     The --dateformat option may be used to group the results by
     date instead.
 
     Statistics are based on the number of changed lines, or
     alternatively the number of matching revisions if the
     --changesets option is specified.
 
     Examples::
 
       # display count of changed lines for every committer
       hg churn -t "{author|email}"
 
       # display daily activity graph
       hg churn -f "%H" -s -c
 
       # display activity of developers by month
       hg churn -f "%Y-%m" -s -c
 
       # display count of lines changed in every year
       hg churn -f "%Y" -s
 
     It is possible to map alternate email addresses to a main address
     by providing a file using the following format::
 
       <alias email> = <actual email>
 
     Such a file may be specified with the --aliases option, otherwise
     a .hgchurn file will be looked for in the working directory root.
     Aliases will be split from the rightmost "=".
     '''
     def pad(s, l):
         return s + " " * (l - encoding.colwidth(s))
 
     amap = {}
     aliases = opts.get('aliases')
     if not aliases and os.path.exists(repo.wjoin('.hgchurn')):
         aliases = repo.wjoin('.hgchurn')
     if aliases:
         for l in open(aliases, "r"):
             try:
                 alias, actual = l.rsplit('=' in l and '=' or None, 1)
                 amap[alias.strip()] = actual.strip()
             except ValueError:
                 l = l.strip()
                 if l:
                     ui.warn(_("skipping malformed alias: %s\n") % l)
                 continue
 
     rate = countrate(ui, repo, amap, *pats, **opts).items()
     if not rate:
         return
 
     if opts.get('sort'):
         rate.sort()
     else:
         rate.sort(key=lambda x: (-sum(x[1]), x))
 
     # Be careful not to have a zero maxcount (issue833)
     maxcount = float(max(sum(v) for k, v in rate)) or 1.0
     maxname = max(len(k) for k, v in rate)
 
     ttywidth = ui.termwidth()
     ui.debug("assuming %i character terminal\n" % ttywidth)
     width = ttywidth - maxname - 2 - 2 - 2
 
     if opts.get('diffstat'):
         width -= 15
         def format(name, diffstat):
             added, removed = diffstat
             return "%s %15s %s%s\n" % (pad(name, maxname),
                                        '+%d/-%d' % (added, removed),
                                        ui.label('+' * charnum(added),
                                                 'diffstat.inserted'),
                                        ui.label('-' * charnum(removed),
                                                 'diffstat.deleted'))
     else:
         width -= 6
         def format(name, count):
             return "%s %6d %s\n" % (pad(name, maxname), sum(count),
                                     '*' * charnum(sum(count)))
 
     def charnum(count):
         return int(round(count * width / maxcount))
 
     for name, count in rate:
         ui.write(format(name, count))
@@ -1,282 +1,281 @@
 # Copyright (C) 2007-8 Brendan Cully <brendan@kublai.com>
 #
 # This software may be used and distributed according to the terms of the
 # GNU General Public License version 2 or any later version.
 
 """hooks for integrating with the CIA.vc notification service
 
 This is meant to be run as a changegroup or incoming hook. To
 configure it, set the following options in your hgrc::
 
   [cia]
   # your registered CIA user name
   user = foo
   # the name of the project in CIA
   project = foo
   # the module (subproject) (optional)
   #module = foo
   # Append a diffstat to the log message (optional)
   #diffstat = False
   # Template to use for log messages (optional)
   #template = {desc}\n{baseurl}{webroot}/rev/{node}-- {diffstat}
   # Style to use (optional)
   #style = foo
   # The URL of the CIA notification service (optional)
   # You can use mailto: URLs to send by email, e.g.
   # mailto:cia@cia.vc
   # Make sure to set email.from if you do this.
   #url = http://cia.vc/
   # print message instead of sending it (optional)
   #test = False
   # number of slashes to strip for url paths
   #strip = 0
 
   [hooks]
   # one of these:
   changegroup.cia = python:hgcia.hook
   #incoming.cia = python:hgcia.hook
 
   [web]
   # If you want hyperlinks (optional)
   baseurl = http://server/path/to/repo
 """
 
 from mercurial.i18n import _
 from mercurial.node import bin, short
-from mercurial import cmdutil, patch, templater, util, mail
+from mercurial import cmdutil, patch, util, mail
 import email.Parser
 
 import socket, xmlrpclib
 from xml.sax import saxutils
 testedwith = 'internal'
 
 socket_timeout = 30 # seconds
 if util.safehasattr(socket, 'setdefaulttimeout'):
     # set a timeout for the socket so you don't have to wait so looooong
     # when cia.vc is having problems. requires python >= 2.3:
     socket.setdefaulttimeout(socket_timeout)
 
 HGCIA_VERSION = '0.1'
 HGCIA_URL = 'http://hg.kublai.com/mercurial/hgcia'
 
 
 class ciamsg(object):
     """ A CIA message """
     def __init__(self, cia, ctx):
         self.cia = cia
         self.ctx = ctx
         self.url = self.cia.url
         if self.url:
             self.url += self.cia.root
 
     def fileelem(self, path, uri, action):
         if uri:
             uri = ' uri=%s' % saxutils.quoteattr(uri)
         return '<file%s action=%s>%s</file>' % (
             uri, saxutils.quoteattr(action), saxutils.escape(path))
 
     def fileelems(self):
         n = self.ctx.node()
         f = self.cia.repo.status(self.ctx.p1().node(), n)
         url = self.url or ''
         if url and url[-1] == '/':
             url = url[:-1]
         elems = []
         for path in f.modified:
             uri = '%s/diff/%s/%s' % (url, short(n), path)
             elems.append(self.fileelem(path, url and uri, 'modify'))
         for path in f.added:
             # TODO: copy/rename ?
             uri = '%s/file/%s/%s' % (url, short(n), path)
             elems.append(self.fileelem(path, url and uri, 'add'))
         for path in f.removed:
             elems.append(self.fileelem(path, '', 'remove'))
 
         return '\n'.join(elems)
 
     def sourceelem(self, project, module=None, branch=None):
         msg = ['<source>', '<project>%s</project>' % saxutils.escape(project)]
         if module:
             msg.append('<module>%s</module>' % saxutils.escape(module))
         if branch:
             msg.append('<branch>%s</branch>' % saxutils.escape(branch))
         msg.append('</source>')
 
         return '\n'.join(msg)
 
     def diffstat(self):
         class patchbuf(object):
             def __init__(self):
                 self.lines = []
                 # diffstat is stupid
                 self.name = 'cia'
             def write(self, data):
                 self.lines += data.splitlines(True)
             def close(self):
                 pass
 
         n = self.ctx.node()
         pbuf = patchbuf()
         cmdutil.export(self.cia.repo, [n], fp=pbuf)
         return patch.diffstat(pbuf.lines) or ''
 
     def logmsg(self):
         if self.cia.diffstat:
             diffstat = self.diffstat()
         else:
             diffstat = ''
         self.cia.ui.pushbuffer()
         self.cia.templater.show(self.ctx, changes=self.ctx.changeset(),
                                 baseurl=self.cia.ui.config('web', 'baseurl'),
                                 url=self.url, diffstat=diffstat,
                                 webroot=self.cia.root)
         return self.cia.ui.popbuffer()
 
     def xml(self):
         n = short(self.ctx.node())
         src = self.sourceelem(self.cia.project, module=self.cia.module,
                               branch=self.ctx.branch())
         # unix timestamp
         dt = self.ctx.date()
         timestamp = dt[0]
 
         author = saxutils.escape(self.ctx.user())
         rev = '%d:%s' % (self.ctx.rev(), n)
         log = saxutils.escape(self.logmsg())
 
         url = self.url
         if url and url[-1] == '/':
             url = url[:-1]
         url = url and '<url>%s/rev/%s</url>' % (saxutils.escape(url), n) or ''
 
         msg = """
 <message>
   <generator>
     <name>Mercurial (hgcia)</name>
     <version>%s</version>
     <url>%s</url>
     <user>%s</user>
   </generator>
   %s
   <body>
     <commit>
       <author>%s</author>
       <version>%s</version>
       <log>%s</log>
       %s
       <files>%s</files>
     </commit>
   </body>
   <timestamp>%d</timestamp>
 </message>
 """ % \
             (HGCIA_VERSION, saxutils.escape(HGCIA_URL),
             saxutils.escape(self.cia.user), src, author, rev, log, url,
             self.fileelems(), timestamp)
 
         return msg
 
 
 class hgcia(object):
     """ CIA notification class """
 
     deftemplate = '{desc}'
     dstemplate = '{desc}\n-- \n{diffstat}'
 
     def __init__(self, ui, repo):
         self.ui = ui
         self.repo = repo
 
         self.ciaurl = self.ui.config('cia', 'url', 'http://cia.vc')
         self.user = self.ui.config('cia', 'user')
         self.project = self.ui.config('cia', 'project')
         self.module = self.ui.config('cia', 'module')
         self.diffstat = self.ui.configbool('cia', 'diffstat')
         self.emailfrom = self.ui.config('email', 'from')
         self.dryrun = self.ui.configbool('cia', 'test')
         self.url = self.ui.config('web', 'baseurl')
         # Default to -1 for backward compatibility
         self.stripcount = int(self.ui.config('cia', 'strip', -1))
         self.root = self.strip(self.repo.root)
 
         style = self.ui.config('cia', 'style')
         template = self.ui.config('cia', 'template')
         if not template:
             if self.diffstat:
                 template = self.dstemplate
             else:
                 template = self.deftemplate
-        template = templater.parsestring(template, quoted=False)
         t = cmdutil.changeset_templater(self.ui, self.repo, False, None,
                                         template, style, False)
         self.templater = t
 
     def strip(self, path):
         '''strip leading slashes from local path, turn into web-safe path.'''
 
         path = util.pconvert(path)
         count = self.stripcount
         if count < 0:
             return ''
         while count > 0:
             c = path.find('/')
             if c == -1:
                 break
             path = path[c + 1:]
             count -= 1
         return path
 
     def sendrpc(self, msg):
         srv = xmlrpclib.Server(self.ciaurl)
         res = srv.hub.deliver(msg)
         if res is not True and res != 'queued.':
             raise util.Abort(_('%s returned an error: %s') %
                              (self.ciaurl, res))
 
     def sendemail(self, address, data):
         p = email.Parser.Parser()
         msg = p.parsestr(data)
         msg['Date'] = util.datestr(format="%a, %d %b %Y %H:%M:%S %1%2")
         msg['To'] = address
         msg['From'] = self.emailfrom
         msg['Subject'] = 'DeliverXML'
         msg['Content-type'] = 'text/xml'
         msgtext = msg.as_string()
 
         self.ui.status(_('hgcia: sending update to %s\n') % address)
         mail.sendmail(self.ui, util.email(self.emailfrom),
                       [address], msgtext)
 
 
 def hook(ui, repo, hooktype, node=None, url=None, **kwargs):
     """ send CIA notification """
     def sendmsg(cia, ctx):
         msg = ciamsg(cia, ctx).xml()
         if cia.dryrun:
             ui.write(msg)
         elif cia.ciaurl.startswith('mailto:'):
             if not cia.emailfrom:
                 raise util.Abort(_('email.from must be defined when '
                                    'sending by email'))
             cia.sendemail(cia.ciaurl[7:], msg)
         else:
             cia.sendrpc(msg)
 
     n = bin(node)
     cia = hgcia(ui, repo)
     if not cia.user:
         ui.debug('cia: no user specified')
         return
     if not cia.project:
         ui.debug('cia: no project specified')
         return
     if hooktype == 'changegroup':
         start = repo.changelog.rev(n)
         end = len(repo.changelog)
         for rev in xrange(start, end):
             n = repo.changelog.node(rev)
             ctx = repo.changectx(n)
             sendmsg(cia, ctx)
     else:
         ctx = repo.changectx(n)
         sendmsg(cia, ctx)
@@ -1,743 +1,742 @@
 # keyword.py - $Keyword$ expansion for Mercurial
 #
 # Copyright 2007-2015 Christian Ebert <blacktrash@gmx.net>
 #
 # This software may be used and distributed according to the terms of the
 # GNU General Public License version 2 or any later version.
 #
 # $Id$
 #
 # Keyword expansion hack against the grain of a Distributed SCM
 #
 # There are many good reasons why this is not needed in a distributed
 # SCM, still it may be useful in very small projects based on single
 # files (like LaTeX packages), that are mostly addressed to an
 # audience not running a version control system.
 #
 # For in-depth discussion refer to
 # <http://mercurial.selenic.com/wiki/KeywordPlan>.
 #
 # Keyword expansion is based on Mercurial's changeset template mappings.
 #
 # Binary files are not touched.
 #
 # Files to act upon/ignore are specified in the [keyword] section.
 # Customized keyword template mappings in the [keywordmaps] section.
 #
 # Run "hg help keyword" and "hg kwdemo" to get info on configuration.
 
 '''expand keywords in tracked files
 
 This extension expands RCS/CVS-like or self-customized $Keywords$ in
 tracked text files selected by your configuration.
 
 Keywords are only expanded in local repositories and not stored in the
 change history. The mechanism can be regarded as a convenience for the
 current user or for archive distribution.
 
 Keywords expand to the changeset data pertaining to the latest change
 relative to the working directory parent of each file.
 
 Configuration is done in the [keyword], [keywordset] and [keywordmaps]
 sections of hgrc files.
 
 Example::
 
   [keyword]
   # expand keywords in every python file except those matching "x*"
   **.py =
   x* = ignore
 
   [keywordset]
   # prefer svn- over cvs-like default keywordmaps
   svn = True
 
 .. note::
 
    The more specific you are in your filename patterns the less you
    lose speed in huge repositories.
 
 For [keywordmaps] template mapping and expansion demonstration and
 control run :hg:`kwdemo`. See :hg:`help templates` for a list of
61 control run :hg:`kwdemo`. See :hg:`help templates` for a list of
62 available templates and filters.
62 available templates and filters.
63
63
64 Three additional date template filters are provided:
64 Three additional date template filters are provided:
65
65
66 :``utcdate``: "2006/09/18 15:13:13"
66 :``utcdate``: "2006/09/18 15:13:13"
67 :``svnutcdate``: "2006-09-18 15:13:13Z"
67 :``svnutcdate``: "2006-09-18 15:13:13Z"
68 :``svnisodate``: "2006-09-18 08:13:13 -700 (Mon, 18 Sep 2006)"
68 :``svnisodate``: "2006-09-18 08:13:13 -700 (Mon, 18 Sep 2006)"
69
69
70 The default template mappings (view with :hg:`kwdemo -d`) can be
70 The default template mappings (view with :hg:`kwdemo -d`) can be
71 replaced with customized keywords and templates. Again, run
71 replaced with customized keywords and templates. Again, run
72 :hg:`kwdemo` to control the results of your configuration changes.
72 :hg:`kwdemo` to control the results of your configuration changes.
73
73
74 Before changing/disabling active keywords, you must run :hg:`kwshrink`
74 Before changing/disabling active keywords, you must run :hg:`kwshrink`
75 to avoid storing expanded keywords in the change history.
75 to avoid storing expanded keywords in the change history.
76
76
77 To force expansion after enabling it, or a configuration change, run
77 To force expansion after enabling it, or a configuration change, run
78 :hg:`kwexpand`.
78 :hg:`kwexpand`.
79
79
80 Expansions spanning more than one line and incremental expansions,
80 Expansions spanning more than one line and incremental expansions,
81 like CVS' $Log$, are not supported. A keyword template map "Log =
81 like CVS' $Log$, are not supported. A keyword template map "Log =
82 {desc}" expands to the first line of the changeset description.
82 {desc}" expands to the first line of the changeset description.
83 '''
83 '''

from mercurial import commands, context, cmdutil, dispatch, filelog, extensions
from mercurial import localrepo, match, patch, templatefilters, util
from mercurial import scmutil, pathutil
from mercurial.hgweb import webcommands
from mercurial.i18n import _
import os, re, tempfile

cmdtable = {}
command = cmdutil.command(cmdtable)
testedwith = 'internal'

# hg commands that do not act on keywords
nokwcommands = ('add addremove annotate bundle export grep incoming init log'
                ' outgoing push tip verify convert email glog')

# hg commands that trigger expansion only when writing to working dir,
# not when reading filelog, and unexpand when reading from working dir
restricted = ('merge kwexpand kwshrink record qrecord resolve transplant'
              ' unshelve rebase graft backout histedit fetch')

# names of extensions using dorecord
recordextensions = 'record'

colortable = {
    'kwfiles.enabled': 'green bold',
    'kwfiles.deleted': 'cyan bold underline',
    'kwfiles.enabledunknown': 'green',
    'kwfiles.ignored': 'bold',
    'kwfiles.ignoredunknown': 'none'
}

# date like in cvs' $Date
def utcdate(text):
    ''':utcdate: Date. Returns a UTC-date in this format: "2009/08/18 11:00:13".
    '''
    return util.datestr((util.parsedate(text)[0], 0), '%Y/%m/%d %H:%M:%S')
# date like in svn's $Date
def svnisodate(text):
    ''':svnisodate: Date. Returns a date in this format: "2009-08-18 13:00:13
    +0200 (Tue, 18 Aug 2009)".
    '''
    return util.datestr(text, '%Y-%m-%d %H:%M:%S %1%2 (%a, %d %b %Y)')
# date like in svn's $Id
def svnutcdate(text):
    ''':svnutcdate: Date. Returns a UTC-date in this format: "2009-08-18
    11:00:13Z".
    '''
    return util.datestr((util.parsedate(text)[0], 0), '%Y-%m-%d %H:%M:%SZ')

templatefilters.filters.update({'utcdate': utcdate,
                                'svnisodate': svnisodate,
                                'svnutcdate': svnutcdate})
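The three filters above format a Mercurial date (a `(unixtime, tzoffset)` pair) through `util.parsedate`/`util.datestr`. As an illustrative, stdlib-only sketch of the two UTC variants (the `*_sketch` names are ours, not part of the extension):

```python
import time

# Stdlib-only stand-ins for the utcdate/svnutcdate filters above.
# The real filters go through Mercurial's util.parsedate/util.datestr;
# here we take a (unixtime, tzoffset) pair directly and format in UTC.

def utcdate_sketch(date):
    unixtime, _offset = date  # offset is ignored: output is UTC
    return time.strftime('%Y/%m/%d %H:%M:%S', time.gmtime(unixtime))

def svnutcdate_sketch(date):
    unixtime, _offset = date
    return time.strftime('%Y-%m-%d %H:%M:%SZ', time.gmtime(unixtime))

# 1158592393 is 2006-09-18 15:13:13 UTC, matching the docstring example
assert utcdate_sketch((1158592393, 0)) == '2006/09/18 15:13:13'
assert svnutcdate_sketch((1158592393, 0)) == '2006-09-18 15:13:13Z'
```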

# make keyword tools accessible
kwtools = {'templater': None, 'hgcmd': ''}

def _defaultkwmaps(ui):
    '''Returns default keywordmaps according to keywordset configuration.'''
    templates = {
        'Revision': '{node|short}',
        'Author': '{author|user}',
    }
    kwsets = ({
        'Date': '{date|utcdate}',
        'RCSfile': '{file|basename},v',
        'RCSFile': '{file|basename},v', # kept for backwards compatibility
                                        # with hg-keyword
        'Source': '{root}/{file},v',
        'Id': '{file|basename},v {node|short} {date|utcdate} {author|user}',
        'Header': '{root}/{file},v {node|short} {date|utcdate} {author|user}',
    }, {
        'Date': '{date|svnisodate}',
        'Id': '{file|basename},v {node|short} {date|svnutcdate} {author|user}',
        'LastChangedRevision': '{node|short}',
        'LastChangedBy': '{author|user}',
        'LastChangedDate': '{date|svnisodate}',
    })
    templates.update(kwsets[ui.configbool('keywordset', 'svn')])
    return templates
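A minimal sketch of the selection logic in `_defaultkwmaps` above: `ui.configbool` yields a bool, and since `bool` subclasses `int` in Python, it indexes the `kwsets` two-tuple directly (False picks the cvs-like set, True the svn-like set). The trimmed-down maps here are illustrative only:

```python
# Illustrative, trimmed-down version of _defaultkwmaps' tuple indexing;
# not part of the extension.
base = {'Revision': '{node|short}', 'Author': '{author|user}'}
kwsets = ({'Date': '{date|utcdate}'},      # cvs-like, index 0 (False)
          {'Date': '{date|svnisodate}'})   # svn-like, index 1 (True)

def defaultkwmaps_sketch(svn):
    # svn plays the role of ui.configbool('keywordset', 'svn')
    templates = dict(base)
    templates.update(kwsets[svn])
    return templates

assert defaultkwmaps_sketch(False)['Date'] == '{date|utcdate}'
assert defaultkwmaps_sketch(True)['Date'] == '{date|svnisodate}'
```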

def _shrinktext(text, subfunc):
    '''Helper for keyword expansion removal in text.
    Depending on subfunc also returns number of substitutions.'''
    return subfunc(r'$\1$', text)
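The one-line replacement above underpins keyword shrinking. A standalone sketch of how it pairs with the two regexes `kwtemplater` builds below (`rekw` for the unexpanded `$Keyword$` form, `rekwexp` for the expanded `$Keyword: ... $` form), assuming a single keyword `Id` for brevity:

```python
import re

# Standalone sketch of kwtemplater's rekw/rekwexp regexes (see below),
# with one keyword "Id"; the expanded value is a made-up placeholder.
escape = '|'.join(map(re.escape, ['Id']))
rekw = re.compile(r'\$(%s)\$' % escape)                  # unexpanded form
rekwexp = re.compile(r'\$(%s): [^$\n\r]*? \$' % escape)  # expanded form

# Expansion rewrites $Id$ into $Id: <template output> $ ...
expanded = rekw.sub('$Id: demo.txt,v deadbeef $', '# $Id$\n')
# ... and shrinking (what _shrinktext does with subn) restores $Id$.
shrunk, n = rekwexp.subn(r'$\1$', expanded)

assert expanded == '# $Id: demo.txt,v deadbeef $\n'
assert shrunk == '# $Id$\n'
assert n == 1
```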

def _preselect(wstatus, changed):
    '''Retrieves modified and added files from a working directory state
    and returns the subset of each contained in given changed files
    retrieved from a change context.'''
    modified = [f for f in wstatus.modified if f in changed]
    added = [f for f in wstatus.added if f in changed]
    return modified, added


class kwtemplater(object):
    '''
    Sets up keyword templates, corresponding keyword regex, and
    provides keyword substitution functions.
    '''

    def __init__(self, ui, repo, inc, exc):
        self.ui = ui
        self.repo = repo
        self.match = match.match(repo.root, '', [], inc, exc)
        self.restrict = kwtools['hgcmd'] in restricted.split()
        self.postcommit = False

        kwmaps = self.ui.configitems('keywordmaps')
        if kwmaps: # override default templates
            self.templates = dict(kwmaps)
        else:
            self.templates = _defaultkwmaps(self.ui)

    @util.propertycache
    def escape(self):
        '''Returns bar-separated and escaped keywords.'''
        return '|'.join(map(re.escape, self.templates.keys()))

    @util.propertycache
    def rekw(self):
        '''Returns regex for unexpanded keywords.'''
        return re.compile(r'\$(%s)\$' % self.escape)

    @util.propertycache
    def rekwexp(self):
        '''Returns regex for expanded keywords.'''
        return re.compile(r'\$(%s): [^$\n\r]*? \$' % self.escape)

    def substitute(self, data, path, ctx, subfunc):
        '''Replaces keywords in data with expanded template.'''
        def kwsub(mobj):
            kw = mobj.group(1)
            ct = cmdutil.changeset_templater(self.ui, self.repo, False, None,
                                             self.templates[kw], '', False)
            self.ui.pushbuffer()
            ct.show(ctx, root=self.repo.root, file=path)
            ekw = templatefilters.firstline(self.ui.popbuffer())
            return '$%s: %s $' % (kw, ekw)
        return subfunc(kwsub, data)

    def linkctx(self, path, fileid):
        '''Similar to filelog.linkrev, but returns a changectx.'''
        return self.repo.filectx(path, fileid=fileid).changectx()

    def expand(self, path, node, data):
        '''Returns data with keywords expanded.'''
        if not self.restrict and self.match(path) and not util.binary(data):
            ctx = self.linkctx(path, node)
            return self.substitute(data, path, ctx, self.rekw.sub)
        return data

    def iskwfile(self, cand, ctx):
        '''Returns subset of candidates which are configured for keyword
        expansion but are not symbolic links.'''
        return [f for f in cand if self.match(f) and 'l' not in ctx.flags(f)]

    def overwrite(self, ctx, candidates, lookup, expand, rekw=False):
        '''Overwrites selected files expanding/shrinking keywords.'''
        if self.restrict or lookup or self.postcommit: # exclude kw_copy
            candidates = self.iskwfile(candidates, ctx)
        if not candidates:
            return
        kwcmd = self.restrict and lookup # kwexpand/kwshrink
        if self.restrict or expand and lookup:
            mf = ctx.manifest()
        if self.restrict or rekw:
            re_kw = self.rekw
        else:
            re_kw = self.rekwexp
        if expand:
            msg = _('overwriting %s expanding keywords\n')
        else:
            msg = _('overwriting %s shrinking keywords\n')
        for f in candidates:
            if self.restrict:
                data = self.repo.file(f).read(mf[f])
            else:
                data = self.repo.wread(f)
            if util.binary(data):
                continue
            if expand:
                parents = ctx.parents()
                if lookup:
                    ctx = self.linkctx(f, mf[f])
                elif self.restrict and len(parents) > 1:
                    # merge commit
                    # in case of conflict f is in modified state during
                    # merge, even if f does not differ from f in parent
                    for p in parents:
                        if f in p and not p[f].cmp(ctx[f]):
                            ctx = p[f].changectx()
                            break
                data, found = self.substitute(data, f, ctx, re_kw.subn)
            elif self.restrict:
                found = re_kw.search(data)
            else:
                data, found = _shrinktext(data, re_kw.subn)
            if found:
                self.ui.note(msg % f)
                fp = self.repo.wvfs(f, "wb", atomictemp=True)
                fp.write(data)
                fp.close()
                if kwcmd:
                    self.repo.dirstate.normal(f)
                elif self.postcommit:
                    self.repo.dirstate.normallookup(f)

    def shrink(self, fname, text):
        '''Returns text with all keyword substitutions removed.'''
        if self.match(fname) and not util.binary(text):
            return _shrinktext(text, self.rekwexp.sub)
        return text

    def shrinklines(self, fname, lines):
        '''Returns lines with keyword substitutions removed.'''
        if self.match(fname):
            text = ''.join(lines)
            if not util.binary(text):
                return _shrinktext(text, self.rekwexp.sub).splitlines(True)
        return lines

    def wread(self, fname, data):
        '''If in restricted mode returns data read from wdir with
        keyword substitutions removed.'''
        if self.restrict:
            return self.shrink(fname, data)
        return data

class kwfilelog(filelog.filelog):
    '''
    Subclass of filelog to hook into its read, add, cmp methods.
    Keywords are "stored" unexpanded, and processed on reading.
    '''
    def __init__(self, opener, kwt, path):
        super(kwfilelog, self).__init__(opener, path)
        self.kwt = kwt
        self.path = path

    def read(self, node):
        '''Expands keywords when reading filelog.'''
        data = super(kwfilelog, self).read(node)
        if self.renamed(node):
            return data
        return self.kwt.expand(self.path, node, data)

    def add(self, text, meta, tr, link, p1=None, p2=None):
        '''Removes keyword substitutions when adding to filelog.'''
        text = self.kwt.shrink(self.path, text)
        return super(kwfilelog, self).add(text, meta, tr, link, p1, p2)

    def cmp(self, node, text):
        '''Removes keyword substitutions for comparison.'''
        text = self.kwt.shrink(self.path, text)
        return super(kwfilelog, self).cmp(node, text)
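The read/add hooks above can be pictured with a toy in-memory store (entirely hypothetical and standalone, not part of the extension): text is shrunk before storage and expanded when read back, so history only ever holds the bare `$Keyword$` form.

```python
import re

# Toy model of the kwfilelog round-trip: shrink on add(), expand on
# read(). The "expanded" value here is a made-up placeholder.
rekwexp = re.compile(r'\$(Id): [^$\n\r]*? \$')

class toyfilelog(object):
    def __init__(self):
        self._store = {}
    def add(self, node, text):
        # shrink before storing, like kwfilelog.add
        self._store[node] = rekwexp.sub(r'$\1$', text)
    def read(self, node):
        # expand on read, like kwfilelog.read
        return re.sub(r'\$(Id)\$', r'$\1: expanded $', self._store[node])

log = toyfilelog()
log.add('n1', 'x = 1  # $Id: old expansion $\n')
assert log._store['n1'] == 'x = 1  # $Id$\n'          # stored unexpanded
assert log.read('n1') == 'x = 1  # $Id: expanded $\n' # expanded on read
```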

def _status(ui, repo, wctx, kwt, *pats, **opts):
    '''Bails out if [keyword] configuration is not active.
    Returns status of working directory.'''
    if kwt:
        return repo.status(match=scmutil.match(wctx, pats, opts), clean=True,
                           unknown=opts.get('unknown') or opts.get('all'))
    if ui.configitems('keyword'):
        raise util.Abort(_('[keyword] patterns cannot match'))
    raise util.Abort(_('no [keyword] patterns configured'))

def _kwfwrite(ui, repo, expand, *pats, **opts):
    '''Selects files and passes them to kwtemplater.overwrite.'''
    wctx = repo[None]
    if len(wctx.parents()) > 1:
        raise util.Abort(_('outstanding uncommitted merge'))
    kwt = kwtools['templater']
    wlock = repo.wlock()
    try:
        status = _status(ui, repo, wctx, kwt, *pats, **opts)
        if status.modified or status.added or status.removed or status.deleted:
            raise util.Abort(_('outstanding uncommitted changes'))
        kwt.overwrite(wctx, status.clean, True, expand)
    finally:
        wlock.release()

@command('kwdemo',
         [('d', 'default', None, _('show default keyword template maps')),
          ('f', 'rcfile', '',
           _('read maps from rcfile'), _('FILE'))],
         _('hg kwdemo [-d] [-f RCFILE] [TEMPLATEMAP]...'),
         optionalrepo=True)
def demo(ui, repo, *args, **opts):
    '''print [keywordmaps] configuration and an expansion example

    Show current, custom, or default keyword template maps and their
    expansions.

    Extend the current configuration by specifying maps as arguments
    and using -f/--rcfile to source an external hgrc file.

    Use -d/--default to disable current configuration.

    See :hg:`help templates` for information on templates and filters.
    '''
    def demoitems(section, items):
        ui.write('[%s]\n' % section)
        for k, v in sorted(items):
            ui.write('%s = %s\n' % (k, v))

    fn = 'demo.txt'
    tmpdir = tempfile.mkdtemp('', 'kwdemo.')
    ui.note(_('creating temporary repository at %s\n') % tmpdir)
    repo = localrepo.localrepository(repo.baseui, tmpdir, True)
    ui.setconfig('keyword', fn, '', 'keyword')
    svn = ui.configbool('keywordset', 'svn')
    # explicitly set keywordset for demo output
    ui.setconfig('keywordset', 'svn', svn, 'keyword')

    uikwmaps = ui.configitems('keywordmaps')
    if args or opts.get('rcfile'):
        ui.status(_('\n\tconfiguration using custom keyword template maps\n'))
        if uikwmaps:
            ui.status(_('\textending current template maps\n'))
        if opts.get('default') or not uikwmaps:
            if svn:
                ui.status(_('\toverriding default svn keywordset\n'))
            else:
                ui.status(_('\toverriding default cvs keywordset\n'))
        if opts.get('rcfile'):
            ui.readconfig(opts.get('rcfile'))
        if args:
            # simulate hgrc parsing
            rcmaps = ['[keywordmaps]\n'] + [a + '\n' for a in args]
            fp = repo.vfs('hgrc', 'w')
            fp.writelines(rcmaps)
            fp.close()
            ui.readconfig(repo.join('hgrc'))
        kwmaps = dict(ui.configitems('keywordmaps'))
    elif opts.get('default'):
        if svn:
            ui.status(_('\n\tconfiguration using default svn keywordset\n'))
        else:
            ui.status(_('\n\tconfiguration using default cvs keywordset\n'))
        kwmaps = _defaultkwmaps(ui)
        if uikwmaps:
            ui.status(_('\tdisabling current template maps\n'))
            for k, v in kwmaps.iteritems():
                ui.setconfig('keywordmaps', k, v, 'keyword')
    else:
        ui.status(_('\n\tconfiguration using current keyword template maps\n'))
        if uikwmaps:
            kwmaps = dict(uikwmaps)
        else:
            kwmaps = _defaultkwmaps(ui)

    uisetup(ui)
    reposetup(ui, repo)
    ui.write('[extensions]\nkeyword =\n')
    demoitems('keyword', ui.configitems('keyword'))
    demoitems('keywordset', ui.configitems('keywordset'))
    demoitems('keywordmaps', kwmaps.iteritems())
    keywords = '$' + '$\n$'.join(sorted(kwmaps.keys())) + '$\n'
    repo.wvfs.write(fn, keywords)
    repo[None].add([fn])
    ui.note(_('\nkeywords written to %s:\n') % fn)
    ui.note(keywords)
    wlock = repo.wlock()
    try:
        repo.dirstate.setbranch('demobranch')
    finally:
        wlock.release()
    for name, cmd in ui.configitems('hooks'):
        if name.split('.', 1)[0].find('commit') > -1:
            repo.ui.setconfig('hooks', name, '', 'keyword')
    msg = _('hg keyword configuration and expansion example')
    ui.note(("hg ci -m '%s'\n" % msg))
    repo.commit(text=msg)
    ui.status(_('\n\tkeywords expanded\n'))
    ui.write(repo.wread(fn))
    repo.wvfs.rmtree(repo.root)

@command('kwexpand',
         commands.walkopts,
         _('hg kwexpand [OPTION]... [FILE]...'),
         inferrepo=True)
def expand(ui, repo, *pats, **opts):
    '''expand keywords in the working directory

    Run after (re)enabling keyword expansion.

    kwexpand refuses to run if given files contain local changes.
    '''
    # 3rd argument sets expansion to True
    _kwfwrite(ui, repo, True, *pats, **opts)

@command('kwfiles',
         [('A', 'all', None, _('show keyword status flags of all files')),
          ('i', 'ignore', None, _('show files excluded from expansion')),
          ('u', 'unknown', None, _('only show unknown (not tracked) files')),
         ] + commands.walkopts,
         _('hg kwfiles [OPTION]... [FILE]...'),
         inferrepo=True)
def files(ui, repo, *pats, **opts):
    '''show files configured for keyword expansion

    List which files in the working directory are matched by the
    [keyword] configuration patterns.

    Useful to prevent inadvertent keyword expansion and to speed up
    execution by including only files that are actual candidates for
    expansion.

    See :hg:`help keyword` on how to construct patterns both for
    inclusion and exclusion of files.

    With -A/--all and -v/--verbose the codes used to show the status
    of files are::

      K = keyword expansion candidate
      k = keyword expansion candidate (not tracked)
      I = ignored
      i = ignored (not tracked)
    '''
    kwt = kwtools['templater']
    wctx = repo[None]
    status = _status(ui, repo, wctx, kwt, *pats, **opts)
    if pats:
        cwd = repo.getcwd()
    else:
        cwd = ''
    files = []
    if not opts.get('unknown') or opts.get('all'):
513 files = sorted(status.modified + status.added + status.clean)
512 files = sorted(status.modified + status.added + status.clean)
514 kwfiles = kwt.iskwfile(files, wctx)
513 kwfiles = kwt.iskwfile(files, wctx)
515 kwdeleted = kwt.iskwfile(status.deleted, wctx)
514 kwdeleted = kwt.iskwfile(status.deleted, wctx)
516 kwunknown = kwt.iskwfile(status.unknown, wctx)
515 kwunknown = kwt.iskwfile(status.unknown, wctx)
517 if not opts.get('ignore') or opts.get('all'):
516 if not opts.get('ignore') or opts.get('all'):
518 showfiles = kwfiles, kwdeleted, kwunknown
517 showfiles = kwfiles, kwdeleted, kwunknown
519 else:
518 else:
520 showfiles = [], [], []
519 showfiles = [], [], []
521 if opts.get('all') or opts.get('ignore'):
520 if opts.get('all') or opts.get('ignore'):
522 showfiles += ([f for f in files if f not in kwfiles],
521 showfiles += ([f for f in files if f not in kwfiles],
523 [f for f in status.unknown if f not in kwunknown])
522 [f for f in status.unknown if f not in kwunknown])
524 kwlabels = 'enabled deleted enabledunknown ignored ignoredunknown'.split()
523 kwlabels = 'enabled deleted enabledunknown ignored ignoredunknown'.split()
525 kwstates = zip(kwlabels, 'K!kIi', showfiles)
524 kwstates = zip(kwlabels, 'K!kIi', showfiles)
526 fm = ui.formatter('kwfiles', opts)
525 fm = ui.formatter('kwfiles', opts)
527 fmt = '%.0s%s\n'
526 fmt = '%.0s%s\n'
528 if opts.get('all') or ui.verbose:
527 if opts.get('all') or ui.verbose:
529 fmt = '%s %s\n'
528 fmt = '%s %s\n'
530 for kwstate, char, filenames in kwstates:
529 for kwstate, char, filenames in kwstates:
531 label = 'kwfiles.' + kwstate
530 label = 'kwfiles.' + kwstate
532 for f in filenames:
531 for f in filenames:
533 fm.startitem()
532 fm.startitem()
534 fm.write('kwstatus path', fmt, char,
533 fm.write('kwstatus path', fmt, char,
535 repo.pathto(f, cwd), label=label)
534 repo.pathto(f, cwd), label=label)
536 fm.end()
535 fm.end()
537
536
538 @command('kwshrink',
537 @command('kwshrink',
539 commands.walkopts,
538 commands.walkopts,
540 _('hg kwshrink [OPTION]... [FILE]...'),
539 _('hg kwshrink [OPTION]... [FILE]...'),
541 inferrepo=True)
540 inferrepo=True)
542 def shrink(ui, repo, *pats, **opts):
541 def shrink(ui, repo, *pats, **opts):
543 '''revert expanded keywords in the working directory
542 '''revert expanded keywords in the working directory
544
543
545 Must be run before changing/disabling active keywords.
544 Must be run before changing/disabling active keywords.
546
545
547 kwshrink refuses to run if given files contain local changes.
546 kwshrink refuses to run if given files contain local changes.
548 '''
547 '''
549 # 3rd argument sets expansion to False
548 # 3rd argument sets expansion to False
550 _kwfwrite(ui, repo, False, *pats, **opts)
549 _kwfwrite(ui, repo, False, *pats, **opts)
551
550
552
551
553 def uisetup(ui):
552 def uisetup(ui):
554 ''' Monkeypatches dispatch._parse to retrieve user command.'''
553 ''' Monkeypatches dispatch._parse to retrieve user command.'''
555
554
556 def kwdispatch_parse(orig, ui, args):
555 def kwdispatch_parse(orig, ui, args):
557 '''Monkeypatch dispatch._parse to obtain running hg command.'''
556 '''Monkeypatch dispatch._parse to obtain running hg command.'''
558 cmd, func, args, options, cmdoptions = orig(ui, args)
557 cmd, func, args, options, cmdoptions = orig(ui, args)
559 kwtools['hgcmd'] = cmd
558 kwtools['hgcmd'] = cmd
560 return cmd, func, args, options, cmdoptions
559 return cmd, func, args, options, cmdoptions
561
560
562 extensions.wrapfunction(dispatch, '_parse', kwdispatch_parse)
561 extensions.wrapfunction(dispatch, '_parse', kwdispatch_parse)
563
562
564 def reposetup(ui, repo):
563 def reposetup(ui, repo):
565 '''Sets up repo as kwrepo for keyword substitution.
564 '''Sets up repo as kwrepo for keyword substitution.
566 Overrides file method to return kwfilelog instead of filelog
565 Overrides file method to return kwfilelog instead of filelog
567 if file matches user configuration.
566 if file matches user configuration.
568 Wraps commit to overwrite configured files with updated
567 Wraps commit to overwrite configured files with updated
569 keyword substitutions.
568 keyword substitutions.
570 Monkeypatches patch and webcommands.'''
569 Monkeypatches patch and webcommands.'''
571
570
572 try:
571 try:
573 if (not repo.local() or kwtools['hgcmd'] in nokwcommands.split()
572 if (not repo.local() or kwtools['hgcmd'] in nokwcommands.split()
574 or '.hg' in util.splitpath(repo.root)
573 or '.hg' in util.splitpath(repo.root)
575 or repo._url.startswith('bundle:')):
574 or repo._url.startswith('bundle:')):
576 return
575 return
577 except AttributeError:
576 except AttributeError:
578 pass
577 pass
579
578
580 inc, exc = [], ['.hg*']
579 inc, exc = [], ['.hg*']
581 for pat, opt in ui.configitems('keyword'):
580 for pat, opt in ui.configitems('keyword'):
582 if opt != 'ignore':
581 if opt != 'ignore':
583 inc.append(pat)
582 inc.append(pat)
584 else:
583 else:
585 exc.append(pat)
584 exc.append(pat)
586 if not inc:
585 if not inc:
587 return
586 return
588
587
589 kwtools['templater'] = kwt = kwtemplater(ui, repo, inc, exc)
588 kwtools['templater'] = kwt = kwtemplater(ui, repo, inc, exc)
590
589
591 class kwrepo(repo.__class__):
590 class kwrepo(repo.__class__):
592 def file(self, f):
591 def file(self, f):
593 if f[0] == '/':
592 if f[0] == '/':
594 f = f[1:]
593 f = f[1:]
595 return kwfilelog(self.svfs, kwt, f)
594 return kwfilelog(self.svfs, kwt, f)
596
595
597 def wread(self, filename):
596 def wread(self, filename):
598 data = super(kwrepo, self).wread(filename)
597 data = super(kwrepo, self).wread(filename)
599 return kwt.wread(filename, data)
598 return kwt.wread(filename, data)
600
599
601 def commit(self, *args, **opts):
600 def commit(self, *args, **opts):
602 # use custom commitctx for user commands
601 # use custom commitctx for user commands
603 # other extensions can still wrap repo.commitctx directly
602 # other extensions can still wrap repo.commitctx directly
604 self.commitctx = self.kwcommitctx
603 self.commitctx = self.kwcommitctx
605 try:
604 try:
606 return super(kwrepo, self).commit(*args, **opts)
605 return super(kwrepo, self).commit(*args, **opts)
607 finally:
606 finally:
608 del self.commitctx
607 del self.commitctx
609
608
610 def kwcommitctx(self, ctx, error=False):
609 def kwcommitctx(self, ctx, error=False):
611 n = super(kwrepo, self).commitctx(ctx, error)
610 n = super(kwrepo, self).commitctx(ctx, error)
612 # no lock needed, only called from repo.commit() which already locks
611 # no lock needed, only called from repo.commit() which already locks
613 if not kwt.postcommit:
612 if not kwt.postcommit:
614 restrict = kwt.restrict
613 restrict = kwt.restrict
615 kwt.restrict = True
614 kwt.restrict = True
616 kwt.overwrite(self[n], sorted(ctx.added() + ctx.modified()),
615 kwt.overwrite(self[n], sorted(ctx.added() + ctx.modified()),
617 False, True)
616 False, True)
618 kwt.restrict = restrict
617 kwt.restrict = restrict
619 return n
618 return n
620
619
621 def rollback(self, dryrun=False, force=False):
620 def rollback(self, dryrun=False, force=False):
622 wlock = self.wlock()
621 wlock = self.wlock()
623 try:
622 try:
624 if not dryrun:
623 if not dryrun:
625 changed = self['.'].files()
624 changed = self['.'].files()
626 ret = super(kwrepo, self).rollback(dryrun, force)
625 ret = super(kwrepo, self).rollback(dryrun, force)
627 if not dryrun:
626 if not dryrun:
628 ctx = self['.']
627 ctx = self['.']
629 modified, added = _preselect(ctx.status(), changed)
628 modified, added = _preselect(ctx.status(), changed)
630 kwt.overwrite(ctx, modified, True, True)
629 kwt.overwrite(ctx, modified, True, True)
631 kwt.overwrite(ctx, added, True, False)
630 kwt.overwrite(ctx, added, True, False)
632 return ret
631 return ret
633 finally:
632 finally:
634 wlock.release()
633 wlock.release()
635
634
636 # monkeypatches
635 # monkeypatches
637 def kwpatchfile_init(orig, self, ui, gp, backend, store, eolmode=None):
636 def kwpatchfile_init(orig, self, ui, gp, backend, store, eolmode=None):
638 '''Monkeypatch/wrap patch.patchfile.__init__ to avoid
637 '''Monkeypatch/wrap patch.patchfile.__init__ to avoid
639 rejects or conflicts due to expanded keywords in working dir.'''
638 rejects or conflicts due to expanded keywords in working dir.'''
640 orig(self, ui, gp, backend, store, eolmode)
639 orig(self, ui, gp, backend, store, eolmode)
641 # shrink keywords read from working dir
640 # shrink keywords read from working dir
642 self.lines = kwt.shrinklines(self.fname, self.lines)
641 self.lines = kwt.shrinklines(self.fname, self.lines)
643
642
644 def kwdiff(orig, *args, **kwargs):
643 def kwdiff(orig, *args, **kwargs):
645 '''Monkeypatch patch.diff to avoid expansion.'''
644 '''Monkeypatch patch.diff to avoid expansion.'''
646 kwt.restrict = True
645 kwt.restrict = True
647 return orig(*args, **kwargs)
646 return orig(*args, **kwargs)
648
647
649 def kwweb_skip(orig, web, req, tmpl):
648 def kwweb_skip(orig, web, req, tmpl):
650 '''Wraps webcommands.x turning off keyword expansion.'''
649 '''Wraps webcommands.x turning off keyword expansion.'''
651 kwt.match = util.never
650 kwt.match = util.never
652 return orig(web, req, tmpl)
651 return orig(web, req, tmpl)
653
652
654 def kw_amend(orig, ui, repo, commitfunc, old, extra, pats, opts):
653 def kw_amend(orig, ui, repo, commitfunc, old, extra, pats, opts):
655 '''Wraps cmdutil.amend expanding keywords after amend.'''
654 '''Wraps cmdutil.amend expanding keywords after amend.'''
656 wlock = repo.wlock()
655 wlock = repo.wlock()
657 try:
656 try:
658 kwt.postcommit = True
657 kwt.postcommit = True
659 newid = orig(ui, repo, commitfunc, old, extra, pats, opts)
658 newid = orig(ui, repo, commitfunc, old, extra, pats, opts)
660 if newid != old.node():
659 if newid != old.node():
661 ctx = repo[newid]
660 ctx = repo[newid]
662 kwt.restrict = True
661 kwt.restrict = True
663 kwt.overwrite(ctx, ctx.files(), False, True)
662 kwt.overwrite(ctx, ctx.files(), False, True)
664 kwt.restrict = False
663 kwt.restrict = False
665 return newid
664 return newid
666 finally:
665 finally:
667 wlock.release()
666 wlock.release()
668
667
669 def kw_copy(orig, ui, repo, pats, opts, rename=False):
668 def kw_copy(orig, ui, repo, pats, opts, rename=False):
670 '''Wraps cmdutil.copy so that copy/rename destinations do not
669 '''Wraps cmdutil.copy so that copy/rename destinations do not
671 contain expanded keywords.
670 contain expanded keywords.
672 Note that the source of a regular file destination may also be a
671 Note that the source of a regular file destination may also be a
673 symlink:
672 symlink:
674 hg cp sym x -> x is symlink
673 hg cp sym x -> x is symlink
675 cp sym x; hg cp -A sym x -> x is file (maybe expanded keywords)
674 cp sym x; hg cp -A sym x -> x is file (maybe expanded keywords)
676 For the latter we have to follow the symlink to find out whether its
675 For the latter we have to follow the symlink to find out whether its
677 target is configured for expansion and we therefore must unexpand the
676 target is configured for expansion and we therefore must unexpand the
678 keywords in the destination.'''
677 keywords in the destination.'''
679 wlock = repo.wlock()
678 wlock = repo.wlock()
680 try:
679 try:
681 orig(ui, repo, pats, opts, rename)
680 orig(ui, repo, pats, opts, rename)
682 if opts.get('dry_run'):
681 if opts.get('dry_run'):
683 return
682 return
684 wctx = repo[None]
683 wctx = repo[None]
685 cwd = repo.getcwd()
684 cwd = repo.getcwd()
686
685
687 def haskwsource(dest):
686 def haskwsource(dest):
688 '''Returns true if dest is a regular file and configured for
687 '''Returns true if dest is a regular file and configured for
689 expansion or a symlink which points to a file configured for
688 expansion or a symlink which points to a file configured for
690 expansion. '''
689 expansion. '''
691 source = repo.dirstate.copied(dest)
690 source = repo.dirstate.copied(dest)
692 if 'l' in wctx.flags(source):
691 if 'l' in wctx.flags(source):
693 source = pathutil.canonpath(repo.root, cwd,
692 source = pathutil.canonpath(repo.root, cwd,
694 os.path.realpath(source))
693 os.path.realpath(source))
695 return kwt.match(source)
694 return kwt.match(source)
696
695
697 candidates = [f for f in repo.dirstate.copies() if
696 candidates = [f for f in repo.dirstate.copies() if
698 'l' not in wctx.flags(f) and haskwsource(f)]
697 'l' not in wctx.flags(f) and haskwsource(f)]
699 kwt.overwrite(wctx, candidates, False, False)
698 kwt.overwrite(wctx, candidates, False, False)
700 finally:
699 finally:
701 wlock.release()
700 wlock.release()
702
701
703 def kw_dorecord(orig, ui, repo, commitfunc, *pats, **opts):
702 def kw_dorecord(orig, ui, repo, commitfunc, *pats, **opts):
704 '''Wraps record.dorecord expanding keywords after recording.'''
703 '''Wraps record.dorecord expanding keywords after recording.'''
705 wlock = repo.wlock()
704 wlock = repo.wlock()
706 try:
705 try:
707 # record returns 0 even when nothing has changed
706 # record returns 0 even when nothing has changed
708 # therefore compare nodes before and after
707 # therefore compare nodes before and after
709 kwt.postcommit = True
708 kwt.postcommit = True
710 ctx = repo['.']
709 ctx = repo['.']
711 wstatus = ctx.status()
710 wstatus = ctx.status()
712 ret = orig(ui, repo, commitfunc, *pats, **opts)
711 ret = orig(ui, repo, commitfunc, *pats, **opts)
713 recctx = repo['.']
712 recctx = repo['.']
714 if ctx != recctx:
713 if ctx != recctx:
715 modified, added = _preselect(wstatus, recctx.files())
714 modified, added = _preselect(wstatus, recctx.files())
716 kwt.restrict = False
715 kwt.restrict = False
717 kwt.overwrite(recctx, modified, False, True)
716 kwt.overwrite(recctx, modified, False, True)
718 kwt.overwrite(recctx, added, False, True, True)
717 kwt.overwrite(recctx, added, False, True, True)
719 kwt.restrict = True
718 kwt.restrict = True
720 return ret
719 return ret
721 finally:
720 finally:
722 wlock.release()
721 wlock.release()
723
722
724 def kwfilectx_cmp(orig, self, fctx):
723 def kwfilectx_cmp(orig, self, fctx):
725 # keyword affects data size, comparing wdir and filelog size does
724 # keyword affects data size, comparing wdir and filelog size does
726 # not make sense
725 # not make sense
727 if (fctx._filerev is None and
726 if (fctx._filerev is None and
728 (self._repo._encodefilterpats or
727 (self._repo._encodefilterpats or
729 kwt.match(fctx.path()) and 'l' not in fctx.flags() or
728 kwt.match(fctx.path()) and 'l' not in fctx.flags() or
730 self.size() - 4 == fctx.size()) or
729 self.size() - 4 == fctx.size()) or
731 self.size() == fctx.size()):
730 self.size() == fctx.size()):
732 return self._filelog.cmp(self._filenode, fctx.data())
731 return self._filelog.cmp(self._filenode, fctx.data())
733 return True
732 return True
734
733
735 extensions.wrapfunction(context.filectx, 'cmp', kwfilectx_cmp)
734 extensions.wrapfunction(context.filectx, 'cmp', kwfilectx_cmp)
736 extensions.wrapfunction(patch.patchfile, '__init__', kwpatchfile_init)
735 extensions.wrapfunction(patch.patchfile, '__init__', kwpatchfile_init)
737 extensions.wrapfunction(patch, 'diff', kwdiff)
736 extensions.wrapfunction(patch, 'diff', kwdiff)
738 extensions.wrapfunction(cmdutil, 'amend', kw_amend)
737 extensions.wrapfunction(cmdutil, 'amend', kw_amend)
739 extensions.wrapfunction(cmdutil, 'copy', kw_copy)
738 extensions.wrapfunction(cmdutil, 'copy', kw_copy)
740 extensions.wrapfunction(cmdutil, 'dorecord', kw_dorecord)
739 extensions.wrapfunction(cmdutil, 'dorecord', kw_dorecord)
741 for c in 'annotate changeset rev filediff diff'.split():
740 for c in 'annotate changeset rev filediff diff'.split():
742 extensions.wrapfunction(webcommands, c, kwweb_skip)
741 extensions.wrapfunction(webcommands, c, kwweb_skip)
743 repo.__class__ = kwrepo
742 repo.__class__ = kwrepo
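The monkeypatching above all goes through `extensions.wrapfunction`, which swaps a module or class attribute for a wrapper that receives the original callable as its first argument. A minimal standalone sketch of that pattern, not Mercurial's actual implementation (the `fakepatch` module stand-in is hypothetical):

```python
import functools

def wrapfunction(container, funcname, wrapper):
    '''Replace container.funcname so calls become wrapper(orig, *args, **kwargs).'''
    orig = getattr(container, funcname)

    @functools.wraps(orig)
    def wrapped(*args, **kwargs):
        # hand the original to the wrapper, like Mercurial's kwdiff(orig, ...)
        return wrapper(orig, *args, **kwargs)

    setattr(container, funcname, wrapped)
    return orig  # returned so callers can unwrap later if needed

class fakepatch(object):  # stand-in for a module such as mercurial.patch
    @staticmethod
    def diff(name):
        return 'diff of %s' % name

def kwdiff(orig, *args, **kwargs):
    # mirrors the kwdiff wrapper above: adjust state, then delegate
    return '[restricted] ' + orig(*args, **kwargs)

wrapfunction(fakepatch, 'diff', kwdiff)
print(fakepatch.diff('file.txt'))  # [restricted] diff of file.txt
```

The wrapper never needs to know how to reach the original on its own; the extension machinery threads it through, which lets several extensions stack wrappers on the same function.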
@@ -1,417 +1,415 @''
# notify.py - email notifications for mercurial
#
# Copyright 2006 Vadim Gelfer <vadim.gelfer@gmail.com>
#
# This software may be used and distributed according to the terms of the
# GNU General Public License version 2 or any later version.

'''hooks for sending email push notifications

This extension implements hooks to send email notifications when
changesets are sent from or received by the local repository.

First, enable the extension as explained in :hg:`help extensions`, and
register the hook you want to run. ``incoming`` and ``changegroup`` hooks
are run when changesets are received, while ``outgoing`` hooks are for
changesets sent to another repository::

  [hooks]
  # one email for each incoming changeset
  incoming.notify = python:hgext.notify.hook
  # one email for all incoming changesets
  changegroup.notify = python:hgext.notify.hook

  # one email for all outgoing changesets
  outgoing.notify = python:hgext.notify.hook

This registers the hooks. To enable notification, subscribers must
be assigned to repositories. The ``[usersubs]`` section maps multiple
repositories to a given recipient. The ``[reposubs]`` section maps
multiple recipients to a single repository::

  [usersubs]
  # key is subscriber email, value is a comma-separated list of repo patterns
  user@host = pattern

  [reposubs]
  # key is repo pattern, value is a comma-separated list of subscriber emails
  pattern = user@host

A ``pattern`` is a ``glob`` matching the absolute path to a repository,
optionally combined with a revset expression. A revset expression, if
present, is separated from the glob by a hash. Example::

  [reposubs]
  */widgets#branch(release) = qa-team@example.com

This sends to ``qa-team@example.com`` whenever a changeset on the ``release``
branch triggers a notification in any repository ending in ``widgets``.

In order to place them under direct user management, ``[usersubs]`` and
``[reposubs]`` sections may be placed in a separate ``hgrc`` file and
incorporated by reference::

  [notify]
  config = /path/to/subscriptionsfile

Notifications will not be sent until the ``notify.test`` value is set
to ``False``; see below.

Notifications content can be tweaked with the following configuration entries:

notify.test
  If ``True``, print messages to stdout instead of sending them. Default: True.

notify.sources
  Space-separated list of change sources. Notifications are activated only
  when a changeset's source is in this list. Sources may be:

  :``serve``: changesets received via http or ssh
  :``pull``: changesets received via ``hg pull``
  :``unbundle``: changesets received via ``hg unbundle``
  :``push``: changesets sent or received via ``hg push``
  :``bundle``: changesets sent via ``hg unbundle``

  Default: serve.

notify.strip
  Number of leading slashes to strip from url paths. By default, notifications
  reference repositories with their absolute path. ``notify.strip`` lets you
  turn them into relative paths. For example, ``notify.strip=3`` will change
  ``/long/path/repository`` into ``repository``. Default: 0.

notify.domain
  Default email domain for sender or recipients with no explicit domain.

notify.style
  Style file to use when formatting emails.

notify.template
  Template to use when formatting emails.

notify.incoming
  Template to use when run as an incoming hook, overriding ``notify.template``.

notify.outgoing
  Template to use when run as an outgoing hook, overriding ``notify.template``.

notify.changegroup
  Template to use when running as a changegroup hook, overriding
  ``notify.template``.

notify.maxdiff
  Maximum number of diff lines to include in notification email. Set to 0
  to disable the diff, or -1 to include all of it. Default: 300.

notify.maxsubject
  Maximum number of characters in email's subject line. Default: 67.

notify.diffstat
  Set to True to include a diffstat before diff content. Default: True.

notify.merge
  If True, send notifications for merge changesets. Default: True.

notify.mbox
  If set, append mails to this mbox file instead of sending. Default: None.

notify.fromauthor
  If set, use the committer of the first changeset in a changegroup for
  the "From" field of the notification mail. If not set, take the user
  from the pushing repo. Default: False.

If set, the following entries will also be used to customize the
notifications:

email.from
  Email ``From`` address to use if none can be found in the generated
  email content.

web.baseurl
  Root repository URL to combine with repository paths when making
  references. See also ``notify.strip``.

'''
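Putting the docstring's pieces together, a minimal working setup might look like the following hgrc fragment; the repository path and email address are placeholders, and ``notify.test = False`` is what actually enables delivery:

```ini
[extensions]
notify =

[hooks]
# one email per incoming changeset
incoming.notify = python:hgext.notify.hook

[reposubs]
# everyone listed here gets mail for this repository
/srv/hg/project = dev-team@example.com

[notify]
test = False
sources = serve push
strip = 2
```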

import email, socket, time
# On python2.4 you have to import this by name or they fail to
# load. This was not a problem on Python 2.7.
import email.Parser
from mercurial.i18n import _
-from mercurial import patch, cmdutil, templater, util, mail
+from mercurial import patch, cmdutil, util, mail
import fnmatch

testedwith = 'internal'

# template for single changeset can include email headers.
single_template = '''
Subject: changeset in {webroot}: {desc|firstline|strip}
From: {author}

changeset {node|short} in {root}
details: {baseurl}{webroot}?cmd=changeset;node={node|short}
description:
\t{desc|tabindent|strip}
'''.lstrip()

# template for multiple changesets should not contain email headers,
# because only first set of headers will be used and result will look
# strange.
multiple_template = '''
changeset {node|short} in {root}
details: {baseurl}{webroot}?cmd=changeset;node={node|short}
summary: {desc|firstline}
'''

deftemplates = {
    'changegroup': multiple_template,
}

class notifier(object):
    '''email notification class.'''

    def __init__(self, ui, repo, hooktype):
        self.ui = ui
        cfg = self.ui.config('notify', 'config')
        if cfg:
            self.ui.readconfig(cfg, sections=['usersubs', 'reposubs'])
        self.repo = repo
        self.stripcount = int(self.ui.config('notify', 'strip', 0))
        self.root = self.strip(self.repo.root)
        self.domain = self.ui.config('notify', 'domain')
        self.mbox = self.ui.config('notify', 'mbox')
        self.test = self.ui.configbool('notify', 'test', True)
        self.charsets = mail._charsets(self.ui)
        self.subs = self.subscribers()
        self.merge = self.ui.configbool('notify', 'merge', True)

        mapfile = self.ui.config('notify', 'style')
        template = (self.ui.config('notify', hooktype) or
                    self.ui.config('notify', 'template'))
        if not mapfile and not template:
            template = deftemplates.get(hooktype) or single_template
-        if template:
-            template = templater.parsestring(template, quoted=False)
        self.t = cmdutil.changeset_templater(self.ui, self.repo, False, None,
                                             template, mapfile, False)

    def strip(self, path):
        '''strip leading slashes from local path, turn into web-safe path.'''

        path = util.pconvert(path)
        count = self.stripcount
        while count > 0:
            c = path.find('/')
            if c == -1:
                break
            path = path[c + 1:]
            count -= 1
        return path
210
208
211 def fixmail(self, addr):
209 def fixmail(self, addr):
212 '''try to clean up email addresses.'''
210 '''try to clean up email addresses.'''
213
211
214 addr = util.email(addr.strip())
212 addr = util.email(addr.strip())
215 if self.domain:
213 if self.domain:
216 a = addr.find('@localhost')
214 a = addr.find('@localhost')
217 if a != -1:
215 if a != -1:
218 addr = addr[:a]
216 addr = addr[:a]
219 if '@' not in addr:
217 if '@' not in addr:
220 return addr + '@' + self.domain
218 return addr + '@' + self.domain
221 return addr
219 return addr
222
220
223 def subscribers(self):
221 def subscribers(self):
224 '''return list of email addresses of subscribers to this repo.'''
222 '''return list of email addresses of subscribers to this repo.'''
225 subs = set()
223 subs = set()
226 for user, pats in self.ui.configitems('usersubs'):
224 for user, pats in self.ui.configitems('usersubs'):
227 for pat in pats.split(','):
225 for pat in pats.split(','):
228 if '#' in pat:
226 if '#' in pat:
229 pat, revs = pat.split('#', 1)
227 pat, revs = pat.split('#', 1)
230 else:
228 else:
231 revs = None
229 revs = None
232 if fnmatch.fnmatch(self.repo.root, pat.strip()):
230 if fnmatch.fnmatch(self.repo.root, pat.strip()):
233 subs.add((self.fixmail(user), revs))
231 subs.add((self.fixmail(user), revs))
234 for pat, users in self.ui.configitems('reposubs'):
232 for pat, users in self.ui.configitems('reposubs'):
235 if '#' in pat:
233 if '#' in pat:
236 pat, revs = pat.split('#', 1)
234 pat, revs = pat.split('#', 1)
237 else:
235 else:
238 revs = None
236 revs = None
239 if fnmatch.fnmatch(self.repo.root, pat):
237 if fnmatch.fnmatch(self.repo.root, pat):
240 for user in users.split(','):
238 for user in users.split(','):
241 subs.add((self.fixmail(user), revs))
239 subs.add((self.fixmail(user), revs))
242 return [(mail.addressencode(self.ui, s, self.charsets, self.test), r)
240 return [(mail.addressencode(self.ui, s, self.charsets, self.test), r)
243 for s, r in sorted(subs)]
241 for s, r in sorted(subs)]
244
242
245 def node(self, ctx, **props):
243 def node(self, ctx, **props):
246 '''format one changeset, unless it is a suppressed merge.'''
244 '''format one changeset, unless it is a suppressed merge.'''
247 if not self.merge and len(ctx.parents()) > 1:
245 if not self.merge and len(ctx.parents()) > 1:
248 return False
246 return False
249 self.t.show(ctx, changes=ctx.changeset(),
247 self.t.show(ctx, changes=ctx.changeset(),
250 baseurl=self.ui.config('web', 'baseurl'),
248 baseurl=self.ui.config('web', 'baseurl'),
251 root=self.repo.root, webroot=self.root, **props)
249 root=self.repo.root, webroot=self.root, **props)
252 return True
250 return True
253
251
254 def skipsource(self, source):
252 def skipsource(self, source):
255 '''true if incoming changes from this source should be skipped.'''
253 '''true if incoming changes from this source should be skipped.'''
256 ok_sources = self.ui.config('notify', 'sources', 'serve').split()
254 ok_sources = self.ui.config('notify', 'sources', 'serve').split()
257 return source not in ok_sources
255 return source not in ok_sources
258
256
259 def send(self, ctx, count, data):
257 def send(self, ctx, count, data):
260 '''send message.'''
258 '''send message.'''
261
259
262 # Select subscribers by revset
260 # Select subscribers by revset
263 subs = set()
261 subs = set()
264 for sub, spec in self.subs:
262 for sub, spec in self.subs:
265 if spec is None:
263 if spec is None:
266 subs.add(sub)
264 subs.add(sub)
267 continue
265 continue
268 revs = self.repo.revs('%r and %d:', spec, ctx.rev())
266 revs = self.repo.revs('%r and %d:', spec, ctx.rev())
269 if len(revs):
267 if len(revs):
270 subs.add(sub)
268 subs.add(sub)
271 continue
269 continue
272 if len(subs) == 0:
270 if len(subs) == 0:
273 self.ui.debug('notify: no subscribers to selected repo '
271 self.ui.debug('notify: no subscribers to selected repo '
274 'and revset\n')
272 'and revset\n')
275 return
273 return
276
274
277 p = email.Parser.Parser()
275 p = email.Parser.Parser()
278 try:
276 try:
279 msg = p.parsestr(data)
277 msg = p.parsestr(data)
280 except email.Errors.MessageParseError, inst:
278 except email.Errors.MessageParseError, inst:
281 raise util.Abort(inst)
279 raise util.Abort(inst)
282
280
283 # store sender and subject
281 # store sender and subject
284 sender, subject = msg['From'], msg['Subject']
282 sender, subject = msg['From'], msg['Subject']
285 del msg['From'], msg['Subject']
283 del msg['From'], msg['Subject']
286
284
287 if not msg.is_multipart():
285 if not msg.is_multipart():
288 # create fresh mime message from scratch
286 # create fresh mime message from scratch
289 # (multipart templates must take care of this themselves)
287 # (multipart templates must take care of this themselves)
290 headers = msg.items()
288 headers = msg.items()
291 payload = msg.get_payload()
289 payload = msg.get_payload()
292 # for notification prefer readability over data precision
290 # for notification prefer readability over data precision
293 msg = mail.mimeencode(self.ui, payload, self.charsets, self.test)
291 msg = mail.mimeencode(self.ui, payload, self.charsets, self.test)
294 # reinstate custom headers
292 # reinstate custom headers
295 for k, v in headers:
293 for k, v in headers:
296 msg[k] = v
294 msg[k] = v
297
295
298 msg['Date'] = util.datestr(format="%a, %d %b %Y %H:%M:%S %1%2")
296 msg['Date'] = util.datestr(format="%a, %d %b %Y %H:%M:%S %1%2")
299
297
300 # try to make subject line exist and be useful
298 # try to make subject line exist and be useful
301 if not subject:
299 if not subject:
302 if count > 1:
300 if count > 1:
303 subject = _('%s: %d new changesets') % (self.root, count)
301 subject = _('%s: %d new changesets') % (self.root, count)
304 else:
302 else:
305 s = ctx.description().lstrip().split('\n', 1)[0].rstrip()
303 s = ctx.description().lstrip().split('\n', 1)[0].rstrip()
306 subject = '%s: %s' % (self.root, s)
304 subject = '%s: %s' % (self.root, s)
307 maxsubject = int(self.ui.config('notify', 'maxsubject', 67))
305 maxsubject = int(self.ui.config('notify', 'maxsubject', 67))
308 if maxsubject:
306 if maxsubject:
309 subject = util.ellipsis(subject, maxsubject)
307 subject = util.ellipsis(subject, maxsubject)
310 msg['Subject'] = mail.headencode(self.ui, subject,
308 msg['Subject'] = mail.headencode(self.ui, subject,
311 self.charsets, self.test)
309 self.charsets, self.test)
312
310
313 # try to make message have proper sender
311 # try to make message have proper sender
314 if not sender:
312 if not sender:
315 sender = self.ui.config('email', 'from') or self.ui.username()
313 sender = self.ui.config('email', 'from') or self.ui.username()
316 if '@' not in sender or '@localhost' in sender:
314 if '@' not in sender or '@localhost' in sender:
317 sender = self.fixmail(sender)
315 sender = self.fixmail(sender)
318 msg['From'] = mail.addressencode(self.ui, sender,
316 msg['From'] = mail.addressencode(self.ui, sender,
319 self.charsets, self.test)
317 self.charsets, self.test)
320
318
321 msg['X-Hg-Notification'] = 'changeset %s' % ctx
319 msg['X-Hg-Notification'] = 'changeset %s' % ctx
322 if not msg['Message-Id']:
320 if not msg['Message-Id']:
323 msg['Message-Id'] = ('<hg.%s.%s.%s@%s>' %
321 msg['Message-Id'] = ('<hg.%s.%s.%s@%s>' %
324 (ctx, int(time.time()),
322 (ctx, int(time.time()),
325 hash(self.repo.root), socket.getfqdn()))
323 hash(self.repo.root), socket.getfqdn()))
326 msg['To'] = ', '.join(sorted(subs))
324 msg['To'] = ', '.join(sorted(subs))
327
325
328 msgtext = msg.as_string()
326 msgtext = msg.as_string()
329 if self.test:
327 if self.test:
330 self.ui.write(msgtext)
328 self.ui.write(msgtext)
331 if not msgtext.endswith('\n'):
329 if not msgtext.endswith('\n'):
332 self.ui.write('\n')
330 self.ui.write('\n')
333 else:
331 else:
334 self.ui.status(_('notify: sending %d subscribers %d changes\n') %
332 self.ui.status(_('notify: sending %d subscribers %d changes\n') %
335 (len(subs), count))
333 (len(subs), count))
336 mail.sendmail(self.ui, util.email(msg['From']),
334 mail.sendmail(self.ui, util.email(msg['From']),
337 subs, msgtext, mbox=self.mbox)
335 subs, msgtext, mbox=self.mbox)
338
336
339 def diff(self, ctx, ref=None):
337 def diff(self, ctx, ref=None):
340
338
341 maxdiff = int(self.ui.config('notify', 'maxdiff', 300))
339 maxdiff = int(self.ui.config('notify', 'maxdiff', 300))
342 prev = ctx.p1().node()
340 prev = ctx.p1().node()
343 if ref:
341 if ref:
344 ref = ref.node()
342 ref = ref.node()
345 else:
343 else:
346 ref = ctx.node()
344 ref = ctx.node()
347 chunks = patch.diff(self.repo, prev, ref,
345 chunks = patch.diff(self.repo, prev, ref,
348 opts=patch.diffallopts(self.ui))
346 opts=patch.diffallopts(self.ui))
349 difflines = ''.join(chunks).splitlines()
347 difflines = ''.join(chunks).splitlines()
350
348
351 if self.ui.configbool('notify', 'diffstat', True):
349 if self.ui.configbool('notify', 'diffstat', True):
352 s = patch.diffstat(difflines)
350 s = patch.diffstat(difflines)
353 # s may be nil, don't include the header if it is
351 # s may be nil, don't include the header if it is
354 if s:
352 if s:
355 self.ui.write('\ndiffstat:\n\n%s' % s)
353 self.ui.write('\ndiffstat:\n\n%s' % s)
356
354
357 if maxdiff == 0:
355 if maxdiff == 0:
358 return
356 return
359 elif maxdiff > 0 and len(difflines) > maxdiff:
357 elif maxdiff > 0 and len(difflines) > maxdiff:
360 msg = _('\ndiffs (truncated from %d to %d lines):\n\n')
358 msg = _('\ndiffs (truncated from %d to %d lines):\n\n')
361 self.ui.write(msg % (len(difflines), maxdiff))
359 self.ui.write(msg % (len(difflines), maxdiff))
362 difflines = difflines[:maxdiff]
360 difflines = difflines[:maxdiff]
363 elif difflines:
361 elif difflines:
364 self.ui.write(_('\ndiffs (%d lines):\n\n') % len(difflines))
362 self.ui.write(_('\ndiffs (%d lines):\n\n') % len(difflines))
365
363
366 self.ui.write("\n".join(difflines))
364 self.ui.write("\n".join(difflines))
367
365
368 def hook(ui, repo, hooktype, node=None, source=None, **kwargs):
366 def hook(ui, repo, hooktype, node=None, source=None, **kwargs):
369 '''send email notifications to interested subscribers.
367 '''send email notifications to interested subscribers.
370
368
371 if used as changegroup hook, send one email for all changesets in
369 if used as changegroup hook, send one email for all changesets in
372 changegroup. else send one email per changeset.'''
370 changegroup. else send one email per changeset.'''
373
371
374 n = notifier(ui, repo, hooktype)
372 n = notifier(ui, repo, hooktype)
375 ctx = repo[node]
373 ctx = repo[node]
376
374
377 if not n.subs:
375 if not n.subs:
378 ui.debug('notify: no subscribers to repository %s\n' % n.root)
376 ui.debug('notify: no subscribers to repository %s\n' % n.root)
379 return
377 return
380 if n.skipsource(source):
378 if n.skipsource(source):
381 ui.debug('notify: changes have source "%s" - skipping\n' % source)
379 ui.debug('notify: changes have source "%s" - skipping\n' % source)
382 return
380 return
383
381
384 ui.pushbuffer()
382 ui.pushbuffer()
385 data = ''
383 data = ''
386 count = 0
384 count = 0
387 author = ''
385 author = ''
388 if hooktype == 'changegroup' or hooktype == 'outgoing':
386 if hooktype == 'changegroup' or hooktype == 'outgoing':
389 start, end = ctx.rev(), len(repo)
387 start, end = ctx.rev(), len(repo)
390 for rev in xrange(start, end):
388 for rev in xrange(start, end):
391 if n.node(repo[rev]):
389 if n.node(repo[rev]):
392 count += 1
390 count += 1
393 if not author:
391 if not author:
394 author = repo[rev].user()
392 author = repo[rev].user()
395 else:
393 else:
396 data += ui.popbuffer()
394 data += ui.popbuffer()
397 ui.note(_('notify: suppressing notification for merge %d:%s\n')
395 ui.note(_('notify: suppressing notification for merge %d:%s\n')
398 % (rev, repo[rev].hex()[:12]))
396 % (rev, repo[rev].hex()[:12]))
399 ui.pushbuffer()
397 ui.pushbuffer()
400 if count:
398 if count:
401 n.diff(ctx, repo['tip'])
399 n.diff(ctx, repo['tip'])
402 else:
400 else:
403 if not n.node(ctx):
401 if not n.node(ctx):
404 ui.popbuffer()
402 ui.popbuffer()
405 ui.note(_('notify: suppressing notification for merge %d:%s\n') %
403 ui.note(_('notify: suppressing notification for merge %d:%s\n') %
406 (ctx.rev(), ctx.hex()[:12]))
404 (ctx.rev(), ctx.hex()[:12]))
407 return
405 return
408 count += 1
406 count += 1
409 n.diff(ctx)
407 n.diff(ctx)
410
408
411 data += ui.popbuffer()
409 data += ui.popbuffer()
412 fromauthor = ui.config('notify', 'fromauthor')
410 fromauthor = ui.config('notify', 'fromauthor')
413 if author and fromauthor:
411 if author and fromauthor:
414 data = '\n'.join(['From: %s' % author, data])
412 data = '\n'.join(['From: %s' % author, data])
415
413
416 if count:
414 if count:
417 n.send(ctx, count, data)
415 n.send(ctx, count, data)
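The two lines removed above were safe to drop because, per the commit message, `parsestring(s, quoted=False)` was a no-op. The sketch below illustrates that behavior; the body of the real `templater.parsestring` is an assumption made for illustration, not a copy of Mercurial's implementation.

```python
def parsestring(s, quoted=True):
    """Sketch of the old templater.parsestring helper (assumed behavior:
    unwrap surrounding quotes when quoted=True, pass through otherwise)."""
    if quoted:
        # require matching single or double quotes around the string
        if len(s) < 2 or s[0] != s[-1] or s[0] not in "'\"":
            raise SyntaxError('unmatched quotes')
        return s[1:-1]
    # quoted=False: identity, which is why the notify call site was removed
    return s

# With quoted=False the template passes through untouched, so notify can
# hand the string straight to changeset_templater.
print(parsestring("changeset {node|short}\n", quoted=False))
```

Callers that actually needed unquoting (`quoted=True`) are unaffected by this changeset; only the identity calls were deleted.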
@@ -1,3261 +1,3261 b''
1 # cmdutil.py - help for command processing in mercurial
1 # cmdutil.py - help for command processing in mercurial
2 #
2 #
3 # Copyright 2005-2007 Matt Mackall <mpm@selenic.com>
3 # Copyright 2005-2007 Matt Mackall <mpm@selenic.com>
4 #
4 #
5 # This software may be used and distributed according to the terms of the
5 # This software may be used and distributed according to the terms of the
6 # GNU General Public License version 2 or any later version.
6 # GNU General Public License version 2 or any later version.
7
7
8 from node import hex, nullid, nullrev, short
8 from node import hex, nullid, nullrev, short
9 from i18n import _
9 from i18n import _
10 import os, sys, errno, re, tempfile, cStringIO, shutil
10 import os, sys, errno, re, tempfile, cStringIO, shutil
11 import util, scmutil, templater, patch, error, templatekw, revlog, copies
11 import util, scmutil, templater, patch, error, templatekw, revlog, copies
12 import match as matchmod
12 import match as matchmod
13 import context, repair, graphmod, revset, phases, obsolete, pathutil
13 import context, repair, graphmod, revset, phases, obsolete, pathutil
14 import changelog
14 import changelog
15 import bookmarks
15 import bookmarks
16 import encoding
16 import encoding
17 import crecord as crecordmod
17 import crecord as crecordmod
18 import lock as lockmod
18 import lock as lockmod
19
19
20 def parsealiases(cmd):
20 def parsealiases(cmd):
21 return cmd.lstrip("^").split("|")
21 return cmd.lstrip("^").split("|")
22
22
23 def setupwrapcolorwrite(ui):
23 def setupwrapcolorwrite(ui):
24 # wrap ui.write so diff output can be labeled/colorized
24 # wrap ui.write so diff output can be labeled/colorized
25 def wrapwrite(orig, *args, **kw):
25 def wrapwrite(orig, *args, **kw):
26 label = kw.pop('label', '')
26 label = kw.pop('label', '')
27 for chunk, l in patch.difflabel(lambda: args):
27 for chunk, l in patch.difflabel(lambda: args):
28 orig(chunk, label=label + l)
28 orig(chunk, label=label + l)
29
29
30 oldwrite = ui.write
30 oldwrite = ui.write
31 def wrap(*args, **kwargs):
31 def wrap(*args, **kwargs):
32 return wrapwrite(oldwrite, *args, **kwargs)
32 return wrapwrite(oldwrite, *args, **kwargs)
33 setattr(ui, 'write', wrap)
33 setattr(ui, 'write', wrap)
34 return oldwrite
34 return oldwrite
35
35
36 def filterchunks(ui, originalhunks, usecurses, testfile):
36 def filterchunks(ui, originalhunks, usecurses, testfile):
37 if usecurses:
37 if usecurses:
38 if testfile:
38 if testfile:
39 recordfn = crecordmod.testdecorator(testfile,
39 recordfn = crecordmod.testdecorator(testfile,
40 crecordmod.testchunkselector)
40 crecordmod.testchunkselector)
41 else:
41 else:
42 recordfn = crecordmod.chunkselector
42 recordfn = crecordmod.chunkselector
43
43
44 return crecordmod.filterpatch(ui, originalhunks, recordfn)
44 return crecordmod.filterpatch(ui, originalhunks, recordfn)
45
45
46 else:
46 else:
47 return patch.filterpatch(ui, originalhunks)
47 return patch.filterpatch(ui, originalhunks)
48
48
49 def recordfilter(ui, originalhunks):
49 def recordfilter(ui, originalhunks):
50 usecurses = ui.configbool('experimental', 'crecord', False)
50 usecurses = ui.configbool('experimental', 'crecord', False)
51 testfile = ui.config('experimental', 'crecordtest', None)
51 testfile = ui.config('experimental', 'crecordtest', None)
52 oldwrite = setupwrapcolorwrite(ui)
52 oldwrite = setupwrapcolorwrite(ui)
53 try:
53 try:
54 newchunks = filterchunks(ui, originalhunks, usecurses, testfile)
54 newchunks = filterchunks(ui, originalhunks, usecurses, testfile)
55 finally:
55 finally:
56 ui.write = oldwrite
56 ui.write = oldwrite
57 return newchunks
57 return newchunks
58
58
59 def dorecord(ui, repo, commitfunc, cmdsuggest, backupall,
59 def dorecord(ui, repo, commitfunc, cmdsuggest, backupall,
60 filterfn, *pats, **opts):
60 filterfn, *pats, **opts):
61 import merge as mergemod
61 import merge as mergemod
62 hunkclasses = (crecordmod.uihunk, patch.recordhunk)
62 hunkclasses = (crecordmod.uihunk, patch.recordhunk)
63 ishunk = lambda x: isinstance(x, hunkclasses)
63 ishunk = lambda x: isinstance(x, hunkclasses)
64
64
65 if not ui.interactive():
65 if not ui.interactive():
66 raise util.Abort(_('running non-interactively, use %s instead') %
66 raise util.Abort(_('running non-interactively, use %s instead') %
67 cmdsuggest)
67 cmdsuggest)
68
68
69 # make sure username is set before going interactive
69 # make sure username is set before going interactive
70 if not opts.get('user'):
70 if not opts.get('user'):
71 ui.username() # raise exception, username not provided
71 ui.username() # raise exception, username not provided
72
72
73 def recordfunc(ui, repo, message, match, opts):
73 def recordfunc(ui, repo, message, match, opts):
74 """This is generic record driver.
74 """This is generic record driver.
75
75
76 Its job is to interactively filter local changes, and
76 Its job is to interactively filter local changes, and
77 accordingly prepare working directory into a state in which the
77 accordingly prepare working directory into a state in which the
78 job can be delegated to a non-interactive commit command such as
78 job can be delegated to a non-interactive commit command such as
79 'commit' or 'qrefresh'.
79 'commit' or 'qrefresh'.
80
80
81 After the actual job is done by non-interactive command, the
81 After the actual job is done by non-interactive command, the
82 working directory is restored to its original state.
82 working directory is restored to its original state.
83
83
84 In the end we'll record interesting changes, and everything else
84 In the end we'll record interesting changes, and everything else
85 will be left in place, so the user can continue working.
85 will be left in place, so the user can continue working.
86 """
86 """
87
87
88 checkunfinished(repo, commit=True)
88 checkunfinished(repo, commit=True)
89 merge = len(repo[None].parents()) > 1
89 merge = len(repo[None].parents()) > 1
90 if merge:
90 if merge:
91 raise util.Abort(_('cannot partially commit a merge '
91 raise util.Abort(_('cannot partially commit a merge '
92 '(use "hg commit" instead)'))
92 '(use "hg commit" instead)'))
93
93
94 status = repo.status(match=match)
94 status = repo.status(match=match)
95 diffopts = patch.difffeatureopts(ui, opts=opts, whitespace=True)
95 diffopts = patch.difffeatureopts(ui, opts=opts, whitespace=True)
96 diffopts.nodates = True
96 diffopts.nodates = True
97 diffopts.git = True
97 diffopts.git = True
98 originaldiff = patch.diff(repo, changes=status, opts=diffopts)
98 originaldiff = patch.diff(repo, changes=status, opts=diffopts)
99 originalchunks = patch.parsepatch(originaldiff)
99 originalchunks = patch.parsepatch(originaldiff)
100
100
101 # 1. filter patch, so we have intending-to apply subset of it
101 # 1. filter patch, so we have intending-to apply subset of it
102 try:
102 try:
103 chunks = filterfn(ui, originalchunks)
103 chunks = filterfn(ui, originalchunks)
104 except patch.PatchError, err:
104 except patch.PatchError, err:
105 raise util.Abort(_('error parsing patch: %s') % err)
105 raise util.Abort(_('error parsing patch: %s') % err)
106
106
107 # We need to keep a backup of files that have been newly added and
107 # We need to keep a backup of files that have been newly added and
108 # modified during the recording process because there is a previous
108 # modified during the recording process because there is a previous
109 # version without the edit in the workdir
109 # version without the edit in the workdir
110 newlyaddedandmodifiedfiles = set()
110 newlyaddedandmodifiedfiles = set()
111 for chunk in chunks:
111 for chunk in chunks:
112 if ishunk(chunk) and chunk.header.isnewfile() and chunk not in \
112 if ishunk(chunk) and chunk.header.isnewfile() and chunk not in \
113 originalchunks:
113 originalchunks:
114 newlyaddedandmodifiedfiles.add(chunk.header.filename())
114 newlyaddedandmodifiedfiles.add(chunk.header.filename())
115 contenders = set()
115 contenders = set()
116 for h in chunks:
116 for h in chunks:
117 try:
117 try:
118 contenders.update(set(h.files()))
118 contenders.update(set(h.files()))
119 except AttributeError:
119 except AttributeError:
120 pass
120 pass
121
121
122 changed = status.modified + status.added + status.removed
122 changed = status.modified + status.added + status.removed
123 newfiles = [f for f in changed if f in contenders]
123 newfiles = [f for f in changed if f in contenders]
124 if not newfiles:
124 if not newfiles:
125 ui.status(_('no changes to record\n'))
125 ui.status(_('no changes to record\n'))
126 return 0
126 return 0
127
127
128 modified = set(status.modified)
128 modified = set(status.modified)
129
129
130 # 2. backup changed files, so we can restore them in the end
130 # 2. backup changed files, so we can restore them in the end
131
131
132 if backupall:
132 if backupall:
133 tobackup = changed
133 tobackup = changed
134 else:
134 else:
135 tobackup = [f for f in newfiles if f in modified or f in \
135 tobackup = [f for f in newfiles if f in modified or f in \
136 newlyaddedandmodifiedfiles]
136 newlyaddedandmodifiedfiles]
137 backups = {}
137 backups = {}
138 if tobackup:
138 if tobackup:
139 backupdir = repo.join('record-backups')
139 backupdir = repo.join('record-backups')
140 try:
140 try:
141 os.mkdir(backupdir)
141 os.mkdir(backupdir)
142 except OSError, err:
142 except OSError, err:
143 if err.errno != errno.EEXIST:
143 if err.errno != errno.EEXIST:
144 raise
144 raise
145 try:
145 try:
146 # backup continues
146 # backup continues
147 for f in tobackup:
147 for f in tobackup:
148 fd, tmpname = tempfile.mkstemp(prefix=f.replace('/', '_')+'.',
148 fd, tmpname = tempfile.mkstemp(prefix=f.replace('/', '_')+'.',
149 dir=backupdir)
149 dir=backupdir)
150 os.close(fd)
150 os.close(fd)
151 ui.debug('backup %r as %r\n' % (f, tmpname))
151 ui.debug('backup %r as %r\n' % (f, tmpname))
152 util.copyfile(repo.wjoin(f), tmpname)
152 util.copyfile(repo.wjoin(f), tmpname)
153 shutil.copystat(repo.wjoin(f), tmpname)
153 shutil.copystat(repo.wjoin(f), tmpname)
154 backups[f] = tmpname
154 backups[f] = tmpname
155
155
156 fp = cStringIO.StringIO()
156 fp = cStringIO.StringIO()
157 for c in chunks:
157 for c in chunks:
158 fname = c.filename()
158 fname = c.filename()
159 if fname in backups:
159 if fname in backups:
160 c.write(fp)
160 c.write(fp)
161 dopatch = fp.tell()
161 dopatch = fp.tell()
162 fp.seek(0)
162 fp.seek(0)
163
163
164 [os.unlink(repo.wjoin(c)) for c in newlyaddedandmodifiedfiles]
164 [os.unlink(repo.wjoin(c)) for c in newlyaddedandmodifiedfiles]
165 # 3a. apply filtered patch to clean repo (clean)
165 # 3a. apply filtered patch to clean repo (clean)
166 if backups:
166 if backups:
167 # Equivalent to hg.revert
167 # Equivalent to hg.revert
168 choices = lambda key: key in backups
168 choices = lambda key: key in backups
169 mergemod.update(repo, repo.dirstate.p1(),
169 mergemod.update(repo, repo.dirstate.p1(),
170 False, True, choices)
170 False, True, choices)
171
171
172 # 3b. (apply)
172 # 3b. (apply)
173 if dopatch:
173 if dopatch:
174 try:
174 try:
175 ui.debug('applying patch\n')
175 ui.debug('applying patch\n')
176 ui.debug(fp.getvalue())
176 ui.debug(fp.getvalue())
177 patch.internalpatch(ui, repo, fp, 1, eolmode=None)
177 patch.internalpatch(ui, repo, fp, 1, eolmode=None)
178 except patch.PatchError, err:
178 except patch.PatchError, err:
179 raise util.Abort(str(err))
179 raise util.Abort(str(err))
180 del fp
180 del fp
181
181
182 # 4. We prepared working directory according to filtered
182 # 4. We prepared working directory according to filtered
183 # patch. Now is the time to delegate the job to
183 # patch. Now is the time to delegate the job to
184 # commit/qrefresh or the like!
184 # commit/qrefresh or the like!
185
185
186 # Make all of the pathnames absolute.
186 # Make all of the pathnames absolute.
187 newfiles = [repo.wjoin(nf) for nf in newfiles]
187 newfiles = [repo.wjoin(nf) for nf in newfiles]
188 return commitfunc(ui, repo, *newfiles, **opts)
188 return commitfunc(ui, repo, *newfiles, **opts)
189 finally:
189 finally:
190 # 5. finally restore backed-up files
190 # 5. finally restore backed-up files
191 try:
191 try:
192 for realname, tmpname in backups.iteritems():
192 for realname, tmpname in backups.iteritems():
193 ui.debug('restoring %r to %r\n' % (tmpname, realname))
193 ui.debug('restoring %r to %r\n' % (tmpname, realname))
194 util.copyfile(tmpname, repo.wjoin(realname))
194 util.copyfile(tmpname, repo.wjoin(realname))
195 # Our calls to copystat() here and above are a
195 # Our calls to copystat() here and above are a
196 # hack to trick any editors that have f open that
196 # hack to trick any editors that have f open that
197 # we haven't modified them.
197 # we haven't modified them.
198 #
198 #
199 # Also note that this racy as an editor could
199 # Also note that this racy as an editor could
200 # notice the file's mtime before we've finished
200 # notice the file's mtime before we've finished
201 # writing it.
201 # writing it.
202 shutil.copystat(tmpname, repo.wjoin(realname))
202 shutil.copystat(tmpname, repo.wjoin(realname))
203 os.unlink(tmpname)
203 os.unlink(tmpname)
204 if tobackup:
204 if tobackup:
205 os.rmdir(backupdir)
205 os.rmdir(backupdir)
206 except OSError:
206 except OSError:
207 pass
207 pass
208
208
209 return commit(ui, repo, recordfunc, pats, opts)
209 return commit(ui, repo, recordfunc, pats, opts)
210
210
211 def findpossible(cmd, table, strict=False):
211 def findpossible(cmd, table, strict=False):
212 """
212 """
213 Return cmd -> (aliases, command table entry)
213 Return cmd -> (aliases, command table entry)
214 for each matching command.
214 for each matching command.
215 Return debug commands (or their aliases) only if no normal command matches.
215 Return debug commands (or their aliases) only if no normal command matches.
216 """
216 """
217 choice = {}
217 choice = {}
218 debugchoice = {}
218 debugchoice = {}
219
219
220 if cmd in table:
220 if cmd in table:
221 # short-circuit exact matches, "log" alias beats "^log|history"
221 # short-circuit exact matches, "log" alias beats "^log|history"
222 keys = [cmd]
222 keys = [cmd]
223 else:
223 else:
224 keys = table.keys()
224 keys = table.keys()
225
225
226 allcmds = []
226 allcmds = []
227 for e in keys:
227 for e in keys:
228 aliases = parsealiases(e)
228 aliases = parsealiases(e)
229 allcmds.extend(aliases)
229 allcmds.extend(aliases)
230 found = None
230 found = None
231 if cmd in aliases:
231 if cmd in aliases:
232 found = cmd
232 found = cmd
233 elif not strict:
233 elif not strict:
234 for a in aliases:
234 for a in aliases:
235 if a.startswith(cmd):
235 if a.startswith(cmd):
236 found = a
236 found = a
237 break
237 break
238 if found is not None:
238 if found is not None:
239 if aliases[0].startswith("debug") or found.startswith("debug"):
239 if aliases[0].startswith("debug") or found.startswith("debug"):
240 debugchoice[found] = (aliases, table[e])
240 debugchoice[found] = (aliases, table[e])
241 else:
241 else:
242 choice[found] = (aliases, table[e])
242 choice[found] = (aliases, table[e])
243
243
244 if not choice and debugchoice:
244 if not choice and debugchoice:
245 choice = debugchoice
245 choice = debugchoice
246
246
247 return choice, allcmds
247 return choice, allcmds
248
248
249 def findcmd(cmd, table, strict=True):
249 def findcmd(cmd, table, strict=True):
250 """Return (aliases, command table entry) for command string."""
250 """Return (aliases, command table entry) for command string."""
251 choice, allcmds = findpossible(cmd, table, strict)
251 choice, allcmds = findpossible(cmd, table, strict)
252
252
253 if cmd in choice:
253 if cmd in choice:
254 return choice[cmd]
254 return choice[cmd]
255
255
256 if len(choice) > 1:
256 if len(choice) > 1:
257 clist = choice.keys()
257 clist = choice.keys()
258 clist.sort()
258 clist.sort()
259 raise error.AmbiguousCommand(cmd, clist)
259 raise error.AmbiguousCommand(cmd, clist)
260
260
261 if choice:
261 if choice:
262 return choice.values()[0]
262 return choice.values()[0]
263
263
264 raise error.UnknownCommand(cmd, allcmds)
264 raise error.UnknownCommand(cmd, allcmds)
265
265
266 def findrepo(p):
266 def findrepo(p):
267 while not os.path.isdir(os.path.join(p, ".hg")):
267 while not os.path.isdir(os.path.join(p, ".hg")):
268 oldp, p = p, os.path.dirname(p)
268 oldp, p = p, os.path.dirname(p)
269 if p == oldp:
269 if p == oldp:
270 return None
270 return None
271
271
272 return p
272 return p
273
273
274 def bailifchanged(repo, merge=True):
274 def bailifchanged(repo, merge=True):
275 if merge and repo.dirstate.p2() != nullid:
275 if merge and repo.dirstate.p2() != nullid:
276 raise util.Abort(_('outstanding uncommitted merge'))
276 raise util.Abort(_('outstanding uncommitted merge'))
277 modified, added, removed, deleted = repo.status()[:4]
277 modified, added, removed, deleted = repo.status()[:4]
278 if modified or added or removed or deleted:
278 if modified or added or removed or deleted:
279 raise util.Abort(_('uncommitted changes'))
279 raise util.Abort(_('uncommitted changes'))
280 ctx = repo[None]
280 ctx = repo[None]
281 for s in sorted(ctx.substate):
281 for s in sorted(ctx.substate):
282 ctx.sub(s).bailifchanged()
282 ctx.sub(s).bailifchanged()
283
283
284 def logmessage(ui, opts):
284 def logmessage(ui, opts):
285 """ get the log message according to -m and -l option """
285 """ get the log message according to -m and -l option """
286 message = opts.get('message')
286 message = opts.get('message')
287 logfile = opts.get('logfile')
287 logfile = opts.get('logfile')
288
288
289 if message and logfile:
289 if message and logfile:
290 raise util.Abort(_('options --message and --logfile are mutually '
290 raise util.Abort(_('options --message and --logfile are mutually '
291 'exclusive'))
291 'exclusive'))
292 if not message and logfile:
292 if not message and logfile:
293 try:
293 try:
294 if logfile == '-':
294 if logfile == '-':
295 message = ui.fin.read()
295 message = ui.fin.read()
296 else:
296 else:
297 message = '\n'.join(util.readfile(logfile).splitlines())
297 message = '\n'.join(util.readfile(logfile).splitlines())
298 except IOError, inst:
298 except IOError, inst:
299 raise util.Abort(_("can't read commit message '%s': %s") %
299 raise util.Abort(_("can't read commit message '%s': %s") %
300 (logfile, inst.strerror))
300 (logfile, inst.strerror))
301 return message
301 return message
302
302
def mergeeditform(ctxorbool, baseformname):
    """return appropriate editform name (referencing a committemplate)

    'ctxorbool' is either a ctx to be committed, or a bool indicating whether
    the commit is a merge.

    This returns baseformname with '.merge' appended if it is a merge,
    otherwise '.normal' is appended.
    """
    if isinstance(ctxorbool, bool):
        if ctxorbool:
            return baseformname + ".merge"
    elif 1 < len(ctxorbool.parents()):
        return baseformname + ".merge"

    return baseformname + ".normal"

def getcommiteditor(edit=False, finishdesc=None, extramsg=None,
                    editform='', **opts):
    """get appropriate commit message editor according to '--edit' option

    'finishdesc' is a function to be called with the edited commit message
    (= 'description' of the new changeset) just after editing, but
    before checking empty-ness. It should return the actual text to be
    stored into history. This allows the description to be changed before
    storing.

    'extramsg' is an extra message to be shown in the editor instead of the
    'Leave message empty to abort commit' line. The 'HG: ' prefix and EOL
    are automatically added.

    'editform' is a dot-separated list of names, to distinguish
    the purpose of commit text editing.

    'getcommiteditor' returns 'commitforceeditor' regardless of
    'edit', if one of 'finishdesc' or 'extramsg' is specified, because
    they are specific to usage in MQ.
    """
    if edit or finishdesc or extramsg:
        return lambda r, c, s: commitforceeditor(r, c, s,
                                                 finishdesc=finishdesc,
                                                 extramsg=extramsg,
                                                 editform=editform)
    elif editform:
        return lambda r, c, s: commiteditor(r, c, s, editform=editform)
    else:
        return commiteditor

def loglimit(opts):
    """get the log limit according to option -l/--limit"""
    limit = opts.get('limit')
    if limit:
        try:
            limit = int(limit)
        except ValueError:
            raise util.Abort(_('limit must be a positive integer'))
        if limit <= 0:
            raise util.Abort(_('limit must be positive'))
    else:
        limit = None
    return limit

def makefilename(repo, pat, node, desc=None,
                 total=None, seqno=None, revwidth=None, pathname=None):
    node_expander = {
        'H': lambda: hex(node),
        'R': lambda: str(repo.changelog.rev(node)),
        'h': lambda: short(node),
        'm': lambda: re.sub('[^\w]', '_', str(desc))
        }
    expander = {
        '%': lambda: '%',
        'b': lambda: os.path.basename(repo.root),
        }

    try:
        if node:
            expander.update(node_expander)
        if node:
            expander['r'] = (lambda:
                    str(repo.changelog.rev(node)).zfill(revwidth or 0))
        if total is not None:
            expander['N'] = lambda: str(total)
        if seqno is not None:
            expander['n'] = lambda: str(seqno)
        if total is not None and seqno is not None:
            expander['n'] = lambda: str(seqno).zfill(len(str(total)))
        if pathname is not None:
            expander['s'] = lambda: os.path.basename(pathname)
            expander['d'] = lambda: os.path.dirname(pathname) or '.'
            expander['p'] = lambda: pathname

        newname = []
        patlen = len(pat)
        i = 0
        while i < patlen:
            c = pat[i]
            if c == '%':
                i += 1
                c = pat[i]
                c = expander[c]()
            newname.append(c)
            i += 1
        return ''.join(newname)
    except KeyError, inst:
        raise util.Abort(_("invalid format spec '%%%s' in output filename") %
                         inst.args[0])

def makefileobj(repo, pat, node=None, desc=None, total=None,
                seqno=None, revwidth=None, mode='wb', modemap=None,
                pathname=None):

    writable = mode not in ('r', 'rb')

    if not pat or pat == '-':
        if writable:
            fp = repo.ui.fout
        else:
            fp = repo.ui.fin
        if util.safehasattr(fp, 'fileno'):
            return os.fdopen(os.dup(fp.fileno()), mode)
        else:
            # if this fp can't be duped properly, return
            # a dummy object that can be closed
            class wrappedfileobj(object):
                noop = lambda x: None
                def __init__(self, f):
                    self.f = f
                def __getattr__(self, attr):
                    if attr == 'close':
                        return self.noop
                    else:
                        return getattr(self.f, attr)

            return wrappedfileobj(fp)
    if util.safehasattr(pat, 'write') and writable:
        return pat
    if util.safehasattr(pat, 'read') and 'r' in mode:
        return pat
    fn = makefilename(repo, pat, node, desc, total, seqno, revwidth, pathname)
    if modemap is not None:
        mode = modemap.get(fn, mode)
        if mode == 'wb':
            modemap[fn] = 'ab'
    return open(fn, mode)

def openrevlog(repo, cmd, file_, opts):
    """opens the changelog, manifest, a filelog or a given revlog"""
    cl = opts['changelog']
    mf = opts['manifest']
    msg = None
    if cl and mf:
        msg = _('cannot specify --changelog and --manifest at the same time')
    elif cl or mf:
        if file_:
            msg = _('cannot specify filename with --changelog or --manifest')
        elif not repo:
            msg = _('cannot specify --changelog or --manifest '
                    'without a repository')
    if msg:
        raise util.Abort(msg)

    r = None
    if repo:
        if cl:
            r = repo.unfiltered().changelog
        elif mf:
            r = repo.manifest
        elif file_:
            filelog = repo.file(file_)
            if len(filelog):
                r = filelog
    if not r:
        if not file_:
            raise error.CommandError(cmd, _('invalid arguments'))
        if not os.path.isfile(file_):
            raise util.Abort(_("revlog '%s' not found") % file_)
        r = revlog.revlog(scmutil.opener(os.getcwd(), audit=False),
                          file_[:-2] + ".i")
    return r

def copy(ui, repo, pats, opts, rename=False):
    # called with the repo lock held
    #
    # hgsep => pathname that uses "/" to separate directories
    # ossep => pathname that uses os.sep to separate directories
    cwd = repo.getcwd()
    targets = {}
    after = opts.get("after")
    dryrun = opts.get("dry_run")
    wctx = repo[None]

    def walkpat(pat):
        srcs = []
        if after:
            badstates = '?'
        else:
            badstates = '?r'
        m = scmutil.match(repo[None], [pat], opts, globbed=True)
        for abs in repo.walk(m):
            state = repo.dirstate[abs]
            rel = m.rel(abs)
            exact = m.exact(abs)
            if state in badstates:
                if exact and state == '?':
                    ui.warn(_('%s: not copying - file is not managed\n') % rel)
                if exact and state == 'r':
                    ui.warn(_('%s: not copying - file has been marked for'
                              ' remove\n') % rel)
                continue
            # abs: hgsep
            # rel: ossep
            srcs.append((abs, rel, exact))
        return srcs

    # abssrc: hgsep
    # relsrc: ossep
    # otarget: ossep
    def copyfile(abssrc, relsrc, otarget, exact):
        abstarget = pathutil.canonpath(repo.root, cwd, otarget)
        if '/' in abstarget:
            # We cannot normalize abstarget itself, this would prevent
            # case only renames, like a => A.
            abspath, absname = abstarget.rsplit('/', 1)
            abstarget = repo.dirstate.normalize(abspath) + '/' + absname
        reltarget = repo.pathto(abstarget, cwd)
        target = repo.wjoin(abstarget)
        src = repo.wjoin(abssrc)
        state = repo.dirstate[abstarget]

        scmutil.checkportable(ui, abstarget)

        # check for collisions
        prevsrc = targets.get(abstarget)
        if prevsrc is not None:
            ui.warn(_('%s: not overwriting - %s collides with %s\n') %
                    (reltarget, repo.pathto(abssrc, cwd),
                     repo.pathto(prevsrc, cwd)))
            return

        # check for overwrites
        exists = os.path.lexists(target)
        samefile = False
        if exists and abssrc != abstarget:
            if (repo.dirstate.normalize(abssrc) ==
                repo.dirstate.normalize(abstarget)):
                if not rename:
                    ui.warn(_("%s: can't copy - same file\n") % reltarget)
                    return
                exists = False
                samefile = True

        if not after and exists or after and state in 'mn':
            if not opts['force']:
                ui.warn(_('%s: not overwriting - file exists\n') %
                        reltarget)
                return

        if after:
            if not exists:
                if rename:
                    ui.warn(_('%s: not recording move - %s does not exist\n') %
                            (relsrc, reltarget))
                else:
                    ui.warn(_('%s: not recording copy - %s does not exist\n') %
                            (relsrc, reltarget))
                return
        elif not dryrun:
            try:
                if exists:
                    os.unlink(target)
                targetdir = os.path.dirname(target) or '.'
                if not os.path.isdir(targetdir):
                    os.makedirs(targetdir)
                if samefile:
                    tmp = target + "~hgrename"
                    os.rename(src, tmp)
                    os.rename(tmp, target)
                else:
                    util.copyfile(src, target)
                srcexists = True
            except IOError, inst:
                if inst.errno == errno.ENOENT:
                    ui.warn(_('%s: deleted in working directory\n') % relsrc)
                    srcexists = False
                else:
                    ui.warn(_('%s: cannot copy - %s\n') %
                            (relsrc, inst.strerror))
                    return True # report a failure

        if ui.verbose or not exact:
            if rename:
                ui.status(_('moving %s to %s\n') % (relsrc, reltarget))
            else:
                ui.status(_('copying %s to %s\n') % (relsrc, reltarget))

        targets[abstarget] = abssrc

        # fix up dirstate
        scmutil.dirstatecopy(ui, repo, wctx, abssrc, abstarget,
                             dryrun=dryrun, cwd=cwd)
        if rename and not dryrun:
            if not after and srcexists and not samefile:
                util.unlinkpath(repo.wjoin(abssrc))
            wctx.forget([abssrc])

    # pat: ossep
    # dest: ossep
    # srcs: list of (hgsep, hgsep, ossep, bool)
    # return: function that takes hgsep and returns ossep
    def targetpathfn(pat, dest, srcs):
        if os.path.isdir(pat):
            abspfx = pathutil.canonpath(repo.root, cwd, pat)
            abspfx = util.localpath(abspfx)
            if destdirexists:
                striplen = len(os.path.split(abspfx)[0])
            else:
                striplen = len(abspfx)
            if striplen:
                striplen += len(os.sep)
            res = lambda p: os.path.join(dest, util.localpath(p)[striplen:])
        elif destdirexists:
            res = lambda p: os.path.join(dest,
                                         os.path.basename(util.localpath(p)))
        else:
            res = lambda p: dest
        return res

    # pat: ossep
    # dest: ossep
    # srcs: list of (hgsep, hgsep, ossep, bool)
    # return: function that takes hgsep and returns ossep
    def targetpathafterfn(pat, dest, srcs):
        if matchmod.patkind(pat):
            # a mercurial pattern
            res = lambda p: os.path.join(dest,
                                         os.path.basename(util.localpath(p)))
        else:
            abspfx = pathutil.canonpath(repo.root, cwd, pat)
            if len(abspfx) < len(srcs[0][0]):
                # A directory. Either the target path contains the last
                # component of the source path or it does not.
                def evalpath(striplen):
                    score = 0
                    for s in srcs:
                        t = os.path.join(dest, util.localpath(s[0])[striplen:])
                        if os.path.lexists(t):
                            score += 1
                    return score

                abspfx = util.localpath(abspfx)
                striplen = len(abspfx)
                if striplen:
                    striplen += len(os.sep)
                if os.path.isdir(os.path.join(dest, os.path.split(abspfx)[1])):
                    score = evalpath(striplen)
                    striplen1 = len(os.path.split(abspfx)[0])
                    if striplen1:
                        striplen1 += len(os.sep)
                    if evalpath(striplen1) > score:
                        striplen = striplen1
                res = lambda p: os.path.join(dest,
                                             util.localpath(p)[striplen:])
            else:
                # a file
                if destdirexists:
                    res = lambda p: os.path.join(dest,
                                        os.path.basename(util.localpath(p)))
                else:
                    res = lambda p: dest
        return res

    pats = scmutil.expandpats(pats)
    if not pats:
        raise util.Abort(_('no source or destination specified'))
    if len(pats) == 1:
        raise util.Abort(_('no destination specified'))
    dest = pats.pop()
    destdirexists = os.path.isdir(dest) and not os.path.islink(dest)
    if not destdirexists:
        if len(pats) > 1 or matchmod.patkind(pats[0]):
            raise util.Abort(_('with multiple sources, destination must be an '
                               'existing directory'))
        if util.endswithsep(dest):
            raise util.Abort(_('destination %s is not a directory') % dest)

    tfn = targetpathfn
    if after:
        tfn = targetpathafterfn
    copylist = []
    for pat in pats:
        srcs = walkpat(pat)
        if not srcs:
            continue
        copylist.append((tfn(pat, dest, srcs), srcs))
    if not copylist:
        raise util.Abort(_('no files to copy'))

    errors = 0
    for targetpath, srcs in copylist:
        for abssrc, relsrc, exact in srcs:
            if copyfile(abssrc, relsrc, targetpath(abssrc), exact):
                errors += 1

    if errors:
        ui.warn(_('(consider using --after)\n'))

    return errors != 0

def service(opts, parentfn=None, initfn=None, runfn=None, logfile=None,
            runargs=None, appendpid=False):
    '''Run a command as a service.'''

    def writepid(pid):
        if opts['pid_file']:
            if appendpid:
                mode = 'a'
            else:
                mode = 'w'
            fp = open(opts['pid_file'], mode)
            fp.write(str(pid) + '\n')
            fp.close()

    if opts['daemon'] and not opts['daemon_pipefds']:
        # Signal child process startup with file removal
        lockfd, lockpath = tempfile.mkstemp(prefix='hg-service-')
        os.close(lockfd)
        try:
            if not runargs:
                runargs = util.hgcmd() + sys.argv[1:]
            runargs.append('--daemon-pipefds=%s' % lockpath)
            # Don't pass --cwd to the child process, because we've already
            # changed directory.
            for i in xrange(1, len(runargs)):
                if runargs[i].startswith('--cwd='):
                    del runargs[i]
                    break
                elif runargs[i].startswith('--cwd'):
                    del runargs[i:i + 2]
                    break
            def condfn():
                return not os.path.exists(lockpath)
            pid = util.rundetached(runargs, condfn)
            if pid < 0:
                raise util.Abort(_('child process failed to start'))
            writepid(pid)
        finally:
            try:
                os.unlink(lockpath)
            except OSError, e:
                if e.errno != errno.ENOENT:
                    raise
        if parentfn:
            return parentfn(pid)
        else:
            return

    if initfn:
        initfn()

    if not opts['daemon']:
        writepid(os.getpid())

    if opts['daemon_pipefds']:
        lockpath = opts['daemon_pipefds']
        try:
            os.setsid()
        except AttributeError:
            pass
        os.unlink(lockpath)
        util.hidewindow()
        sys.stdout.flush()
        sys.stderr.flush()

        nullfd = os.open(os.devnull, os.O_RDWR)
        logfilefd = nullfd
        if logfile:
            logfilefd = os.open(logfile, os.O_RDWR | os.O_CREAT | os.O_APPEND)
        os.dup2(nullfd, 0)
        os.dup2(logfilefd, 1)
        os.dup2(logfilefd, 2)
        if nullfd not in (0, 1, 2):
            os.close(nullfd)
        if logfile and logfilefd not in (0, 1, 2):
            os.close(logfilefd)

    if runfn:
        return runfn()

def tryimportone(ui, repo, hunk, parents, opts, msgs, updatefunc):
    """Utility function used by commands.import to import a single patch

    This function is explicitly defined here to help the evolve extension to
    wrap this part of the import logic.

    The API is currently a bit ugly because it is a simple code translation
    from the import command. Feel free to make it better.

    :hunk: a patch (as a binary string)
    :parents: nodes that will be parents of the created commit
    :opts: the full dict of options passed to the import command
    :msgs: list to save the commit message to
           (used in case we need to save it when failing)
    :updatefunc: a function that updates a repo to a given node
                 updatefunc(<repo>, <node>)
    """
    tmpname, message, user, date, branch, nodeid, p1, p2 = \
        patch.extract(ui, hunk)

    update = not opts.get('bypass')
    strip = opts["strip"]
    prefix = opts["prefix"]
    sim = float(opts.get('similarity') or 0)
    if not tmpname:
        return (None, None, False)
    msg = _('applied to working directory')

    rejects = False

    try:
        cmdline_message = logmessage(ui, opts)
        if cmdline_message:
            # pickup the cmdline msg
            message = cmdline_message
        elif message:
            # pickup the patch msg
            message = message.strip()
        else:
            # launch the editor
            message = None
        ui.debug('message:\n%s\n' % message)

        if len(parents) == 1:
            parents.append(repo[nullid])
        if opts.get('exact'):
            if not nodeid or not p1:
                raise util.Abort(_('not a Mercurial patch'))
            p1 = repo[p1]
            p2 = repo[p2 or nullid]
        elif p2:
            try:
                p1 = repo[p1]
                p2 = repo[p2]
                # Without any options, consider p2 only if the
                # patch is being applied on top of the recorded
                # first parent.
                if p1 != parents[0]:
                    p1 = parents[0]
                    p2 = repo[nullid]
            except error.RepoError:
                p1, p2 = parents
            if p2.node() == nullid:
                ui.warn(_("warning: import the patch as a normal revision\n"
                          "(use --exact to import the patch as a merge)\n"))
        else:
            p1, p2 = parents

        n = None
        if update:
            repo.dirstate.beginparentchange()
            if p1 != parents[0]:
                updatefunc(repo, p1.node())
            if p2 != parents[1]:
                repo.setparents(p1.node(), p2.node())

            if opts.get('exact') or opts.get('import_branch'):
                repo.dirstate.setbranch(branch or 'default')

            partial = opts.get('partial', False)
            files = set()
            try:
                patch.patch(ui, repo, tmpname, strip=strip, prefix=prefix,
                            files=files, eolmode=None, similarity=sim / 100.0)
            except patch.PatchError, e:
                if not partial:
                    raise util.Abort(str(e))
                if partial:
                    rejects = True

            files = list(files)
            if opts.get('no_commit'):
                if message:
                    msgs.append(message)
            else:
                if opts.get('exact') or p2:
887 if opts.get('exact') or p2:
888 # If you got here, you either use --force and know what
888 # If you got here, you either use --force and know what
889 # you are doing or used --exact or a merge patch while
889 # you are doing or used --exact or a merge patch while
890 # being updated to its first parent.
890 # being updated to its first parent.
891 m = None
891 m = None
892 else:
892 else:
893 m = scmutil.matchfiles(repo, files or [])
893 m = scmutil.matchfiles(repo, files or [])
894 editform = mergeeditform(repo[None], 'import.normal')
894 editform = mergeeditform(repo[None], 'import.normal')
895 if opts.get('exact'):
895 if opts.get('exact'):
896 editor = None
896 editor = None
897 else:
897 else:
898 editor = getcommiteditor(editform=editform, **opts)
898 editor = getcommiteditor(editform=editform, **opts)
899 n = repo.commit(message, opts.get('user') or user,
899 n = repo.commit(message, opts.get('user') or user,
900 opts.get('date') or date, match=m,
900 opts.get('date') or date, match=m,
901 editor=editor, force=partial)
901 editor=editor, force=partial)
902 repo.dirstate.endparentchange()
902 repo.dirstate.endparentchange()
903 else:
903 else:
904 if opts.get('exact') or opts.get('import_branch'):
904 if opts.get('exact') or opts.get('import_branch'):
905 branch = branch or 'default'
905 branch = branch or 'default'
906 else:
906 else:
907 branch = p1.branch()
907 branch = p1.branch()
908 store = patch.filestore()
908 store = patch.filestore()
909 try:
909 try:
910 files = set()
910 files = set()
911 try:
911 try:
912 patch.patchrepo(ui, repo, p1, store, tmpname, strip, prefix,
912 patch.patchrepo(ui, repo, p1, store, tmpname, strip, prefix,
913 files, eolmode=None)
913 files, eolmode=None)
914 except patch.PatchError, e:
914 except patch.PatchError, e:
915 raise util.Abort(str(e))
915 raise util.Abort(str(e))
916 if opts.get('exact'):
916 if opts.get('exact'):
917 editor = None
917 editor = None
918 else:
918 else:
919 editor = getcommiteditor(editform='import.bypass')
919 editor = getcommiteditor(editform='import.bypass')
920 memctx = context.makememctx(repo, (p1.node(), p2.node()),
920 memctx = context.makememctx(repo, (p1.node(), p2.node()),
921 message,
921 message,
922 opts.get('user') or user,
922 opts.get('user') or user,
923 opts.get('date') or date,
923 opts.get('date') or date,
924 branch, files, store,
924 branch, files, store,
925 editor=editor)
925 editor=editor)
926 n = memctx.commit()
926 n = memctx.commit()
927 finally:
927 finally:
928 store.close()
928 store.close()
929 if opts.get('exact') and opts.get('no_commit'):
929 if opts.get('exact') and opts.get('no_commit'):
930 # --exact with --no-commit is still useful in that it does merge
930 # --exact with --no-commit is still useful in that it does merge
931 # and branch bits
931 # and branch bits
932 ui.warn(_("warning: can't check exact import with --no-commit\n"))
932 ui.warn(_("warning: can't check exact import with --no-commit\n"))
933 elif opts.get('exact') and hex(n) != nodeid:
933 elif opts.get('exact') and hex(n) != nodeid:
934 raise util.Abort(_('patch is damaged or loses information'))
934 raise util.Abort(_('patch is damaged or loses information'))
935 if n:
935 if n:
936 # i18n: refers to a short changeset id
936 # i18n: refers to a short changeset id
937 msg = _('created %s') % short(n)
937 msg = _('created %s') % short(n)
938 return (msg, n, rejects)
938 return (msg, n, rejects)
939 finally:
939 finally:
940 os.unlink(tmpname)
940 os.unlink(tmpname)
941
941
def export(repo, revs, template='hg-%h.patch', fp=None, switch_parent=False,
           opts=None):
    '''export changesets as hg patches.'''

    total = len(revs)
    revwidth = max([len(str(rev)) for rev in revs])
    filemode = {}

    def single(rev, seqno, fp):
        ctx = repo[rev]
        node = ctx.node()
        parents = [p.node() for p in ctx.parents() if p]
        branch = ctx.branch()
        if switch_parent:
            parents.reverse()

        if parents:
            prev = parents[0]
        else:
            prev = nullid

        shouldclose = False
        if not fp and len(template) > 0:
            desc_lines = ctx.description().rstrip().split('\n')
            desc = desc_lines[0] #Commit always has a first line.
            fp = makefileobj(repo, template, node, desc=desc, total=total,
                             seqno=seqno, revwidth=revwidth, mode='wb',
                             modemap=filemode)
            if fp != template:
                shouldclose = True
        if fp and fp != sys.stdout and util.safehasattr(fp, 'name'):
            repo.ui.note("%s\n" % fp.name)

        if not fp:
            write = repo.ui.write
        else:
            def write(s, **kw):
                fp.write(s)

        write("# HG changeset patch\n")
        write("# User %s\n" % ctx.user())
        write("# Date %d %d\n" % ctx.date())
        write("#      %s\n" % util.datestr(ctx.date()))
        if branch and branch != 'default':
            write("# Branch %s\n" % branch)
        write("# Node ID %s\n" % hex(node))
        write("# Parent  %s\n" % hex(prev))
        if len(parents) > 1:
            write("# Parent  %s\n" % hex(parents[1]))
        write(ctx.description().rstrip())
        write("\n\n")

        for chunk, label in patch.diffui(repo, prev, node, opts=opts):
            write(chunk, label=label)

        if shouldclose:
            fp.close()

    for seqno, rev in enumerate(revs):
        single(rev, seqno + 1, fp)

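# A minimal, self-contained sketch (not Mercurial code) of the patch header
# that single() above assembles with its sequence of write() calls. The
# helper name and all changeset values here are invented for illustration;
# only the header layout mirrors export().

```python
def make_header(user, date, node_hex, parent_hex, branch=None):
    """Assemble an hg-style patch header as a single string."""
    lines = ["# HG changeset patch",
             "# User %s" % user,
             "# Date %d %d" % date]
    if branch and branch != 'default':
        # the default branch is omitted, as in single()
        lines.append("# Branch %s" % branch)
    lines.append("# Node ID %s" % node_hex)
    lines.append("# Parent  %s" % parent_hex)
    return "\n".join(lines) + "\n"

header = make_header("Alice <alice@example.com>", (1431000000, 0),
                     "f" * 40, "0" * 40, branch="stable")
```

# The commit description and the diff chunks would follow this header,
# separated by a blank line, exactly as the write() calls above emit them.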
def diffordiffstat(ui, repo, diffopts, node1, node2, match,
                   changes=None, stat=False, fp=None, prefix='',
                   root='', listsubrepos=False):
    '''show diff or diffstat.'''
    if fp is None:
        write = ui.write
    else:
        def write(s, **kw):
            fp.write(s)

    if root:
        relroot = pathutil.canonpath(repo.root, repo.getcwd(), root)
    else:
        relroot = ''
    if relroot != '':
        # XXX relative roots currently don't work if the root is within a
        # subrepo
        uirelroot = match.uipath(relroot)
        relroot += '/'
        for matchroot in match.files():
            if not matchroot.startswith(relroot):
                ui.warn(_('warning: %s not inside relative root %s\n') % (
                    match.uipath(matchroot), uirelroot))

    if stat:
        diffopts = diffopts.copy(context=0)
        width = 80
        if not ui.plain():
            width = ui.termwidth()
        chunks = patch.diff(repo, node1, node2, match, changes, diffopts,
                            prefix=prefix, relroot=relroot)
        for chunk, label in patch.diffstatui(util.iterlines(chunks),
                                             width=width,
                                             git=diffopts.git):
            write(chunk, label=label)
    else:
        for chunk, label in patch.diffui(repo, node1, node2, match,
                                         changes, diffopts, prefix=prefix,
                                         relroot=relroot):
            write(chunk, label=label)

    if listsubrepos:
        ctx1 = repo[node1]
        ctx2 = repo[node2]
        for subpath, sub in scmutil.itersubrepos(ctx1, ctx2):
            tempnode2 = node2
            try:
                if node2 is not None:
                    tempnode2 = ctx2.substate[subpath][1]
            except KeyError:
                # A subrepo that existed in node1 was deleted between node1 and
                # node2 (inclusive). Thus, ctx2's substate won't contain that
                # subpath. The best we can do is to ignore it.
                tempnode2 = None
            submatch = matchmod.narrowmatcher(subpath, match)
            sub.diff(ui, diffopts, tempnode2, submatch, changes=changes,
                     stat=stat, fp=fp, prefix=prefix)

class changeset_printer(object):
    '''show changeset information when templating not requested.'''

    def __init__(self, ui, repo, matchfn, diffopts, buffered):
        self.ui = ui
        self.repo = repo
        self.buffered = buffered
        self.matchfn = matchfn
        self.diffopts = diffopts
        self.header = {}
        self.hunk = {}
        self.lastheader = None
        self.footer = None

    def flush(self, rev):
        if rev in self.header:
            h = self.header[rev]
            if h != self.lastheader:
                self.lastheader = h
                self.ui.write(h)
            del self.header[rev]
        if rev in self.hunk:
            self.ui.write(self.hunk[rev])
            del self.hunk[rev]
            return 1
        return 0

    def close(self):
        if self.footer:
            self.ui.write(self.footer)

    def show(self, ctx, copies=None, matchfn=None, **props):
        if self.buffered:
            self.ui.pushbuffer()
            self._show(ctx, copies, matchfn, props)
            self.hunk[ctx.rev()] = self.ui.popbuffer(labeled=True)
        else:
            self._show(ctx, copies, matchfn, props)

    def _show(self, ctx, copies, matchfn, props):
        '''show a single changeset or file revision'''
        changenode = ctx.node()
        rev = ctx.rev()
        if self.ui.debugflag:
            hexfunc = hex
        else:
            hexfunc = short
        if rev is None:
            pctx = ctx.p1()
            revnode = (pctx.rev(), hexfunc(pctx.node()) + '+')
        else:
            revnode = (rev, hexfunc(changenode))

        if self.ui.quiet:
            self.ui.write("%d:%s\n" % revnode, label='log.node')
            return

        date = util.datestr(ctx.date())

        # i18n: column positioning for "hg log"
        self.ui.write(_("changeset:   %d:%s\n") % revnode,
                      label='log.changeset changeset.%s' % ctx.phasestr())

        # branches are shown first before any other names due to backwards
        # compatibility
        branch = ctx.branch()
        # don't show the default branch name
        if branch != 'default':
            # i18n: column positioning for "hg log"
            self.ui.write(_("branch:      %s\n") % branch,
                          label='log.branch')

        for name, ns in self.repo.names.iteritems():
            # branches has special logic already handled above, so here we just
            # skip it
            if name == 'branches':
                continue
            # we will use the templatename as the color name since those two
            # should be the same
            for name in ns.names(self.repo, changenode):
                self.ui.write(ns.logfmt % name,
                              label='log.%s' % ns.colorname)
        if self.ui.debugflag:
            # i18n: column positioning for "hg log"
            self.ui.write(_("phase:       %s\n") % ctx.phasestr(),
                          label='log.phase')
        for pctx in self._meaningful_parentrevs(ctx):
            label = 'log.parent changeset.%s' % pctx.phasestr()
            # i18n: column positioning for "hg log"
            self.ui.write(_("parent:      %d:%s\n")
                          % (pctx.rev(), hexfunc(pctx.node())),
                          label=label)

        if self.ui.debugflag and rev is not None:
            mnode = ctx.manifestnode()
            # i18n: column positioning for "hg log"
            self.ui.write(_("manifest:    %d:%s\n") %
                          (self.repo.manifest.rev(mnode), hex(mnode)),
                          label='ui.debug log.manifest')
        # i18n: column positioning for "hg log"
        self.ui.write(_("user:        %s\n") % ctx.user(),
                      label='log.user')
        # i18n: column positioning for "hg log"
        self.ui.write(_("date:        %s\n") % date,
                      label='log.date')

        if self.ui.debugflag:
            files = ctx.p1().status(ctx)[:3]
            for key, value in zip([# i18n: column positioning for "hg log"
                                   _("files:"),
                                   # i18n: column positioning for "hg log"
                                   _("files+:"),
                                   # i18n: column positioning for "hg log"
                                   _("files-:")], files):
                if value:
                    self.ui.write("%-12s %s\n" % (key, " ".join(value)),
                                  label='ui.debug log.files')
        elif ctx.files() and self.ui.verbose:
            # i18n: column positioning for "hg log"
            self.ui.write(_("files:       %s\n") % " ".join(ctx.files()),
                          label='ui.note log.files')
        if copies and self.ui.verbose:
            copies = ['%s (%s)' % c for c in copies]
            # i18n: column positioning for "hg log"
            self.ui.write(_("copies:      %s\n") % ' '.join(copies),
                          label='ui.note log.copies')

        extra = ctx.extra()
        if extra and self.ui.debugflag:
            for key, value in sorted(extra.items()):
                # i18n: column positioning for "hg log"
                self.ui.write(_("extra:       %s=%s\n")
                              % (key, value.encode('string_escape')),
                              label='ui.debug log.extra')

        description = ctx.description().strip()
        if description:
            if self.ui.verbose:
                self.ui.write(_("description:\n"),
                              label='ui.note log.description')
                self.ui.write(description,
                              label='ui.note log.description')
                self.ui.write("\n\n")
            else:
                # i18n: column positioning for "hg log"
                self.ui.write(_("summary:     %s\n") %
                              description.splitlines()[0],
                              label='log.summary')
        self.ui.write("\n")

        self.showpatch(changenode, matchfn)

    def showpatch(self, node, matchfn):
        if not matchfn:
            matchfn = self.matchfn
        if matchfn:
            stat = self.diffopts.get('stat')
            diff = self.diffopts.get('patch')
            diffopts = patch.diffallopts(self.ui, self.diffopts)
            prev = self.repo.changelog.parents(node)[0]
            if stat:
                diffordiffstat(self.ui, self.repo, diffopts, prev, node,
                               match=matchfn, stat=True)
            if diff:
                if stat:
                    self.ui.write("\n")
                diffordiffstat(self.ui, self.repo, diffopts, prev, node,
                               match=matchfn, stat=False)
            self.ui.write("\n")

    def _meaningful_parentrevs(self, ctx):
        """Return list of meaningful (or all if debug) parentrevs for rev.

        For merges (two non-nullrev revisions) both parents are meaningful.
        Otherwise the first parent revision is considered meaningful if it
        is not the preceding revision.
        """
        parents = ctx.parents()
        if len(parents) > 1:
            return parents
        if self.ui.debugflag:
            return [parents[0], self.repo['null']]
        if parents[0].rev() >= scmutil.intrev(self.repo, ctx.rev()) - 1:
            return []
        return parents

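# A toy illustration (not Mercurial code) of the buffered show()/flush()
# protocol used by changeset_printer above: with buffering on, each
# revision's formatted output is captured into a per-rev dict and only
# written out when flush(rev) is called, so revisions can be emitted in a
# different order than they were formatted. Class and attribute names below
# are invented; only hunk and the flush() return convention mirror the
# original.

```python
class BufferedPrinter(object):
    def __init__(self):
        self.hunk = {}   # rev -> formatted text, like changeset_printer.hunk
        self.out = []    # stands in for ui.write

    def show(self, rev, text):
        # buffer the formatted revision instead of writing it immediately
        self.hunk[rev] = text

    def flush(self, rev):
        # emit the buffered hunk for rev, if any; return 1 on output like
        # changeset_printer.flush()
        if rev in self.hunk:
            self.out.append(self.hunk.pop(rev))
            return 1
        return 0

p = BufferedPrinter()
p.show(2, "changeset 2\n")   # formatted out of order...
p.show(1, "changeset 1\n")
p.flush(1)                   # ...but flushed in ascending order
p.flush(2)
# p.out is now ["changeset 1\n", "changeset 2\n"]
```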
class jsonchangeset(changeset_printer):
    '''format changeset information.'''

    def __init__(self, ui, repo, matchfn, diffopts, buffered):
        changeset_printer.__init__(self, ui, repo, matchfn, diffopts, buffered)
        self.cache = {}
        self._first = True

    def close(self):
        if not self._first:
            self.ui.write("\n]\n")
        else:
            self.ui.write("[]\n")

    def _show(self, ctx, copies, matchfn, props):
        '''show a single changeset or file revision'''
        rev = ctx.rev()
        if rev is None:
            jrev = jnode = 'null'
        else:
            jrev = str(rev)
            jnode = '"%s"' % hex(ctx.node())
        j = encoding.jsonescape

        if self._first:
            self.ui.write("[\n {")
            self._first = False
        else:
            self.ui.write(",\n {")

        if self.ui.quiet:
            self.ui.write('\n  "rev": %s' % jrev)
            self.ui.write(',\n  "node": %s' % jnode)
            self.ui.write('\n }')
            return

        self.ui.write('\n  "rev": %s' % jrev)
        self.ui.write(',\n  "node": %s' % jnode)
        self.ui.write(',\n  "branch": "%s"' % j(ctx.branch()))
        self.ui.write(',\n  "phase": "%s"' % ctx.phasestr())
        self.ui.write(',\n  "user": "%s"' % j(ctx.user()))
        self.ui.write(',\n  "date": [%d, %d]' % ctx.date())
        self.ui.write(',\n  "desc": "%s"' % j(ctx.description()))

        self.ui.write(',\n  "bookmarks": [%s]' %
                      ", ".join('"%s"' % j(b) for b in ctx.bookmarks()))
        self.ui.write(',\n  "tags": [%s]' %
                      ", ".join('"%s"' % j(t) for t in ctx.tags()))
        self.ui.write(',\n  "parents": [%s]' %
                      ", ".join('"%s"' % c.hex() for c in ctx.parents()))

        if self.ui.debugflag:
            if rev is None:
                jmanifestnode = 'null'
            else:
                jmanifestnode = '"%s"' % hex(ctx.manifestnode())
            self.ui.write(',\n  "manifest": %s' % jmanifestnode)

            self.ui.write(',\n  "extra": {%s}' %
                          ", ".join('"%s": "%s"' % (j(k), j(v))
                                    for k, v in ctx.extra().items()))

            files = ctx.p1().status(ctx)
            self.ui.write(',\n  "modified": [%s]' %
                          ", ".join('"%s"' % j(f) for f in files[0]))
            self.ui.write(',\n  "added": [%s]' %
                          ", ".join('"%s"' % j(f) for f in files[1]))
            self.ui.write(',\n  "removed": [%s]' %
                          ", ".join('"%s"' % j(f) for f in files[2]))

        elif self.ui.verbose:
            self.ui.write(',\n  "files": [%s]' %
                          ", ".join('"%s"' % j(f) for f in ctx.files()))

            if copies:
                self.ui.write(',\n  "copies": {%s}' %
                              ", ".join('"%s": "%s"' % (j(k), j(v))
                                        for k, v in copies))

        matchfn = self.matchfn
        if matchfn:
            stat = self.diffopts.get('stat')
            diff = self.diffopts.get('patch')
            diffopts = patch.difffeatureopts(self.ui, self.diffopts, git=True)
            node, prev = ctx.node(), ctx.p1().node()
            if stat:
                self.ui.pushbuffer()
                diffordiffstat(self.ui, self.repo, diffopts, prev, node,
                               match=matchfn, stat=True)
                self.ui.write(',\n  "diffstat": "%s"' % j(self.ui.popbuffer()))
            if diff:
                self.ui.pushbuffer()
                diffordiffstat(self.ui, self.repo, diffopts, prev, node,
                               match=matchfn, stat=False)
                self.ui.write(',\n  "diff": "%s"' % j(self.ui.popbuffer()))

        self.ui.write("\n }")

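# A self-contained sketch of the comma-placement trick jsonchangeset uses to
# stream valid JSON without buffering the whole list: the first object is
# preceded by "[\n {" and every later one by ",\n {", and close() emits
# either the closing "]" or a bare "[]" when nothing was written. The class
# below is invented for illustration; only the _first flag and the bracket
# strings mirror the original.

```python
import json

class JsonStream(object):
    def __init__(self):
        self.parts = []
        self._first = True

    def write_entry(self, rev, node):
        # open the list before the first entry, otherwise emit a separator
        if self._first:
            self.parts.append("[\n {")
            self._first = False
        else:
            self.parts.append(",\n {")
        self.parts.append('\n  "rev": %d' % rev)
        self.parts.append(',\n  "node": "%s"' % node)
        self.parts.append("\n }")

    def close(self):
        # mirrors jsonchangeset.close(): an empty stream yields "[]"
        self.parts.append("\n]\n" if not self._first else "[]\n")
        return "".join(self.parts)

s = JsonStream()
s.write_entry(0, "aa")
s.write_entry(1, "bb")
doc = s.close()
parsed = json.loads(doc)   # the streamed text round-trips as JSON
```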
class changeset_templater(changeset_printer):
    '''format changeset information.'''

    def __init__(self, ui, repo, matchfn, diffopts, tmpl, mapfile, buffered):
        changeset_printer.__init__(self, ui, repo, matchfn, diffopts, buffered)
        formatnode = ui.debugflag and (lambda x: x) or (lambda x: x[:12])
        defaulttempl = {
            'parent': '{rev}:{node|formatnode} ',
            'manifest': '{rev}:{node|formatnode}',
            'file_copy': '{name} ({source})',
            'extra': '{key}={value|stringescape}'
            }
        # filecopy is preserved for compatibility reasons
        defaulttempl['filecopy'] = defaulttempl['file_copy']
        self.t = templater.templater(mapfile, {'formatnode': formatnode},
                                     cache=defaulttempl)
        if tmpl:
            self.t.cache['changeset'] = tmpl

        self.cache = {}

    def _show(self, ctx, copies, matchfn, props):
        '''show a single changeset or file revision'''

        showlist = templatekw.showlist

        # showparents() behaviour depends on ui trace level which
        # causes unexpected behaviours at templating level and makes
        # it harder to extract it in a standalone function. Its
        # behaviour cannot be changed so leave it here for now.
        def showparents(**args):
            ctx = args['ctx']
            parents = [[('rev', p.rev()),
                        ('node', p.hex()),
                        ('phase', p.phasestr())]
                       for p in self._meaningful_parentrevs(ctx)]
            return showlist('parent', parents, **args)

        props = props.copy()
        props.update(templatekw.keywords)
        props['parents'] = showparents
        props['templ'] = self.t
        props['ctx'] = ctx
        props['repo'] = self.repo
        props['revcache'] = {'copies': copies}
        props['cache'] = self.cache

        # find correct templates for current mode

        tmplmodes = [
            (True, None),
            (self.ui.verbose, 'verbose'),
            (self.ui.quiet, 'quiet'),
            (self.ui.debugflag, 'debug'),
        ]

        types = {'header': '', 'footer': '', 'changeset': 'changeset'}
        for mode, postfix in tmplmodes:
            for type in types:
                cur = postfix and ('%s_%s' % (type, postfix)) or type
                if mode and cur in self.t:
                    types[type] = cur

        try:

            # write header
            if types['header']:
                h = templater.stringify(self.t(types['header'], **props))
                if self.buffered:
                    self.header[ctx.rev()] = h
                else:
                    if self.lastheader != h:
                        self.lastheader = h
                        self.ui.write(h)

            # write changeset metadata, then patch if requested
            key = types['changeset']
            self.ui.write(templater.stringify(self.t(key, **props)))
            self.showpatch(ctx.node(), matchfn)

            if types['footer']:
                if not self.footer:
                    self.footer = templater.stringify(self.t(types['footer'],
                                                      **props))

        except KeyError, inst:
            msg = _("%s: no key named '%s'")
            raise util.Abort(msg % (self.t.mapfile, inst.args[0]))
        except SyntaxError, inst:
            raise util.Abort('%s: %s' % (self.t.mapfile, inst.args[0]))

def gettemplate(ui, tmpl, style):
    """
    Find the template matching the given template spec or style.
    """

    # ui settings
    if not tmpl and not style: # templates are stronger than styles
        tmpl = ui.config('ui', 'logtemplate')
        if tmpl:
            try:
                tmpl = templater.parsestring(tmpl)
            except SyntaxError:
                pass
            return tmpl, None
        else:
            style = util.expandpath(ui.config('ui', 'style', ''))

    if not tmpl and style:
        mapfile = style
        if not os.path.split(mapfile)[0]:
            mapname = (templater.templatepath('map-cmdline.' + mapfile)
                       or templater.templatepath(mapfile))
            if mapname:
                mapfile = mapname
            return None, mapfile

    if not tmpl:
        return None, None

    # looks like a literal template?
    if '{' in tmpl:
        return tmpl, None

    # perhaps a stock style?
    if not os.path.split(tmpl)[0]:
        mapname = (templater.templatepath('map-cmdline.' + tmpl)
                   or templater.templatepath(tmpl))
        if mapname and os.path.isfile(mapname):
            return None, mapname

    # perhaps it's a reference to [templates]
    t = ui.config('templates', tmpl)
    if t:
        try:
            tmpl = templater.parsestring(t)
        except SyntaxError:
            tmpl = t
        return tmpl, None

    if tmpl == 'list':
        ui.write(_("available styles: %s\n") % templater.stylelist())
        raise util.Abort(_("specify a template"))

    # perhaps it's a path to a map or a template
    if ('/' in tmpl or '\\' in tmpl) and os.path.isfile(tmpl):
        # is it a mapfile for a style?
        if os.path.basename(tmpl).startswith("map-"):
            return None, os.path.realpath(tmpl)
        tmpl = open(tmpl).read()
        return tmpl, None

    # constant string?
    return tmpl, None

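As a minimal illustration of the precedence `gettemplate` applies above (literal template, then stock style, then a `[templates]` config entry, then a file path, then a constant string), here is a standalone sketch. `resolvespec` and its arguments are hypothetical names for this example only, not Mercurial APIs, and it deliberately omits the `'list'` and `map-*` special cases:

```python
# Hypothetical sketch of gettemplate()'s resolution order; not Mercurial API.
import os

def resolvespec(spec, templates_section, stylepaths):
    if '{' in spec:                  # looks like a literal template
        return spec, None
    if spec in stylepaths:           # a stock style: return its map file
        return None, stylepaths[spec]
    if spec in templates_section:    # a [templates] config entry
        return templates_section[spec], None
    if os.path.isfile(spec):         # a path to a template file
        return open(spec).read(), None
    return spec, None                # fall back to a constant string

tmpl, mapfile = resolvespec('{rev}:{node}', {}, {})
```

Like the real function, the sketch returns a `(template, mapfile)` pair in which exactly one side is meaningful.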
def show_changeset(ui, repo, opts, buffered=False):
    """show one changeset using template or regular display.

    Display format will be the first non-empty hit of:
    1. option 'template'
    2. option 'style'
    3. [ui] setting 'logtemplate'
    4. [ui] setting 'style'
    If all of these values are either unset or the empty string,
    regular display via changeset_printer() is done.
    """
    # options
    matchfn = None
    if opts.get('patch') or opts.get('stat'):
        matchfn = scmutil.matchall(repo)

    if opts.get('template') == 'json':
        return jsonchangeset(ui, repo, matchfn, opts, buffered)

    tmpl, mapfile = gettemplate(ui, opts.get('template'), opts.get('style'))

    if not tmpl and not mapfile:
        return changeset_printer(ui, repo, matchfn, opts, buffered)

    try:
        t = changeset_templater(ui, repo, matchfn, opts, tmpl, mapfile,
                                buffered)
    except SyntaxError, inst:
        raise util.Abort(inst.args[0])
    return t

def showmarker(ui, marker):
    """utility function to display obsolescence marker in a readable way

    To be used by debug function."""
    ui.write(hex(marker.precnode()))
    for repl in marker.succnodes():
        ui.write(' ')
        ui.write(hex(repl))
    ui.write(' %X ' % marker.flags())
    parents = marker.parentnodes()
    if parents is not None:
        ui.write('{%s} ' % ', '.join(hex(p) for p in parents))
    ui.write('(%s) ' % util.datestr(marker.date()))
    ui.write('{%s}' % (', '.join('%r: %r' % t for t in
                                 sorted(marker.metadata().items())
                                 if t[0] != 'date')))
    ui.write('\n')

def finddate(ui, repo, date):
    """Find the tipmost changeset that matches the given date spec"""

    df = util.matchdate(date)
    m = scmutil.matchall(repo)
    results = {}

    def prep(ctx, fns):
        d = ctx.date()
        if df(d[0]):
            results[ctx.rev()] = d

    for ctx in walkchangerevs(repo, m, {'rev': None}, prep):
        rev = ctx.rev()
        if rev in results:
            ui.status(_("found revision %s from %s\n") %
                      (rev, util.datestr(results[rev])))
            return str(rev)

    raise util.Abort(_("revision matching date not found"))

def increasingwindows(windowsize=8, sizelimit=512):
    while True:
        yield windowsize
        if windowsize < sizelimit:
            windowsize *= 2

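increasingwindows() above drives the windowed history traversal: it doubles the window size each iteration until it reaches sizelimit, then keeps yielding the cap. A self-contained check (the generator is repeated verbatim so the snippet runs on its own):

```python
import itertools

def increasingwindows(windowsize=8, sizelimit=512):
    # doubles until the cap, then yields the cap forever
    while True:
        yield windowsize
        if windowsize < sizelimit:
            windowsize *= 2

sizes = list(itertools.islice(increasingwindows(), 9))
# first windows: 8, 16, 32, 64, 128, 256, 512, then capped at 512
```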
class FileWalkError(Exception):
    pass

def walkfilerevs(repo, match, follow, revs, fncache):
    '''Walks the file history for the matched files.

    Returns the changeset revs that are involved in the file history.

    Throws FileWalkError if the file history can't be walked using
    filelogs alone.
    '''
    wanted = set()
    copies = []
    minrev, maxrev = min(revs), max(revs)
    def filerevgen(filelog, last):
        """
        Only files, no patterns. Check the history of each file.

        Examines filelog entries within minrev, maxrev linkrev range
        Returns an iterator yielding (linkrev, parentlinkrevs, copied)
        tuples in backwards order
        """
        cl_count = len(repo)
        revs = []
        for j in xrange(0, last + 1):
            linkrev = filelog.linkrev(j)
            if linkrev < minrev:
                continue
            # only yield rev for which we have the changelog, it can
            # happen while doing "hg log" during a pull or commit
            if linkrev >= cl_count:
                break

            parentlinkrevs = []
            for p in filelog.parentrevs(j):
                if p != nullrev:
                    parentlinkrevs.append(filelog.linkrev(p))
            n = filelog.node(j)
            revs.append((linkrev, parentlinkrevs,
                         follow and filelog.renamed(n)))

        return reversed(revs)
    def iterfiles():
        pctx = repo['.']
        for filename in match.files():
            if follow:
                if filename not in pctx:
                    raise util.Abort(_('cannot follow file not in parent '
                                       'revision: "%s"') % filename)
                yield filename, pctx[filename].filenode()
            else:
                yield filename, None
        for filename_node in copies:
            yield filename_node

    for file_, node in iterfiles():
        filelog = repo.file(file_)
        if not len(filelog):
            if node is None:
                # A zero count may be a directory or deleted file, so
                # try to find matching entries on the slow path.
                if follow:
                    raise util.Abort(
                        _('cannot follow nonexistent file: "%s"') % file_)
                raise FileWalkError("Cannot walk via filelog")
            else:
                continue

        if node is None:
            last = len(filelog) - 1
        else:
            last = filelog.rev(node)

        # keep track of all ancestors of the file
        ancestors = set([filelog.linkrev(last)])

        # iterate from latest to oldest revision
        for rev, flparentlinkrevs, copied in filerevgen(filelog, last):
            if not follow:
                if rev > maxrev:
                    continue
            else:
                # Note that last might not be the first interesting
                # rev to us:
                # if the file has been changed after maxrev, we'll
                # have linkrev(last) > maxrev, and we still need
                # to explore the file graph
                if rev not in ancestors:
                    continue
                # XXX insert 1327 fix here
                if flparentlinkrevs:
                    ancestors.update(flparentlinkrevs)

            fncache.setdefault(rev, []).append(file_)
            wanted.add(rev)
            if copied:
                copies.append(copied)

    return wanted

class _followfilter(object):
    def __init__(self, repo, onlyfirst=False):
        self.repo = repo
        self.startrev = nullrev
        self.roots = set()
        self.onlyfirst = onlyfirst

    def match(self, rev):
        def realparents(rev):
            if self.onlyfirst:
                return self.repo.changelog.parentrevs(rev)[0:1]
            else:
                return filter(lambda x: x != nullrev,
                              self.repo.changelog.parentrevs(rev))

        if self.startrev == nullrev:
            self.startrev = rev
            return True

        if rev > self.startrev:
            # forward: all descendants
            if not self.roots:
                self.roots.add(self.startrev)
            for parent in realparents(rev):
                if parent in self.roots:
                    self.roots.add(rev)
                    return True
        else:
            # backwards: all parents
            if not self.roots:
                self.roots.update(realparents(self.startrev))
            if rev in self.roots:
                self.roots.remove(rev)
                self.roots.update(realparents(rev))
                return True

        return False

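_followfilter's root-set bookkeeping is easier to see on a concrete history. The toy below exercises both branches on a linear history where the parent of r is r - 1; `followfilter` here is a simplified stand-in for illustration, not the class above (no repo, the parent function is injected):

```python
class followfilter(object):
    # simplified stand-in for _followfilter, with the parent function injected
    def __init__(self, parentrevs):
        self.parentrevs = parentrevs
        self.startrev = None
        self.roots = set()

    def match(self, rev):
        if self.startrev is None:
            self.startrev = rev
            return True
        if rev > self.startrev:
            # forward: descendants of startrev
            if not self.roots:
                self.roots.add(self.startrev)
            for parent in self.parentrevs(rev):
                if parent in self.roots:
                    self.roots.add(rev)
                    return True
        else:
            # backwards: ancestors of startrev
            if not self.roots:
                self.roots.update(self.parentrevs(self.startrev))
            if rev in self.roots:
                self.roots.remove(rev)
                self.roots.update(self.parentrevs(rev))
                return True
        return False

ff = followfilter(lambda rev: [rev - 1] if rev > 0 else [])
matched = [r for r in range(5, 0, -1) if ff.match(r)]
# on a linear history, every ancestor of the start rev matches
```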
def walkchangerevs(repo, match, opts, prepare):
    '''Iterate over files and the revs in which they changed.

    Callers most commonly need to iterate backwards over the history
    in which they are interested. Doing so has awful (quadratic-looking)
    performance, so we use iterators in a "windowed" way.

    We walk a window of revisions in the desired order. Within the
    window, we first walk forwards to gather data, then in the desired
    order (usually backwards) to display it.

    This function returns an iterator yielding contexts. Before
    yielding each context, the iterator will first call the prepare
    function on each context in the window in forward order.'''

    follow = opts.get('follow') or opts.get('follow_first')
    revs = _logrevs(repo, opts)
    if not revs:
        return []
    wanted = set()
    slowpath = match.anypats() or (match.files() and opts.get('removed'))
    fncache = {}
    change = repo.changectx

    # First step is to fill wanted, the set of revisions that we want to yield.
    # When it does not induce extra cost, we also fill fncache for revisions in
    # wanted: a cache of filenames that were changed (ctx.files()) and that
    # match the file filtering conditions.

    if match.always():
        # No files, no patterns. Display all revs.
        wanted = revs

    if not slowpath and match.files():
        # We only have to read through the filelog to find wanted revisions

        try:
            wanted = walkfilerevs(repo, match, follow, revs, fncache)
        except FileWalkError:
            slowpath = True

            # We decided to fall back to the slowpath because at least one
            # of the paths was not a file. Check to see if at least one of them
            # existed in history, otherwise simply return
            for path in match.files():
                if path == '.' or path in repo.store:
                    break
            else:
                return []

    if slowpath:
        # We have to read the changelog to match filenames against
        # changed files

        if follow:
            raise util.Abort(_('can only follow copies/renames for explicit '
                               'filenames'))

        # The slow path checks files modified in every changeset.
        # This is really slow on large repos, so compute the set lazily.
        class lazywantedset(object):
            def __init__(self):
                self.set = set()
                self.revs = set(revs)

            # No need to worry about locality here because it will be accessed
            # in the same order as the increasing window below.
            def __contains__(self, value):
                if value in self.set:
                    return True
                elif not value in self.revs:
                    return False
                else:
                    self.revs.discard(value)
                    ctx = change(value)
                    matches = filter(match, ctx.files())
                    if matches:
                        fncache[value] = matches
                        self.set.add(value)
                        return True
                    return False

            def discard(self, value):
                self.revs.discard(value)
                self.set.discard(value)

        wanted = lazywantedset()

    # it might be worthwhile to do this in the iterator if the rev range
    # is descending and the prune args are all within that range
    for rev in opts.get('prune', ()):
        rev = repo[rev].rev()
        ff = _followfilter(repo)
        stop = min(revs[0], revs[-1])
        for x in xrange(rev, stop - 1, -1):
            if ff.match(x):
                wanted = wanted - [x]

    # Now that wanted is correctly initialized, we can iterate over the
    # revision range, yielding only revisions in wanted.
    def iterate():
        if follow and not match.files():
            ff = _followfilter(repo, onlyfirst=opts.get('follow_first'))
            def want(rev):
                return ff.match(rev) and rev in wanted
        else:
            def want(rev):
                return rev in wanted

        it = iter(revs)
        stopiteration = False
        for windowsize in increasingwindows():
            nrevs = []
            for i in xrange(windowsize):
                try:
                    rev = it.next()
                    if want(rev):
                        nrevs.append(rev)
                except (StopIteration):
                    stopiteration = True
                    break
            for rev in sorted(nrevs):
                fns = fncache.get(rev)
                ctx = change(rev)
                if not fns:
                    def fns_generator():
                        for f in ctx.files():
                            if match(f):
                                yield f
                    fns = fns_generator()
                prepare(ctx, fns)
            for rev in nrevs:
                yield change(rev)

            if stopiteration:
                break

    return iterate()

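The gather-forward / emit-in-order pattern described in walkchangerevs's docstring can be sketched without any repository. `windowed` and its arguments are illustrative names for this sketch only, assuming the prepare step simply records what it saw:

```python
import itertools

def increasingwindows(windowsize=8, sizelimit=512):
    while True:
        yield windowsize
        if windowsize < sizelimit:
            windowsize *= 2

def windowed(revs, prepare):
    # walk revs window by window: call prepare() in ascending order,
    # then yield the window in the originally requested order
    it = iter(revs)
    for windowsize in increasingwindows():
        window = list(itertools.islice(it, windowsize))
        for r in sorted(window):   # forward pass: gather data
            prepare(r)
        for r in window:           # then emit in the requested order
            yield r
        if len(window) < windowsize:
            break

revs = list(range(20, 0, -1))      # e.g. newest-to-oldest revision numbers
prepared = []
out = list(windowed(revs, prepared.append))
```

The first window of 8 is prepared ascending (13..20) but emitted descending, matching the docstring's "walk forwards to gather data, then in the desired order to display it".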
1853 def _makefollowlogfilematcher(repo, files, followfirst):
1853 def _makefollowlogfilematcher(repo, files, followfirst):
1854 # When displaying a revision with --patch --follow FILE, we have
1854 # When displaying a revision with --patch --follow FILE, we have
1855 # to know which file of the revision must be diffed. With
1855 # to know which file of the revision must be diffed. With
1856 # --follow, we want the names of the ancestors of FILE in the
1856 # --follow, we want the names of the ancestors of FILE in the
1857 # revision, stored in "fcache". "fcache" is populated by
1857 # revision, stored in "fcache". "fcache" is populated by
1858 # reproducing the graph traversal already done by --follow revset
1858 # reproducing the graph traversal already done by --follow revset
1859 # and relating linkrevs to file names (which is not "correct" but
1859 # and relating linkrevs to file names (which is not "correct" but
1860 # good enough).
1860 # good enough).
1861 fcache = {}
1861 fcache = {}
1862 fcacheready = [False]
1862 fcacheready = [False]
1863 pctx = repo['.']
1863 pctx = repo['.']
1864
1864
1865 def populate():
1865 def populate():
1866 for fn in files:
1866 for fn in files:
1867 for i in ((pctx[fn],), pctx[fn].ancestors(followfirst=followfirst)):
1867 for i in ((pctx[fn],), pctx[fn].ancestors(followfirst=followfirst)):
1868 for c in i:
1868 for c in i:
1869 fcache.setdefault(c.linkrev(), set()).add(c.path())
1869 fcache.setdefault(c.linkrev(), set()).add(c.path())
1870
1870
1871 def filematcher(rev):
1871 def filematcher(rev):
1872 if not fcacheready[0]:
1872 if not fcacheready[0]:
1873 # Lazy initialization
1873 # Lazy initialization
1874 fcacheready[0] = True
1874 fcacheready[0] = True
1875 populate()
1875 populate()
1876 return scmutil.matchfiles(repo, fcache.get(rev, []))
1876 return scmutil.matchfiles(repo, fcache.get(rev, []))
1877
1877
1878 return filematcher
1878 return filematcher

def _makenofollowlogfilematcher(repo, pats, opts):
    '''hook for extensions to override the filematcher for non-follow cases'''
    return None

def _makelogrevset(repo, pats, opts, revs):
    """Return (expr, filematcher) where expr is a revset string built
    from log options and file patterns or None. If --stat or --patch
    are not passed filematcher is None. Otherwise it is a callable
    taking a revision number and returning a match object filtering
    the files to be detailed when displaying the revision.
    """
    opt2revset = {
        'no_merges': ('not merge()', None),
        'only_merges': ('merge()', None),
        '_ancestors': ('ancestors(%(val)s)', None),
        '_fancestors': ('_firstancestors(%(val)s)', None),
        '_descendants': ('descendants(%(val)s)', None),
        '_fdescendants': ('_firstdescendants(%(val)s)', None),
        '_matchfiles': ('_matchfiles(%(val)s)', None),
        'date': ('date(%(val)r)', None),
        'branch': ('branch(%(val)r)', ' or '),
        '_patslog': ('filelog(%(val)r)', ' or '),
        '_patsfollow': ('follow(%(val)r)', ' or '),
        '_patsfollowfirst': ('_followfirst(%(val)r)', ' or '),
        'keyword': ('keyword(%(val)r)', ' or '),
        'prune': ('not (%(val)r or ancestors(%(val)r))', ' and '),
        'user': ('user(%(val)r)', ' or '),
        }

    opts = dict(opts)
    # follow or not follow?
    follow = opts.get('follow') or opts.get('follow_first')
    if opts.get('follow_first'):
        followfirst = 1
    else:
        followfirst = 0
    # --follow with FILE behaviour depends on revs...
    it = iter(revs)
    startrev = it.next()
    try:
        followdescendants = startrev < it.next()
    except StopIteration:
        followdescendants = False

    # branch and only_branch are really aliases and must be handled at
    # the same time
    opts['branch'] = opts.get('branch', []) + opts.get('only_branch', [])
    opts['branch'] = [repo.lookupbranch(b) for b in opts['branch']]
    # pats/include/exclude are passed to match.match() directly in
    # _matchfiles() revset but walkchangerevs() builds its matcher with
    # scmutil.match(). The difference is input pats are globbed on
    # platforms without shell expansion (windows).
    wctx = repo[None]
    match, pats = scmutil.matchandpats(wctx, pats, opts)
    slowpath = match.anypats() or (match.files() and opts.get('removed'))
    if not slowpath:
        for f in match.files():
            if follow and f not in wctx:
                # If the file exists, it may be a directory, so let it
                # take the slow path.
                if os.path.exists(repo.wjoin(f)):
                    slowpath = True
                    continue
                else:
                    raise util.Abort(_('cannot follow file not in parent '
                                       'revision: "%s"') % f)
            filelog = repo.file(f)
            if not filelog:
                # A zero count may be a directory or deleted file, so
                # try to find matching entries on the slow path.
                if follow:
                    raise util.Abort(
                        _('cannot follow nonexistent file: "%s"') % f)
                slowpath = True

    # We decided to fall back to the slowpath because at least one
    # of the paths was not a file. Check to see if at least one of them
    # existed in history - in that case, we'll continue down the
    # slowpath; otherwise, we can turn off the slowpath
    if slowpath:
        for path in match.files():
            if path == '.' or path in repo.store:
                break
        else:
            slowpath = False

    fpats = ('_patsfollow', '_patsfollowfirst')
    fnopats = (('_ancestors', '_fancestors'),
               ('_descendants', '_fdescendants'))
    if slowpath:
        # See walkchangerevs() slow path.
        #
        # pats/include/exclude cannot be represented as separate
        # revset expressions as their filtering logic applies at file
        # level. For instance "-I a -X a" matches a revision touching
        # "a" and "b" while "file(a) and not file(b)" does
        # not. Besides, filesets are evaluated against the working
        # directory.
        matchargs = ['r:', 'd:relpath']
        for p in pats:
            matchargs.append('p:' + p)
        for p in opts.get('include', []):
            matchargs.append('i:' + p)
        for p in opts.get('exclude', []):
            matchargs.append('x:' + p)
        matchargs = ','.join(('%r' % p) for p in matchargs)
        opts['_matchfiles'] = matchargs
        if follow:
            opts[fnopats[0][followfirst]] = '.'
    else:
        if follow:
            if pats:
                # follow() revset interprets its file argument as a
                # manifest entry, so use match.files(), not pats.
                opts[fpats[followfirst]] = list(match.files())
            else:
                op = fnopats[followdescendants][followfirst]
                opts[op] = 'rev(%d)' % startrev
        else:
            opts['_patslog'] = list(pats)

    filematcher = None
    if opts.get('patch') or opts.get('stat'):
        # When following files, track renames via a special matcher.
        # If we're forced to take the slowpath it means we're following
        # at least one pattern/directory, so don't bother with rename tracking.
        if follow and not match.always() and not slowpath:
            # _makefollowlogfilematcher expects its files argument to be
            # relative to the repo root, so use match.files(), not pats.
            filematcher = _makefollowlogfilematcher(repo, match.files(),
                                                    followfirst)
        else:
            filematcher = _makenofollowlogfilematcher(repo, pats, opts)
        if filematcher is None:
            filematcher = lambda rev: match

    expr = []
    for op, val in sorted(opts.iteritems()):
        if not val:
            continue
        if op not in opt2revset:
            continue
        revop, andor = opt2revset[op]
        if '%(val)' not in revop:
            expr.append(revop)
        else:
            if not isinstance(val, list):
                e = revop % {'val': val}
            else:
                e = '(' + andor.join((revop % {'val': v}) for v in val) + ')'
            expr.append(e)

    if expr:
        expr = '(' + ' and '.join(expr) + ')'
    else:
        expr = None
    return expr, filematcher

def _logrevs(repo, opts):
    # Default --rev value depends on --follow but --follow behaviour
    # depends on revisions resolved from --rev...
    follow = opts.get('follow') or opts.get('follow_first')
    if opts.get('rev'):
        revs = scmutil.revrange(repo, opts['rev'])
    elif follow and repo.dirstate.p1() == nullid:
        revs = revset.baseset()
    elif follow:
        revs = repo.revs('reverse(:.)')
    else:
        revs = revset.spanset(repo)
        revs.reverse()
    return revs

def getgraphlogrevs(repo, pats, opts):
    """Return (revs, expr, filematcher) where revs is an iterable of
    revision numbers, expr is a revset string built from log options
    and file patterns or None, and used to filter 'revs'. If --stat or
    --patch are not passed filematcher is None. Otherwise it is a
    callable taking a revision number and returning a match object
    filtering the files to be detailed when displaying the revision.
    """
    limit = loglimit(opts)
    revs = _logrevs(repo, opts)
    if not revs:
        return revset.baseset(), None, None
    expr, filematcher = _makelogrevset(repo, pats, opts, revs)
    if opts.get('rev'):
        # User-specified revs might be unsorted, but don't sort before
        # _makelogrevset because it might depend on the order of revs
        revs.sort(reverse=True)
    if expr:
        # Revset matchers often operate faster on revisions in changelog
        # order, because most filters deal with the changelog.
        revs.reverse()
        matcher = revset.match(repo.ui, expr)
        # Revset matches can reorder revisions. "A or B" typically
        # returns the revision matching A then the revision matching B.
        # Sort again to fix that.
        revs = matcher(repo, revs)
        revs.sort(reverse=True)
    if limit is not None:
        limitedrevs = []
        for idx, rev in enumerate(revs):
            if idx >= limit:
                break
            limitedrevs.append(rev)
        revs = revset.baseset(limitedrevs)

    return revs, expr, filematcher

def getlogrevs(repo, pats, opts):
    """Return (revs, expr, filematcher) where revs is an iterable of
    revision numbers, expr is a revset string built from log options
    and file patterns or None, and used to filter 'revs'. If --stat or
    --patch are not passed filematcher is None. Otherwise it is a
    callable taking a revision number and returning a match object
    filtering the files to be detailed when displaying the revision.
    """
    limit = loglimit(opts)
    revs = _logrevs(repo, opts)
    if not revs:
        return revset.baseset([]), None, None
    expr, filematcher = _makelogrevset(repo, pats, opts, revs)
    if expr:
        # Revset matchers often operate faster on revisions in changelog
        # order, because most filters deal with the changelog.
        if not opts.get('rev'):
            revs.reverse()
        matcher = revset.match(repo.ui, expr)
        # Revset matches can reorder revisions. "A or B" typically
        # returns the revision matching A then the revision matching B.
        # Sort again to fix that.
        revs = matcher(repo, revs)
        if not opts.get('rev'):
            revs.sort(reverse=True)
    if limit is not None:
        count = 0
        limitedrevs = []
        it = iter(revs)
        while count < limit:
            try:
                limitedrevs.append(it.next())
            except StopIteration:
                break
            count += 1
        revs = revset.baseset(limitedrevs)

    return revs, expr, filematcher

def displaygraph(ui, dag, displayer, showparents, edgefn, getrenamed=None,
                 filematcher=None):
    seen, state = [], graphmod.asciistate()
    for rev, type, ctx, parents in dag:
        char = 'o'
        if ctx.node() in showparents:
            char = '@'
        elif ctx.obsolete():
            char = 'x'
        elif ctx.closesbranch():
            char = '_'
        copies = None
        if getrenamed and ctx.rev():
            copies = []
            for fn in ctx.files():
                rename = getrenamed(fn, ctx.rev())
                if rename:
                    copies.append((fn, rename[0]))
        revmatchfn = None
        if filematcher is not None:
            revmatchfn = filematcher(ctx.rev())
        displayer.show(ctx, copies=copies, matchfn=revmatchfn)
        lines = displayer.hunk.pop(rev).split('\n')
        if not lines[-1]:
            del lines[-1]
        displayer.flush(rev)
        edges = edgefn(type, char, lines, seen, rev, parents)
        for type, char, lines, coldata in edges:
            graphmod.ascii(ui, state, type, char, lines, coldata)
    displayer.close()

def graphlog(ui, repo, *pats, **opts):
    # Parameters are identical to log command ones
    revs, expr, filematcher = getgraphlogrevs(repo, pats, opts)
    revdag = graphmod.dagwalker(repo, revs)

    getrenamed = None
    if opts.get('copies'):
        endrev = None
        if opts.get('rev'):
            endrev = scmutil.revrange(repo, opts.get('rev')).max() + 1
        getrenamed = templatekw.getrenamedfn(repo, endrev=endrev)
    displayer = show_changeset(ui, repo, opts, buffered=True)
    showparents = [ctx.node() for ctx in repo[None].parents()]
    displaygraph(ui, revdag, displayer, showparents,
                 graphmod.asciiedges, getrenamed, filematcher)

def checkunsupportedgraphflags(pats, opts):
    for op in ["newest_first"]:
        if op in opts and opts[op]:
            raise util.Abort(_("-G/--graph option is incompatible with --%s")
                             % op.replace("_", "-"))

def graphrevs(repo, nodes, opts):
    limit = loglimit(opts)
    nodes.reverse()
    if limit is not None:
        nodes = nodes[:limit]
    return graphmod.nodes(repo, nodes)

def add(ui, repo, match, prefix, explicitonly, **opts):
    join = lambda f: os.path.join(prefix, f)
    bad = []
    oldbad = match.bad
    match.bad = lambda x, y: bad.append(x) or oldbad(x, y)
    names = []
    wctx = repo[None]
    cca = None
    abort, warn = scmutil.checkportabilityalert(ui)
    if abort or warn:
        cca = scmutil.casecollisionauditor(ui, abort, repo.dirstate)
    for f in wctx.walk(match):
        exact = match.exact(f)
        if exact or not explicitonly and f not in wctx and repo.wvfs.lexists(f):
            if cca:
                cca(f)
            names.append(f)
            if ui.verbose or not exact:
                ui.status(_('adding %s\n') % match.rel(f))

    for subpath in sorted(wctx.substate):
        sub = wctx.sub(subpath)
        try:
            submatch = matchmod.narrowmatcher(subpath, match)
            if opts.get('subrepos'):
                bad.extend(sub.add(ui, submatch, prefix, False, **opts))
            else:
                bad.extend(sub.add(ui, submatch, prefix, True, **opts))
        except error.LookupError:
            ui.status(_("skipping missing subrepository: %s\n")
                      % join(subpath))

    if not opts.get('dry_run'):
        rejected = wctx.add(names, prefix)
        bad.extend(f for f in rejected if f in match.files())
    return bad

def forget(ui, repo, match, prefix, explicitonly):
    join = lambda f: os.path.join(prefix, f)
    bad = []
    oldbad = match.bad
    match.bad = lambda x, y: bad.append(x) or oldbad(x, y)
    wctx = repo[None]
    forgot = []
    s = repo.status(match=match, clean=True)
    forget = sorted(s[0] + s[1] + s[3] + s[6])
    if explicitonly:
        forget = [f for f in forget if match.exact(f)]

    for subpath in sorted(wctx.substate):
        sub = wctx.sub(subpath)
        try:
            submatch = matchmod.narrowmatcher(subpath, match)
            subbad, subforgot = sub.forget(submatch, prefix)
            bad.extend([subpath + '/' + f for f in subbad])
            forgot.extend([subpath + '/' + f for f in subforgot])
        except error.LookupError:
            ui.status(_("skipping missing subrepository: %s\n")
                      % join(subpath))

    if not explicitonly:
        for f in match.files():
            if f not in repo.dirstate and not repo.wvfs.isdir(f):
                if f not in forgot:
                    if repo.wvfs.exists(f):
                        # Don't complain if the exact case match wasn't given.
                        # But don't do this until after checking 'forgot', so
                        # that subrepo files aren't normalized, and this op is
                        # purely from data cached by the status walk above.
                        if repo.dirstate.normalize(f) in repo.dirstate:
                            continue
                        ui.warn(_('not removing %s: '
                                  'file is already untracked\n')
                                % match.rel(f))
                    bad.append(f)

    for f in forget:
        if ui.verbose or not match.exact(f):
            ui.status(_('removing %s\n') % match.rel(f))

    rejected = wctx.forget(forget, prefix)
    bad.extend(f for f in rejected if f in match.files())
    forgot.extend(f for f in forget if f not in rejected)
    return bad, forgot

def files(ui, ctx, m, fm, fmt, subrepos):
    rev = ctx.rev()
    ret = 1
    ds = ctx.repo().dirstate

    for f in ctx.matches(m):
        if rev is None and ds[f] == 'r':
            continue
        fm.startitem()
        if ui.verbose:
            fc = ctx[f]
            fm.write('size flags', '% 10d % 1s ', fc.size(), fc.flags())
        fm.data(abspath=f)
        fm.write('path', fmt, m.rel(f))
        ret = 0

    if subrepos:
        for subpath in sorted(ctx.substate):
            sub = ctx.sub(subpath)
            try:
                submatch = matchmod.narrowmatcher(subpath, m)
                if sub.printfiles(ui, submatch, fm, fmt) == 0:
                    ret = 0
            except error.LookupError:
                ui.status(_("skipping missing subrepository: %s\n")
                          % m.abs(subpath))

    return ret
2302
2302
def remove(ui, repo, m, prefix, after, force, subrepos):
    join = lambda f: os.path.join(prefix, f)
    ret = 0
    s = repo.status(match=m, clean=True)
    modified, added, deleted, clean = s[0], s[1], s[3], s[6]

    wctx = repo[None]

    for subpath in sorted(wctx.substate):
        def matchessubrepo(matcher, subpath):
            if matcher.exact(subpath):
                return True
            for f in matcher.files():
                if f.startswith(subpath):
                    return True
            return False

        if subrepos or matchessubrepo(m, subpath):
            sub = wctx.sub(subpath)
            try:
                submatch = matchmod.narrowmatcher(subpath, m)
                if sub.removefiles(submatch, prefix, after, force, subrepos):
                    ret = 1
            except error.LookupError:
                ui.status(_("skipping missing subrepository: %s\n")
                          % join(subpath))

    # warn about failure to delete explicit files/dirs
    deleteddirs = util.dirs(deleted)
    for f in m.files():
        def insubrepo():
            for subpath in wctx.substate:
                if f.startswith(subpath):
                    return True
            return False

        isdir = f in deleteddirs or wctx.hasdir(f)
        if f in repo.dirstate or isdir or f == '.' or insubrepo():
            continue

        if repo.wvfs.exists(f):
            if repo.wvfs.isdir(f):
                ui.warn(_('not removing %s: no tracked files\n')
                        % m.rel(f))
            else:
                ui.warn(_('not removing %s: file is untracked\n')
                        % m.rel(f))
        # missing files will generate a warning elsewhere
        ret = 1

    if force:
        list = modified + deleted + clean + added
    elif after:
        list = deleted
        for f in modified + added + clean:
            ui.warn(_('not removing %s: file still exists\n') % m.rel(f))
            ret = 1
    else:
        list = deleted + clean
        for f in modified:
            ui.warn(_('not removing %s: file is modified (use -f'
                      ' to force removal)\n') % m.rel(f))
            ret = 1
        for f in added:
            ui.warn(_('not removing %s: file has been marked for add'
                      ' (use forget to undo)\n') % m.rel(f))
            ret = 1

    for f in sorted(list):
        if ui.verbose or not m.exact(f):
            ui.status(_('removing %s\n') % m.rel(f))

    wlock = repo.wlock()
    try:
        if not after:
            for f in list:
                if f in added:
                    continue # we never unlink added files on remove
                util.unlinkpath(repo.wjoin(f), ignoremissing=True)
        repo[None].forget(list)
    finally:
        wlock.release()

    return ret

def cat(ui, repo, ctx, matcher, prefix, **opts):
    err = 1

    def write(path):
        fp = makefileobj(repo, opts.get('output'), ctx.node(),
                         pathname=os.path.join(prefix, path))
        data = ctx[path].data()
        if opts.get('decode'):
            data = repo.wwritedata(path, data)
        fp.write(data)
        fp.close()

    # Automation often uses hg cat on single files, so special case it
    # for performance to avoid the cost of parsing the manifest.
    if len(matcher.files()) == 1 and not matcher.anypats():
        file = matcher.files()[0]
        mf = repo.manifest
        mfnode = ctx.manifestnode()
        if mfnode and mf.find(mfnode, file)[0]:
            write(file)
            return 0

    # Don't warn about "missing" files that are really in subrepos
    bad = matcher.bad

    def badfn(path, msg):
        for subpath in ctx.substate:
            if path.startswith(subpath):
                return
        bad(path, msg)

    matcher.bad = badfn

    for abs in ctx.walk(matcher):
        write(abs)
        err = 0

    matcher.bad = bad

    for subpath in sorted(ctx.substate):
        sub = ctx.sub(subpath)
        try:
            submatch = matchmod.narrowmatcher(subpath, matcher)

            if not sub.cat(submatch, os.path.join(prefix, sub._path),
                           **opts):
                err = 0
        except error.RepoLookupError:
            ui.status(_("skipping missing subrepository: %s\n")
                      % os.path.join(prefix, subpath))

    return err

def commit(ui, repo, commitfunc, pats, opts):
    '''commit the specified files or all outstanding changes'''
    date = opts.get('date')
    if date:
        opts['date'] = util.parsedate(date)
    message = logmessage(ui, opts)
    matcher = scmutil.match(repo[None], pats, opts)

    # extract addremove carefully -- this function can be called from a command
    # that doesn't support addremove
    if opts.get('addremove'):
        if scmutil.addremove(repo, matcher, "", opts) != 0:
            raise util.Abort(
                _("failed to mark all new/missing files as added/removed"))

    return commitfunc(ui, repo, message, matcher, opts)

def amend(ui, repo, commitfunc, old, extra, pats, opts):
    # amend will reuse the existing user if not specified, but the obsolete
    # marker creation requires that the current user's name is specified.
    if obsolete.isenabled(repo, obsolete.createmarkersopt):
        ui.username() # raise exception if username not set

    ui.note(_('amending changeset %s\n') % old)
    base = old.p1()

    wlock = lock = newid = None
    try:
        wlock = repo.wlock()
        lock = repo.lock()
        tr = repo.transaction('amend')
        try:
            # See if we got a message from -m or -l, if not, open the editor
            # with the message of the changeset to amend
            message = logmessage(ui, opts)
            # ensure logfile does not conflict with later enforcement of the
            # message. potential logfile content has been processed by
            # `logmessage` anyway.
            opts.pop('logfile')
            # First, do a regular commit to record all changes in the working
            # directory (if there are any)
            ui.callhooks = False
            currentbookmark = repo._activebookmark
            try:
                repo._activebookmark = None
                opts['message'] = 'temporary amend commit for %s' % old
                node = commit(ui, repo, commitfunc, pats, opts)
            finally:
                repo._activebookmark = currentbookmark
                ui.callhooks = True
            ctx = repo[node]

            # Participating changesets:
            #
            # node/ctx o - new (intermediate) commit that contains changes
            #          |   from working dir to go into amending commit
            #          |   (or a workingctx if there were no changes)
            #          |
            # old      o - changeset to amend
            #          |
            # base     o - parent of amending changeset

            # Update extra dict from amended commit (e.g. to preserve graft
            # source)
            extra.update(old.extra())

            # Also update it from the intermediate commit or from the wctx
            extra.update(ctx.extra())

            if len(old.parents()) > 1:
                # ctx.files() isn't reliable for merges, so fall back to the
                # slower repo.status() method
                files = set([fn for st in repo.status(base, old)[:3]
                             for fn in st])
            else:
                files = set(old.files())

            # Second, we use either the commit we just did, or if there were no
            # changes the parent of the working directory as the version of the
            # files in the final amend commit
            if node:
                ui.note(_('copying changeset %s to %s\n') % (ctx, base))

                user = ctx.user()
                date = ctx.date()
                # Recompute copies (avoid recording a -> b -> a)
                copied = copies.pathcopies(base, ctx)
                if old.p2():
                    copied.update(copies.pathcopies(old.p2(), ctx))

                # Prune files which were reverted by the updates: if old
                # introduced file X and our intermediate commit, node,
                # renamed that file, then those two files are the same and
                # we can discard X from our list of files. Likewise if X
                # was deleted, it's no longer relevant
                files.update(ctx.files())

                def samefile(f):
                    if f in ctx.manifest():
                        a = ctx.filectx(f)
                        if f in base.manifest():
                            b = base.filectx(f)
                            return (not a.cmp(b)
                                    and a.flags() == b.flags())
                        else:
                            return False
                    else:
                        return f not in base.manifest()
                files = [f for f in files if not samefile(f)]

                def filectxfn(repo, ctx_, path):
                    try:
                        fctx = ctx[path]
                        flags = fctx.flags()
                        mctx = context.memfilectx(repo,
                                                  fctx.path(), fctx.data(),
                                                  islink='l' in flags,
                                                  isexec='x' in flags,
                                                  copied=copied.get(path))
                        return mctx
                    except KeyError:
                        return None
            else:
                ui.note(_('copying changeset %s to %s\n') % (old, base))

                # Use version of files as in the old cset
                def filectxfn(repo, ctx_, path):
                    try:
                        return old.filectx(path)
                    except KeyError:
                        return None

                user = opts.get('user') or old.user()
                date = opts.get('date') or old.date()
            editform = mergeeditform(old, 'commit.amend')
            editor = getcommiteditor(editform=editform, **opts)
            if not message:
                editor = getcommiteditor(edit=True, editform=editform)
                message = old.description()

            pureextra = extra.copy()
            extra['amend_source'] = old.hex()

            new = context.memctx(repo,
                                 parents=[base.node(), old.p2().node()],
                                 text=message,
                                 files=files,
                                 filectxfn=filectxfn,
                                 user=user,
                                 date=date,
                                 extra=extra,
                                 editor=editor)

            newdesc = changelog.stripdesc(new.description())
            if ((not node)
                and newdesc == old.description()
                and user == old.user()
                and date == old.date()
                and pureextra == old.extra()):
                # nothing changed. continuing here would create a new node
                # anyway because of the amend_source noise.
                #
                # This is not what we expect from amend.
                return old.node()

            ph = repo.ui.config('phases', 'new-commit', phases.draft)
            try:
                if opts.get('secret'):
                    commitphase = 'secret'
                else:
                    commitphase = old.phase()
                repo.ui.setconfig('phases', 'new-commit', commitphase, 'amend')
                newid = repo.commitctx(new)
            finally:
                repo.ui.setconfig('phases', 'new-commit', ph, 'amend')
            if newid != old.node():
                # Reroute the working copy parent to the new changeset
                repo.setparents(newid, nullid)

                # Move bookmarks from old parent to amend commit
                bms = repo.nodebookmarks(old.node())
                if bms:
                    marks = repo._bookmarks
                    for bm in bms:
                        marks[bm] = newid
                    marks.write()
            #commit the whole amend process
            createmarkers = obsolete.isenabled(repo, obsolete.createmarkersopt)
            if createmarkers and newid != old.node():
                # mark the new changeset as successor of the rewritten one
                new = repo[newid]
                obs = [(old, (new,))]
                if node:
                    obs.append((ctx, ()))

                obsolete.createmarkers(repo, obs)
            tr.close()
        finally:
            tr.release()
        if not createmarkers and newid != old.node():
            # Strip the intermediate commit (if there was one) and the amended
            # commit
            if node:
                ui.note(_('stripping intermediate changeset %s\n') % ctx)
            ui.note(_('stripping amended changeset %s\n') % old)
            repair.strip(ui, repo, old.node(), topic='amend-backup')
    finally:
        if newid is None:
            repo.dirstate.invalidate()
        lockmod.release(lock, wlock)
    return newid

def commiteditor(repo, ctx, subs, editform=''):
    if ctx.description():
        return ctx.description()
    return commitforceeditor(repo, ctx, subs, editform=editform)

def commitforceeditor(repo, ctx, subs, finishdesc=None, extramsg=None,
                      editform=''):
    if not extramsg:
        extramsg = _("Leave message empty to abort commit.")

    forms = [e for e in editform.split('.') if e]
    forms.insert(0, 'changeset')
    while forms:
        tmpl = repo.ui.config('committemplate', '.'.join(forms))
        if tmpl:
            committext = buildcommittemplate(repo, ctx, subs, extramsg, tmpl)
            break
        forms.pop()
    else:
        committext = buildcommittext(repo, ctx, subs, extramsg)

    # run editor in the repository root
    olddir = os.getcwd()
    os.chdir(repo.root)
    text = repo.ui.edit(committext, ctx.user(), ctx.extra(), editform=editform)
    text = re.sub("(?m)^HG:.*(\n|$)", "", text)
    os.chdir(olddir)

    if finishdesc:
        text = finishdesc(text)
    if not text.strip():
        raise util.Abort(_("empty commit message"))

    return text

def buildcommittemplate(repo, ctx, subs, extramsg, tmpl):
    ui = repo.ui
    tmpl, mapfile = gettemplate(ui, tmpl, None)

    try:
        t = changeset_templater(ui, repo, None, {}, tmpl, mapfile, False)
    except SyntaxError, inst:
        raise util.Abort(inst.args[0])

    for k, v in repo.ui.configitems('committemplate'):
        if k != 'changeset':
            t.t.cache[k] = v

    if not extramsg:
        extramsg = '' # ensure that extramsg is a string

    ui.pushbuffer()
    t.show(ctx, extramsg=extramsg)
    return ui.popbuffer()

def buildcommittext(repo, ctx, subs, extramsg):
    edittext = []
    modified, added, removed = ctx.modified(), ctx.added(), ctx.removed()
    if ctx.description():
        edittext.append(ctx.description())
    edittext.append("")
    edittext.append("") # Empty line between message and comments.
    edittext.append(_("HG: Enter commit message."
                      " Lines beginning with 'HG:' are removed."))
    edittext.append("HG: %s" % extramsg)
    edittext.append("HG: --")
    edittext.append(_("HG: user: %s") % ctx.user())
    if ctx.p2():
        edittext.append(_("HG: branch merge"))
    if ctx.branch():
        edittext.append(_("HG: branch '%s'") % ctx.branch())
    if bookmarks.isactivewdirparent(repo):
        edittext.append(_("HG: bookmark '%s'") % repo._activebookmark)
    edittext.extend([_("HG: subrepo %s") % s for s in subs])
    edittext.extend([_("HG: added %s") % f for f in added])
    edittext.extend([_("HG: changed %s") % f for f in modified])
    edittext.extend([_("HG: removed %s") % f for f in removed])
    if not added and not modified and not removed:
        edittext.append(_("HG: no files changed"))
    edittext.append("")

    return "\n".join(edittext)

def commitstatus(repo, node, branch, bheads=None, opts={}):
    ctx = repo[node]
    parents = ctx.parents()

    if (not opts.get('amend') and bheads and node not in bheads and not
        [x for x in parents if x.node() in bheads and x.branch() == branch]):
        repo.ui.status(_('created new head\n'))
        # The message is not printed for initial roots. For the other
        # changesets, it is printed in the following situations:
        #
        # Par column: for the 2 parents with ...
        #   N: null or no parent
        #   B: parent is on another named branch
        #   C: parent is a regular non head changeset
        #   H: parent was a branch head of the current branch
        # Msg column: whether we print "created new head" message
        # In the following, it is assumed that there already exists some
        # initial branch heads of the current branch, otherwise nothing is
        # printed anyway.
        #
        # Par Msg Comment
        # N N  y  additional topo root
        #
        # B N  y  additional branch root
        # C N  y  additional topo head
        # H N  n  usual case
        #
        # B B  y  weird additional branch root
        # C B  y  branch merge
        # H B  n  merge with named branch
        #
        # C C  y  additional head from merge
        # C H  n  merge with a head
        #
        # H H  n  head merge: head count decreases

    if not opts.get('close_branch'):
        for r in parents:
            if r.closesbranch() and r.branch() == branch:
                repo.ui.status(_('reopening closed branch head %d\n') % r)

    if repo.ui.debugflag:
        repo.ui.write(_('committed changeset %d:%s\n') % (int(ctx), ctx.hex()))
    elif repo.ui.verbose:
        repo.ui.write(_('committed changeset %d:%s\n') % (int(ctx), ctx))

2782 def revert(ui, repo, ctx, parents, *pats, **opts):
2782 def revert(ui, repo, ctx, parents, *pats, **opts):
2783 parent, p2 = parents
2783 parent, p2 = parents
2784 node = ctx.node()
2784 node = ctx.node()
2785
2785
2786 mf = ctx.manifest()
2786 mf = ctx.manifest()
    if node == p2:
        parent = p2
    if node == parent:
        pmf = mf
    else:
        pmf = None

    # need all matching names in dirstate and manifest of target rev,
    # so have to walk both. do not print errors if files exist in one
    # but not the other. in both cases, filesets should be evaluated against
    # workingctx to get consistent result (issue4497). this means 'set:**'
    # cannot be used to select missing files from target rev.

    # `names` is a mapping for all elements in working copy and target revision
    # The mapping is in the form:
    #   <abs path in repo> -> (<path from CWD>, <exactly specified by matcher?>)
    names = {}

    wlock = repo.wlock()
    try:
        ## filling of the `names` mapping
        # walk dirstate to fill `names`

        interactive = opts.get('interactive', False)
        wctx = repo[None]
        m = scmutil.match(wctx, pats, opts)

        # we'll need this later
        targetsubs = sorted(s for s in wctx.substate if m(s))

        if not m.always():
            m.bad = lambda x, y: False
            for abs in repo.walk(m):
                names[abs] = m.rel(abs), m.exact(abs)

            # walk target manifest to fill `names`

            def badfn(path, msg):
                if path in names:
                    return
                if path in ctx.substate:
                    return
                path_ = path + '/'
                for f in names:
                    if f.startswith(path_):
                        return
                ui.warn("%s: %s\n" % (m.rel(path), msg))

            m.bad = badfn
            for abs in ctx.walk(m):
                if abs not in names:
                    names[abs] = m.rel(abs), m.exact(abs)

            # Find status of all files in `names`.
            m = scmutil.matchfiles(repo, names)

            changes = repo.status(node1=node, match=m,
                                  unknown=True, ignored=True, clean=True)
        else:
            changes = repo.status(node1=node, match=m)
            for kind in changes:
                for abs in kind:
                    names[abs] = m.rel(abs), m.exact(abs)

            m = scmutil.matchfiles(repo, names)

        modified = set(changes.modified)
        added = set(changes.added)
        removed = set(changes.removed)
        _deleted = set(changes.deleted)
        unknown = set(changes.unknown)
        unknown.update(changes.ignored)
        clean = set(changes.clean)
        modadded = set()

        # split between files known in target manifest and the others
        smf = set(mf)

        # determine the exact nature of the deleted files
        deladded = _deleted - smf
        deleted = _deleted - deladded

        # We need to account for the state of the file in the dirstate,
        # even when we revert against something other than the parent. This
        # will slightly alter the behavior of revert (doing backup or not,
        # delete or just forget etc).
        if parent == node:
            dsmodified = modified
            dsadded = added
            dsremoved = removed
            # store all local modifications, useful later for rename detection
            localchanges = dsmodified | dsadded
            modified, added, removed = set(), set(), set()
        else:
            changes = repo.status(node1=parent, match=m)
            dsmodified = set(changes.modified)
            dsadded = set(changes.added)
            dsremoved = set(changes.removed)
            # store all local modifications, useful later for rename detection
            localchanges = dsmodified | dsadded

            # only take removes between wc and target into account
            clean |= dsremoved - removed
            dsremoved &= removed
            # distinguish between dirstate removes and the others
            removed -= dsremoved

            modadded = added & dsmodified
            added -= modadded

            # tell newly modified apart.
            dsmodified &= modified
            dsmodified |= modified & dsadded # dirstate added may need backup
            modified -= dsmodified

            # We need to wait for some post-processing to update this set
            # before making the distinction. The dirstate will be used for
            # that purpose.
            dsadded = added

        # in case of merge, files that are actually added can be reported as
        # modified, we need to post process the result
        if p2 != nullid:
            if pmf is None:
                # only need parent manifest in the merge case,
                # so do not read by default
                pmf = repo[parent].manifest()
            mergeadd = dsmodified - set(pmf)
            dsadded |= mergeadd
            dsmodified -= mergeadd

        # if f is a rename, update `names` to also revert the source
        cwd = repo.getcwd()
        for f in localchanges:
            src = repo.dirstate.copied(f)
            # XXX should we check for rename down to target node?
            if src and src not in names and repo.dirstate[src] == 'r':
                dsremoved.add(src)
                names[src] = (repo.pathto(src, cwd), True)

        # distinguish between files to forget and the others
        added = set()
        for abs in dsadded:
            if repo.dirstate[abs] != 'a':
                added.add(abs)
        dsadded -= added

        for abs in deladded:
            if repo.dirstate[abs] == 'a':
                dsadded.add(abs)
        deladded -= dsadded

        # For files marked as removed, we check if an unknown file is present
        # at the same path. If such a file exists it may need to be backed up.
        # Making the distinction at this stage helps have simpler backup
        # logic.
        removunk = set()
        for abs in removed:
            target = repo.wjoin(abs)
            if os.path.lexists(target):
                removunk.add(abs)
        removed -= removunk

        dsremovunk = set()
        for abs in dsremoved:
            target = repo.wjoin(abs)
            if os.path.lexists(target):
                dsremovunk.add(abs)
        dsremoved -= dsremovunk

        # actions to be actually performed by revert
        # (<list of files>, <message>) tuple
        actions = {'revert': ([], _('reverting %s\n')),
                   'add': ([], _('adding %s\n')),
                   'remove': ([], _('removing %s\n')),
                   'drop': ([], _('removing %s\n')),
                   'forget': ([], _('forgetting %s\n')),
                   'undelete': ([], _('undeleting %s\n')),
                   'noop': (None, _('no changes needed to %s\n')),
                   'unknown': (None, _('file not managed: %s\n')),
                   }

        # "constants" that convey the backup strategy.
        # All set to `discard` if `no-backup` is set, to avoid checking
        # no_backup lower in the code.
        # These values are ordered for comparison purposes
        backup = 2  # unconditionally do backup
        check = 1   # check if the existing file differs from target
        discard = 0 # never do backup
        if opts.get('no_backup'):
            backup = check = discard

        backupanddel = actions['remove']
        if not opts.get('no_backup'):
            backupanddel = actions['drop']

        disptable = (
            # dispatch table:
            #   file state
            #   action
            #   make backup

            ## Sets that result in changes to files on disk
            # Modified compared to target, no local change
            (modified, actions['revert'], discard),
            # Modified compared to target, but local file is deleted
            (deleted, actions['revert'], discard),
            # Modified compared to target, local change
            (dsmodified, actions['revert'], backup),
            # Added since target
            (added, actions['remove'], discard),
            # Added in working directory
            (dsadded, actions['forget'], discard),
            # Added since target, have local modification
            (modadded, backupanddel, backup),
            # Added since target but file is missing in working directory
            (deladded, actions['drop'], discard),
            # Removed since target, before working copy parent
            (removed, actions['add'], discard),
            # Same as `removed` but an unknown file exists at the same path
            (removunk, actions['add'], check),
            # Removed since target, marked as such in working copy parent
            (dsremoved, actions['undelete'], discard),
            # Same as `dsremoved` but an unknown file exists at the same path
            (dsremovunk, actions['undelete'], check),
            ## the following sets do not result in any file changes
            # File with no modification
            (clean, actions['noop'], discard),
            # Existing file, not tracked anywhere
            (unknown, actions['unknown'], discard),
            )

        for abs, (rel, exact) in sorted(names.items()):
            # target file to be touched on disk (relative to cwd)
            target = repo.wjoin(abs)
            # search the entry in the dispatch table.
            # if the file is in any of these sets, it was touched in the working
            # directory parent and we are sure it needs to be reverted.
            for table, (xlist, msg), dobackup in disptable:
                if abs not in table:
                    continue
                if xlist is not None:
                    xlist.append(abs)
                    if dobackup and (backup <= dobackup
                                     or wctx[abs].cmp(ctx[abs])):
                        bakname = "%s.orig" % rel
                        ui.note(_('saving current version of %s as %s\n') %
                                (rel, bakname))
                        if not opts.get('dry_run'):
                            if interactive:
                                util.copyfile(target, bakname)
                            else:
                                util.rename(target, bakname)
                    if ui.verbose or not exact:
                        if not isinstance(msg, basestring):
                            msg = msg(abs)
                        ui.status(msg % rel)
                elif exact:
                    ui.warn(msg % rel)
                break

        if not opts.get('dry_run'):
            needdata = ('revert', 'add', 'undelete')
            _revertprefetch(repo, ctx, *[actions[name][0] for name in needdata])
            _performrevert(repo, parents, ctx, actions, interactive)

        if targetsubs:
            # Revert the subrepos on the revert list
            for sub in targetsubs:
                try:
                    wctx.sub(sub).revert(ctx.substate[sub], *pats, **opts)
                except KeyError:
                    raise util.Abort("subrepository '%s' does not exist in %s!"
                                     % (sub, short(ctx.node())))
    finally:
        wlock.release()

def _revertprefetch(repo, ctx, *files):
    """Let extensions that change the storage layer prefetch content"""
    pass

def _performrevert(repo, parents, ctx, actions, interactive=False):
    """function that actually performs all the actions computed for revert

    This is an independent function to let extensions plug in and react to
    the imminent revert.

    Make sure you have the working directory locked when calling this function.
    """
    parent, p2 = parents
    node = ctx.node()
    def checkout(f):
        fc = ctx[f]
        return repo.wwrite(f, fc.data(), fc.flags())

    audit_path = pathutil.pathauditor(repo.root)
    for f in actions['forget'][0]:
        repo.dirstate.drop(f)
    for f in actions['remove'][0]:
        audit_path(f)
        try:
            util.unlinkpath(repo.wjoin(f))
        except OSError:
            pass
        repo.dirstate.remove(f)
    for f in actions['drop'][0]:
        audit_path(f)
        repo.dirstate.remove(f)

    normal = None
    if node == parent:
        # We're reverting to our parent. If possible, we'd like status
        # to report the file as clean. We have to use normallookup for
        # merges to avoid losing information about merged/dirty files.
        if p2 != nullid:
            normal = repo.dirstate.normallookup
        else:
            normal = repo.dirstate.normal

    if interactive:
        # Prompt the user for changes to revert
        torevert = [repo.wjoin(f) for f in actions['revert'][0]]
        m = scmutil.match(ctx, torevert, {})
        diff = patch.diff(repo, None, ctx.node(), m)
        originalchunks = patch.parsepatch(diff)
        try:
            chunks = recordfilter(repo.ui, originalchunks)
        except patch.PatchError, err:
            raise util.Abort(_('error parsing patch: %s') % err)

        # Apply changes
        fp = cStringIO.StringIO()
        for c in chunks:
            c.write(fp)
        dopatch = fp.tell()
        fp.seek(0)
        if dopatch:
            try:
                patch.internalpatch(repo.ui, repo, fp, 1, eolmode=None)
            except patch.PatchError, err:
                raise util.Abort(str(err))
        del fp
    else:
        for f in actions['revert'][0]:
            wsize = checkout(f)
            if normal:
                normal(f)
            elif wsize == repo.dirstate._map[f][2]:
                # changes may be overlooked without normallookup,
                # if size isn't changed at reverting
                repo.dirstate.normallookup(f)

    for f in actions['add'][0]:
        checkout(f)
        repo.dirstate.add(f)

    normal = repo.dirstate.normallookup
    if node == parent and p2 == nullid:
        normal = repo.dirstate.normal
    for f in actions['undelete'][0]:
        checkout(f)
        normal(f)

    copied = copies.pathcopies(repo[parent], ctx)

    for f in actions['add'][0] + actions['undelete'][0] + actions['revert'][0]:
        if f in copied:
            repo.dirstate.copy(copied[f], f)

def command(table):
    """Returns a function object to be used as a decorator for making commands.

    This function receives a command table as its argument. The table should
    be a dict.

    The returned function can be used as a decorator for adding commands
    to that command table. This function accepts multiple arguments to define
    a command.

    The first argument is the command name.

    The options argument is an iterable of tuples defining command arguments.
    See ``mercurial.fancyopts.fancyopts()`` for the format of each tuple.

    The synopsis argument defines a short, one line summary of how to use the
    command. This shows up in the help output.

    The norepo argument defines whether the command does not require a
    local repository. Most commands operate against a repository, thus the
    default is False.

    The optionalrepo argument defines whether the command optionally requires
    a local repository.

    The inferrepo argument defines whether to try to find a repository from the
    command line arguments. If True, arguments will be examined for potential
    repository locations. See ``findrepo()``. If a repository is found, it
    will be used.
    """
    def cmd(name, options=(), synopsis=None, norepo=False, optionalrepo=False,
            inferrepo=False):
        def decorator(func):
            if synopsis:
                table[name] = func, list(options), synopsis
            else:
                table[name] = func, list(options)

            if norepo:
                # Avoid import cycle.
                import commands
                commands.norepo += ' %s' % ' '.join(parsealiases(name))

            if optionalrepo:
                import commands
                commands.optionalrepo += ' %s' % ' '.join(parsealiases(name))

            if inferrepo:
                import commands
                commands.inferrepo += ' %s' % ' '.join(parsealiases(name))

            return func
        return decorator

    return cmd
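The decorator factory above can be exercised outside Mercurial with a plain dict as the command table. The sketch below is a minimal standalone re-implementation of the registration pattern (the `norepo`/`optionalrepo`/`inferrepo` bookkeeping is omitted); the `hello` command, its option tuple, and its synopsis are hypothetical examples, not part of Mercurial.

```python
# Minimal standalone sketch of the cmdutil.command() registration pattern.
# The table, command name, and options below are hypothetical.
def command(table):
    def cmd(name, options=(), synopsis=None):
        def decorator(func):
            # mirror the real factory: store (func, options[, synopsis])
            if synopsis:
                table[name] = func, list(options), synopsis
            else:
                table[name] = func, list(options)
            return func
        return decorator
    return cmd

table = {}
cmd = command(table)

@cmd('hello', [('g', 'greeting', 'Hello', 'greeting to use')],
     'hg hello [-g TEXT] NAME')
def hello(ui, repo, name, **opts):
    # a toy command body; real commands receive live ui and repo objects
    return '%s, %s' % (opts.get('greeting', 'Hello'), name)

func, options, synopsis = table['hello']
print(func(None, None, 'world'))  # -> Hello, world
```

Because the decorator returns `func` unchanged, the decorated function stays directly callable while also being registered in the table, which is how the real command table in `commands.py` is populated.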

# a list of (ui, repo, otherpeer, opts, missing) functions called by
# commands.outgoing. "missing" is "missing" of the result of
# "findcommonoutgoing()"
outgoinghooks = util.hooks()

# a list of (ui, repo) functions called by commands.summary
summaryhooks = util.hooks()

# a list of (ui, repo, opts, changes) functions called by commands.summary.
#
# functions should return a tuple of booleans below, if 'changes' is None:
#  (whether-incomings-are-needed, whether-outgoings-are-needed)
#
# otherwise, 'changes' is a tuple of tuples below:
#  - (sourceurl, sourcebranch, sourcepeer, incoming)
#  - (desturl, destbranch, destpeer, outgoing)
summaryremotehooks = util.hooks()

# A list of state files kept by multistep operations like graft.
# Since graft cannot be aborted, it is considered 'clearable' by update.
# note: bisect is intentionally excluded
# (state file, clearable, allowcommit, error, hint)
unfinishedstates = [
    ('graftstate', True, False, _('graft in progress'),
     _("use 'hg graft --continue' or 'hg update' to abort")),
    ('updatestate', True, False, _('last update was interrupted'),
     _("use 'hg update' to get a consistent checkout"))
    ]

def checkunfinished(repo, commit=False):
    '''Look for an unfinished multistep operation, like graft, and abort
    if found. It's probably good to check this right before
    bailifchanged().
    '''
    for f, clearable, allowcommit, msg, hint in unfinishedstates:
        if commit and allowcommit:
            continue
        if repo.vfs.exists(f):
            raise util.Abort(msg, hint=hint)

def clearunfinished(repo):
    '''Check for unfinished operations (as above), and clear the ones
    that are clearable.
    '''
    for f, clearable, allowcommit, msg, hint in unfinishedstates:
        if not clearable and repo.vfs.exists(f):
            raise util.Abort(msg, hint=hint)
    for f, clearable, allowcommit, msg, hint in unfinishedstates:
        if clearable and repo.vfs.exists(f):
            util.unlink(repo.join(f))
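The `checkunfinished`/`clearunfinished` pair above is just a linear scan over `unfinishedstates`. The following is a hedged standalone sketch of that scan: the state-file names mirror the real table, but `FakeRepo` is a hypothetical in-memory stand-in for the repository's vfs, and plain `RuntimeError` stands in for `util.Abort`.

```python
# Standalone sketch of the unfinished-operation scan; FakeRepo is a
# hypothetical stand-in for the repo vfs, not a Mercurial class.
unfinishedstates = [
    ('graftstate', True, False, 'graft in progress',
     "use 'hg graft --continue' or 'hg update' to abort"),
    ('updatestate', True, False, 'last update was interrupted',
     "use 'hg update' to get a consistent checkout"),
]

class FakeRepo(object):
    # minimal stand-in exposing only what the scan needs
    def __init__(self, files):
        self.files = set(files)
    def exists(self, f):
        return f in self.files
    def unlink(self, f):
        self.files.discard(f)

def checkunfinished(repo, commit=False):
    for f, clearable, allowcommit, msg, hint in unfinishedstates:
        if commit and allowcommit:
            continue
        if repo.exists(f):
            raise RuntimeError('%s (%s)' % (msg, hint))

def clearunfinished(repo):
    # first pass: abort on non-clearable states; second pass: clear the rest
    for f, clearable, allowcommit, msg, hint in unfinishedstates:
        if not clearable and repo.exists(f):
            raise RuntimeError('%s (%s)' % (msg, hint))
    for f, clearable, allowcommit, msg, hint in unfinishedstates:
        if clearable and repo.exists(f):
            repo.unlink(f)

repo = FakeRepo(['graftstate'])
clearunfinished(repo)  # graftstate is clearable, so it is removed
assert 'graftstate' not in repo.files
```

Note that both entries in the real table are clearable; the two-pass structure in `clearunfinished` only matters for future states that cannot be cleared safely.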
@@ -1,479 +1,478 b''
# filemerge.py - file-level merge handling for Mercurial
#
# Copyright 2006, 2007, 2008 Matt Mackall <mpm@selenic.com>
#
# This software may be used and distributed according to the terms of the
# GNU General Public License version 2 or any later version.

from node import short
from i18n import _
import util, simplemerge, match, error, templater, templatekw
import os, tempfile, re, filecmp
import tagmerge

def _toolstr(ui, tool, part, default=""):
    return ui.config("merge-tools", tool + "." + part, default)

def _toolbool(ui, tool, part, default=False):
    return ui.configbool("merge-tools", tool + "." + part, default)

def _toollist(ui, tool, part, default=[]):
    return ui.configlist("merge-tools", tool + "." + part, default)

internals = {}
# Merge tools to document.
internalsdoc = {}

def internaltool(name, trymerge, onfailure=None):
    '''return a decorator for populating internal merge tool table'''
    def decorator(func):
        fullname = ':' + name
        func.__doc__ = "``%s``\n" % fullname + func.__doc__.strip()
        internals[fullname] = func
        internals['internal:' + name] = func
        internalsdoc[fullname] = func
        func.trymerge = trymerge
        func.onfailure = onfailure
        return func
    return decorator
39
39
40 def _findtool(ui, tool):
40 def _findtool(ui, tool):
41 if tool in internals:
41 if tool in internals:
42 return tool
42 return tool
43 return findexternaltool(ui, tool)
43 return findexternaltool(ui, tool)
44
44
45 def findexternaltool(ui, tool):
45 def findexternaltool(ui, tool):
46 for kn in ("regkey", "regkeyalt"):
46 for kn in ("regkey", "regkeyalt"):
47 k = _toolstr(ui, tool, kn)
47 k = _toolstr(ui, tool, kn)
48 if not k:
48 if not k:
49 continue
49 continue
50 p = util.lookupreg(k, _toolstr(ui, tool, "regname"))
50 p = util.lookupreg(k, _toolstr(ui, tool, "regname"))
51 if p:
51 if p:
52 p = util.findexe(p + _toolstr(ui, tool, "regappend"))
52 p = util.findexe(p + _toolstr(ui, tool, "regappend"))
53 if p:
53 if p:
54 return p
54 return p
55 exe = _toolstr(ui, tool, "executable", tool)
55 exe = _toolstr(ui, tool, "executable", tool)
56 return util.findexe(util.expandpath(exe))
56 return util.findexe(util.expandpath(exe))
57
57
58 def _picktool(repo, ui, path, binary, symlink):
58 def _picktool(repo, ui, path, binary, symlink):
59 def check(tool, pat, symlink, binary):
59 def check(tool, pat, symlink, binary):
60 tmsg = tool
60 tmsg = tool
61 if pat:
61 if pat:
62 tmsg += " specified for " + pat
62 tmsg += " specified for " + pat
63 if not _findtool(ui, tool):
63 if not _findtool(ui, tool):
64 if pat: # explicitly requested tool deserves a warning
64 if pat: # explicitly requested tool deserves a warning
65 ui.warn(_("couldn't find merge tool %s\n") % tmsg)
65 ui.warn(_("couldn't find merge tool %s\n") % tmsg)
66 else: # configured but non-existing tools are more silent
66 else: # configured but non-existing tools are more silent
67 ui.note(_("couldn't find merge tool %s\n") % tmsg)
67 ui.note(_("couldn't find merge tool %s\n") % tmsg)
68 elif symlink and not _toolbool(ui, tool, "symlink"):
68 elif symlink and not _toolbool(ui, tool, "symlink"):
69 ui.warn(_("tool %s can't handle symlinks\n") % tmsg)
69 ui.warn(_("tool %s can't handle symlinks\n") % tmsg)
70 elif binary and not _toolbool(ui, tool, "binary"):
70 elif binary and not _toolbool(ui, tool, "binary"):
71 ui.warn(_("tool %s can't handle binary\n") % tmsg)
71 ui.warn(_("tool %s can't handle binary\n") % tmsg)
72 elif not util.gui() and _toolbool(ui, tool, "gui"):
72 elif not util.gui() and _toolbool(ui, tool, "gui"):
73 ui.warn(_("tool %s requires a GUI\n") % tmsg)
73 ui.warn(_("tool %s requires a GUI\n") % tmsg)
74 else:
74 else:
75 return True
75 return True
76 return False
76 return False
77
77
78 # forcemerge comes from command line arguments, highest priority
78 # forcemerge comes from command line arguments, highest priority
79 force = ui.config('ui', 'forcemerge')
79 force = ui.config('ui', 'forcemerge')
80 if force:
80 if force:
81 toolpath = _findtool(ui, force)
81 toolpath = _findtool(ui, force)
82 if toolpath:
82 if toolpath:
83 return (force, util.shellquote(toolpath))
83 return (force, util.shellquote(toolpath))
84 else:
84 else:
85 # mimic HGMERGE if given tool not found
85 # mimic HGMERGE if given tool not found
86 return (force, force)
86 return (force, force)
87
87
88 # HGMERGE takes next precedence
88 # HGMERGE takes next precedence
89 hgmerge = os.environ.get("HGMERGE")
89 hgmerge = os.environ.get("HGMERGE")
90 if hgmerge:
90 if hgmerge:
91 return (hgmerge, hgmerge)
91 return (hgmerge, hgmerge)
92
92
93 # then patterns
93 # then patterns
94 for pat, tool in ui.configitems("merge-patterns"):
94 for pat, tool in ui.configitems("merge-patterns"):
95 mf = match.match(repo.root, '', [pat])
95 mf = match.match(repo.root, '', [pat])
96 if mf(path) and check(tool, pat, symlink, False):
96 if mf(path) and check(tool, pat, symlink, False):
97 toolpath = _findtool(ui, tool)
97 toolpath = _findtool(ui, tool)
98 return (tool, util.shellquote(toolpath))
98 return (tool, util.shellquote(toolpath))
99
99
100 # then merge tools
100 # then merge tools
101 tools = {}
101 tools = {}
102 for k, v in ui.configitems("merge-tools"):
102 for k, v in ui.configitems("merge-tools"):
103 t = k.split('.')[0]
103 t = k.split('.')[0]
104 if t not in tools:
104 if t not in tools:
105 tools[t] = int(_toolstr(ui, t, "priority", "0"))
105 tools[t] = int(_toolstr(ui, t, "priority", "0"))
106 names = tools.keys()
106 names = tools.keys()
107 tools = sorted([(-p, t) for t, p in tools.items()])
107 tools = sorted([(-p, t) for t, p in tools.items()])
108 uimerge = ui.config("ui", "merge")
108 uimerge = ui.config("ui", "merge")
109 if uimerge:
109 if uimerge:
110 if uimerge not in names:
110 if uimerge not in names:
111 return (uimerge, uimerge)
111 return (uimerge, uimerge)
112 tools.insert(0, (None, uimerge)) # highest priority
112 tools.insert(0, (None, uimerge)) # highest priority
113 tools.append((None, "hgmerge")) # the old default, if found
113 tools.append((None, "hgmerge")) # the old default, if found
114 for p, t in tools:
114 for p, t in tools:
115 if check(t, None, symlink, binary):
115 if check(t, None, symlink, binary):
116 toolpath = _findtool(ui, t)
116 toolpath = _findtool(ui, t)
117 return (t, util.shellquote(toolpath))
117 return (t, util.shellquote(toolpath))
118
118
119 # internal merge or prompt as last resort
119 # internal merge or prompt as last resort
120 if symlink or binary:
120 if symlink or binary:
121 return ":prompt", None
121 return ":prompt", None
122 return ":merge", None
122 return ":merge", None
123
123
124 def _eoltype(data):
124 def _eoltype(data):
125 "Guess the EOL type of a file"
125 "Guess the EOL type of a file"
126 if '\0' in data: # binary
126 if '\0' in data: # binary
127 return None
127 return None
128 if '\r\n' in data: # Windows
128 if '\r\n' in data: # Windows
129 return '\r\n'
129 return '\r\n'
130 if '\r' in data: # Old Mac
130 if '\r' in data: # Old Mac
131 return '\r'
131 return '\r'
132 if '\n' in data: # UNIX
132 if '\n' in data: # UNIX
133 return '\n'
133 return '\n'
134 return None # unknown
134 return None # unknown
135
135
136 def _matcheol(file, origfile):
136 def _matcheol(file, origfile):
137 "Convert EOL markers in a file to match origfile"
137 "Convert EOL markers in a file to match origfile"
138 tostyle = _eoltype(util.readfile(origfile))
138 tostyle = _eoltype(util.readfile(origfile))
139 if tostyle:
139 if tostyle:
140 data = util.readfile(file)
140 data = util.readfile(file)
141 style = _eoltype(data)
141 style = _eoltype(data)
142 if style:
142 if style:
143 newdata = data.replace(style, tostyle)
143 newdata = data.replace(style, tostyle)
144 if newdata != data:
144 if newdata != data:
145 util.writefile(file, newdata)
145 util.writefile(file, newdata)
146
146
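The EOL handling above can be exercised on its own: `_eoltype` checks `\r\n` before `\r` so Windows files are not misclassified as old-Mac, and `_matcheol` rewrites a file's endings to match a reference file. A minimal standalone sketch (`guess_eol` and `match_eol` are hypothetical names mirroring `_eoltype`/`_matcheol`, operating on strings instead of files):

```python
def guess_eol(data):
    """Guess the dominant EOL style of a blob, mirroring _eoltype."""
    if '\0' in data:    # NUL byte: treat as binary, no EOL style
        return None
    if '\r\n' in data:  # Windows (must be checked before bare '\r')
        return '\r\n'
    if '\r' in data:    # old Mac
        return '\r'
    if '\n' in data:    # UNIX
        return '\n'
    return None         # no line endings at all

def match_eol(data, reference):
    """Rewrite data's EOL markers to match the reference blob's style."""
    tostyle = guess_eol(reference)
    style = guess_eol(data)
    if tostyle and style and style != tostyle:
        return data.replace(style, tostyle)
    return data

print(repr(match_eol("a\nb\n", "x\r\ny\r\n")))  # → 'a\r\nb\r\n'
```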
@internaltool('prompt', False)
def _iprompt(repo, mynode, orig, fcd, fco, fca, toolconf):
    """Asks the user which of the local or the other version to keep as
    the merged version."""
    ui = repo.ui
    fd = fcd.path()

    if ui.promptchoice(_(" no tool found to merge %s\n"
                         "keep (l)ocal or take (o)ther?"
                         "$$ &Local $$ &Other") % fd, 0):
        return _iother(repo, mynode, orig, fcd, fco, fca, toolconf)
    else:
        return _ilocal(repo, mynode, orig, fcd, fco, fca, toolconf)

@internaltool('local', False)
def _ilocal(repo, mynode, orig, fcd, fco, fca, toolconf):
    """Uses the local version of files as the merged version."""
    return 0

@internaltool('other', False)
def _iother(repo, mynode, orig, fcd, fco, fca, toolconf):
    """Uses the other version of files as the merged version."""
    repo.wwrite(fcd.path(), fco.data(), fco.flags())
    return 0

@internaltool('fail', False)
def _ifail(repo, mynode, orig, fcd, fco, fca, toolconf):
    """
    Rather than attempting to merge files that were modified on both
    branches, it marks them as unresolved. The resolve command must be
    used to resolve these conflicts."""
    return 1

def _premerge(repo, toolconf, files, labels=None):
    tool, toolpath, binary, symlink = toolconf
    if symlink:
        return 1
    a, b, c, back = files

    ui = repo.ui

    validkeep = ['keep', 'keep-merge3']

    # do we attempt to simplemerge first?
    try:
        premerge = _toolbool(ui, tool, "premerge", not binary)
    except error.ConfigError:
        premerge = _toolstr(ui, tool, "premerge").lower()
        if premerge not in validkeep:
            _valid = ', '.join(["'" + v + "'" for v in validkeep])
            raise error.ConfigError(_("%s.premerge not valid "
                                      "('%s' is neither boolean nor %s)") %
                                    (tool, premerge, _valid))

    if premerge:
        if premerge == 'keep-merge3':
            if not labels:
                labels = _defaultconflictlabels
            if len(labels) < 3:
                labels.append('base')
        r = simplemerge.simplemerge(ui, a, b, c, quiet=True, label=labels)
        if not r:
            ui.debug(" premerge successful\n")
            return 0
        if premerge not in validkeep:
            util.copyfile(back, a) # restore from backup and try again
    return 1 # continue merging

@internaltool('merge', True,
              _("merging %s incomplete! "
                "(edit conflicts, then use 'hg resolve --mark')\n"))
def _imerge(repo, mynode, orig, fcd, fco, fca, toolconf, files, labels=None):
    """
    Uses the internal non-interactive simple merge algorithm for merging
    files. It will fail if there are any conflicts and leave markers in
    the partially merged file. Markers will have two sections, one for each side
    of merge."""
    tool, toolpath, binary, symlink = toolconf
    if symlink:
        repo.ui.warn(_('warning: internal :merge cannot merge symlinks '
                       'for %s\n') % fcd.path())
        return False, 1
    r = _premerge(repo, toolconf, files, labels=labels)
    if r:
        a, b, c, back = files

        ui = repo.ui

        r = simplemerge.simplemerge(ui, a, b, c, label=labels)
        return True, r
    return False, 0

@internaltool('merge3', True,
              _("merging %s incomplete! "
                "(edit conflicts, then use 'hg resolve --mark')\n"))
def _imerge3(repo, mynode, orig, fcd, fco, fca, toolconf, files, labels=None):
    """
    Uses the internal non-interactive simple merge algorithm for merging
    files. It will fail if there are any conflicts and leave markers in
    the partially merged file. Marker will have three sections, one from each
    side of the merge and one for the base content."""
    if not labels:
        labels = _defaultconflictlabels
    if len(labels) < 3:
        labels.append('base')
    return _imerge(repo, mynode, orig, fcd, fco, fca, toolconf, files, labels)

@internaltool('tagmerge', True,
              _("automatic tag merging of %s failed! "
                "(use 'hg resolve --tool :merge' or another merge "
                "tool of your choice)\n"))
def _itagmerge(repo, mynode, orig, fcd, fco, fca, toolconf, files, labels=None):
    """
    Uses the internal tag merge algorithm (experimental).
    """
    return tagmerge.merge(repo, fcd, fco, fca)

@internaltool('dump', True)
def _idump(repo, mynode, orig, fcd, fco, fca, toolconf, files, labels=None):
    """
    Creates three versions of the files to merge, containing the
    contents of local, other and base. These files can then be used to
    perform a merge manually. If the file to be merged is named
    ``a.txt``, these files will accordingly be named ``a.txt.local``,
    ``a.txt.other`` and ``a.txt.base`` and they will be placed in the
    same directory as ``a.txt``."""
    r = _premerge(repo, toolconf, files, labels=labels)
    if r:
        a, b, c, back = files

        fd = fcd.path()

        util.copyfile(a, a + ".local")
        repo.wwrite(fd + ".other", fco.data(), fco.flags())
        repo.wwrite(fd + ".base", fca.data(), fca.flags())
    return False, r

def _xmerge(repo, mynode, orig, fcd, fco, fca, toolconf, files, labels=None):
    r = _premerge(repo, toolconf, files, labels=labels)
    if r:
        tool, toolpath, binary, symlink = toolconf
        a, b, c, back = files
        out = ""
        env = {'HG_FILE': fcd.path(),
               'HG_MY_NODE': short(mynode),
               'HG_OTHER_NODE': str(fco.changectx()),
               'HG_BASE_NODE': str(fca.changectx()),
               'HG_MY_ISLINK': 'l' in fcd.flags(),
               'HG_OTHER_ISLINK': 'l' in fco.flags(),
               'HG_BASE_ISLINK': 'l' in fca.flags(),
               }

        ui = repo.ui

        args = _toolstr(ui, tool, "args", '$local $base $other')
        if "$output" in args:
            out, a = a, back # read input from backup, write to original
        replace = {'local': a, 'base': b, 'other': c, 'output': out}
        args = util.interpolate(r'\$', replace, args,
                                lambda s: util.shellquote(util.localpath(s)))
        cmd = toolpath + ' ' + args
        repo.ui.debug('launching merge tool: %s\n' % cmd)
        r = ui.system(cmd, cwd=repo.root, environ=env)
        repo.ui.debug('merge tool returned: %s\n' % r)
        return True, r
    return False, 0

def _formatconflictmarker(repo, ctx, template, label, pad):
    """Applies the given template to the ctx, prefixed by the label.

    Pad is the minimum width of the label prefix, so that multiple markers
    can have aligned templated parts.
    """
    if ctx.node() is None:
        ctx = ctx.p1()

    props = templatekw.keywords.copy()
    props['templ'] = template
    props['ctx'] = ctx
    props['repo'] = repo
    templateresult = template('conflictmarker', **props)

    label = ('%s:' % label).ljust(pad + 1)
    mark = '%s %s' % (label, templater.stringify(templateresult))

    if mark:
        mark = mark.splitlines()[0] # split for safety

    # 8 for the prefix of conflict marker lines (e.g. '<<<<<<< ')
    return util.ellipsis(mark, 80 - 8)

_defaultconflictmarker = ('{node|short} ' +
                          '{ifeq(tags, "tip", "", "{tags} ")}' +
                          '{if(bookmarks, "{bookmarks} ")}' +
                          '{ifeq(branch, "default", "", "{branch} ")}' +
                          '- {author|user}: {desc|firstline}')

_defaultconflictlabels = ['local', 'other']

def _formatlabels(repo, fcd, fco, fca, labels):
    """Formats the given labels using the conflict marker template.

    Returns a list of formatted labels.
    """
    cd = fcd.changectx()
    co = fco.changectx()
    ca = fca.changectx()

    ui = repo.ui
    template = ui.config('ui', 'mergemarkertemplate', _defaultconflictmarker)
-    template = templater.parsestring(template, quoted=False)
    tmpl = templater.templater(None, cache={'conflictmarker': template})

    pad = max(len(l) for l in labels)

    newlabels = [_formatconflictmarker(repo, cd, tmpl, labels[0], pad),
                 _formatconflictmarker(repo, co, tmpl, labels[1], pad)]
    if len(labels) > 2:
        newlabels.append(_formatconflictmarker(repo, ca, tmpl, labels[2], pad))
    return newlabels

def filemerge(repo, mynode, orig, fcd, fco, fca, labels=None):
    """perform a 3-way merge in the working directory

    mynode = parent node before merge
    orig = original local filename before merge
    fco = other file context
    fca = ancestor file context
    fcd = local file context for current/destination file
    """

    def temp(prefix, ctx):
        pre = "%s~%s." % (os.path.basename(ctx.path()), prefix)
        (fd, name) = tempfile.mkstemp(prefix=pre)
        data = repo.wwritedata(ctx.path(), ctx.data())
        f = os.fdopen(fd, "wb")
        f.write(data)
        f.close()
        return name

    if not fco.cmp(fcd): # files identical?
        return None

    ui = repo.ui
    fd = fcd.path()
    binary = fcd.isbinary() or fco.isbinary() or fca.isbinary()
    symlink = 'l' in fcd.flags() + fco.flags()
    tool, toolpath = _picktool(repo, ui, fd, binary, symlink)
    ui.debug("picked tool '%s' for %s (binary %s symlink %s)\n" %
             (tool, fd, binary, symlink))

    if tool in internals:
        func = internals[tool]
        trymerge = func.trymerge
        onfailure = func.onfailure
    else:
        func = _xmerge
        trymerge = True
        onfailure = _("merging %s failed!\n")

    toolconf = tool, toolpath, binary, symlink

    if not trymerge:
        return func(repo, mynode, orig, fcd, fco, fca, toolconf)

    a = repo.wjoin(fd)
    b = temp("base", fca)
    c = temp("other", fco)
    back = a + ".orig"
    util.copyfile(a, back)

    if orig != fco.path():
        ui.status(_("merging %s and %s to %s\n") % (orig, fco.path(), fd))
    else:
        ui.status(_("merging %s\n") % fd)

    ui.debug("my %s other %s ancestor %s\n" % (fcd, fco, fca))

    markerstyle = ui.config('ui', 'mergemarkers', 'basic')
    if not labels:
        labels = _defaultconflictlabels
    if markerstyle != 'basic':
        labels = _formatlabels(repo, fcd, fco, fca, labels)

    needcheck, r = func(repo, mynode, orig, fcd, fco, fca, toolconf,
                        (a, b, c, back), labels=labels)
    if not needcheck:
        if r:
            if onfailure:
                ui.warn(onfailure % fd)
        else:
            util.unlink(back)

        util.unlink(b)
        util.unlink(c)
        return r

    if not r and (_toolbool(ui, tool, "checkconflicts") or
                  'conflicts' in _toollist(ui, tool, "check")):
        if re.search("^(<<<<<<< .*|=======|>>>>>>> .*)$", fcd.data(),
                     re.MULTILINE):
            r = 1

    checked = False
    if 'prompt' in _toollist(ui, tool, "check"):
        checked = True
        if ui.promptchoice(_("was merge of '%s' successful (yn)?"
                             "$$ &Yes $$ &No") % fd, 1):
            r = 1

    if not r and not checked and (_toolbool(ui, tool, "checkchanged") or
                                  'changed' in _toollist(ui, tool, "check")):
        if filecmp.cmp(a, back):
            if ui.promptchoice(_(" output file %s appears unchanged\n"
                                 "was merge successful (yn)?"
                                 "$$ &Yes $$ &No") % fd, 1):
                r = 1

    if _toolbool(ui, tool, "fixeol"):
        _matcheol(a, back)

    if r:
        if onfailure:
            ui.warn(onfailure % fd)
    else:
        util.unlink(back)

    util.unlink(b)
    util.unlink(c)
    return r

# tell hggettext to extract docstrings from these functions:
i18nfunctions = internals.values()
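In `_formatconflictmarker` above, each label is padded to the longest label's width (`('%s:' % label).ljust(pad + 1)`) so the templated parts of the `<<<<<<<`/`>>>>>>>` markers line up. A standalone sketch of just that alignment step (`pad_labels` is a hypothetical helper, not a Mercurial function):

```python
def pad_labels(labels, marks):
    """Align conflict-marker labels the way _formatconflictmarker does:
    each label gets a colon and is left-justified to the longest label."""
    pad = max(len(l) for l in labels)
    return ['%s %s' % (('%s:' % l).ljust(pad + 1), m)
            for l, m in zip(labels, marks)]

for line in pad_labels(['local', 'other', 'base'],
                       ['1234abcd - alice: fix bug',
                        '5678ef90 - bob: add feature',
                        'abcd1234 - alice: baseline']):
    print(line)
```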
@@ -1,824 +1,821 @@
1 # templater.py - template expansion for output
1 # templater.py - template expansion for output
2 #
2 #
3 # Copyright 2005, 2006 Matt Mackall <mpm@selenic.com>
3 # Copyright 2005, 2006 Matt Mackall <mpm@selenic.com>
4 #
4 #
5 # This software may be used and distributed according to the terms of the
5 # This software may be used and distributed according to the terms of the
6 # GNU General Public License version 2 or any later version.
6 # GNU General Public License version 2 or any later version.
7
7
8 from i18n import _
8 from i18n import _
9 import os, re
9 import os, re
10 import util, config, templatefilters, templatekw, parser, error
10 import util, config, templatefilters, templatekw, parser, error
11 import revset as revsetmod
11 import revset as revsetmod
12 import types
12 import types
13 import minirst
13 import minirst
14
14
15 # template parsing
15 # template parsing
16
16
17 elements = {
17 elements = {
18 "(": (20, ("group", 1, ")"), ("func", 1, ")")),
18 "(": (20, ("group", 1, ")"), ("func", 1, ")")),
19 ",": (2, None, ("list", 2)),
19 ",": (2, None, ("list", 2)),
20 "|": (5, None, ("|", 5)),
20 "|": (5, None, ("|", 5)),
21 "%": (6, None, ("%", 6)),
21 "%": (6, None, ("%", 6)),
22 ")": (0, None, None),
22 ")": (0, None, None),
23 "symbol": (0, ("symbol",), None),
23 "symbol": (0, ("symbol",), None),
24 "string": (0, ("string",), None),
24 "string": (0, ("string",), None),
25 "rawstring": (0, ("rawstring",), None),
25 "rawstring": (0, ("rawstring",), None),
26 "end": (0, None, None),
26 "end": (0, None, None),
27 }
27 }

def tokenizer(data):
    program, start, end = data
    pos = start
    while pos < end:
        c = program[pos]
        if c.isspace(): # skip inter-token whitespace
            pass
        elif c in "(,)%|": # handle simple operators
            yield (c, None, pos)
        elif (c in '"\'' or c == 'r' and
              program[pos:pos + 2] in ("r'", 'r"')): # handle quoted strings
            if c == 'r':
                pos += 1
                c = program[pos]
                decode = False
            else:
                decode = True
            pos += 1
            s = pos
            while pos < end: # find closing quote
                d = program[pos]
                if decode and d == '\\': # skip over escaped characters
                    pos += 2
                    continue
                if d == c:
                    if not decode:
                        yield ('rawstring', program[s:pos], s)
                        break
                    yield ('string', program[s:pos], s)
                    break
                pos += 1
            else:
                raise error.ParseError(_("unterminated string"), s)
        elif c.isalnum() or c in '_':
            s = pos
            pos += 1
            while pos < end: # find end of symbol
                d = program[pos]
                if not (d.isalnum() or d == "_"):
                    break
                pos += 1
            sym = program[s:pos]
            yield ('symbol', sym, s)
            pos -= 1
        elif c == '}':
            pos += 1
            break
        else:
            raise error.ParseError(_("syntax error"), pos)
        pos += 1
    yield ('end', None, pos)

def compiletemplate(tmpl, context, strtoken="string"):
    parsed = []
    pos, stop = 0, len(tmpl)
    p = parser.parser(tokenizer, elements)
    while pos < stop:
        n = tmpl.find('{', pos)
        if n < 0:
            parsed.append((strtoken, tmpl[pos:]))
            break
        bs = (n - pos) - len(tmpl[pos:n].rstrip('\\'))
        if strtoken == 'string' and bs % 2 == 1:
            # escaped (e.g. '\{', '\\\{', but not '\\{' nor r'\{')
            parsed.append((strtoken, (tmpl[pos:n - 1] + "{")))
            pos = n + 1
            continue
        if n > pos:
            parsed.append((strtoken, tmpl[pos:n]))

        pd = [tmpl, n + 1, stop]
        parseres, pos = p.parse(pd)
        parsed.append(parseres)

    return [compileexp(e, context) for e in parsed]

def compileexp(exp, context):
    t = exp[0]
    if t in methods:
        return methods[t](exp, context)
    raise error.ParseError(_("unknown method '%s'") % t)

# template evaluation

def getsymbol(exp):
    if exp[0] == 'symbol':
        return exp[1]
    raise error.ParseError(_("expected a symbol, got '%s'") % exp[0])

def getlist(x):
    if not x:
        return []
    if x[0] == 'list':
        return getlist(x[1]) + [x[2]]
    return [x]

def getfilter(exp, context):
    f = getsymbol(exp)
    if f not in context._filters:
        raise error.ParseError(_("unknown function '%s'") % f)
    return context._filters[f]

def gettemplate(exp, context):
    if exp[0] == 'string' or exp[0] == 'rawstring':
        return compiletemplate(exp[1], context, strtoken=exp[0])
    if exp[0] == 'symbol':
        return context._load(exp[1])
    raise error.ParseError(_("expected template specifier"))

def runstring(context, mapping, data):
    return data.decode("string-escape")

def runrawstring(context, mapping, data):
    return data

def runsymbol(context, mapping, key):
    v = mapping.get(key)
    if v is None:
        v = context._defaults.get(key)
    if v is None:
        try:
            v = context.process(key, mapping)
        except TemplateNotFound:
            v = ''
    if callable(v):
        return v(**mapping)
    if isinstance(v, types.GeneratorType):
        v = list(v)
    return v

def buildfilter(exp, context):
    func, data = compileexp(exp[1], context)
    filt = getfilter(exp[2], context)
    return (runfilter, (func, data, filt))

def runfilter(context, mapping, data):
    func, data, filt = data
    # func() may return string, generator of strings or arbitrary object such
    # as date tuple, but filter does not want generator.
    thing = func(context, mapping, data)
    if isinstance(thing, types.GeneratorType):
        thing = stringify(thing)
    try:
        return filt(thing)
    except (ValueError, AttributeError, TypeError):
        if isinstance(data, tuple):
            dt = data[1]
        else:
            dt = data
        raise util.Abort(_("template filter '%s' is not compatible with "
                           "keyword '%s'") % (filt.func_name, dt))

def buildmap(exp, context):
    func, data = compileexp(exp[1], context)
    ctmpl = gettemplate(exp[2], context)
    return (runmap, (func, data, ctmpl))

def runtemplate(context, mapping, template):
    for func, data in template:
        yield func(context, mapping, data)

def runmap(context, mapping, data):
    func, data, ctmpl = data
    d = func(context, mapping, data)
    if callable(d):
        d = d()

    lm = mapping.copy()

    for i in d:
        if isinstance(i, dict):
            lm.update(i)
            lm['originalnode'] = mapping.get('node')
            yield runtemplate(context, lm, ctmpl)
        else:
            # v is not an iterable of dicts, this happens when 'key'
            # has been fully expanded already and format is useless.
            # If so, return the expanded value.
            yield i
def buildfunc(exp, context):
    n = getsymbol(exp[1])
    args = [compileexp(x, context) for x in getlist(exp[2])]
    if n in funcs:
        f = funcs[n]
        return (f, args)
    if n in context._filters:
        if len(args) != 1:
            raise error.ParseError(_("filter %s expects one argument") % n)
        f = context._filters[n]
        return (runfilter, (args[0][0], args[0][1], f))
    raise error.ParseError(_("unknown function '%s'") % n)

def date(context, mapping, args):
    """:date(date[, fmt]): Format a date. See :hg:`help dates` for formatting
    strings."""
    if not (1 <= len(args) <= 2):
        # i18n: "date" is a keyword
        raise error.ParseError(_("date expects one or two arguments"))

    date = args[0][0](context, mapping, args[0][1])
    fmt = None
    if len(args) == 2:
        fmt = stringify(args[1][0](context, mapping, args[1][1]))
    try:
        if fmt is None:
            return util.datestr(date)
        else:
            return util.datestr(date, fmt)
    except (TypeError, ValueError):
        # i18n: "date" is a keyword
        raise error.ParseError(_("date expects a date information"))

def diff(context, mapping, args):
    """:diff([includepattern [, excludepattern]]): Show a diff, optionally
    specifying files to include or exclude."""
    if len(args) > 2:
        # i18n: "diff" is a keyword
        raise error.ParseError(_("diff expects one, two or no arguments"))

    def getpatterns(i):
        if i < len(args):
            s = args[i][1].strip()
            if s:
                return [s]
        return []

    ctx = mapping['ctx']
    chunks = ctx.diff(match=ctx.match([], getpatterns(0), getpatterns(1)))

    return ''.join(chunks)

def fill(context, mapping, args):
    """:fill(text[, width[, initialindent[, hangindent]]]): Fill many
    paragraphs with optional indentation. See the "fill" filter."""
    if not (1 <= len(args) <= 4):
        # i18n: "fill" is a keyword
        raise error.ParseError(_("fill expects one to four arguments"))

    text = stringify(args[0][0](context, mapping, args[0][1]))
    width = 76
    initindent = ''
    hangindent = ''
    if 2 <= len(args) <= 4:
        try:
            width = int(stringify(args[1][0](context, mapping, args[1][1])))
        except ValueError:
            # i18n: "fill" is a keyword
            raise error.ParseError(_("fill expects an integer width"))
        try:
            initindent = stringify(_evalifliteral(args[2], context, mapping))
            hangindent = stringify(_evalifliteral(args[3], context, mapping))
        except IndexError:
            pass

    return templatefilters.fill(text, width, initindent, hangindent)

def pad(context, mapping, args):
    """:pad(text, width[, fillchar=' '[, right=False]]): Pad text with a
    fill character."""
    if not (2 <= len(args) <= 4):
        # i18n: "pad" is a keyword
        raise error.ParseError(_("pad() expects two to four arguments"))

    width = int(args[1][1])

    text = stringify(args[0][0](context, mapping, args[0][1]))
    if args[0][0] == runstring:
        text = stringify(runtemplate(context, mapping,
                                     compiletemplate(text, context)))

    right = False
    fillchar = ' '
    if len(args) > 2:
        fillchar = stringify(args[2][0](context, mapping, args[2][1]))
    if len(args) > 3:
        right = util.parsebool(args[3][1])

    if right:
        return text.rjust(width, fillchar)
    else:
        return text.ljust(width, fillchar)

def get(context, mapping, args):
    """:get(dict, key): Get an attribute/key from an object. Some keywords
    are complex types. This function allows you to obtain the value of an
    attribute on these types."""
    if len(args) != 2:
        # i18n: "get" is a keyword
        raise error.ParseError(_("get() expects two arguments"))

    dictarg = args[0][0](context, mapping, args[0][1])
    if not util.safehasattr(dictarg, 'get'):
        # i18n: "get" is a keyword
        raise error.ParseError(_("get() expects a dict as first argument"))

    key = args[1][0](context, mapping, args[1][1])
    yield dictarg.get(key)

def _evalifliteral(arg, context, mapping):
    t = stringify(arg[0](context, mapping, arg[1]))
    if arg[0] == runstring or arg[0] == runrawstring:
        yield runtemplate(context, mapping,
                          compiletemplate(t, context, strtoken='rawstring'))
    else:
        yield t

def if_(context, mapping, args):
    """:if(expr, then[, else]): Conditionally execute based on the result of
    an expression."""
    if not (2 <= len(args) <= 3):
        # i18n: "if" is a keyword
        raise error.ParseError(_("if expects two or three arguments"))

    test = stringify(args[0][0](context, mapping, args[0][1]))
    if test:
        yield _evalifliteral(args[1], context, mapping)
    elif len(args) == 3:
        yield _evalifliteral(args[2], context, mapping)

def ifcontains(context, mapping, args):
    """:ifcontains(search, thing, then[, else]): Conditionally execute based
    on whether the item "search" is in "thing"."""
    if not (3 <= len(args) <= 4):
        # i18n: "ifcontains" is a keyword
        raise error.ParseError(_("ifcontains expects three or four arguments"))

    item = stringify(args[0][0](context, mapping, args[0][1]))
    items = args[1][0](context, mapping, args[1][1])

    if item in items:
        yield _evalifliteral(args[2], context, mapping)
    elif len(args) == 4:
        yield _evalifliteral(args[3], context, mapping)

def ifeq(context, mapping, args):
    """:ifeq(expr1, expr2, then[, else]): Conditionally execute based on
    whether 2 items are equivalent."""
    if not (3 <= len(args) <= 4):
        # i18n: "ifeq" is a keyword
        raise error.ParseError(_("ifeq expects three or four arguments"))

    test = stringify(args[0][0](context, mapping, args[0][1]))
    match = stringify(args[1][0](context, mapping, args[1][1]))
    if test == match:
        yield _evalifliteral(args[2], context, mapping)
    elif len(args) == 4:
        yield _evalifliteral(args[3], context, mapping)

def join(context, mapping, args):
    """:join(list, sep): Join items in a list with a delimiter."""
    if not (1 <= len(args) <= 2):
        # i18n: "join" is a keyword
        raise error.ParseError(_("join expects one or two arguments"))

    joinset = args[0][0](context, mapping, args[0][1])
    if callable(joinset):
        jf = joinset.joinfmt
        joinset = [jf(x) for x in joinset()]

    joiner = " "
    if len(args) > 1:
        joiner = stringify(args[1][0](context, mapping, args[1][1]))

    first = True
    for x in joinset:
        if first:
            first = False
        else:
            yield joiner
        yield x

def label(context, mapping, args):
    """:label(label, expr): Apply a label to generated content. Content with
    a label applied can result in additional post-processing, such as
    automatic colorization."""
    if len(args) != 2:
        # i18n: "label" is a keyword
        raise error.ParseError(_("label expects two arguments"))

    # ignore args[0] (the label string) since this is supposed to be a no-op
    yield _evalifliteral(args[1], context, mapping)

def revset(context, mapping, args):
    """:revset(query[, formatargs...]): Execute a revision set query. See
    :hg:`help revset`."""
    if not len(args) > 0:
        # i18n: "revset" is a keyword
        raise error.ParseError(_("revset expects one or more arguments"))

    raw = args[0][1]
    ctx = mapping['ctx']
    repo = ctx.repo()

    def query(expr):
        m = revsetmod.match(repo.ui, expr)
        return m(repo)

    if len(args) > 1:
        formatargs = list([a[0](context, mapping, a[1]) for a in args[1:]])
        revs = query(revsetmod.formatspec(raw, *formatargs))
        revs = list([str(r) for r in revs])
    else:
        revsetcache = mapping['cache'].setdefault("revsetcache", {})
        if raw in revsetcache:
            revs = revsetcache[raw]
        else:
            revs = query(raw)
            revs = list([str(r) for r in revs])
            revsetcache[raw] = revs

    return templatekw.showlist("revision", revs, **mapping)

def rstdoc(context, mapping, args):
    """:rstdoc(text, style): Format ReStructuredText."""
    if len(args) != 2:
        # i18n: "rstdoc" is a keyword
        raise error.ParseError(_("rstdoc expects two arguments"))

    text = stringify(args[0][0](context, mapping, args[0][1]))
    style = stringify(args[1][0](context, mapping, args[1][1]))

    return minirst.format(text, style=style, keep=['verbose'])

def shortest(context, mapping, args):
    """:shortest(node, minlength=4): Obtain the shortest representation of
    a node."""
    if not (1 <= len(args) <= 2):
        # i18n: "shortest" is a keyword
        raise error.ParseError(_("shortest() expects one or two arguments"))

    node = stringify(args[0][0](context, mapping, args[0][1]))

    minlength = 4
    if len(args) > 1:
        minlength = int(args[1][1])

    cl = mapping['ctx']._repo.changelog
    def isvalid(test):
        try:
            try:
                cl.index.partialmatch(test)
            except AttributeError:
                # Pure mercurial doesn't support partialmatch on the index.
                # Fall back to the slow way.
                if cl._partialmatch(test) is None:
                    return False

            try:
                i = int(test)
                # if we are a pure int, then starting with zero will not be
                # confused as a rev; or, obviously, if the int is larger than
                # the value of the tip rev
                if test[0] == '0' or i > len(cl):
                    return True
                return False
            except ValueError:
                return True
        except error.RevlogError:
            return False

    shortest = node
    startlength = max(6, minlength)
    length = startlength
    while True:
        test = node[:length]
        if isvalid(test):
            shortest = test
            if length == minlength or length > startlength:
                return shortest
            length -= 1
        else:
            length += 1
            if len(shortest) <= length:
                return shortest

505 def strip(context, mapping, args):
505 def strip(context, mapping, args):
506 """:strip(text[, chars]): Strip characters from a string."""
506 """:strip(text[, chars]): Strip characters from a string."""
507 if not (1 <= len(args) <= 2):
507 if not (1 <= len(args) <= 2):
508 # i18n: "strip" is a keyword
508 # i18n: "strip" is a keyword
509 raise error.ParseError(_("strip expects one or two arguments"))
509 raise error.ParseError(_("strip expects one or two arguments"))
510
510
511 text = stringify(args[0][0](context, mapping, args[0][1]))
511 text = stringify(args[0][0](context, mapping, args[0][1]))
512 if len(args) == 2:
512 if len(args) == 2:
513 chars = stringify(args[1][0](context, mapping, args[1][1]))
513 chars = stringify(args[1][0](context, mapping, args[1][1]))
514 return text.strip(chars)
514 return text.strip(chars)
515 return text.strip()
515 return text.strip()
516
516
517 def sub(context, mapping, args):
517 def sub(context, mapping, args):
518 """:sub(pattern, replacement, expression): Perform text substitution
518 """:sub(pattern, replacement, expression): Perform text substitution
519 using regular expressions."""
519 using regular expressions."""
520 if len(args) != 3:
520 if len(args) != 3:
521 # i18n: "sub" is a keyword
521 # i18n: "sub" is a keyword
522 raise error.ParseError(_("sub expects three arguments"))
        raise error.ParseError(_("sub expects three arguments"))

    pat = stringify(args[0][0](context, mapping, args[0][1]))
    rpl = stringify(args[1][0](context, mapping, args[1][1]))
    src = stringify(_evalifliteral(args[2], context, mapping))
    yield re.sub(pat, rpl, src)

def startswith(context, mapping, args):
    """:startswith(pattern, text): Returns the value from the "text" argument
    if it begins with the content from the "pattern" argument."""
    if len(args) != 2:
        # i18n: "startswith" is a keyword
        raise error.ParseError(_("startswith expects two arguments"))

    patn = stringify(args[0][0](context, mapping, args[0][1]))
    text = stringify(args[1][0](context, mapping, args[1][1]))
    if text.startswith(patn):
        return text
    return ''


def word(context, mapping, args):
    """:word(number, text[, separator]): Return the nth word from a string."""
    if not (2 <= len(args) <= 3):
        # i18n: "word" is a keyword
        raise error.ParseError(_("word expects two or three arguments, got %d")
                               % len(args))

    try:
        num = int(stringify(args[0][0](context, mapping, args[0][1])))
    except ValueError:
        # i18n: "word" is a keyword
        raise error.ParseError(
            _("Use strings like '3' for numbers passed to word function"))
    text = stringify(args[1][0](context, mapping, args[1][1]))
    if len(args) == 3:
        splitter = stringify(args[2][0](context, mapping, args[2][1]))
    else:
        splitter = None

    tokens = text.split(splitter)
    if num >= len(tokens):
        return ''
    else:
        return tokens[num]

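The `word` function above leans on Python's `str.split` semantics: a `None` separator splits on runs of whitespace, while an explicit separator splits on every occurrence and can produce empty tokens. A minimal standalone sketch of the same nth-word logic (hypothetical `nthword` helper, not part of the templater API):

```python
def nthword(num, text, splitter=None):
    # Mirrors word(): split the text, return the num'th token, or '' when
    # the index is out of range.
    tokens = text.split(splitter)
    if num >= len(tokens):
        return ''
    return tokens[num]

print(nthword(0, "a  b c"))     # splitter=None collapses whitespace runs
print(nthword(1, "a,,b", ","))  # an explicit splitter keeps empty tokens
print(nthword(5, "a b"))        # out-of-range index yields ''
```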
methods = {
    "string": lambda e, c: (runstring, e[1]),
    "rawstring": lambda e, c: (runrawstring, e[1]),
    "symbol": lambda e, c: (runsymbol, e[1]),
    "group": lambda e, c: compileexp(e[1], c),
    # ".": buildmember,
    "|": buildfilter,
    "%": buildmap,
    "func": buildfunc,
    }

funcs = {
    "date": date,
    "diff": diff,
    "fill": fill,
    "get": get,
    "if": if_,
    "ifcontains": ifcontains,
    "ifeq": ifeq,
    "join": join,
    "label": label,
    "pad": pad,
    "revset": revset,
    "rstdoc": rstdoc,
    "shortest": shortest,
    "startswith": startswith,
    "strip": strip,
    "sub": sub,
    "word": word,
    }

# template engine

stringify = templatefilters.stringify

def _flatten(thing):
    '''yield a single stream from a possibly nested set of iterators'''
    if isinstance(thing, str):
        yield thing
    elif not util.safehasattr(thing, '__iter__'):
        if thing is not None:
            yield str(thing)
    else:
        for i in thing:
            if isinstance(i, str):
                yield i
            elif not util.safehasattr(i, '__iter__'):
                if i is not None:
                    yield str(i)
            elif i is not None:
                for j in _flatten(i):
                    yield j

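`_flatten` turns a value that may be a string, a scalar, or arbitrarily nested iterables into one flat stream of strings, skipping `None`. The `isinstance(thing, str)` test must come before the iterability test, because strings are themselves iterable and would otherwise be split into characters. A simplified sketch without the `util.safehasattr` dependency (hypothetical `flatten` helper):

```python
def flatten(thing):
    # Strings are yielded whole; other scalars are stringified;
    # iterables recurse; None values are dropped.
    if isinstance(thing, str):
        yield thing
    elif not hasattr(thing, '__iter__'):
        if thing is not None:
            yield str(thing)
    else:
        for i in thing:
            if i is not None:
                for j in flatten(i):
                    yield j

print(list(flatten(['a', [1, None, ['b', 2]]])))
```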
def parsestring(s):
    '''unwrap quotes'''
    if len(s) < 2 or s[0] != s[-1]:
        raise SyntaxError(_('unmatched quotes'))
    # de-backslash-ify only <\">. it is invalid syntax in non-string part of
    # template, but we are likely to escape <"> in quoted string and it was
    # accepted before, thanks to issue4290. <\\"> is unmodified because it
    # is ambiguous and it was processed as such before 2.8.1.
    #
    # template        result
    # ---------       ------------------------
    # {\"\"}          parse error
    # "{""}"          {""} -> <>
    # "{\"\"}"        {""} -> <>
    # {"\""}          {"\""} -> <">
    # '{"\""}'        {"\""} -> <">
    # "{"\""}"        parse error (don't care)
    q = s[0]
    return s[1:-1].replace('\\\\' + q, '\\\\\\' + q).replace('\\' + q, q)

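The table in the comment can be exercised directly: `parsestring` strips the matching outer quotes and unescapes only the quote character that was used, leaving `\\q` sequences alone. A standalone copy of the same transformation (hypothetical `unwrapstring` name, plain `SyntaxError` in place of the translated one):

```python
def unwrapstring(s):
    # Unwrap matching outer quotes; turn \q into q, but keep \\q intact.
    if len(s) < 2 or s[0] != s[-1]:
        raise SyntaxError('unmatched quotes')
    q = s[0]
    return s[1:-1].replace('\\\\' + q, '\\\\\\' + q).replace('\\' + q, q)

print(unwrapstring("'abc'"))        # plain quoted string
print(unwrapstring('"{\\"\\"}"'))   # escaped quotes inside double quotes
```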
class engine(object):
    '''template expansion engine.

    template expansion works like this. a map file contains key=value
    pairs. if value is quoted, it is treated as string. otherwise, it
    is treated as name of template file.

    templater is asked to expand a key in map. it looks up key, and
    looks for strings like this: {foo}. it expands {foo} by looking up
    foo in map, and substituting it. expansion is recursive: it stops
    when there is no more {foo} to replace.

    expansion also allows formatting and filtering.

    format uses key to expand each item in list. syntax is
    {key%format}.

    filter uses function to transform value. syntax is
    {key|filter1|filter2|...}.'''

    def __init__(self, loader, filters={}, defaults={}):
        self._loader = loader
        self._filters = filters
        self._defaults = defaults
        self._cache = {}

    def _load(self, t):
        '''load, parse, and cache a template'''
        if t not in self._cache:
            self._cache[t] = compiletemplate(self._loader(t), self)
        return self._cache[t]

    def process(self, t, mapping):
        '''Perform expansion. t is name of map element to expand.
        mapping contains added elements for use during expansion. Is a
        generator.'''
        return _flatten(runtemplate(self, mapping, self._load(t)))

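The recursive `{foo}` substitution the docstring describes can be sketched with a toy expander over a plain dict. This is a hypothetical illustration only: it ignores filters, `%`-formatting, and the real parser, and it assumes every `{name}` reference resolves in the mapping:

```python
def expand(key, mapping):
    # Look up key, then repeatedly substitute {name} references until
    # none remain; nested references expand because each substitution
    # may itself introduce new braces.
    text = mapping[key]
    while '{' in text:
        start = text.index('{')
        end = text.index('}', start)
        name = text[start + 1:end]
        text = text[:start] + mapping[name] + text[end + 1:]
    return text

print(expand('greeting', {'greeting': 'hello {who}', 'who': 'world'}))
```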
engines = {'default': engine}

def stylelist():
    paths = templatepaths()
    if not paths:
        return _('no templates found, try `hg debuginstall` for more info')
    dirlist = os.listdir(paths[0])
    stylelist = []
    for file in dirlist:
        split = file.split(".")
        if split[0] == "map-cmdline":
            stylelist.append(split[1])
    return ", ".join(sorted(stylelist))

class TemplateNotFound(util.Abort):
    pass

class templater(object):

    def __init__(self, mapfile, filters={}, defaults={}, cache={},
                 minchunk=1024, maxchunk=65536):
        '''set up template engine.
        mapfile is name of file to read map definitions from.
        filters is dict of functions. each transforms a value into another.
        defaults is dict of default map definitions.'''
        self.mapfile = mapfile or 'template'
        self.cache = cache.copy()
        self.map = {}
        if mapfile:
            self.base = os.path.dirname(mapfile)
        else:
            self.base = ''
        self.filters = templatefilters.filters.copy()
        self.filters.update(filters)
        self.defaults = defaults
        self.minchunk, self.maxchunk = minchunk, maxchunk
        self.ecache = {}

        if not mapfile:
            return
        if not os.path.exists(mapfile):
            raise util.Abort(_("style '%s' not found") % mapfile,
                             hint=_("available styles: %s") % stylelist())

        conf = config.config()
        conf.read(mapfile)

        for key, val in conf[''].items():
            if not val:
                raise SyntaxError(_('%s: missing value') % conf.source('', key))
            if val[0] in "'\"":
                try:
                    self.cache[key] = parsestring(val)
                except SyntaxError, inst:
                    raise SyntaxError('%s: %s' %
                                      (conf.source('', key), inst.args[0]))
            else:
                val = 'default', val
                if ':' in val[1]:
                    val = val[1].split(':', 1)
                self.map[key] = val[0], os.path.join(self.base, val[1])

    def __contains__(self, key):
        return key in self.cache or key in self.map

    def load(self, t):
        '''Get the template for the given template name. Use a local cache.'''
        if t not in self.cache:
            try:
                self.cache[t] = util.readfile(self.map[t][1])
            except KeyError, inst:
                raise TemplateNotFound(_('"%s" not in template map') %
                                       inst.args[0])
            except IOError, inst:
                raise IOError(inst.args[0], _('template file %s: %s') %
                              (self.map[t][1], inst.args[1]))
        return self.cache[t]

    def __call__(self, t, **mapping):
        ttype = t in self.map and self.map[t][0] or 'default'
        if ttype not in self.ecache:
            self.ecache[ttype] = engines[ttype](self.load,
                                                self.filters, self.defaults)
        proc = self.ecache[ttype]

        stream = proc.process(t, mapping)
        if self.minchunk:
            stream = util.increasingchunks(stream, min=self.minchunk,
                                           max=self.maxchunk)
        return stream

def templatepaths():
    '''return locations used for template files.'''
    pathsrel = ['templates']
    paths = [os.path.normpath(os.path.join(util.datapath, f))
             for f in pathsrel]
    return [p for p in paths if os.path.isdir(p)]

def templatepath(name):
    '''return location of template file. returns None if not found.'''
    for p in templatepaths():
        f = os.path.join(p, name)
        if os.path.exists(f):
            return f
    return None

def stylemap(styles, paths=None):
    """Return path to mapfile for a given style.

    Searches mapfile in the following locations:
    1. templatepath/style/map
    2. templatepath/map-style
    3. templatepath/map
    """

    if paths is None:
        paths = templatepaths()
    elif isinstance(paths, str):
        paths = [paths]

    if isinstance(styles, str):
        styles = [styles]

    for style in styles:
        # only plain name is allowed to honor template paths
        if (not style
            or style in (os.curdir, os.pardir)
            or os.sep in style
            or os.altsep and os.altsep in style):
            continue
        locations = [os.path.join(style, 'map'), 'map-' + style]
        locations.append('map')

        for path in paths:
            for location in locations:
                mapfile = os.path.join(path, location)
                if os.path.isfile(mapfile):
                    return style, mapfile

    raise RuntimeError("No hgweb templates found in %r" % paths)

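The three-step search order in the `stylemap` docstring can be previewed without touching the filesystem. A sketch that only reproduces the lookup order (hypothetical `candidate_mapfiles` helper; the real function additionally filters unsafe style names and checks `os.path.isfile`):

```python
import os

def candidate_mapfiles(style, path):
    # Mirrors stylemap()'s search order within one template path:
    # style/map, then map-style, then the bare map file.
    locations = [os.path.join(style, 'map'), 'map-' + style, 'map']
    return [os.path.join(path, loc) for loc in locations]

print(candidate_mapfiles('gitweb', 'templates'))
```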
# tell hggettext to extract docstrings from these functions:
i18nfunctions = funcs.values()