cmdutil: drop aliases for logcmdutil functions (API)...
Yuya Nishihara
r35962:c8e2d6ed default

The requested changes are too big and the content was truncated; only the first part of the diff is shown.
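For third-party extensions hit by this API change, the update is essentially the same as the one visible in this file: import logcmdutil directly instead of going through the old cmdutil aliases. A minimal sketch, assuming a changeset-displayer call site (the function names on the call-site lines are assumptions based on this series; they do not appear in the truncated diff below):

    # before: log helpers reached through aliases kept in cmdutil
    from mercurial import cmdutil
    displayer = cmdutil.show_changeset(ui, repo, {})  # old alias (assumed call site)

    # after r35962: import and call logcmdutil directly
    from mercurial import logcmdutil
    displayer = logcmdutil.changesetdisplayer(ui, repo, {})  # assumed new name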

@@ -1,1125 +1,1125 b''
# bugzilla.py - bugzilla integration for mercurial
#
# Copyright 2006 Vadim Gelfer <vadim.gelfer@gmail.com>
# Copyright 2011-4 Jim Hague <jim.hague@acm.org>
#
# This software may be used and distributed according to the terms of the
# GNU General Public License version 2 or any later version.

'''hooks for integrating with the Bugzilla bug tracker

This hook extension adds comments on bugs in Bugzilla when changesets
that refer to bugs by Bugzilla ID are seen. The comment is formatted using
the Mercurial template mechanism.

The bug references can optionally include an update for Bugzilla of the
hours spent working on the bug. Bugs can also be marked fixed.

Four basic modes of access to Bugzilla are provided:

1. Access via the Bugzilla REST-API. Requires bugzilla 5.0 or later.

2. Access via the Bugzilla XMLRPC interface. Requires Bugzilla 3.4 or later.

3. Check data via the Bugzilla XMLRPC interface and submit bug change
   via email to Bugzilla email interface. Requires Bugzilla 3.4 or later.

4. Writing directly to the Bugzilla database. Only Bugzilla installations
   using MySQL are supported. Requires Python MySQLdb.

Writing directly to the database is susceptible to schema changes, and
relies on a Bugzilla contrib script to send out bug change
notification emails. This script runs as the user running Mercurial,
must be run on the host with the Bugzilla install, and requires
permission to read Bugzilla configuration details and the necessary
MySQL user and password to have full access rights to the Bugzilla
database. For these reasons this access mode is now considered
deprecated, and will not be updated for new Bugzilla versions going
forward. Only adding comments is supported in this access mode.

Access via XMLRPC needs a Bugzilla username and password to be specified
in the configuration. Comments are added under that username. Since the
configuration must be readable by all Mercurial users, it is recommended
that the rights of that user are restricted in Bugzilla to the minimum
necessary to add comments. Marking bugs fixed requires Bugzilla 4.0 and later.

Access via XMLRPC/email uses XMLRPC to query Bugzilla, but sends
email to the Bugzilla email interface to submit comments to bugs.
The From: address in the email is set to the email address of the Mercurial
user, so the comment appears to come from the Mercurial user. In the event
that the Mercurial user email is not recognized by Bugzilla as a Bugzilla
user, the email associated with the Bugzilla username used to log into
Bugzilla is used instead as the source of the comment. Marking bugs fixed
works on all supported Bugzilla versions.

Access via the REST-API needs either a Bugzilla username and password
or an apikey specified in the configuration. Comments are made under
the given username or the user associated with the apikey in Bugzilla.

Configuration items common to all access modes:

bugzilla.version
  The access type to use. Values recognized are:

  :``restapi``: Bugzilla REST-API, Bugzilla 5.0 and later.
  :``xmlrpc``: Bugzilla XMLRPC interface.
  :``xmlrpc+email``: Bugzilla XMLRPC and email interfaces.
  :``3.0``: MySQL access, Bugzilla 3.0 and later.
  :``2.18``: MySQL access, Bugzilla 2.18 and up to but not
    including 3.0.
  :``2.16``: MySQL access, Bugzilla 2.16 and up to but not
    including 2.18.

bugzilla.regexp
  Regular expression to match bug IDs for update in changeset commit message.
  It must contain one "()" named group ``<ids>`` containing the bug
  IDs separated by non-digit characters. It may also contain
  a named group ``<hours>`` with a floating-point number giving the
  hours worked on the bug. If no named groups are present, the first
  "()" group is assumed to contain the bug IDs, and work time is not
  updated. The default expression matches ``Bug 1234``, ``Bug no. 1234``,
  ``Bug number 1234``, ``Bugs 1234,5678``, ``Bug 1234 and 5678`` and
  variations thereof, followed by an hours number prefixed by ``h`` or
  ``hours``, e.g. ``hours 1.5``. Matching is case insensitive.

bugzilla.fixregexp
  Regular expression to match bug IDs for marking fixed in changeset
  commit message. This must contain a "()" named group ``<ids>` containing
  the bug IDs separated by non-digit characters. It may also contain
  a named group ``<hours>`` with a floating-point number giving the
  hours worked on the bug. If no named groups are present, the first
  "()" group is assumed to contain the bug IDs, and work time is not
  updated. The default expression matches ``Fixes 1234``, ``Fixes bug 1234``,
  ``Fixes bugs 1234,5678``, ``Fixes 1234 and 5678`` and
  variations thereof, followed by an hours number prefixed by ``h`` or
  ``hours``, e.g. ``hours 1.5``. Matching is case insensitive.

bugzilla.fixstatus
  The status to set a bug to when marking fixed. Default ``RESOLVED``.

bugzilla.fixresolution
  The resolution to set a bug to when marking fixed. Default ``FIXED``.

bugzilla.style
  The style file to use when formatting comments.

bugzilla.template
  Template to use when formatting comments. Overrides style if
  specified. In addition to the usual Mercurial keywords, the
  extension specifies:

  :``{bug}``: The Bugzilla bug ID.
  :``{root}``: The full pathname of the Mercurial repository.
  :``{webroot}``: Stripped pathname of the Mercurial repository.
  :``{hgweb}``: Base URL for browsing Mercurial repositories.

  Default ``changeset {node|short} in repo {root} refers to bug
  {bug}.\\ndetails:\\n\\t{desc|tabindent}``

bugzilla.strip
  The number of path separator characters to strip from the front of
  the Mercurial repository path (``{root}`` in templates) to produce
  ``{webroot}``. For example, a repository with ``{root}``
  ``/var/local/my-project`` with a strip of 2 gives a value for
  ``{webroot}`` of ``my-project``. Default 0.

web.baseurl
  Base URL for browsing Mercurial repositories. Referenced from
  templates as ``{hgweb}``.

Configuration items common to XMLRPC+email and MySQL access modes:

bugzilla.usermap
  Path of file containing Mercurial committer email to Bugzilla user email
  mappings. If specified, the file should contain one mapping per
  line::

    committer = Bugzilla user

  See also the ``[usermap]`` section.

The ``[usermap]`` section is used to specify mappings of Mercurial
committer email to Bugzilla user email. See also ``bugzilla.usermap``.
Contains entries of the form ``committer = Bugzilla user``.

XMLRPC and REST-API access mode configuration:

bugzilla.bzurl
  The base URL for the Bugzilla installation.
  Default ``http://localhost/bugzilla``.

bugzilla.user
  The username to use to log into Bugzilla via XMLRPC. Default
  ``bugs``.

bugzilla.password
  The password for Bugzilla login.

REST-API access mode uses the options listed above as well as:

bugzilla.apikey
  An apikey generated on the Bugzilla instance for api access.
  Using an apikey removes the need to store the user and password
  options.

XMLRPC+email access mode uses the XMLRPC access mode configuration items,
and also:

bugzilla.bzemail
  The Bugzilla email address.

In addition, the Mercurial email settings must be configured. See the
documentation in hgrc(5), sections ``[email]`` and ``[smtp]``.

MySQL access mode configuration:

bugzilla.host
  Hostname of the MySQL server holding the Bugzilla database.
  Default ``localhost``.

bugzilla.db
  Name of the Bugzilla database in MySQL. Default ``bugs``.

bugzilla.user
  Username to use to access MySQL server. Default ``bugs``.

bugzilla.password
  Password to use to access MySQL server.

bugzilla.timeout
  Database connection timeout (seconds). Default 5.

bugzilla.bzuser
  Fallback Bugzilla user name to record comments with, if changeset
  committer cannot be found as a Bugzilla user.

bugzilla.bzdir
  Bugzilla install directory. Used by default notify. Default
  ``/var/www/html/bugzilla``.

bugzilla.notify
  The command to run to get Bugzilla to send bug change notification
  emails. Substitutes from a map with 3 keys, ``bzdir``, ``id`` (bug
  id) and ``user`` (committer bugzilla email). Default depends on
  version; from 2.18 it is "cd %(bzdir)s && perl -T
  contrib/sendbugmail.pl %(id)s %(user)s".

Activating the extension::

    [extensions]
    bugzilla =

    [hooks]
    # run bugzilla hook on every change pulled or pushed in here
    incoming.bugzilla = python:hgext.bugzilla.hook

Example configurations:

XMLRPC example configuration. This uses the Bugzilla at
``http://my-project.org/bugzilla``, logging in as user
``bugmail@my-project.org`` with password ``plugh``. It is used with a
collection of Mercurial repositories in ``/var/local/hg/repos/``,
with a web interface at ``http://my-project.org/hg``. ::

    [bugzilla]
    bzurl=http://my-project.org/bugzilla
    user=bugmail@my-project.org
    password=plugh
    version=xmlrpc
    template=Changeset {node|short} in {root|basename}.
             {hgweb}/{webroot}/rev/{node|short}\\n
             {desc}\\n
    strip=5

    [web]
    baseurl=http://my-project.org/hg

XMLRPC+email example configuration. This uses the Bugzilla at
``http://my-project.org/bugzilla``, logging in as user
``bugmail@my-project.org`` with password ``plugh``. It is used with a
collection of Mercurial repositories in ``/var/local/hg/repos/``,
with a web interface at ``http://my-project.org/hg``. Bug comments
are sent to the Bugzilla email address
``bugzilla@my-project.org``. ::

    [bugzilla]
    bzurl=http://my-project.org/bugzilla
    user=bugmail@my-project.org
    password=plugh
    version=xmlrpc+email
    bzemail=bugzilla@my-project.org
    template=Changeset {node|short} in {root|basename}.
             {hgweb}/{webroot}/rev/{node|short}\\n
             {desc}\\n
    strip=5

    [web]
    baseurl=http://my-project.org/hg

    [usermap]
    user@emaildomain.com=user.name@bugzilladomain.com

MySQL example configuration. This has a local Bugzilla 3.2 installation
in ``/opt/bugzilla-3.2``. The MySQL database is on ``localhost``,
the Bugzilla database name is ``bugs`` and MySQL is
accessed with MySQL username ``bugs`` password ``XYZZY``. It is used
with a collection of Mercurial repositories in ``/var/local/hg/repos/``,
with a web interface at ``http://my-project.org/hg``. ::

    [bugzilla]
    host=localhost
    password=XYZZY
    version=3.0
    bzuser=unknown@domain.com
    bzdir=/opt/bugzilla-3.2
    template=Changeset {node|short} in {root|basename}.
             {hgweb}/{webroot}/rev/{node|short}\\n
             {desc}\\n
    strip=5

    [web]
    baseurl=http://my-project.org/hg

    [usermap]
    user@emaildomain.com=user.name@bugzilladomain.com

All the above add a comment to the Bugzilla bug record of the form::

    Changeset 3b16791d6642 in repository-name.
    http://my-project.org/hg/repository-name/rev/3b16791d6642

    Changeset commit comment. Bug 1234.
'''

from __future__ import absolute_import

import json
import re
import time

from mercurial.i18n import _
from mercurial.node import short
from mercurial import (
-    cmdutil,
    error,
+    logcmdutil,
    mail,
    registrar,
    url,
    util,
)

xmlrpclib = util.xmlrpclib

# Note for extension authors: ONLY specify testedwith = 'ships-with-hg-core' for
# extensions which SHIP WITH MERCURIAL. Non-mainline extensions should
# be specifying the version(s) of Mercurial they are tested with, or
# leave the attribute unspecified.
testedwith = 'ships-with-hg-core'

configtable = {}
configitem = registrar.configitem(configtable)

configitem('bugzilla', 'apikey',
    default='',
)
configitem('bugzilla', 'bzdir',
    default='/var/www/html/bugzilla',
)
configitem('bugzilla', 'bzemail',
    default=None,
)
configitem('bugzilla', 'bzurl',
    default='http://localhost/bugzilla/',
)
configitem('bugzilla', 'bzuser',
    default=None,
)
configitem('bugzilla', 'db',
    default='bugs',
)
configitem('bugzilla', 'fixregexp',
    default=(r'fix(?:es)?\s*(?:bugs?\s*)?,?\s*'
             r'(?:nos?\.?|num(?:ber)?s?)?\s*'
             r'(?P<ids>(?:#?\d+\s*(?:,?\s*(?:and)?)?\s*)+)'
             r'\.?\s*(?:h(?:ours?)?\s*(?P<hours>\d*(?:\.\d+)?))?')
)
configitem('bugzilla', 'fixresolution',
    default='FIXED',
)
configitem('bugzilla', 'fixstatus',
    default='RESOLVED',
)
configitem('bugzilla', 'host',
    default='localhost',
)
configitem('bugzilla', 'notify',
    default=configitem.dynamicdefault,
)
configitem('bugzilla', 'password',
    default=None,
)
configitem('bugzilla', 'regexp',
    default=(r'bugs?\s*,?\s*(?:#|nos?\.?|num(?:ber)?s?)?\s*'
             r'(?P<ids>(?:\d+\s*(?:,?\s*(?:and)?)?\s*)+)'
             r'\.?\s*(?:h(?:ours?)?\s*(?P<hours>\d*(?:\.\d+)?))?')
)
configitem('bugzilla', 'strip',
    default=0,
)
configitem('bugzilla', 'style',
    default=None,
)
configitem('bugzilla', 'template',
    default=None,
)
configitem('bugzilla', 'timeout',
    default=5,
)
configitem('bugzilla', 'user',
    default='bugs',
)
configitem('bugzilla', 'usermap',
    default=None,
)
configitem('bugzilla', 'version',
    default=None,
)

class bzaccess(object):
    '''Base class for access to Bugzilla.'''

    def __init__(self, ui):
        self.ui = ui
        usermap = self.ui.config('bugzilla', 'usermap')
        if usermap:
            self.ui.readconfig(usermap, sections=['usermap'])

    def map_committer(self, user):
        '''map name of committer to Bugzilla user name.'''
        for committer, bzuser in self.ui.configitems('usermap'):
            if committer.lower() == user.lower():
                return bzuser
        return user

    # Methods to be implemented by access classes.
    #
    # 'bugs' is a dict keyed on bug id, where values are a dict holding
    # updates to bug state. Recognized dict keys are:
    #
    # 'hours': Value, float containing work hours to be updated.
    # 'fix': If key present, bug is to be marked fixed. Value ignored.

    def filter_real_bug_ids(self, bugs):
        '''remove bug IDs that do not exist in Bugzilla from bugs.'''

    def filter_cset_known_bug_ids(self, node, bugs):
        '''remove bug IDs where node occurs in comment text from bugs.'''

    def updatebug(self, bugid, newstate, text, committer):
        '''update the specified bug. Add comment text and set new states.

        If possible add the comment as being from the committer of
        the changeset. Otherwise use the default Bugzilla user.
        '''

    def notify(self, bugs, committer):
        '''Force sending of Bugzilla notification emails.

        Only required if the access method does not trigger notification
        emails automatically.
        '''

# Bugzilla via direct access to MySQL database.
class bzmysql(bzaccess):
    '''Support for direct MySQL access to Bugzilla.

    The earliest Bugzilla version this is tested with is version 2.16.

    If your Bugzilla is version 3.4 or above, you are strongly
    recommended to use the XMLRPC access method instead.
    '''

    @staticmethod
    def sql_buglist(ids):
        '''return SQL-friendly list of bug ids'''
        return '(' + ','.join(map(str, ids)) + ')'

    _MySQLdb = None

    def __init__(self, ui):
        try:
            import MySQLdb as mysql
            bzmysql._MySQLdb = mysql
        except ImportError as err:
            raise error.Abort(_('python mysql support not available: %s') % err)

        bzaccess.__init__(self, ui)

        host = self.ui.config('bugzilla', 'host')
        user = self.ui.config('bugzilla', 'user')
        passwd = self.ui.config('bugzilla', 'password')
        db = self.ui.config('bugzilla', 'db')
        timeout = int(self.ui.config('bugzilla', 'timeout'))
        self.ui.note(_('connecting to %s:%s as %s, password %s\n') %
                     (host, db, user, '*' * len(passwd)))
        self.conn = bzmysql._MySQLdb.connect(host=host,
                                             user=user, passwd=passwd,
                                             db=db,
                                             connect_timeout=timeout)
        self.cursor = self.conn.cursor()
        self.longdesc_id = self.get_longdesc_id()
        self.user_ids = {}
        self.default_notify = "cd %(bzdir)s && ./processmail %(id)s %(user)s"

    def run(self, *args, **kwargs):
        '''run a query.'''
        self.ui.note(_('query: %s %s\n') % (args, kwargs))
        try:
            self.cursor.execute(*args, **kwargs)
        except bzmysql._MySQLdb.MySQLError:
            self.ui.note(_('failed query: %s %s\n') % (args, kwargs))
            raise

    def get_longdesc_id(self):
        '''get identity of longdesc field'''
        self.run('select fieldid from fielddefs where name = "longdesc"')
        ids = self.cursor.fetchall()
        if len(ids) != 1:
            raise error.Abort(_('unknown database schema'))
        return ids[0][0]

    def filter_real_bug_ids(self, bugs):
        '''filter not-existing bugs from set.'''
        self.run('select bug_id from bugs where bug_id in %s' %
                 bzmysql.sql_buglist(bugs.keys()))
        existing = [id for (id,) in self.cursor.fetchall()]
        for id in bugs.keys():
            if id not in existing:
                self.ui.status(_('bug %d does not exist\n') % id)
                del bugs[id]

    def filter_cset_known_bug_ids(self, node, bugs):
        '''filter bug ids that already refer to this changeset from set.'''
        self.run('''select bug_id from longdescs where
                    bug_id in %s and thetext like "%%%s%%"''' %
                 (bzmysql.sql_buglist(bugs.keys()), short(node)))
        for (id,) in self.cursor.fetchall():
            self.ui.status(_('bug %d already knows about changeset %s\n') %
                           (id, short(node)))
            del bugs[id]

    def notify(self, bugs, committer):
        '''tell bugzilla to send mail.'''
        self.ui.status(_('telling bugzilla to send mail:\n'))
        (user, userid) = self.get_bugzilla_user(committer)
        for id in bugs.keys():
            self.ui.status(_(' bug %s\n') % id)
            cmdfmt = self.ui.config('bugzilla', 'notify', self.default_notify)
            bzdir = self.ui.config('bugzilla', 'bzdir')
            try:
                # Backwards-compatible with old notify string, which
                # took one string. This will throw with a new format
                # string.
                cmd = cmdfmt % id
            except TypeError:
                cmd = cmdfmt % {'bzdir': bzdir, 'id': id, 'user': user}
            self.ui.note(_('running notify command %s\n') % cmd)
            fp = util.popen('(%s) 2>&1' % cmd)
            out = fp.read()
            ret = fp.close()
            if ret:
                self.ui.warn(out)
                raise error.Abort(_('bugzilla notify command %s') %
                                  util.explainexit(ret)[0])
        self.ui.status(_('done\n'))

    def get_user_id(self, user):
        '''look up numeric bugzilla user id.'''
        try:
            return self.user_ids[user]
        except KeyError:
            try:
                userid = int(user)
            except ValueError:
                self.ui.note(_('looking up user %s\n') % user)
                self.run('''select userid from profiles
                            where login_name like %s''', user)
                all = self.cursor.fetchall()
                if len(all) != 1:
                    raise KeyError(user)
                userid = int(all[0][0])
            self.user_ids[user] = userid
            return userid

    def get_bugzilla_user(self, committer):
        '''See if committer is a registered bugzilla user. Return
        bugzilla username and userid if so. If not, return default
        bugzilla username and userid.'''
        user = self.map_committer(committer)
        try:
            userid = self.get_user_id(user)
        except KeyError:
            try:
                defaultuser = self.ui.config('bugzilla', 'bzuser')
                if not defaultuser:
                    raise error.Abort(_('cannot find bugzilla user id for %s') %
                                      user)
                userid = self.get_user_id(defaultuser)
                user = defaultuser
            except KeyError:
                raise error.Abort(_('cannot find bugzilla user id for %s or %s')
                                  % (user, defaultuser))
        return (user, userid)

    def updatebug(self, bugid, newstate, text, committer):
        '''update bug state with comment text.

        Try adding comment as committer of changeset, otherwise as
        default bugzilla user.'''
        if len(newstate) > 0:
            self.ui.warn(_("Bugzilla/MySQL cannot update bug state\n"))

        (user, userid) = self.get_bugzilla_user(committer)
        now = time.strftime(r'%Y-%m-%d %H:%M:%S')
        self.run('''insert into longdescs
                    (bug_id, who, bug_when, thetext)
                    values (%s, %s, %s, %s)''',
                 (bugid, userid, now, text))
        self.run('''insert into bugs_activity (bug_id, who, bug_when, fieldid)
                    values (%s, %s, %s, %s)''',
                 (bugid, userid, now, self.longdesc_id))
        self.conn.commit()

593 class bzmysql_2_18(bzmysql):
593 class bzmysql_2_18(bzmysql):
594 '''support for bugzilla 2.18 series.'''
594 '''support for bugzilla 2.18 series.'''
595
595
596 def __init__(self, ui):
596 def __init__(self, ui):
597 bzmysql.__init__(self, ui)
597 bzmysql.__init__(self, ui)
598 self.default_notify = \
598 self.default_notify = \
599 "cd %(bzdir)s && perl -T contrib/sendbugmail.pl %(id)s %(user)s"
599 "cd %(bzdir)s && perl -T contrib/sendbugmail.pl %(id)s %(user)s"
600
600
601 class bzmysql_3_0(bzmysql_2_18):
601 class bzmysql_3_0(bzmysql_2_18):
602 '''support for bugzilla 3.0 series.'''
602 '''support for bugzilla 3.0 series.'''
603
603
604 def __init__(self, ui):
604 def __init__(self, ui):
605 bzmysql_2_18.__init__(self, ui)
605 bzmysql_2_18.__init__(self, ui)
606
606
607 def get_longdesc_id(self):
607 def get_longdesc_id(self):
608 '''get identity of longdesc field'''
608 '''get identity of longdesc field'''
609 self.run('select id from fielddefs where name = "longdesc"')
609 self.run('select id from fielddefs where name = "longdesc"')
610 ids = self.cursor.fetchall()
610 ids = self.cursor.fetchall()
611 if len(ids) != 1:
611 if len(ids) != 1:
612 raise error.Abort(_('unknown database schema'))
612 raise error.Abort(_('unknown database schema'))
613 return ids[0][0]
613 return ids[0][0]
614
614
615 # Bugzilla via XMLRPC interface.
615 # Bugzilla via XMLRPC interface.
616
616
617 class cookietransportrequest(object):
617 class cookietransportrequest(object):
618 """A Transport request method that retains cookies over its lifetime.
618 """A Transport request method that retains cookies over its lifetime.
619
619
620 The regular xmlrpclib transports ignore cookies. Which causes
620 The regular xmlrpclib transports ignore cookies. Which causes
621 a bit of a problem when you need a cookie-based login, as with
621 a bit of a problem when you need a cookie-based login, as with
622 the Bugzilla XMLRPC interface prior to 4.4.3.
622 the Bugzilla XMLRPC interface prior to 4.4.3.
623
623
624 So this is a helper for defining a Transport which looks for
624 So this is a helper for defining a Transport which looks for
625 cookies being set in responses and saves them to add to all future
625 cookies being set in responses and saves them to add to all future
626 requests.
626 requests.
627 """
627 """
628
628
629 # Inspiration drawn from
629 # Inspiration drawn from
630 # http://blog.godson.in/2010/09/how-to-make-python-xmlrpclib-client.html
630 # http://blog.godson.in/2010/09/how-to-make-python-xmlrpclib-client.html
631 # http://www.itkovian.net/base/transport-class-for-pythons-xml-rpc-lib/
631 # http://www.itkovian.net/base/transport-class-for-pythons-xml-rpc-lib/
632
632
633 cookies = []
633 cookies = []
634 def send_cookies(self, connection):
634 def send_cookies(self, connection):
635 if self.cookies:
635 if self.cookies:
636 for cookie in self.cookies:
636 for cookie in self.cookies:
637 connection.putheader("Cookie", cookie)
637 connection.putheader("Cookie", cookie)
638
638
639 def request(self, host, handler, request_body, verbose=0):
639 def request(self, host, handler, request_body, verbose=0):
640 self.verbose = verbose
640 self.verbose = verbose
641 self.accept_gzip_encoding = False
641 self.accept_gzip_encoding = False
642
642
643 # issue XML-RPC request
643 # issue XML-RPC request
644 h = self.make_connection(host)
644 h = self.make_connection(host)
645 if verbose:
645 if verbose:
646 h.set_debuglevel(1)
646 h.set_debuglevel(1)
647
647
648 self.send_request(h, handler, request_body)
648 self.send_request(h, handler, request_body)
649 self.send_host(h, host)
649 self.send_host(h, host)
650 self.send_cookies(h)
650 self.send_cookies(h)
651 self.send_user_agent(h)
651 self.send_user_agent(h)
652 self.send_content(h, request_body)
652 self.send_content(h, request_body)
653
653
654 # Deal with differences between Python 2.6 and 2.7.
654 # Deal with differences between Python 2.6 and 2.7.
655 # In the former h is a HTTP(S). In the latter it's a
655 # In the former h is a HTTP(S). In the latter it's a
656 # HTTP(S)Connection. Luckily, the 2.6 implementation of
656 # HTTP(S)Connection. Luckily, the 2.6 implementation of
657 # HTTP(S) has an underlying HTTP(S)Connection, so extract
657 # HTTP(S) has an underlying HTTP(S)Connection, so extract
658 # that and use it.
658 # that and use it.
659 try:
659 try:
660 response = h.getresponse()
660 response = h.getresponse()
661 except AttributeError:
661 except AttributeError:
662 response = h._conn.getresponse()
662 response = h._conn.getresponse()
663
663
664 # Add any cookie definitions to our list.
664 # Add any cookie definitions to our list.
665 for header in response.msg.getallmatchingheaders("Set-Cookie"):
665 for header in response.msg.getallmatchingheaders("Set-Cookie"):
666 val = header.split(": ", 1)[1]
666 val = header.split(": ", 1)[1]
667 cookie = val.split(";", 1)[0]
667 cookie = val.split(";", 1)[0]
668 self.cookies.append(cookie)
668 self.cookies.append(cookie)
669
669
670 if response.status != 200:
670 if response.status != 200:
671 raise xmlrpclib.ProtocolError(host + handler, response.status,
671 raise xmlrpclib.ProtocolError(host + handler, response.status,
672 response.reason, response.msg.headers)
672 response.reason, response.msg.headers)
673
673
674 payload = response.read()
674 payload = response.read()
675 parser, unmarshaller = self.getparser()
675 parser, unmarshaller = self.getparser()
676 parser.feed(payload)
676 parser.feed(payload)
677 parser.close()
677 parser.close()
678
678
679 return unmarshaller.close()
679 return unmarshaller.close()
680
680
681 # The explicit calls to the underlying xmlrpclib __init__() methods are
681 # The explicit calls to the underlying xmlrpclib __init__() methods are
682 # necessary. The xmlrpclib.Transport classes are old-style classes, and
682 # necessary. The xmlrpclib.Transport classes are old-style classes, and
683 # it turns out their __init__() doesn't get called when doing multiple
683 # it turns out their __init__() doesn't get called when doing multiple
684 # inheritance with a new-style class.
684 # inheritance with a new-style class.
685 class cookietransport(cookietransportrequest, xmlrpclib.Transport):
685 class cookietransport(cookietransportrequest, xmlrpclib.Transport):
686 def __init__(self, use_datetime=0):
686 def __init__(self, use_datetime=0):
687 if util.safehasattr(xmlrpclib.Transport, "__init__"):
687 if util.safehasattr(xmlrpclib.Transport, "__init__"):
688 xmlrpclib.Transport.__init__(self, use_datetime)
688 xmlrpclib.Transport.__init__(self, use_datetime)
689
689
690 class cookiesafetransport(cookietransportrequest, xmlrpclib.SafeTransport):
690 class cookiesafetransport(cookietransportrequest, xmlrpclib.SafeTransport):
691 def __init__(self, use_datetime=0):
691 def __init__(self, use_datetime=0):
692 if util.safehasattr(xmlrpclib.Transport, "__init__"):
692 if util.safehasattr(xmlrpclib.Transport, "__init__"):
693 xmlrpclib.SafeTransport.__init__(self, use_datetime)
693 xmlrpclib.SafeTransport.__init__(self, use_datetime)
694
694
695 class bzxmlrpc(bzaccess):
695 class bzxmlrpc(bzaccess):
696 """Support for access to Bugzilla via the Bugzilla XMLRPC API.
696 """Support for access to Bugzilla via the Bugzilla XMLRPC API.
697
697
698 Requires a minimum Bugzilla version 3.4.
698 Requires a minimum Bugzilla version 3.4.
699 """
699 """
700
700
701 def __init__(self, ui):
701 def __init__(self, ui):
702 bzaccess.__init__(self, ui)
702 bzaccess.__init__(self, ui)
703
703
704 bzweb = self.ui.config('bugzilla', 'bzurl')
704 bzweb = self.ui.config('bugzilla', 'bzurl')
705 bzweb = bzweb.rstrip("/") + "/xmlrpc.cgi"
705 bzweb = bzweb.rstrip("/") + "/xmlrpc.cgi"
706
706
707 user = self.ui.config('bugzilla', 'user')
707 user = self.ui.config('bugzilla', 'user')
708 passwd = self.ui.config('bugzilla', 'password')
708 passwd = self.ui.config('bugzilla', 'password')
709
709
710 self.fixstatus = self.ui.config('bugzilla', 'fixstatus')
710 self.fixstatus = self.ui.config('bugzilla', 'fixstatus')
711 self.fixresolution = self.ui.config('bugzilla', 'fixresolution')
711 self.fixresolution = self.ui.config('bugzilla', 'fixresolution')
712
712
713 self.bzproxy = xmlrpclib.ServerProxy(bzweb, self.transport(bzweb))
713 self.bzproxy = xmlrpclib.ServerProxy(bzweb, self.transport(bzweb))
714 ver = self.bzproxy.Bugzilla.version()['version'].split('.')
714 ver = self.bzproxy.Bugzilla.version()['version'].split('.')
715 self.bzvermajor = int(ver[0])
715 self.bzvermajor = int(ver[0])
716 self.bzverminor = int(ver[1])
716 self.bzverminor = int(ver[1])
717 login = self.bzproxy.User.login({'login': user, 'password': passwd,
717 login = self.bzproxy.User.login({'login': user, 'password': passwd,
718 'restrict_login': True})
718 'restrict_login': True})
719 self.bztoken = login.get('token', '')
719 self.bztoken = login.get('token', '')
720
720
721 def transport(self, uri):
721 def transport(self, uri):
722 if util.urlreq.urlparse(uri, "http")[0] == "https":
722 if util.urlreq.urlparse(uri, "http")[0] == "https":
723 return cookiesafetransport()
723 return cookiesafetransport()
724 else:
724 else:
725 return cookietransport()
725 return cookietransport()
726
726
727 def get_bug_comments(self, id):
727 def get_bug_comments(self, id):
728 """Return a string with all comment text for a bug."""
728 """Return a string with all comment text for a bug."""
729 c = self.bzproxy.Bug.comments({'ids': [id],
729 c = self.bzproxy.Bug.comments({'ids': [id],
730 'include_fields': ['text'],
730 'include_fields': ['text'],
731 'token': self.bztoken})
731 'token': self.bztoken})
732 return ''.join([t['text'] for t in c['bugs'][str(id)]['comments']])
732 return ''.join([t['text'] for t in c['bugs'][str(id)]['comments']])
733
733
734 def filter_real_bug_ids(self, bugs):
734 def filter_real_bug_ids(self, bugs):
735 probe = self.bzproxy.Bug.get({'ids': sorted(bugs.keys()),
735 probe = self.bzproxy.Bug.get({'ids': sorted(bugs.keys()),
736 'include_fields': [],
736 'include_fields': [],
737 'permissive': True,
737 'permissive': True,
738 'token': self.bztoken,
738 'token': self.bztoken,
739 })
739 })
740 for badbug in probe['faults']:
740 for badbug in probe['faults']:
741 id = badbug['id']
741 id = badbug['id']
742 self.ui.status(_('bug %d does not exist\n') % id)
742 self.ui.status(_('bug %d does not exist\n') % id)
743 del bugs[id]
743 del bugs[id]
744
744
745 def filter_cset_known_bug_ids(self, node, bugs):
745 def filter_cset_known_bug_ids(self, node, bugs):
746 for id in sorted(bugs.keys()):
746 for id in sorted(bugs.keys()):
747 if self.get_bug_comments(id).find(short(node)) != -1:
747 if self.get_bug_comments(id).find(short(node)) != -1:
748 self.ui.status(_('bug %d already knows about changeset %s\n') %
748 self.ui.status(_('bug %d already knows about changeset %s\n') %
749 (id, short(node)))
749 (id, short(node)))
750 del bugs[id]
750 del bugs[id]
751
751
752 def updatebug(self, bugid, newstate, text, committer):
752 def updatebug(self, bugid, newstate, text, committer):
753 args = {}
753 args = {}
754 if 'hours' in newstate:
754 if 'hours' in newstate:
755 args['work_time'] = newstate['hours']
755 args['work_time'] = newstate['hours']
756
756
757 if self.bzvermajor >= 4:
757 if self.bzvermajor >= 4:
758 args['ids'] = [bugid]
758 args['ids'] = [bugid]
759 args['comment'] = {'body' : text}
759 args['comment'] = {'body' : text}
760 if 'fix' in newstate:
760 if 'fix' in newstate:
761 args['status'] = self.fixstatus
761 args['status'] = self.fixstatus
762 args['resolution'] = self.fixresolution
762 args['resolution'] = self.fixresolution
763 args['token'] = self.bztoken
763 args['token'] = self.bztoken
764 self.bzproxy.Bug.update(args)
764 self.bzproxy.Bug.update(args)
765 else:
765 else:
766 if 'fix' in newstate:
766 if 'fix' in newstate:
767 self.ui.warn(_("Bugzilla/XMLRPC needs Bugzilla 4.0 or later "
767 self.ui.warn(_("Bugzilla/XMLRPC needs Bugzilla 4.0 or later "
768 "to mark bugs fixed\n"))
768 "to mark bugs fixed\n"))
769 args['id'] = bugid
769 args['id'] = bugid
770 args['comment'] = text
770 args['comment'] = text
771 self.bzproxy.Bug.add_comment(args)
771 self.bzproxy.Bug.add_comment(args)
772
772
773 class bzxmlrpcemail(bzxmlrpc):
773 class bzxmlrpcemail(bzxmlrpc):
774 """Read data from Bugzilla via XMLRPC, send updates via email.
774 """Read data from Bugzilla via XMLRPC, send updates via email.
775
775
776 Advantages of sending updates via email:
776 Advantages of sending updates via email:
777 1. Comments can be added as any user, not just logged in user.
777 1. Comments can be added as any user, not just logged in user.
778 2. Bug statuses or other fields not accessible via XMLRPC can
778 2. Bug statuses or other fields not accessible via XMLRPC can
779 potentially be updated.
779 potentially be updated.
780
780
781 There is no XMLRPC function to change bug status before Bugzilla
781 There is no XMLRPC function to change bug status before Bugzilla
782 4.0, so bugs cannot be marked fixed via XMLRPC before Bugzilla 4.0.
782 4.0, so bugs cannot be marked fixed via XMLRPC before Bugzilla 4.0.
783 But bugs can be marked fixed via email from 3.4 onwards.
783 But bugs can be marked fixed via email from 3.4 onwards.
784 """
784 """
785
785
786 # The email interface changes subtly between 3.4 and 3.6. In 3.4,
786 # The email interface changes subtly between 3.4 and 3.6. In 3.4,
787 # in-email fields are specified as '@<fieldname> = <value>'. In
787 # in-email fields are specified as '@<fieldname> = <value>'. In
788 # 3.6 this becomes '@<fieldname> <value>'. And fieldname @bug_id
788 # 3.6 this becomes '@<fieldname> <value>'. And fieldname @bug_id
789 # in 3.4 becomes @id in 3.6. 3.6 and 4.0 both maintain backwards
789 # in 3.4 becomes @id in 3.6. 3.6 and 4.0 both maintain backwards
790 # compatibility, but rather than rely on this use the new format for
790 # compatibility, but rather than rely on this use the new format for
791 # 4.0 onwards.
791 # 4.0 onwards.
792
792
793 def __init__(self, ui):
793 def __init__(self, ui):
794 bzxmlrpc.__init__(self, ui)
794 bzxmlrpc.__init__(self, ui)
795
795
796 self.bzemail = self.ui.config('bugzilla', 'bzemail')
796 self.bzemail = self.ui.config('bugzilla', 'bzemail')
797 if not self.bzemail:
797 if not self.bzemail:
798 raise error.Abort(_("configuration 'bzemail' missing"))
798 raise error.Abort(_("configuration 'bzemail' missing"))
799 mail.validateconfig(self.ui)
799 mail.validateconfig(self.ui)
800
800
801 def makecommandline(self, fieldname, value):
801 def makecommandline(self, fieldname, value):
802 if self.bzvermajor >= 4:
802 if self.bzvermajor >= 4:
803 return "@%s %s" % (fieldname, str(value))
803 return "@%s %s" % (fieldname, str(value))
804 else:
804 else:
805 if fieldname == "id":
805 if fieldname == "id":
806 fieldname = "bug_id"
806 fieldname = "bug_id"
807 return "@%s = %s" % (fieldname, str(value))
807 return "@%s = %s" % (fieldname, str(value))
808
808
809 def send_bug_modify_email(self, bugid, commands, comment, committer):
809 def send_bug_modify_email(self, bugid, commands, comment, committer):
810 '''send modification message to Bugzilla bug via email.
810 '''send modification message to Bugzilla bug via email.
811
811
812 The message format is documented in the Bugzilla email_in.pl
812 The message format is documented in the Bugzilla email_in.pl
813 specification. commands is a list of command lines, comment is the
813 specification. commands is a list of command lines, comment is the
814 comment text.
814 comment text.
815
815
816 To stop users from crafting commit comments with
816 To stop users from crafting commit comments with
817 Bugzilla commands, specify the bug ID via the message body, rather
817 Bugzilla commands, specify the bug ID via the message body, rather
818 than the subject line, and leave a blank line after it.
818 than the subject line, and leave a blank line after it.
819 '''
819 '''
820 user = self.map_committer(committer)
820 user = self.map_committer(committer)
821 matches = self.bzproxy.User.get({'match': [user],
821 matches = self.bzproxy.User.get({'match': [user],
822 'token': self.bztoken})
822 'token': self.bztoken})
823 if not matches['users']:
823 if not matches['users']:
824 user = self.ui.config('bugzilla', 'user')
824 user = self.ui.config('bugzilla', 'user')
825 matches = self.bzproxy.User.get({'match': [user],
825 matches = self.bzproxy.User.get({'match': [user],
826 'token': self.bztoken})
826 'token': self.bztoken})
827 if not matches['users']:
827 if not matches['users']:
828 raise error.Abort(_("default bugzilla user %s email not found")
828 raise error.Abort(_("default bugzilla user %s email not found")
829 % user)
829 % user)
830 user = matches['users'][0]['email']
830 user = matches['users'][0]['email']
831 commands.append(self.makecommandline("id", bugid))
831 commands.append(self.makecommandline("id", bugid))
832
832
833 text = "\n".join(commands) + "\n\n" + comment
833 text = "\n".join(commands) + "\n\n" + comment
834
834
835 _charsets = mail._charsets(self.ui)
835 _charsets = mail._charsets(self.ui)
836 user = mail.addressencode(self.ui, user, _charsets)
836 user = mail.addressencode(self.ui, user, _charsets)
837 bzemail = mail.addressencode(self.ui, self.bzemail, _charsets)
837 bzemail = mail.addressencode(self.ui, self.bzemail, _charsets)
838 msg = mail.mimeencode(self.ui, text, _charsets)
838 msg = mail.mimeencode(self.ui, text, _charsets)
839 msg['From'] = user
839 msg['From'] = user
840 msg['To'] = bzemail
840 msg['To'] = bzemail
841 msg['Subject'] = mail.headencode(self.ui, "Bug modification", _charsets)
841 msg['Subject'] = mail.headencode(self.ui, "Bug modification", _charsets)
842 sendmail = mail.connect(self.ui)
842 sendmail = mail.connect(self.ui)
843 sendmail(user, bzemail, msg.as_string())
843 sendmail(user, bzemail, msg.as_string())
844
844
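# Sketch of the message body assembled above: the command lines, a blank
# line, then the comment text (values are illustrative):
#   @work_time 1.5
#   @id 1042
#
#   <comment text generated from the changeset template>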
845 def updatebug(self, bugid, newstate, text, committer):
845 def updatebug(self, bugid, newstate, text, committer):
846 cmds = []
846 cmds = []
847 if 'hours' in newstate:
847 if 'hours' in newstate:
848 cmds.append(self.makecommandline("work_time", newstate['hours']))
848 cmds.append(self.makecommandline("work_time", newstate['hours']))
849 if 'fix' in newstate:
849 if 'fix' in newstate:
850 cmds.append(self.makecommandline("bug_status", self.fixstatus))
850 cmds.append(self.makecommandline("bug_status", self.fixstatus))
851 cmds.append(self.makecommandline("resolution", self.fixresolution))
851 cmds.append(self.makecommandline("resolution", self.fixresolution))
852 self.send_bug_modify_email(bugid, cmds, text, committer)
852 self.send_bug_modify_email(bugid, cmds, text, committer)
853
853
854 class NotFound(LookupError):
854 class NotFound(LookupError):
855 pass
855 pass
856
856
857 class bzrestapi(bzaccess):
857 class bzrestapi(bzaccess):
858 """Read and write bugzilla data using the REST API available since
858 """Read and write bugzilla data using the REST API available since
859 Bugzilla 5.0.
859 Bugzilla 5.0.
860 """
860 """
861 def __init__(self, ui):
861 def __init__(self, ui):
862 bzaccess.__init__(self, ui)
862 bzaccess.__init__(self, ui)
863 bz = self.ui.config('bugzilla', 'bzurl')
863 bz = self.ui.config('bugzilla', 'bzurl')
864 self.bzroot = '/'.join([bz, 'rest'])
864 self.bzroot = '/'.join([bz, 'rest'])
865 self.apikey = self.ui.config('bugzilla', 'apikey')
865 self.apikey = self.ui.config('bugzilla', 'apikey')
866 self.user = self.ui.config('bugzilla', 'user')
866 self.user = self.ui.config('bugzilla', 'user')
867 self.passwd = self.ui.config('bugzilla', 'password')
867 self.passwd = self.ui.config('bugzilla', 'password')
868 self.fixstatus = self.ui.config('bugzilla', 'fixstatus')
868 self.fixstatus = self.ui.config('bugzilla', 'fixstatus')
869 self.fixresolution = self.ui.config('bugzilla', 'fixresolution')
869 self.fixresolution = self.ui.config('bugzilla', 'fixresolution')
870
870
871 def apiurl(self, targets, include_fields=None):
871 def apiurl(self, targets, include_fields=None):
872 url = '/'.join([self.bzroot] + [str(t) for t in targets])
872 url = '/'.join([self.bzroot] + [str(t) for t in targets])
873 qv = {}
873 qv = {}
874 if self.apikey:
874 if self.apikey:
875 qv['api_key'] = self.apikey
875 qv['api_key'] = self.apikey
876 elif self.user and self.passwd:
876 elif self.user and self.passwd:
877 qv['login'] = self.user
877 qv['login'] = self.user
878 qv['password'] = self.passwd
878 qv['password'] = self.passwd
879 if include_fields:
879 if include_fields:
880 qv['include_fields'] = include_fields
880 qv['include_fields'] = include_fields
881 if qv:
881 if qv:
882 url = '%s?%s' % (url, util.urlreq.urlencode(qv))
882 url = '%s?%s' % (url, util.urlreq.urlencode(qv))
883 return url
883 return url
884
884
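# Illustrative example, assuming bzurl = 'https://bugzilla.example.org' and an
# API key configured: apiurl(('bug', 1042), include_fields='status') would
# return something like
#   https://bugzilla.example.org/rest/bug/1042?api_key=...&include_fields=status
# (query-parameter order depends on urlencode).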
885 def _fetch(self, burl):
885 def _fetch(self, burl):
886 try:
886 try:
887 resp = url.open(self.ui, burl)
887 resp = url.open(self.ui, burl)
888 return json.loads(resp.read())
888 return json.loads(resp.read())
889 except util.urlerr.httperror as inst:
889 except util.urlerr.httperror as inst:
890 if inst.code == 401:
890 if inst.code == 401:
891 raise error.Abort(_('authorization failed'))
891 raise error.Abort(_('authorization failed'))
892 if inst.code == 404:
892 if inst.code == 404:
893 raise NotFound()
893 raise NotFound()
894 else:
894 else:
895 raise
895 raise
896
896
897 def _submit(self, burl, data, method='POST'):
897 def _submit(self, burl, data, method='POST'):
898 data = json.dumps(data)
898 data = json.dumps(data)
899 if method == 'PUT':
899 if method == 'PUT':
900 class putrequest(util.urlreq.request):
900 class putrequest(util.urlreq.request):
901 def get_method(self):
901 def get_method(self):
902 return 'PUT'
902 return 'PUT'
903 request_type = putrequest
903 request_type = putrequest
904 else:
904 else:
905 request_type = util.urlreq.request
905 request_type = util.urlreq.request
906 req = request_type(burl, data,
906 req = request_type(burl, data,
907 {'Content-Type': 'application/json'})
907 {'Content-Type': 'application/json'})
908 try:
908 try:
909 resp = url.opener(self.ui).open(req)
909 resp = url.opener(self.ui).open(req)
910 return json.loads(resp.read())
910 return json.loads(resp.read())
911 except util.urlerr.httperror as inst:
911 except util.urlerr.httperror as inst:
912 if inst.code == 401:
912 if inst.code == 401:
913 raise error.Abort(_('authorization failed'))
913 raise error.Abort(_('authorization failed'))
914 if inst.code == 404:
914 if inst.code == 404:
915 raise NotFound()
915 raise NotFound()
916 else:
916 else:
917 raise
917 raise
918
918
919 def filter_real_bug_ids(self, bugs):
919 def filter_real_bug_ids(self, bugs):
920 '''remove from bugs any bug IDs that do not exist in Bugzilla.'''
920 '''remove from bugs any bug IDs that do not exist in Bugzilla.'''
921 badbugs = set()
921 badbugs = set()
922 for bugid in bugs:
922 for bugid in bugs:
923 burl = self.apiurl(('bug', bugid), include_fields='status')
923 burl = self.apiurl(('bug', bugid), include_fields='status')
924 try:
924 try:
925 self._fetch(burl)
925 self._fetch(burl)
926 except NotFound:
926 except NotFound:
927 badbugs.add(bugid)
927 badbugs.add(bugid)
928 for bugid in badbugs:
928 for bugid in badbugs:
929 del bugs[bugid]
929 del bugs[bugid]
930
930
931 def filter_cset_known_bug_ids(self, node, bugs):
931 def filter_cset_known_bug_ids(self, node, bugs):
932 '''remove from bugs any bug IDs whose comment text already mentions node.'''
932 '''remove from bugs any bug IDs whose comment text already mentions node.'''
933 sn = short(node)
933 sn = short(node)
934 for bugid in bugs.keys():
934 for bugid in bugs.keys():
935 burl = self.apiurl(('bug', bugid, 'comment'), include_fields='text')
935 burl = self.apiurl(('bug', bugid, 'comment'), include_fields='text')
936 result = self._fetch(burl)
936 result = self._fetch(burl)
937 comments = result['bugs'][str(bugid)]['comments']
937 comments = result['bugs'][str(bugid)]['comments']
938 if any(sn in c['text'] for c in comments):
938 if any(sn in c['text'] for c in comments):
939 self.ui.status(_('bug %d already knows about changeset %s\n') %
939 self.ui.status(_('bug %d already knows about changeset %s\n') %
940 (bugid, sn))
940 (bugid, sn))
941 del bugs[bugid]
941 del bugs[bugid]
942
942
943 def updatebug(self, bugid, newstate, text, committer):
943 def updatebug(self, bugid, newstate, text, committer):
944 '''update the specified bug. Add comment text and set new states.
944 '''update the specified bug. Add comment text and set new states.
945
945
946 If possible add the comment as being from the committer of
946 If possible add the comment as being from the committer of
947 the changeset. Otherwise use the default Bugzilla user.
947 the changeset. Otherwise use the default Bugzilla user.
948 '''
948 '''
949 bugmod = {}
949 bugmod = {}
950 if 'hours' in newstate:
950 if 'hours' in newstate:
951 bugmod['work_time'] = newstate['hours']
951 bugmod['work_time'] = newstate['hours']
952 if 'fix' in newstate:
952 if 'fix' in newstate:
953 bugmod['status'] = self.fixstatus
953 bugmod['status'] = self.fixstatus
954 bugmod['resolution'] = self.fixresolution
954 bugmod['resolution'] = self.fixresolution
955 if bugmod:
955 if bugmod:
956 # if we have to change the bug's state, do it here
956 # if we have to change the bug's state, do it here
957 bugmod['comment'] = {
957 bugmod['comment'] = {
958 'comment': text,
958 'comment': text,
959 'is_private': False,
959 'is_private': False,
960 'is_markdown': False,
960 'is_markdown': False,
961 }
961 }
962 burl = self.apiurl(('bug', bugid))
962 burl = self.apiurl(('bug', bugid))
963 self._submit(burl, bugmod, method='PUT')
963 self._submit(burl, bugmod, method='PUT')
964 self.ui.debug('updated bug %s\n' % bugid)
964 self.ui.debug('updated bug %s\n' % bugid)
965 else:
965 else:
966 burl = self.apiurl(('bug', bugid, 'comment'))
966 burl = self.apiurl(('bug', bugid, 'comment'))
967 self._submit(burl, {
967 self._submit(burl, {
968 'comment': text,
968 'comment': text,
969 'is_private': False,
969 'is_private': False,
970 'is_markdown': False,
970 'is_markdown': False,
971 })
971 })
972 self.ui.debug('added comment to bug %s\n' % bugid)
972 self.ui.debug('added comment to bug %s\n' % bugid)
973
973
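# Sketch of the two request shapes used above (bug id 1042 is an example):
#   PUT  .../rest/bug/1042          body: {'work_time': ..., 'status': ...,
#                                          'resolution': ...,
#                                          'comment': {'comment': text, ...}}
#   POST .../rest/bug/1042/comment  body: {'comment': text,
#                                          'is_private': False,
#                                          'is_markdown': False}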
974 def notify(self, bugs, committer):
974 def notify(self, bugs, committer):
975 '''Force sending of Bugzilla notification emails.
975 '''Force sending of Bugzilla notification emails.
976
976
977 Only required if the access method does not trigger notification
977 Only required if the access method does not trigger notification
978 emails automatically.
978 emails automatically.
979 '''
979 '''
980 pass
980 pass
981
981
982 class bugzilla(object):
982 class bugzilla(object):
983 # supported versions of bugzilla. different versions have
983 # supported versions of bugzilla. different versions have
984 # different schemas.
984 # different schemas.
985 _versions = {
985 _versions = {
986 '2.16': bzmysql,
986 '2.16': bzmysql,
987 '2.18': bzmysql_2_18,
987 '2.18': bzmysql_2_18,
988 '3.0': bzmysql_3_0,
988 '3.0': bzmysql_3_0,
989 'xmlrpc': bzxmlrpc,
989 'xmlrpc': bzxmlrpc,
990 'xmlrpc+email': bzxmlrpcemail,
990 'xmlrpc+email': bzxmlrpcemail,
991 'restapi': bzrestapi,
991 'restapi': bzrestapi,
992 }
992 }
993
993
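# Illustrative hgrc snippet selecting an access mode (the host name is an
# example; accepted values for 'version' are the keys of _versions above):
#   [bugzilla]
#   version = restapi
#   bzurl = https://bugzilla.example.org
#   apikey = <API key generated in Bugzilla>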
994 def __init__(self, ui, repo):
994 def __init__(self, ui, repo):
995 self.ui = ui
995 self.ui = ui
996 self.repo = repo
996 self.repo = repo
997
997
998 bzversion = self.ui.config('bugzilla', 'version')
998 bzversion = self.ui.config('bugzilla', 'version')
999 try:
999 try:
1000 bzclass = bugzilla._versions[bzversion]
1000 bzclass = bugzilla._versions[bzversion]
1001 except KeyError:
1001 except KeyError:
1002 raise error.Abort(_('bugzilla version %s not supported') %
1002 raise error.Abort(_('bugzilla version %s not supported') %
1003 bzversion)
1003 bzversion)
1004 self.bzdriver = bzclass(self.ui)
1004 self.bzdriver = bzclass(self.ui)
1005
1005
1006 self.bug_re = re.compile(
1006 self.bug_re = re.compile(
1007 self.ui.config('bugzilla', 'regexp'), re.IGNORECASE)
1007 self.ui.config('bugzilla', 'regexp'), re.IGNORECASE)
1008 self.fix_re = re.compile(
1008 self.fix_re = re.compile(
1009 self.ui.config('bugzilla', 'fixregexp'), re.IGNORECASE)
1009 self.ui.config('bugzilla', 'fixregexp'), re.IGNORECASE)
1010 self.split_re = re.compile(r'\D+')
1010 self.split_re = re.compile(r'\D+')
1011
1011
1012 def find_bugs(self, ctx):
1012 def find_bugs(self, ctx):
1013 '''return bugs dictionary created from commit comment.
1013 '''return bugs dictionary created from commit comment.
1014
1014
1015 Extract bug info from changeset comments. Filter out any that are
1015 Extract bug info from changeset comments. Filter out any that are
1016 not known to Bugzilla, and any that already have a reference to
1016 not known to Bugzilla, and any that already have a reference to
1017 the given changeset in their comments.
1017 the given changeset in their comments.
1018 '''
1018 '''
1019 start = 0
1019 start = 0
1020 hours = 0.0
1020 hours = 0.0
1021 bugs = {}
1021 bugs = {}
1022 bugmatch = self.bug_re.search(ctx.description(), start)
1022 bugmatch = self.bug_re.search(ctx.description(), start)
1023 fixmatch = self.fix_re.search(ctx.description(), start)
1023 fixmatch = self.fix_re.search(ctx.description(), start)
1024 while True:
1024 while True:
1025 bugattribs = {}
1025 bugattribs = {}
1026 if not bugmatch and not fixmatch:
1026 if not bugmatch and not fixmatch:
1027 break
1027 break
1028 if not bugmatch:
1028 if not bugmatch:
1029 m = fixmatch
1029 m = fixmatch
1030 elif not fixmatch:
1030 elif not fixmatch:
1031 m = bugmatch
1031 m = bugmatch
1032 else:
1032 else:
1033 if bugmatch.start() < fixmatch.start():
1033 if bugmatch.start() < fixmatch.start():
1034 m = bugmatch
1034 m = bugmatch
1035 else:
1035 else:
1036 m = fixmatch
1036 m = fixmatch
1037 start = m.end()
1037 start = m.end()
1038 if m is bugmatch:
1038 if m is bugmatch:
1039 bugmatch = self.bug_re.search(ctx.description(), start)
1039 bugmatch = self.bug_re.search(ctx.description(), start)
1040 if 'fix' in bugattribs:
1040 if 'fix' in bugattribs:
1041 del bugattribs['fix']
1041 del bugattribs['fix']
1042 else:
1042 else:
1043 fixmatch = self.fix_re.search(ctx.description(), start)
1043 fixmatch = self.fix_re.search(ctx.description(), start)
1044 bugattribs['fix'] = None
1044 bugattribs['fix'] = None
1045
1045
1046 try:
1046 try:
1047 ids = m.group('ids')
1047 ids = m.group('ids')
1048 except IndexError:
1048 except IndexError:
1049 ids = m.group(1)
1049 ids = m.group(1)
1050 try:
1050 try:
1051 hours = float(m.group('hours'))
1051 hours = float(m.group('hours'))
1052 bugattribs['hours'] = hours
1052 bugattribs['hours'] = hours
1053 except IndexError:
1053 except IndexError:
1054 pass
1054 pass
1055 except TypeError:
1055 except TypeError:
1056 pass
1056 pass
1057 except ValueError:
1057 except ValueError:
1058 self.ui.status(_("%s: invalid hours\n") % m.group('hours'))
1058 self.ui.status(_("%s: invalid hours\n") % m.group('hours'))
1059
1059
1060 for id in self.split_re.split(ids):
1060 for id in self.split_re.split(ids):
1061 if not id:
1061 if not id:
1062 continue
1062 continue
1063 bugs[int(id)] = bugattribs
1063 bugs[int(id)] = bugattribs
1064 if bugs:
1064 if bugs:
1065 self.bzdriver.filter_real_bug_ids(bugs)
1065 self.bzdriver.filter_real_bug_ids(bugs)
1066 if bugs:
1066 if bugs:
1067 self.bzdriver.filter_cset_known_bug_ids(ctx.node(), bugs)
1067 self.bzdriver.filter_cset_known_bug_ids(ctx.node(), bugs)
1068 return bugs
1068 return bugs
1069
1069
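# Sketch of the returned structure only (exact matching depends on the
# configured 'regexp'/'fixregexp' patterns): with the default patterns, a
# description containing "bug 1042" and "fixes 17" would typically yield
#   {1042: {}, 17: {'fix': None}}
# before the two Bugzilla-side filtering steps above.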
1070 def update(self, bugid, newstate, ctx):
1070 def update(self, bugid, newstate, ctx):
1071 '''update bugzilla bug with reference to changeset.'''
1071 '''update bugzilla bug with reference to changeset.'''
1072
1072
1073 def webroot(root):
1073 def webroot(root):
1074 '''strip leading prefix of repo root and turn into
1074 '''strip leading prefix of repo root and turn into
1075 url-safe path.'''
1075 url-safe path.'''
1076 count = int(self.ui.config('bugzilla', 'strip'))
1076 count = int(self.ui.config('bugzilla', 'strip'))
1077 root = util.pconvert(root)
1077 root = util.pconvert(root)
1078 while count > 0:
1078 while count > 0:
1079 c = root.find('/')
1079 c = root.find('/')
1080 if c == -1:
1080 if c == -1:
1081 break
1081 break
1082 root = root[c + 1:]
1082 root = root[c + 1:]
1083 count -= 1
1083 count -= 1
1084 return root
1084 return root
1085
1085
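# Sketch: with bugzilla.strip = 2, webroot('/data/repos/myrepo') returns
# 'repos/myrepo' (the leading '/' counts as the first stripped component;
# the path is an example).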
1086 mapfile = None
1086 mapfile = None
1087 tmpl = self.ui.config('bugzilla', 'template')
1087 tmpl = self.ui.config('bugzilla', 'template')
1088 if not tmpl:
1088 if not tmpl:
1089 mapfile = self.ui.config('bugzilla', 'style')
1089 mapfile = self.ui.config('bugzilla', 'style')
1090 if not mapfile and not tmpl:
1090 if not mapfile and not tmpl:
1091 tmpl = _('changeset {node|short} in repo {root} refers '
1091 tmpl = _('changeset {node|short} in repo {root} refers '
1092 'to bug {bug}.\ndetails:\n\t{desc|tabindent}')
1092 'to bug {bug}.\ndetails:\n\t{desc|tabindent}')
1093 spec = cmdutil.logtemplatespec(tmpl, mapfile)
1093 spec = logcmdutil.templatespec(tmpl, mapfile)
1094 t = cmdutil.changeset_templater(self.ui, self.repo, spec,
1094 t = logcmdutil.changesettemplater(self.ui, self.repo, spec,
1095 False, None, False)
1095 False, None, False)
1096 self.ui.pushbuffer()
1096 self.ui.pushbuffer()
1097 t.show(ctx, changes=ctx.changeset(),
1097 t.show(ctx, changes=ctx.changeset(),
1098 bug=str(bugid),
1098 bug=str(bugid),
1099 hgweb=self.ui.config('web', 'baseurl'),
1099 hgweb=self.ui.config('web', 'baseurl'),
1100 root=self.repo.root,
1100 root=self.repo.root,
1101 webroot=webroot(self.repo.root))
1101 webroot=webroot(self.repo.root))
1102 data = self.ui.popbuffer()
1102 data = self.ui.popbuffer()
1103 self.bzdriver.updatebug(bugid, newstate, data, util.email(ctx.user()))
1103 self.bzdriver.updatebug(bugid, newstate, data, util.email(ctx.user()))
1104
1104
1105 def notify(self, bugs, committer):
1105 def notify(self, bugs, committer):
1106 '''ensure Bugzilla users are notified of bug change.'''
1106 '''ensure Bugzilla users are notified of bug change.'''
1107 self.bzdriver.notify(bugs, committer)
1107 self.bzdriver.notify(bugs, committer)
1108
1108
1109 def hook(ui, repo, hooktype, node=None, **kwargs):
1109 def hook(ui, repo, hooktype, node=None, **kwargs):
1110 '''add comment to bugzilla for each changeset that refers to a
1110 '''add comment to bugzilla for each changeset that refers to a
1111 bugzilla bug id. only add a comment once per bug, so the same change
1111 bugzilla bug id. only add a comment once per bug, so the same change
1112 seen multiple times does not fill the bug with duplicate data.'''
1112 seen multiple times does not fill the bug with duplicate data.'''
1113 if node is None:
1113 if node is None:
1114 raise error.Abort(_('hook type %s does not pass a changeset id') %
1114 raise error.Abort(_('hook type %s does not pass a changeset id') %
1115 hooktype)
1115 hooktype)
1116 try:
1116 try:
1117 bz = bugzilla(ui, repo)
1117 bz = bugzilla(ui, repo)
1118 ctx = repo[node]
1118 ctx = repo[node]
1119 bugs = bz.find_bugs(ctx)
1119 bugs = bz.find_bugs(ctx)
1120 if bugs:
1120 if bugs:
1121 for bug in bugs:
1121 for bug in bugs:
1122 bz.update(bug, bugs[bug], ctx)
1122 bz.update(bug, bugs[bug], ctx)
1123 bz.notify(bugs, util.email(ctx.user()))
1123 bz.notify(bugs, util.email(ctx.user()))
1124 except Exception as e:
1124 except Exception as e:
1125 raise error.Abort(_('Bugzilla error: %s') % e)
1125 raise error.Abort(_('Bugzilla error: %s') % e)
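# Illustrative hook wiring in hgrc (the hook name/type shown is one common
# choice, not the only one):
#   [hooks]
#   incoming.bugzilla = python:hgext.bugzilla.hook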
@@ -1,71 +1,72 b''
1 # Mercurial extension to provide the 'hg children' command
1 # Mercurial extension to provide the 'hg children' command
2 #
2 #
3 # Copyright 2007 by Intevation GmbH <intevation@intevation.de>
3 # Copyright 2007 by Intevation GmbH <intevation@intevation.de>
4 #
4 #
5 # Author(s):
5 # Author(s):
6 # Thomas Arendsen Hein <thomas@intevation.de>
6 # Thomas Arendsen Hein <thomas@intevation.de>
7 #
7 #
8 # This software may be used and distributed according to the terms of the
8 # This software may be used and distributed according to the terms of the
9 # GNU General Public License version 2 or any later version.
9 # GNU General Public License version 2 or any later version.
10
10
11 '''command to display child changesets (DEPRECATED)
11 '''command to display child changesets (DEPRECATED)
12
12
13 This extension is deprecated. You should use :hg:`log -r
13 This extension is deprecated. You should use :hg:`log -r
14 "children(REV)"` instead.
14 "children(REV)"` instead.
15 '''
15 '''
16
16
17 from __future__ import absolute_import
17 from __future__ import absolute_import
18
18
19 from mercurial.i18n import _
19 from mercurial.i18n import _
20 from mercurial import (
20 from mercurial import (
21 cmdutil,
21 cmdutil,
22 logcmdutil,
22 pycompat,
23 pycompat,
23 registrar,
24 registrar,
24 )
25 )
25
26
26 templateopts = cmdutil.templateopts
27 templateopts = cmdutil.templateopts
27
28
28 cmdtable = {}
29 cmdtable = {}
29 command = registrar.command(cmdtable)
30 command = registrar.command(cmdtable)
30 # Note for extension authors: ONLY specify testedwith = 'ships-with-hg-core' for
31 # Note for extension authors: ONLY specify testedwith = 'ships-with-hg-core' for
31 # extensions which SHIP WITH MERCURIAL. Non-mainline extensions should
32 # extensions which SHIP WITH MERCURIAL. Non-mainline extensions should
32 # be specifying the version(s) of Mercurial they are tested with, or
33 # be specifying the version(s) of Mercurial they are tested with, or
33 # leave the attribute unspecified.
34 # leave the attribute unspecified.
34 testedwith = 'ships-with-hg-core'
35 testedwith = 'ships-with-hg-core'
35
36
36 @command('children',
37 @command('children',
37 [('r', 'rev', '',
38 [('r', 'rev', '',
38 _('show children of the specified revision'), _('REV')),
39 _('show children of the specified revision'), _('REV')),
39 ] + templateopts,
40 ] + templateopts,
40 _('hg children [-r REV] [FILE]'),
41 _('hg children [-r REV] [FILE]'),
41 inferrepo=True)
42 inferrepo=True)
42 def children(ui, repo, file_=None, **opts):
43 def children(ui, repo, file_=None, **opts):
43 """show the children of the given or working directory revision
44 """show the children of the given or working directory revision
44
45
45 Print the children of the working directory's revisions. If a
46 Print the children of the working directory's revisions. If a
46 revision is given via -r/--rev, the children of that revision will
47 revision is given via -r/--rev, the children of that revision will
47 be printed. If a file argument is given, the revision in which the
48 be printed. If a file argument is given, the revision in which the
48 file was last changed (after the working directory revision or the
49 file was last changed (after the working directory revision or the
49 argument to --rev if given) is printed.
50 argument to --rev if given) is printed.
50
51
51 Please use :hg:`log` instead::
52 Please use :hg:`log` instead::
52
53
53 hg children => hg log -r "children(.)"
54 hg children => hg log -r "children(.)"
54 hg children -r REV => hg log -r "children(REV)"
55 hg children -r REV => hg log -r "children(REV)"
55
56
56 See :hg:`help log` and :hg:`help revsets.children`.
57 See :hg:`help log` and :hg:`help revsets.children`.
57
58
58 """
59 """
59 opts = pycompat.byteskwargs(opts)
60 opts = pycompat.byteskwargs(opts)
60 rev = opts.get('rev')
61 rev = opts.get('rev')
61 if file_:
62 if file_:
62 fctx = repo.filectx(file_, changeid=rev)
63 fctx = repo.filectx(file_, changeid=rev)
63 childctxs = [fcctx.changectx() for fcctx in fctx.children()]
64 childctxs = [fcctx.changectx() for fcctx in fctx.children()]
64 else:
65 else:
65 ctx = repo[rev]
66 ctx = repo[rev]
66 childctxs = ctx.children()
67 childctxs = ctx.children()
67
68
68 displayer = cmdutil.show_changeset(ui, repo, opts)
69 displayer = logcmdutil.changesetdisplayer(ui, repo, opts)
69 for cctx in childctxs:
70 for cctx in childctxs:
70 displayer.show(cctx)
71 displayer.show(cctx)
71 displayer.close()
72 displayer.close()
@@ -1,210 +1,211 b''
1 # churn.py - create a graph of revisions count grouped by template
1 # churn.py - create a graph of revisions count grouped by template
2 #
2 #
3 # Copyright 2006 Josef "Jeff" Sipek <jeffpc@josefsipek.net>
3 # Copyright 2006 Josef "Jeff" Sipek <jeffpc@josefsipek.net>
4 # Copyright 2008 Alexander Solovyov <piranha@piranha.org.ua>
4 # Copyright 2008 Alexander Solovyov <piranha@piranha.org.ua>
5 #
5 #
6 # This software may be used and distributed according to the terms of the
6 # This software may be used and distributed according to the terms of the
7 # GNU General Public License version 2 or any later version.
7 # GNU General Public License version 2 or any later version.
8
8
9 '''command to display statistics about repository history'''
9 '''command to display statistics about repository history'''
10
10
11 from __future__ import absolute_import
11 from __future__ import absolute_import
12
12
13 import datetime
13 import datetime
14 import os
14 import os
15 import time
15 import time
16
16
17 from mercurial.i18n import _
17 from mercurial.i18n import _
18 from mercurial import (
18 from mercurial import (
19 cmdutil,
19 cmdutil,
20 encoding,
20 encoding,
21 logcmdutil,
21 patch,
22 patch,
22 pycompat,
23 pycompat,
23 registrar,
24 registrar,
24 scmutil,
25 scmutil,
25 util,
26 util,
26 )
27 )
27
28
28 cmdtable = {}
29 cmdtable = {}
29 command = registrar.command(cmdtable)
30 command = registrar.command(cmdtable)
30 # Note for extension authors: ONLY specify testedwith = 'ships-with-hg-core' for
31 # Note for extension authors: ONLY specify testedwith = 'ships-with-hg-core' for
31 # extensions which SHIP WITH MERCURIAL. Non-mainline extensions should
32 # extensions which SHIP WITH MERCURIAL. Non-mainline extensions should
32 # be specifying the version(s) of Mercurial they are tested with, or
33 # be specifying the version(s) of Mercurial they are tested with, or
33 # leave the attribute unspecified.
34 # leave the attribute unspecified.
34 testedwith = 'ships-with-hg-core'
35 testedwith = 'ships-with-hg-core'
35
36
36 def changedlines(ui, repo, ctx1, ctx2, fns):
37 def changedlines(ui, repo, ctx1, ctx2, fns):
37 added, removed = 0, 0
38 added, removed = 0, 0
38 fmatch = scmutil.matchfiles(repo, fns)
39 fmatch = scmutil.matchfiles(repo, fns)
39 diff = ''.join(patch.diff(repo, ctx1.node(), ctx2.node(), fmatch))
40 diff = ''.join(patch.diff(repo, ctx1.node(), ctx2.node(), fmatch))
40 for l in diff.split('\n'):
41 for l in diff.split('\n'):
41 if l.startswith("+") and not l.startswith("+++ "):
42 if l.startswith("+") and not l.startswith("+++ "):
42 added += 1
43 added += 1
43 elif l.startswith("-") and not l.startswith("--- "):
44 elif l.startswith("-") and not l.startswith("--- "):
44 removed += 1
45 removed += 1
45 return (added, removed)
46 return (added, removed)
46
47
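# Standalone sketch of the counting rule used by changedlines() above,
# applied to an invented diff text:
sample = "--- a/f\n+++ b/f\n@@ -1,2 +1,3 @@\n-old\n+new\n+extra\n"
added = sum(1 for l in sample.split('\n')
            if l.startswith("+") and not l.startswith("+++ "))
removed = sum(1 for l in sample.split('\n')
              if l.startswith("-") and not l.startswith("--- "))
assert (added, removed) == (2, 1)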
47 def countrate(ui, repo, amap, *pats, **opts):
48 def countrate(ui, repo, amap, *pats, **opts):
48 """Calculate stats"""
49 """Calculate stats"""
49 opts = pycompat.byteskwargs(opts)
50 opts = pycompat.byteskwargs(opts)
50 if opts.get('dateformat'):
51 if opts.get('dateformat'):
51 def getkey(ctx):
52 def getkey(ctx):
52 t, tz = ctx.date()
53 t, tz = ctx.date()
53 date = datetime.datetime(*time.gmtime(float(t) - tz)[:6])
54 date = datetime.datetime(*time.gmtime(float(t) - tz)[:6])
54 return date.strftime(opts['dateformat'])
55 return date.strftime(opts['dateformat'])
55 else:
56 else:
56 tmpl = opts.get('oldtemplate') or opts.get('template')
57 tmpl = opts.get('oldtemplate') or opts.get('template')
57 tmpl = cmdutil.makelogtemplater(ui, repo, tmpl)
58 tmpl = logcmdutil.maketemplater(ui, repo, tmpl)
58 def getkey(ctx):
59 def getkey(ctx):
59 ui.pushbuffer()
60 ui.pushbuffer()
60 tmpl.show(ctx)
61 tmpl.show(ctx)
61 return ui.popbuffer()
62 return ui.popbuffer()
62
63
63 state = {'count': 0}
64 state = {'count': 0}
64 rate = {}
65 rate = {}
65 df = False
66 df = False
66 if opts.get('date'):
67 if opts.get('date'):
67 df = util.matchdate(opts['date'])
68 df = util.matchdate(opts['date'])
68
69
69 m = scmutil.match(repo[None], pats, opts)
70 m = scmutil.match(repo[None], pats, opts)
70 def prep(ctx, fns):
71 def prep(ctx, fns):
71 rev = ctx.rev()
72 rev = ctx.rev()
72 if df and not df(ctx.date()[0]): # doesn't match date format
73 if df and not df(ctx.date()[0]): # doesn't match date format
73 return
74 return
74
75
75 key = getkey(ctx).strip()
76 key = getkey(ctx).strip()
76 key = amap.get(key, key) # alias remap
77 key = amap.get(key, key) # alias remap
77 if opts.get('changesets'):
78 if opts.get('changesets'):
78 rate[key] = (rate.get(key, (0,))[0] + 1, 0)
79 rate[key] = (rate.get(key, (0,))[0] + 1, 0)
79 else:
80 else:
80 parents = ctx.parents()
81 parents = ctx.parents()
81 if len(parents) > 1:
82 if len(parents) > 1:
82 ui.note(_('revision %d is a merge, ignoring...\n') % (rev,))
83 ui.note(_('revision %d is a merge, ignoring...\n') % (rev,))
83 return
84 return
84
85
85 ctx1 = parents[0]
86 ctx1 = parents[0]
86 lines = changedlines(ui, repo, ctx1, ctx, fns)
87 lines = changedlines(ui, repo, ctx1, ctx, fns)
87 rate[key] = [r + l for r, l in zip(rate.get(key, (0, 0)), lines)]
88 rate[key] = [r + l for r, l in zip(rate.get(key, (0, 0)), lines)]
88
89
89 state['count'] += 1
90 state['count'] += 1
90 ui.progress(_('analyzing'), state['count'], total=len(repo),
91 ui.progress(_('analyzing'), state['count'], total=len(repo),
91 unit=_('revisions'))
92 unit=_('revisions'))
92
93
93 for ctx in cmdutil.walkchangerevs(repo, m, opts, prep):
94 for ctx in cmdutil.walkchangerevs(repo, m, opts, prep):
94 continue
95 continue
95
96
96 ui.progress(_('analyzing'), None)
97 ui.progress(_('analyzing'), None)
97
98
98 return rate
99 return rate
99
100
100
101
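# Standalone sketch of the --dateformat grouping key computed in getkey()
# above (the timestamp and offset are invented):
import datetime
import time
t, tz = 1518600000.0, 0                     # ctx.date()-style (unixtime, offset)
date = datetime.datetime(*time.gmtime(float(t) - tz)[:6])
assert date.strftime("%Y-%m") == "2018-02"  # one bucket per month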
101 @command('churn',
102 @command('churn',
102 [('r', 'rev', [],
103 [('r', 'rev', [],
103 _('count rate for the specified revision or revset'), _('REV')),
104 _('count rate for the specified revision or revset'), _('REV')),
104 ('d', 'date', '',
105 ('d', 'date', '',
105 _('count rate for revisions matching date spec'), _('DATE')),
106 _('count rate for revisions matching date spec'), _('DATE')),
106 ('t', 'oldtemplate', '',
107 ('t', 'oldtemplate', '',
107 _('template to group changesets (DEPRECATED)'), _('TEMPLATE')),
108 _('template to group changesets (DEPRECATED)'), _('TEMPLATE')),
108 ('T', 'template', '{author|email}',
109 ('T', 'template', '{author|email}',
109 _('template to group changesets'), _('TEMPLATE')),
110 _('template to group changesets'), _('TEMPLATE')),
110 ('f', 'dateformat', '',
111 ('f', 'dateformat', '',
111 _('strftime-compatible format for grouping by date'), _('FORMAT')),
112 _('strftime-compatible format for grouping by date'), _('FORMAT')),
112 ('c', 'changesets', False, _('count rate by number of changesets')),
113 ('c', 'changesets', False, _('count rate by number of changesets')),
113 ('s', 'sort', False, _('sort by key (default: sort by count)')),
114 ('s', 'sort', False, _('sort by key (default: sort by count)')),
114 ('', 'diffstat', False, _('display added/removed lines separately')),
115 ('', 'diffstat', False, _('display added/removed lines separately')),
115 ('', 'aliases', '', _('file with email aliases'), _('FILE')),
116 ('', 'aliases', '', _('file with email aliases'), _('FILE')),
116 ] + cmdutil.walkopts,
117 ] + cmdutil.walkopts,
117 _("hg churn [-d DATE] [-r REV] [--aliases FILE] [FILE]"),
118 _("hg churn [-d DATE] [-r REV] [--aliases FILE] [FILE]"),
118 inferrepo=True)
119 inferrepo=True)
119 def churn(ui, repo, *pats, **opts):
120 def churn(ui, repo, *pats, **opts):
120 '''histogram of changes to the repository
121 '''histogram of changes to the repository
121
122
122 This command will display a histogram representing the number
123 This command will display a histogram representing the number
123 of changed lines or revisions, grouped according to the given
124 of changed lines or revisions, grouped according to the given
124 template. The default template will group changes by author.
125 template. The default template will group changes by author.
125 The --dateformat option may be used to group the results by
126 The --dateformat option may be used to group the results by
126 date instead.
127 date instead.
127
128
128 Statistics are based on the number of changed lines, or
129 Statistics are based on the number of changed lines, or
129 alternatively the number of matching revisions if the
130 alternatively the number of matching revisions if the
130 --changesets option is specified.
131 --changesets option is specified.
131
132
132 Examples::
133 Examples::
133
134
134 # display count of changed lines for every committer
135 # display count of changed lines for every committer
135 hg churn -T "{author|email}"
136 hg churn -T "{author|email}"
136
137
137 # display daily activity graph
138 # display daily activity graph
138 hg churn -f "%H" -s -c
139 hg churn -f "%H" -s -c
139
140
140 # display activity of developers by month
141 # display activity of developers by month
141 hg churn -f "%Y-%m" -s -c
142 hg churn -f "%Y-%m" -s -c
142
143
143 # display count of lines changed in every year
144 # display count of lines changed in every year
144 hg churn -f "%Y" -s
145 hg churn -f "%Y" -s
145
146
146 It is possible to map alternate email addresses to a main address
147 It is possible to map alternate email addresses to a main address
147 by providing a file using the following format::
148 by providing a file using the following format::
148
149
149 <alias email> = <actual email>
150 <alias email> = <actual email>
150
151
151 Such a file may be specified with the --aliases option, otherwise
152 Such a file may be specified with the --aliases option, otherwise
152 a .hgchurn file will be looked for in the working directory root.
153 a .hgchurn file will be looked for in the working directory root.
153 Aliases will be split from the rightmost "=".
154 Aliases will be split from the rightmost "=".
154 '''
155 '''
155 def pad(s, l):
156 def pad(s, l):
156 return s + " " * (l - encoding.colwidth(s))
157 return s + " " * (l - encoding.colwidth(s))
157
158
158 amap = {}
159 amap = {}
159 aliases = opts.get(r'aliases')
160 aliases = opts.get(r'aliases')
160 if not aliases and os.path.exists(repo.wjoin('.hgchurn')):
161 if not aliases and os.path.exists(repo.wjoin('.hgchurn')):
161 aliases = repo.wjoin('.hgchurn')
162 aliases = repo.wjoin('.hgchurn')
162 if aliases:
163 if aliases:
163 for l in open(aliases, "r"):
164 for l in open(aliases, "r"):
164 try:
165 try:
165 alias, actual = l.rsplit('=' in l and '=' or None, 1)
166 alias, actual = l.rsplit('=' in l and '=' or None, 1)
166 amap[alias.strip()] = actual.strip()
167 amap[alias.strip()] = actual.strip()
167 except ValueError:
168 except ValueError:
168 l = l.strip()
169 l = l.strip()
169 if l:
170 if l:
170 ui.warn(_("skipping malformed alias: %s\n") % l)
171 ui.warn(_("skipping malformed alias: %s\n") % l)
171 continue
172 continue
172
173
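# Sketch of the rightmost-'=' split performed above (addresses invented):
#   "bob=work@example.com = bob@example.com".rsplit('=', 1)
#   -> alias 'bob=work@example.com', actual 'bob@example.com' (after strip())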
173 rate = countrate(ui, repo, amap, *pats, **opts).items()
174 rate = countrate(ui, repo, amap, *pats, **opts).items()
174 if not rate:
175 if not rate:
175 return
176 return
176
177
177 if opts.get(r'sort'):
178 if opts.get(r'sort'):
178 rate.sort()
179 rate.sort()
179 else:
180 else:
180 rate.sort(key=lambda x: (-sum(x[1]), x))
181 rate.sort(key=lambda x: (-sum(x[1]), x))
181
182
182 # Be careful not to have a zero maxcount (issue833)
183 # Be careful not to have a zero maxcount (issue833)
183 maxcount = float(max(sum(v) for k, v in rate)) or 1.0
184 maxcount = float(max(sum(v) for k, v in rate)) or 1.0
184 maxname = max(len(k) for k, v in rate)
185 maxname = max(len(k) for k, v in rate)
185
186
186 ttywidth = ui.termwidth()
187 ttywidth = ui.termwidth()
187 ui.debug("assuming %i character terminal\n" % ttywidth)
188 ui.debug("assuming %i character terminal\n" % ttywidth)
188 width = ttywidth - maxname - 2 - 2 - 2
189 width = ttywidth - maxname - 2 - 2 - 2
189
190
190 if opts.get(r'diffstat'):
191 if opts.get(r'diffstat'):
191 width -= 15
192 width -= 15
192 def format(name, diffstat):
193 def format(name, diffstat):
193 added, removed = diffstat
194 added, removed = diffstat
194 return "%s %15s %s%s\n" % (pad(name, maxname),
195 return "%s %15s %s%s\n" % (pad(name, maxname),
195 '+%d/-%d' % (added, removed),
196 '+%d/-%d' % (added, removed),
196 ui.label('+' * charnum(added),
197 ui.label('+' * charnum(added),
197 'diffstat.inserted'),
198 'diffstat.inserted'),
198 ui.label('-' * charnum(removed),
199 ui.label('-' * charnum(removed),
199 'diffstat.deleted'))
200 'diffstat.deleted'))
200 else:
201 else:
201 width -= 6
202 width -= 6
202 def format(name, count):
203 def format(name, count):
203 return "%s %6d %s\n" % (pad(name, maxname), sum(count),
204 return "%s %6d %s\n" % (pad(name, maxname), sum(count),
204 '*' * charnum(sum(count)))
205 '*' * charnum(sum(count)))
205
206
206 def charnum(count):
207 def charnum(count):
207 return int(round(count * width / maxcount))
208 return int(round(count * width / maxcount))
208
209
209 for name, count in rate:
210 for name, count in rate:
210 ui.write(format(name, count))
211 ui.write(format(name, count))
@@ -1,517 +1,519 b''
1 # journal.py
1 # journal.py
2 #
2 #
3 # Copyright 2014-2016 Facebook, Inc.
3 # Copyright 2014-2016 Facebook, Inc.
4 #
4 #
5 # This software may be used and distributed according to the terms of the
5 # This software may be used and distributed according to the terms of the
6 # GNU General Public License version 2 or any later version.
6 # GNU General Public License version 2 or any later version.
7 """track previous positions of bookmarks (EXPERIMENTAL)
7 """track previous positions of bookmarks (EXPERIMENTAL)
8
8
9 This extension adds a new command: `hg journal`, which shows you where
9 This extension adds a new command: `hg journal`, which shows you where
10 bookmarks were previously located.
10 bookmarks were previously located.
11
11
12 """
12 """
13
13
14 from __future__ import absolute_import
14 from __future__ import absolute_import
15
15
16 import collections
16 import collections
17 import errno
17 import errno
18 import os
18 import os
19 import weakref
19 import weakref
20
20
21 from mercurial.i18n import _
21 from mercurial.i18n import _
22
22
23 from mercurial import (
23 from mercurial import (
24 bookmarks,
24 bookmarks,
25 cmdutil,
25 cmdutil,
26 dispatch,
26 dispatch,
27 error,
27 error,
28 extensions,
28 extensions,
29 hg,
29 hg,
30 localrepo,
30 localrepo,
31 lock,
31 lock,
32 logcmdutil,
32 node,
33 node,
33 pycompat,
34 pycompat,
34 registrar,
35 registrar,
35 util,
36 util,
36 )
37 )
37
38
38 from . import share
39 from . import share
39
40
40 cmdtable = {}
41 cmdtable = {}
41 command = registrar.command(cmdtable)
42 command = registrar.command(cmdtable)
42
43
43 # Note for extension authors: ONLY specify testedwith = 'ships-with-hg-core' for
44 # Note for extension authors: ONLY specify testedwith = 'ships-with-hg-core' for
44 # extensions which SHIP WITH MERCURIAL. Non-mainline extensions should
45 # extensions which SHIP WITH MERCURIAL. Non-mainline extensions should
45 # be specifying the version(s) of Mercurial they are tested with, or
46 # be specifying the version(s) of Mercurial they are tested with, or
46 # leave the attribute unspecified.
47 # leave the attribute unspecified.
47 testedwith = 'ships-with-hg-core'
48 testedwith = 'ships-with-hg-core'
48
49
49 # storage format version; increment when the format changes
50 # storage format version; increment when the format changes
50 storageversion = 0
51 storageversion = 0
51
52
52 # namespaces
53 # namespaces
53 bookmarktype = 'bookmark'
54 bookmarktype = 'bookmark'
54 wdirparenttype = 'wdirparent'
55 wdirparenttype = 'wdirparent'
55 # In a shared repository, what shared feature name is used
56 # In a shared repository, what shared feature name is used
56 # to indicate this namespace is shared with the source?
57 # to indicate this namespace is shared with the source?
57 sharednamespaces = {
58 sharednamespaces = {
58 bookmarktype: hg.sharedbookmarks,
59 bookmarktype: hg.sharedbookmarks,
59 }
60 }
60
61
61 # Journal recording, register hooks and storage object
62 # Journal recording, register hooks and storage object
62 def extsetup(ui):
63 def extsetup(ui):
63 extensions.wrapfunction(dispatch, 'runcommand', runcommand)
64 extensions.wrapfunction(dispatch, 'runcommand', runcommand)
64 extensions.wrapfunction(bookmarks.bmstore, '_write', recordbookmarks)
65 extensions.wrapfunction(bookmarks.bmstore, '_write', recordbookmarks)
65 extensions.wrapfilecache(
66 extensions.wrapfilecache(
66 localrepo.localrepository, 'dirstate', wrapdirstate)
67 localrepo.localrepository, 'dirstate', wrapdirstate)
67 extensions.wrapfunction(hg, 'postshare', wrappostshare)
68 extensions.wrapfunction(hg, 'postshare', wrappostshare)
68 extensions.wrapfunction(hg, 'copystore', unsharejournal)
69 extensions.wrapfunction(hg, 'copystore', unsharejournal)
69
70
70 def reposetup(ui, repo):
71 def reposetup(ui, repo):
71 if repo.local():
72 if repo.local():
72 repo.journal = journalstorage(repo)
73 repo.journal = journalstorage(repo)
73 repo._wlockfreeprefix.add('namejournal')
74 repo._wlockfreeprefix.add('namejournal')
74
75
75 dirstate, cached = localrepo.isfilecached(repo, 'dirstate')
76 dirstate, cached = localrepo.isfilecached(repo, 'dirstate')
76 if cached:
77 if cached:
77 # already instantiated dirstate isn't yet marked as
78 # already instantiated dirstate isn't yet marked as
78 # "journal"-ing, even though repo.dirstate() was already
79 # "journal"-ing, even though repo.dirstate() was already
79 # wrapped by own wrapdirstate()
80 # wrapped by own wrapdirstate()
80 _setupdirstate(repo, dirstate)
81 _setupdirstate(repo, dirstate)
81
82
82 def runcommand(orig, lui, repo, cmd, fullargs, *args):
83 def runcommand(orig, lui, repo, cmd, fullargs, *args):
83 """Track the command line options for recording in the journal"""
84 """Track the command line options for recording in the journal"""
84 journalstorage.recordcommand(*fullargs)
85 journalstorage.recordcommand(*fullargs)
85 return orig(lui, repo, cmd, fullargs, *args)
86 return orig(lui, repo, cmd, fullargs, *args)
86
87
87 def _setupdirstate(repo, dirstate):
88 def _setupdirstate(repo, dirstate):
88 dirstate.journalstorage = repo.journal
89 dirstate.journalstorage = repo.journal
89 dirstate.addparentchangecallback('journal', recorddirstateparents)
90 dirstate.addparentchangecallback('journal', recorddirstateparents)
90
91
91 # hooks to record dirstate changes
92 # hooks to record dirstate changes
92 def wrapdirstate(orig, repo):
93 def wrapdirstate(orig, repo):
93 """Make journal storage available to the dirstate object"""
94 """Make journal storage available to the dirstate object"""
94 dirstate = orig(repo)
95 dirstate = orig(repo)
95 if util.safehasattr(repo, 'journal'):
96 if util.safehasattr(repo, 'journal'):
96 _setupdirstate(repo, dirstate)
97 _setupdirstate(repo, dirstate)
97 return dirstate
98 return dirstate
98
99
99 def recorddirstateparents(dirstate, old, new):
100 def recorddirstateparents(dirstate, old, new):
100 """Records all dirstate parent changes in the journal."""
101 """Records all dirstate parent changes in the journal."""
101 old = list(old)
102 old = list(old)
102 new = list(new)
103 new = list(new)
103 if util.safehasattr(dirstate, 'journalstorage'):
104 if util.safehasattr(dirstate, 'journalstorage'):
104 # only record two hashes if there was a merge
105 # only record two hashes if there was a merge
105 oldhashes = old[:1] if old[1] == node.nullid else old
106 oldhashes = old[:1] if old[1] == node.nullid else old
106 newhashes = new[:1] if new[1] == node.nullid else new
107 newhashes = new[:1] if new[1] == node.nullid else new
107 dirstate.journalstorage.record(
108 dirstate.journalstorage.record(
108 wdirparenttype, '.', oldhashes, newhashes)
109 wdirparenttype, '.', oldhashes, newhashes)
109
110
110 # hooks to record bookmark changes (both local and remote)
111 # hooks to record bookmark changes (both local and remote)
111 def recordbookmarks(orig, store, fp):
112 def recordbookmarks(orig, store, fp):
112 """Records all bookmark changes in the journal."""
113 """Records all bookmark changes in the journal."""
113 repo = store._repo
114 repo = store._repo
114 if util.safehasattr(repo, 'journal'):
115 if util.safehasattr(repo, 'journal'):
115 oldmarks = bookmarks.bmstore(repo)
116 oldmarks = bookmarks.bmstore(repo)
116 for mark, value in store.iteritems():
117 for mark, value in store.iteritems():
117 oldvalue = oldmarks.get(mark, node.nullid)
118 oldvalue = oldmarks.get(mark, node.nullid)
118 if value != oldvalue:
119 if value != oldvalue:
119 repo.journal.record(bookmarktype, mark, oldvalue, value)
120 repo.journal.record(bookmarktype, mark, oldvalue, value)
120 return orig(store, fp)
121 return orig(store, fp)
121
122
122 # shared repository support
123 # shared repository support
123 def _readsharedfeatures(repo):
124 def _readsharedfeatures(repo):
124 """A set of shared features for this repository"""
125 """A set of shared features for this repository"""
125 try:
126 try:
126 return set(repo.vfs.read('shared').splitlines())
127 return set(repo.vfs.read('shared').splitlines())
127 except IOError as inst:
128 except IOError as inst:
128 if inst.errno != errno.ENOENT:
129 if inst.errno != errno.ENOENT:
129 raise
130 raise
130 return set()
131 return set()
131
132
132 def _mergeentriesiter(*iterables, **kwargs):
133 def _mergeentriesiter(*iterables, **kwargs):
133 """Given a set of sorted iterables, yield the next entry in merged order
134 """Given a set of sorted iterables, yield the next entry in merged order
134
135
135 Note that by default entries go from most recent to oldest.
136 Note that by default entries go from most recent to oldest.
136 """
137 """
137 order = kwargs.pop(r'order', max)
138 order = kwargs.pop(r'order', max)
138 iterables = [iter(it) for it in iterables]
139 iterables = [iter(it) for it in iterables]
139 # this tracks still active iterables; iterables are deleted as they are
140 # this tracks still active iterables; iterables are deleted as they are
140 # exhausted, which is why this is a dictionary and why each entry also
141 # exhausted, which is why this is a dictionary and why each entry also
141 # stores the key. Entries are mutable so we can store the next value each
142 # stores the key. Entries are mutable so we can store the next value each
142 # time.
143 # time.
143 iterable_map = {}
144 iterable_map = {}
144 for key, it in enumerate(iterables):
145 for key, it in enumerate(iterables):
145 try:
146 try:
146 iterable_map[key] = [next(it), key, it]
147 iterable_map[key] = [next(it), key, it]
147 except StopIteration:
148 except StopIteration:
148 # empty entry, can be ignored
149 # empty entry, can be ignored
149 pass
150 pass
150
151
151 while iterable_map:
152 while iterable_map:
152 value, key, it = order(iterable_map.itervalues())
153 value, key, it = order(iterable_map.itervalues())
153 yield value
154 yield value
154 try:
155 try:
155 iterable_map[key][0] = next(it)
156 iterable_map[key][0] = next(it)
156 except StopIteration:
157 except StopIteration:
157 # this iterable is empty, remove it from consideration
158 # this iterable is empty, remove it from consideration
158 del iterable_map[key]
159 del iterable_map[key]
159
160
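# Illustrative example (plain integers stand in for journal entries; real
# callers pass journalentry iterators):
#   list(_mergeentriesiter([9, 5, 1], [8, 2], order=max))  ->  [9, 8, 5, 2, 1]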
160 def wrappostshare(orig, sourcerepo, destrepo, **kwargs):
161 def wrappostshare(orig, sourcerepo, destrepo, **kwargs):
161 """Mark this shared working copy as sharing journal information"""
162 """Mark this shared working copy as sharing journal information"""
162 with destrepo.wlock():
163 with destrepo.wlock():
163 orig(sourcerepo, destrepo, **kwargs)
164 orig(sourcerepo, destrepo, **kwargs)
164 with destrepo.vfs('shared', 'a') as fp:
165 with destrepo.vfs('shared', 'a') as fp:
165 fp.write('journal\n')
166 fp.write('journal\n')
166
167
167 def unsharejournal(orig, ui, repo, repopath):
168 def unsharejournal(orig, ui, repo, repopath):
168 """Copy shared journal entries into this repo when unsharing"""
169 """Copy shared journal entries into this repo when unsharing"""
169 if (repo.path == repopath and repo.shared() and
170 if (repo.path == repopath and repo.shared() and
170 util.safehasattr(repo, 'journal')):
171 util.safehasattr(repo, 'journal')):
171 sharedrepo = share._getsrcrepo(repo)
172 sharedrepo = share._getsrcrepo(repo)
172 sharedfeatures = _readsharedfeatures(repo)
173 sharedfeatures = _readsharedfeatures(repo)
173 if sharedrepo and sharedfeatures > {'journal'}:
174 if sharedrepo and sharedfeatures > {'journal'}:
174 # there is a shared repository and there are shared journal entries
175 # there is a shared repository and there are shared journal entries
175 # to copy. move shared date over from source to destination but
176 # to copy. move shared date over from source to destination but
176 # move the local file first
177 # move the local file first
177 if repo.vfs.exists('namejournal'):
178 if repo.vfs.exists('namejournal'):
178 journalpath = repo.vfs.join('namejournal')
179 journalpath = repo.vfs.join('namejournal')
179 util.rename(journalpath, journalpath + '.bak')
180 util.rename(journalpath, journalpath + '.bak')
180 storage = repo.journal
181 storage = repo.journal
181 local = storage._open(
182 local = storage._open(
182 repo.vfs, filename='namejournal.bak', _newestfirst=False)
183 repo.vfs, filename='namejournal.bak', _newestfirst=False)
183 shared = (
184 shared = (
184 e for e in storage._open(sharedrepo.vfs, _newestfirst=False)
185 e for e in storage._open(sharedrepo.vfs, _newestfirst=False)
185 if sharednamespaces.get(e.namespace) in sharedfeatures)
186 if sharednamespaces.get(e.namespace) in sharedfeatures)
186 for entry in _mergeentriesiter(local, shared, order=min):
187 for entry in _mergeentriesiter(local, shared, order=min):
187 storage._write(repo.vfs, entry)
188 storage._write(repo.vfs, entry)
188
189
189 return orig(ui, repo, repopath)
190 return orig(ui, repo, repopath)
190
191
191 class journalentry(collections.namedtuple(
192 class journalentry(collections.namedtuple(
192 u'journalentry',
193 u'journalentry',
193 u'timestamp user command namespace name oldhashes newhashes')):
194 u'timestamp user command namespace name oldhashes newhashes')):
194 """Individual journal entry
195 """Individual journal entry
195
196
196 * timestamp: a mercurial (time, timezone) tuple
197 * timestamp: a mercurial (time, timezone) tuple
197 * user: the username that ran the command
198 * user: the username that ran the command
198 * namespace: the entry namespace, an opaque string
199 * namespace: the entry namespace, an opaque string
199 * name: the name of the changed item, opaque string with meaning in the
200 * name: the name of the changed item, opaque string with meaning in the
200 namespace
201 namespace
201 * command: the hg command that triggered this record
202 * command: the hg command that triggered this record
202 * oldhashes: a tuple of one or more binary hashes for the old location
203 * oldhashes: a tuple of one or more binary hashes for the old location
203 * newhashes: a tuple of one or more binary hashes for the new location
204 * newhashes: a tuple of one or more binary hashes for the new location
204
205
205 Handles serialisation from and to the storage format. Fields are
206 Handles serialisation from and to the storage format. Fields are
206 separated by newlines, hashes are written out in hex separated by commas,
207 separated by newlines, hashes are written out in hex separated by commas,
207 timestamp and timezone are separated by a space.
208 timestamp and timezone are separated by a space.
208
209
209 """
210 """
210 @classmethod
211 @classmethod
211 def fromstorage(cls, line):
212 def fromstorage(cls, line):
212 (time, user, command, namespace, name,
213 (time, user, command, namespace, name,
213 oldhashes, newhashes) = line.split('\n')
214 oldhashes, newhashes) = line.split('\n')
214 timestamp, tz = time.split()
215 timestamp, tz = time.split()
215 timestamp, tz = float(timestamp), int(tz)
216 timestamp, tz = float(timestamp), int(tz)
216 oldhashes = tuple(node.bin(hash) for hash in oldhashes.split(','))
217 oldhashes = tuple(node.bin(hash) for hash in oldhashes.split(','))
217 newhashes = tuple(node.bin(hash) for hash in newhashes.split(','))
218 newhashes = tuple(node.bin(hash) for hash in newhashes.split(','))
218 return cls(
219 return cls(
219 (timestamp, tz), user, command, namespace, name,
220 (timestamp, tz), user, command, namespace, name,
220 oldhashes, newhashes)
221 oldhashes, newhashes)
221
222
222 def __str__(self):
223 def __str__(self):
223 """String representation for storage"""
224 """String representation for storage"""
224 time = ' '.join(map(str, self.timestamp))
225 time = ' '.join(map(str, self.timestamp))
225 oldhashes = ','.join([node.hex(hash) for hash in self.oldhashes])
226 oldhashes = ','.join([node.hex(hash) for hash in self.oldhashes])
226 newhashes = ','.join([node.hex(hash) for hash in self.newhashes])
227 newhashes = ','.join([node.hex(hash) for hash in self.newhashes])
227 return '\n'.join((
228 return '\n'.join((
228 time, self.user, self.command, self.namespace, self.name,
229 time, self.user, self.command, self.namespace, self.name,
229 oldhashes, newhashes))
230 oldhashes, newhashes))
230
231
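# Sketch of one entry as serialised by __str__ above (all field values are
# invented; hashes would be full 40-character hex strings):
#   "1518600000.0 0\nalice\nhg up default\nwdirparent\n.\n<oldhex>\n<newhex>"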
231 class journalstorage(object):
232 class journalstorage(object):
232 """Storage for journal entries
233 """Storage for journal entries
233
234
234 Entries are divided over two files; one with entries that pertain to the
235 Entries are divided over two files; one with entries that pertain to the
235 local working copy *only*, and one with entries that are shared across
236 local working copy *only*, and one with entries that are shared across
236 multiple working copies when shared using the share extension.
237 multiple working copies when shared using the share extension.
237
238
238 Entries are stored with NUL bytes as separators. See the journalentry
239 Entries are stored with NUL bytes as separators. See the journalentry
239 class for the per-entry structure.
240 class for the per-entry structure.
240
241
241 The file format starts with an integer version, delimited by a NUL.
242 The file format starts with an integer version, delimited by a NUL.
242
243
243 This storage uses a dedicated lock; this makes it easier to avoid issues
244 This storage uses a dedicated lock; this makes it easier to avoid issues
244 with adding entries when the regular wlock is unlocked (e.g.
245 with adding entries when the regular wlock is unlocked (e.g.
245 the dirstate).
246 the dirstate).
246
247
247 """
248 """
248 _currentcommand = ()
249 _currentcommand = ()
249 _lockref = None
250 _lockref = None
250
251
251 def __init__(self, repo):
252 def __init__(self, repo):
252 self.user = util.getuser()
253 self.user = util.getuser()
253 self.ui = repo.ui
254 self.ui = repo.ui
254 self.vfs = repo.vfs
255 self.vfs = repo.vfs
255
256
256 # is this working copy using a shared storage?
257 # is this working copy using a shared storage?
257 self.sharedfeatures = self.sharedvfs = None
258 self.sharedfeatures = self.sharedvfs = None
258 if repo.shared():
259 if repo.shared():
259 features = _readsharedfeatures(repo)
260 features = _readsharedfeatures(repo)
260 sharedrepo = share._getsrcrepo(repo)
261 sharedrepo = share._getsrcrepo(repo)
261 if sharedrepo is not None and 'journal' in features:
262 if sharedrepo is not None and 'journal' in features:
262 self.sharedvfs = sharedrepo.vfs
263 self.sharedvfs = sharedrepo.vfs
263 self.sharedfeatures = features
264 self.sharedfeatures = features
264
265
265 # track the current command for recording in journal entries
266 # track the current command for recording in journal entries
266 @property
267 @property
267 def command(self):
268 def command(self):
268 commandstr = ' '.join(
269 commandstr = ' '.join(
269 map(util.shellquote, journalstorage._currentcommand))
270 map(util.shellquote, journalstorage._currentcommand))
270 if '\n' in commandstr:
271 if '\n' in commandstr:
271 # truncate multi-line commands
272 # truncate multi-line commands
272 commandstr = commandstr.partition('\n')[0] + ' ...'
273 commandstr = commandstr.partition('\n')[0] + ' ...'
273 return commandstr
274 return commandstr
274
275
275 @classmethod
276 @classmethod
276 def recordcommand(cls, *fullargs):
277 def recordcommand(cls, *fullargs):
277 """Set the current hg arguments, stored with recorded entries"""
278 """Set the current hg arguments, stored with recorded entries"""
278 # Set the current command on the class because we may have started
279 # Set the current command on the class because we may have started
279 # with a non-local repo (cloning for example).
280 # with a non-local repo (cloning for example).
280 cls._currentcommand = fullargs
281 cls._currentcommand = fullargs
281
282
282 def _currentlock(self, lockref):
283 def _currentlock(self, lockref):
283 """Returns the lock if it's held, or None if it's not.
284 """Returns the lock if it's held, or None if it's not.
284
285
285 (This is copied from the localrepo class)
286 (This is copied from the localrepo class)
286 """
287 """
287 if lockref is None:
288 if lockref is None:
288 return None
289 return None
289 l = lockref()
290 l = lockref()
290 if l is None or not l.held:
291 if l is None or not l.held:
291 return None
292 return None
292 return l
293 return l
293
294
294 def jlock(self, vfs):
295 def jlock(self, vfs):
295 """Create a lock for the journal file"""
296 """Create a lock for the journal file"""
296 if self._currentlock(self._lockref) is not None:
297 if self._currentlock(self._lockref) is not None:
297 raise error.Abort(_('journal lock does not support nesting'))
298 raise error.Abort(_('journal lock does not support nesting'))
298 desc = _('journal of %s') % vfs.base
299 desc = _('journal of %s') % vfs.base
299 try:
300 try:
300 l = lock.lock(vfs, 'namejournal.lock', 0, desc=desc)
301 l = lock.lock(vfs, 'namejournal.lock', 0, desc=desc)
301 except error.LockHeld as inst:
302 except error.LockHeld as inst:
302 self.ui.warn(
303 self.ui.warn(
303 _("waiting for lock on %s held by %r\n") % (desc, inst.locker))
304 _("waiting for lock on %s held by %r\n") % (desc, inst.locker))
304 # default to 600 seconds timeout
305 # default to 600 seconds timeout
305 l = lock.lock(
306 l = lock.lock(
306 vfs, 'namejournal.lock',
307 vfs, 'namejournal.lock',
307 self.ui.configint("ui", "timeout"), desc=desc)
308 self.ui.configint("ui", "timeout"), desc=desc)
308 self.ui.warn(_("got lock after %s seconds\n") % l.delay)
309 self.ui.warn(_("got lock after %s seconds\n") % l.delay)
309 self._lockref = weakref.ref(l)
310 self._lockref = weakref.ref(l)
310 return l
311 return l
311
312
312 def record(self, namespace, name, oldhashes, newhashes):
313 def record(self, namespace, name, oldhashes, newhashes):
313 """Record a new journal entry
314 """Record a new journal entry
314
315
315 * namespace: an opaque string; this can be used to filter on the type
316 * namespace: an opaque string; this can be used to filter on the type
316 of recorded entries.
317 of recorded entries.
317 * name: the name defining this entry; for bookmarks, this is the
318 * name: the name defining this entry; for bookmarks, this is the
318 bookmark name. Can be filtered on when retrieving entries.
319 bookmark name. Can be filtered on when retrieving entries.
319 * oldhashes and newhashes: each a single binary hash, or a list of
320 * oldhashes and newhashes: each a single binary hash, or a list of
320 binary hashes. These represent the old and new position of the named
321 binary hashes. These represent the old and new position of the named
321 item.
322 item.
322
323
323 """
324 """
324 if not isinstance(oldhashes, list):
325 if not isinstance(oldhashes, list):
325 oldhashes = [oldhashes]
326 oldhashes = [oldhashes]
326 if not isinstance(newhashes, list):
327 if not isinstance(newhashes, list):
327 newhashes = [newhashes]
328 newhashes = [newhashes]
328
329
329 entry = journalentry(
330 entry = journalentry(
330 util.makedate(), self.user, self.command, namespace, name,
331 util.makedate(), self.user, self.command, namespace, name,
331 oldhashes, newhashes)
332 oldhashes, newhashes)
332
333
333 vfs = self.vfs
334 vfs = self.vfs
334 if self.sharedvfs is not None:
335 if self.sharedvfs is not None:
335 # write to the shared repository if this feature is being
336 # write to the shared repository if this feature is being
336 # shared between working copies.
337 # shared between working copies.
337 if sharednamespaces.get(namespace) in self.sharedfeatures:
338 if sharednamespaces.get(namespace) in self.sharedfeatures:
338 vfs = self.sharedvfs
339 vfs = self.sharedvfs
339
340
340 self._write(vfs, entry)
341 self._write(vfs, entry)
341
342
342 def _write(self, vfs, entry):
343 def _write(self, vfs, entry):
343 with self.jlock(vfs):
344 with self.jlock(vfs):
344 version = None
345 version = None
345 # open file in append mode to ensure it is created if missing
346 # open file in append mode to ensure it is created if missing
346 with vfs('namejournal', mode='a+b') as f:
347 with vfs('namejournal', mode='a+b') as f:
347 f.seek(0, os.SEEK_SET)
348 f.seek(0, os.SEEK_SET)
348 # Read just enough bytes to get a version number (up to 2
349 # Read just enough bytes to get a version number (up to 2
349 # digits plus separator)
350 # digits plus separator)
350 version = f.read(3).partition('\0')[0]
351 version = f.read(3).partition('\0')[0]
351 if version and version != str(storageversion):
352 if version and version != str(storageversion):
352 # different version of the storage. Exit early (and not
353 # different version of the storage. Exit early (and not
353 # write anything) if this is not a version we can handle or
354 # write anything) if this is not a version we can handle or
354 # the file is corrupt. In future, perhaps rotate the file
355 # the file is corrupt. In future, perhaps rotate the file
355 # instead?
356 # instead?
356 self.ui.warn(
357 self.ui.warn(
357 _("unsupported journal file version '%s'\n") % version)
358 _("unsupported journal file version '%s'\n") % version)
358 return
359 return
359 if not version:
360 if not version:
360 # empty file, write version first
361 # empty file, write version first
361 f.write(str(storageversion) + '\0')
362 f.write(str(storageversion) + '\0')
362 f.seek(0, os.SEEK_END)
363 f.seek(0, os.SEEK_END)
363 f.write(str(entry) + '\0')
364 f.write(str(entry) + '\0')
364
365
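In other words, the namejournal file is a version number, a NUL, and then one NUL-terminated record per entry. A small sketch of parsing that layout, assuming the storage version is 0 (storageversion itself is defined outside this hunk):

# Illustrative sketch only; not part of journal.py.
raw = '0\x00first record\x00second record\x00'
version = raw[:3].partition('\x00')[0]   # mirrors f.read(3).partition('\0')[0]
assert version == '0'
entries = [e for e in raw.split('\x00')[1:] if e]
assert entries == ['first record', 'second record']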
365 def filtered(self, namespace=None, name=None):
366 def filtered(self, namespace=None, name=None):
366 """Yield all journal entries with the given namespace or name
367 """Yield all journal entries with the given namespace or name
367
368
368 Both the namespace and the name are optional; if neither is given all
369 Both the namespace and the name are optional; if neither is given all
369 entries in the journal are produced.
370 entries in the journal are produced.
370
371
371 Matching supports regular expressions by using the `re:` prefix
372 Matching supports regular expressions by using the `re:` prefix
372 (use `literal:` to match names or namespaces that start with `re:`)
373 (use `literal:` to match names or namespaces that start with `re:`)
373
374
374 """
375 """
375 if namespace is not None:
376 if namespace is not None:
376 namespace = util.stringmatcher(namespace)[-1]
377 namespace = util.stringmatcher(namespace)[-1]
377 if name is not None:
378 if name is not None:
378 name = util.stringmatcher(name)[-1]
379 name = util.stringmatcher(name)[-1]
379 for entry in self:
380 for entry in self:
380 if namespace is not None and not namespace(entry.namespace):
381 if namespace is not None and not namespace(entry.namespace):
381 continue
382 continue
382 if name is not None and not name(entry.name):
383 if name is not None and not name(entry.name):
383 continue
384 continue
384 yield entry
385 yield entry
385
386
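The `re:` and `literal:` prefixes are resolved by util.stringmatcher; a simplified stand-in (not the real implementation) showing the intended matching behaviour:

# Illustrative sketch only; a simplified stand-in for util.stringmatcher.
import re

def simplematcher(pattern):
    if pattern.startswith('re:'):
        regex = re.compile(pattern[len('re:'):])
        return lambda s: bool(regex.search(s))
    if pattern.startswith('literal:'):
        pattern = pattern[len('literal:'):]
    return lambda s: s == pattern

assert simplematcher('re:feature-.*')('feature-login')
assert not simplematcher('re:feature-.*')('hotfix-1')
assert simplematcher('literal:re:odd-name')('re:odd-name')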
386 def __iter__(self):
387 def __iter__(self):
387 """Iterate over the storage
388 """Iterate over the storage
388
389
389 Yields journalentry instances for each contained journal record.
390 Yields journalentry instances for each contained journal record.
390
391
391 """
392 """
392 local = self._open(self.vfs)
393 local = self._open(self.vfs)
393
394
394 if self.sharedvfs is None:
395 if self.sharedvfs is None:
395 return local
396 return local
396
397
397 # iterate over both local and shared entries, but only those
398 # iterate over both local and shared entries, but only those
398 # shared entries that are among the currently shared features
399 # shared entries that are among the currently shared features
399 shared = (
400 shared = (
400 e for e in self._open(self.sharedvfs)
401 e for e in self._open(self.sharedvfs)
401 if sharednamespaces.get(e.namespace) in self.sharedfeatures)
402 if sharednamespaces.get(e.namespace) in self.sharedfeatures)
402 return _mergeentriesiter(local, shared)
403 return _mergeentriesiter(local, shared)
403
404
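The _mergeentriesiter helper used above is not shown in this hunk; one way to picture merging two newest-first entry streams by timestamp, with plain tuples standing in for journalentry objects:

# Illustrative sketch only; tuples stand in for journalentry objects.
import heapq

local = [(30, 'local'), (10, 'local')]    # (timestamp, origin), newest first
shared = [(20, 'shared')]
merged = list(heapq.merge(local, shared, key=lambda e: e[0], reverse=True))
assert merged == [(30, 'local'), (20, 'shared'), (10, 'local')]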
404 def _open(self, vfs, filename='namejournal', _newestfirst=True):
405 def _open(self, vfs, filename='namejournal', _newestfirst=True):
405 if not vfs.exists(filename):
406 if not vfs.exists(filename):
406 return
407 return
407
408
408 with vfs(filename) as f:
409 with vfs(filename) as f:
409 raw = f.read()
410 raw = f.read()
410
411
411 lines = raw.split('\0')
412 lines = raw.split('\0')
412 version = lines and lines[0]
413 version = lines and lines[0]
413 if version != str(storageversion):
414 if version != str(storageversion):
414 version = version or _('not available')
415 version = version or _('not available')
415 raise error.Abort(_("unknown journal file version '%s'") % version)
416 raise error.Abort(_("unknown journal file version '%s'") % version)
416
417
417 # Skip the first line, it's a version number. Normally we iterate over
418 # Skip the first line, it's a version number. Normally we iterate over
418 # these in reverse order to list newest first; only when copying across
419 # these in reverse order to list newest first; only when copying across
419 # a shared storage do we forgo reversing.
420 # a shared storage do we forgo reversing.
420 lines = lines[1:]
421 lines = lines[1:]
421 if _newestfirst:
422 if _newestfirst:
422 lines = reversed(lines)
423 lines = reversed(lines)
423 for line in lines:
424 for line in lines:
424 if not line:
425 if not line:
425 continue
426 continue
426 yield journalentry.fromstorage(line)
427 yield journalentry.fromstorage(line)
427
428
428 # journal reading
429 # journal reading
429 # log options that don't make sense for journal
430 # log options that don't make sense for journal
430 _ignoreopts = ('no-merges', 'graph')
431 _ignoreopts = ('no-merges', 'graph')
431 @command(
432 @command(
432 'journal', [
433 'journal', [
433 ('', 'all', None, 'show history for all names'),
434 ('', 'all', None, 'show history for all names'),
434 ('c', 'commits', None, 'show commit metadata'),
435 ('c', 'commits', None, 'show commit metadata'),
435 ] + [opt for opt in cmdutil.logopts if opt[1] not in _ignoreopts],
436 ] + [opt for opt in cmdutil.logopts if opt[1] not in _ignoreopts],
436 '[OPTION]... [BOOKMARKNAME]')
437 '[OPTION]... [BOOKMARKNAME]')
437 def journal(ui, repo, *args, **opts):
438 def journal(ui, repo, *args, **opts):
438 """show the previous position of bookmarks and the working copy
439 """show the previous position of bookmarks and the working copy
439
440
440 The journal is used to see the previous commits that bookmarks and the
441 The journal is used to see the previous commits that bookmarks and the
441 working copy pointed to. By default the previous locations for the working
442 working copy pointed to. By default the previous locations for the working
442 copy are shown. Passing a bookmark name will show all the previous positions of
443 copy are shown. Passing a bookmark name will show all the previous positions of
443 that bookmark. Use the --all switch to show previous locations for all
444 that bookmark. Use the --all switch to show previous locations for all
444 bookmarks and the working copy; each line will then include the bookmark
445 bookmarks and the working copy; each line will then include the bookmark
445 name, or '.' for the working copy, as well.
446 name, or '.' for the working copy, as well.
446
447
447 If `name` starts with `re:`, the remainder of the name is treated as
448 If `name` starts with `re:`, the remainder of the name is treated as
448 a regular expression. To match a name that actually starts with `re:`,
449 a regular expression. To match a name that actually starts with `re:`,
449 use the prefix `literal:`.
450 use the prefix `literal:`.
450
451
451 By default hg journal only shows the commit hash and the command that was
452 By default hg journal only shows the commit hash and the command that was
452 running at that time. -v/--verbose will show the prior hash, the user, and
453 running at that time. -v/--verbose will show the prior hash, the user, and
453 the time at which it happened.
454 the time at which it happened.
454
455
455 Use -c/--commits to output log information on each commit hash; at this
456 Use -c/--commits to output log information on each commit hash; at this
456 point you can use the usual `--patch`, `--git`, `--stat` and `--template`
457 point you can use the usual `--patch`, `--git`, `--stat` and `--template`
457 switches to alter the log output for these.
458 switches to alter the log output for these.
458
459
459 `hg journal -T json` can be used to produce machine readable output.
460 `hg journal -T json` can be used to produce machine readable output.
460
461
461 """
462 """
462 opts = pycompat.byteskwargs(opts)
463 opts = pycompat.byteskwargs(opts)
463 name = '.'
464 name = '.'
464 if opts.get('all'):
465 if opts.get('all'):
465 if args:
466 if args:
466 raise error.Abort(
467 raise error.Abort(
467 _("You can't combine --all and filtering on a name"))
468 _("You can't combine --all and filtering on a name"))
468 name = None
469 name = None
469 if args:
470 if args:
470 name = args[0]
471 name = args[0]
471
472
472 fm = ui.formatter('journal', opts)
473 fm = ui.formatter('journal', opts)
473
474
474 if opts.get("template") != "json":
475 if opts.get("template") != "json":
475 if name is None:
476 if name is None:
476 displayname = _('the working copy and bookmarks')
477 displayname = _('the working copy and bookmarks')
477 else:
478 else:
478 displayname = "'%s'" % name
479 displayname = "'%s'" % name
479 ui.status(_("previous locations of %s:\n") % displayname)
480 ui.status(_("previous locations of %s:\n") % displayname)
480
481
481 limit = cmdutil.loglimit(opts)
482 limit = logcmdutil.getlimit(opts)
482 entry = None
483 entry = None
483 ui.pager('journal')
484 ui.pager('journal')
484 for count, entry in enumerate(repo.journal.filtered(name=name)):
485 for count, entry in enumerate(repo.journal.filtered(name=name)):
485 if count == limit:
486 if count == limit:
486 break
487 break
487 newhashesstr = fm.formatlist(map(fm.hexfunc, entry.newhashes),
488 newhashesstr = fm.formatlist(map(fm.hexfunc, entry.newhashes),
488 name='node', sep=',')
489 name='node', sep=',')
489 oldhashesstr = fm.formatlist(map(fm.hexfunc, entry.oldhashes),
490 oldhashesstr = fm.formatlist(map(fm.hexfunc, entry.oldhashes),
490 name='node', sep=',')
491 name='node', sep=',')
491
492
492 fm.startitem()
493 fm.startitem()
493 fm.condwrite(ui.verbose, 'oldhashes', '%s -> ', oldhashesstr)
494 fm.condwrite(ui.verbose, 'oldhashes', '%s -> ', oldhashesstr)
494 fm.write('newhashes', '%s', newhashesstr)
495 fm.write('newhashes', '%s', newhashesstr)
495 fm.condwrite(ui.verbose, 'user', ' %-8s', entry.user)
496 fm.condwrite(ui.verbose, 'user', ' %-8s', entry.user)
496 fm.condwrite(
497 fm.condwrite(
497 opts.get('all') or name.startswith('re:'),
498 opts.get('all') or name.startswith('re:'),
498 'name', ' %-8s', entry.name)
499 'name', ' %-8s', entry.name)
499
500
500 timestring = fm.formatdate(entry.timestamp, '%Y-%m-%d %H:%M %1%2')
501 timestring = fm.formatdate(entry.timestamp, '%Y-%m-%d %H:%M %1%2')
501 fm.condwrite(ui.verbose, 'date', ' %s', timestring)
502 fm.condwrite(ui.verbose, 'date', ' %s', timestring)
502 fm.write('command', ' %s\n', entry.command)
503 fm.write('command', ' %s\n', entry.command)
503
504
504 if opts.get("commits"):
505 if opts.get("commits"):
505 displayer = cmdutil.show_changeset(ui, repo, opts, buffered=False)
506 displayer = logcmdutil.changesetdisplayer(ui, repo, opts,
507 buffered=False)
506 for hash in entry.newhashes:
508 for hash in entry.newhashes:
507 try:
509 try:
508 ctx = repo[hash]
510 ctx = repo[hash]
509 displayer.show(ctx)
511 displayer.show(ctx)
510 except error.RepoLookupError as e:
512 except error.RepoLookupError as e:
511 fm.write('repolookuperror', "%s\n\n", str(e))
513 fm.write('repolookuperror', "%s\n\n", str(e))
512 displayer.close()
514 displayer.close()
513
515
514 fm.end()
516 fm.end()
515
517
516 if entry is None:
518 if entry is None:
517 ui.status(_("no recorded locations\n"))
519 ui.status(_("no recorded locations\n"))
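As the docstring notes, `hg journal -T json` produces machine-readable output. A hedged sketch of consuming it from Python; it assumes hg is on PATH, a repository in the current directory, and that the JSON keys match the field names written by journal() above:

# Illustrative sketch only; not part of journal.py.
import json
import subprocess

out = subprocess.check_output(['hg', 'journal', '--all', '-T', 'json'])
for entry in json.loads(out.decode('utf-8')):
    print(entry.get('name'), entry.get('command'), entry.get('newhashes'))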
@@ -1,807 +1,808 b''
1 # keyword.py - $Keyword$ expansion for Mercurial
1 # keyword.py - $Keyword$ expansion for Mercurial
2 #
2 #
3 # Copyright 2007-2015 Christian Ebert <blacktrash@gmx.net>
3 # Copyright 2007-2015 Christian Ebert <blacktrash@gmx.net>
4 #
4 #
5 # This software may be used and distributed according to the terms of the
5 # This software may be used and distributed according to the terms of the
6 # GNU General Public License version 2 or any later version.
6 # GNU General Public License version 2 or any later version.
7 #
7 #
8 # $Id$
8 # $Id$
9 #
9 #
10 # Keyword expansion hack against the grain of a Distributed SCM
10 # Keyword expansion hack against the grain of a Distributed SCM
11 #
11 #
12 # There are many good reasons why this is not needed in a distributed
12 # There are many good reasons why this is not needed in a distributed
13 # SCM, still it may be useful in very small projects based on single
13 # SCM, still it may be useful in very small projects based on single
14 # files (like LaTeX packages), that are mostly addressed to an
14 # files (like LaTeX packages), that are mostly addressed to an
15 # audience not running a version control system.
15 # audience not running a version control system.
16 #
16 #
17 # For in-depth discussion refer to
17 # For in-depth discussion refer to
18 # <https://mercurial-scm.org/wiki/KeywordPlan>.
18 # <https://mercurial-scm.org/wiki/KeywordPlan>.
19 #
19 #
20 # Keyword expansion is based on Mercurial's changeset template mappings.
20 # Keyword expansion is based on Mercurial's changeset template mappings.
21 #
21 #
22 # Binary files are not touched.
22 # Binary files are not touched.
23 #
23 #
24 # Files to act upon/ignore are specified in the [keyword] section.
24 # Files to act upon/ignore are specified in the [keyword] section.
25 # Customized keyword template mappings in the [keywordmaps] section.
25 # Customized keyword template mappings in the [keywordmaps] section.
26 #
26 #
27 # Run 'hg help keyword' and 'hg kwdemo' to get info on configuration.
27 # Run 'hg help keyword' and 'hg kwdemo' to get info on configuration.
28
28
29 '''expand keywords in tracked files
29 '''expand keywords in tracked files
30
30
31 This extension expands RCS/CVS-like or self-customized $Keywords$ in
31 This extension expands RCS/CVS-like or self-customized $Keywords$ in
32 tracked text files selected by your configuration.
32 tracked text files selected by your configuration.
33
33
34 Keywords are only expanded in local repositories and not stored in the
34 Keywords are only expanded in local repositories and not stored in the
35 change history. The mechanism can be regarded as a convenience for the
35 change history. The mechanism can be regarded as a convenience for the
36 current user or for archive distribution.
36 current user or for archive distribution.
37
37
38 Keywords expand to the changeset data pertaining to the latest change
38 Keywords expand to the changeset data pertaining to the latest change
39 relative to the working directory parent of each file.
39 relative to the working directory parent of each file.
40
40
41 Configuration is done in the [keyword], [keywordset] and [keywordmaps]
41 Configuration is done in the [keyword], [keywordset] and [keywordmaps]
42 sections of hgrc files.
42 sections of hgrc files.
43
43
44 Example::
44 Example::
45
45
46 [keyword]
46 [keyword]
47 # expand keywords in every python file except those matching "x*"
47 # expand keywords in every python file except those matching "x*"
48 **.py =
48 **.py =
49 x* = ignore
49 x* = ignore
50
50
51 [keywordset]
51 [keywordset]
52 # prefer svn- over cvs-like default keywordmaps
52 # prefer svn- over cvs-like default keywordmaps
53 svn = True
53 svn = True
54
54
55 .. note::
55 .. note::
56
56
57 The more specific you are in your filename patterns, the less speed you
57 The more specific you are in your filename patterns, the less speed you
58 lose in huge repositories.
58 lose in huge repositories.
59
59
60 For [keywordmaps] template mapping and expansion demonstration and
60 For [keywordmaps] template mapping and expansion demonstration and
61 control run :hg:`kwdemo`. See :hg:`help templates` for a list of
61 control run :hg:`kwdemo`. See :hg:`help templates` for a list of
62 available templates and filters.
62 available templates and filters.
63
63
64 Three additional date template filters are provided:
64 Three additional date template filters are provided:
65
65
66 :``utcdate``: "2006/09/18 15:13:13"
66 :``utcdate``: "2006/09/18 15:13:13"
67 :``svnutcdate``: "2006-09-18 15:13:13Z"
67 :``svnutcdate``: "2006-09-18 15:13:13Z"
68 :``svnisodate``: "2006-09-18 08:13:13 -0700 (Mon, 18 Sep 2006)"
68 :``svnisodate``: "2006-09-18 08:13:13 -0700 (Mon, 18 Sep 2006)"
69
69
70 The default template mappings (view with :hg:`kwdemo -d`) can be
70 The default template mappings (view with :hg:`kwdemo -d`) can be
71 replaced with customized keywords and templates. Again, run
71 replaced with customized keywords and templates. Again, run
72 :hg:`kwdemo` to control the results of your configuration changes.
72 :hg:`kwdemo` to control the results of your configuration changes.
73
73
74 Before changing/disabling active keywords, you must run :hg:`kwshrink`
74 Before changing/disabling active keywords, you must run :hg:`kwshrink`
75 to avoid storing expanded keywords in the change history.
75 to avoid storing expanded keywords in the change history.
76
76
77 To force expansion after enabling it, or a configuration change, run
77 To force expansion after enabling it, or a configuration change, run
78 :hg:`kwexpand`.
78 :hg:`kwexpand`.
79
79
80 Expansions spanning more than one line and incremental expansions,
80 Expansions spanning more than one line and incremental expansions,
81 like CVS' $Log$, are not supported. A keyword template map "Log =
81 like CVS' $Log$, are not supported. A keyword template map "Log =
82 {desc}" expands to the first line of the changeset description.
82 {desc}" expands to the first line of the changeset description.
83 '''
83 '''
84
84
85
85
86 from __future__ import absolute_import
86 from __future__ import absolute_import
87
87
88 import os
88 import os
89 import re
89 import re
90 import tempfile
90 import tempfile
91 import weakref
91 import weakref
92
92
93 from mercurial.i18n import _
93 from mercurial.i18n import _
94 from mercurial.hgweb import webcommands
94 from mercurial.hgweb import webcommands
95
95
96 from mercurial import (
96 from mercurial import (
97 cmdutil,
97 cmdutil,
98 context,
98 context,
99 dispatch,
99 dispatch,
100 error,
100 error,
101 extensions,
101 extensions,
102 filelog,
102 filelog,
103 localrepo,
103 localrepo,
104 logcmdutil,
104 match,
105 match,
105 patch,
106 patch,
106 pathutil,
107 pathutil,
107 pycompat,
108 pycompat,
108 registrar,
109 registrar,
109 scmutil,
110 scmutil,
110 templatefilters,
111 templatefilters,
111 util,
112 util,
112 )
113 )
113
114
114 cmdtable = {}
115 cmdtable = {}
115 command = registrar.command(cmdtable)
116 command = registrar.command(cmdtable)
116 # Note for extension authors: ONLY specify testedwith = 'ships-with-hg-core' for
117 # Note for extension authors: ONLY specify testedwith = 'ships-with-hg-core' for
117 # extensions which SHIP WITH MERCURIAL. Non-mainline extensions should
118 # extensions which SHIP WITH MERCURIAL. Non-mainline extensions should
118 # be specifying the version(s) of Mercurial they are tested with, or
119 # be specifying the version(s) of Mercurial they are tested with, or
119 # leave the attribute unspecified.
120 # leave the attribute unspecified.
120 testedwith = 'ships-with-hg-core'
121 testedwith = 'ships-with-hg-core'
121
122
122 # hg commands that do not act on keywords
123 # hg commands that do not act on keywords
123 nokwcommands = ('add addremove annotate bundle export grep incoming init log'
124 nokwcommands = ('add addremove annotate bundle export grep incoming init log'
124 ' outgoing push tip verify convert email glog')
125 ' outgoing push tip verify convert email glog')
125
126
126 # webcommands that do not act on keywords
127 # webcommands that do not act on keywords
127 nokwwebcommands = ('annotate changeset rev filediff diff comparison')
128 nokwwebcommands = ('annotate changeset rev filediff diff comparison')
128
129
129 # hg commands that trigger expansion only when writing to working dir,
130 # hg commands that trigger expansion only when writing to working dir,
130 # not when reading filelog, and unexpand when reading from working dir
131 # not when reading filelog, and unexpand when reading from working dir
131 restricted = ('merge kwexpand kwshrink record qrecord resolve transplant'
132 restricted = ('merge kwexpand kwshrink record qrecord resolve transplant'
132 ' unshelve rebase graft backout histedit fetch')
133 ' unshelve rebase graft backout histedit fetch')
133
134
134 # names of extensions using dorecord
135 # names of extensions using dorecord
135 recordextensions = 'record'
136 recordextensions = 'record'
136
137
137 colortable = {
138 colortable = {
138 'kwfiles.enabled': 'green bold',
139 'kwfiles.enabled': 'green bold',
139 'kwfiles.deleted': 'cyan bold underline',
140 'kwfiles.deleted': 'cyan bold underline',
140 'kwfiles.enabledunknown': 'green',
141 'kwfiles.enabledunknown': 'green',
141 'kwfiles.ignored': 'bold',
142 'kwfiles.ignored': 'bold',
142 'kwfiles.ignoredunknown': 'none'
143 'kwfiles.ignoredunknown': 'none'
143 }
144 }
144
145
145 templatefilter = registrar.templatefilter()
146 templatefilter = registrar.templatefilter()
146
147
147 configtable = {}
148 configtable = {}
148 configitem = registrar.configitem(configtable)
149 configitem = registrar.configitem(configtable)
149
150
150 configitem('keywordset', 'svn',
151 configitem('keywordset', 'svn',
151 default=False,
152 default=False,
152 )
153 )
153 # date like in cvs' $Date
154 # date like in cvs' $Date
154 @templatefilter('utcdate')
155 @templatefilter('utcdate')
155 def utcdate(text):
156 def utcdate(text):
156 '''Date. Returns a UTC-date in this format: "2009/08/18 11:00:13".
157 '''Date. Returns a UTC-date in this format: "2009/08/18 11:00:13".
157 '''
158 '''
158 return util.datestr((util.parsedate(text)[0], 0), '%Y/%m/%d %H:%M:%S')
159 return util.datestr((util.parsedate(text)[0], 0), '%Y/%m/%d %H:%M:%S')
159 # date like in svn's $Date
160 # date like in svn's $Date
160 @templatefilter('svnisodate')
161 @templatefilter('svnisodate')
161 def svnisodate(text):
162 def svnisodate(text):
162 '''Date. Returns a date in this format: "2009-08-18 13:00:13
163 '''Date. Returns a date in this format: "2009-08-18 13:00:13
163 +0200 (Tue, 18 Aug 2009)".
164 +0200 (Tue, 18 Aug 2009)".
164 '''
165 '''
165 return util.datestr(text, '%Y-%m-%d %H:%M:%S %1%2 (%a, %d %b %Y)')
166 return util.datestr(text, '%Y-%m-%d %H:%M:%S %1%2 (%a, %d %b %Y)')
166 # date like in svn's $Id
167 # date like in svn's $Id
167 @templatefilter('svnutcdate')
168 @templatefilter('svnutcdate')
168 def svnutcdate(text):
169 def svnutcdate(text):
169 '''Date. Returns a UTC-date in this format: "2009-08-18
170 '''Date. Returns a UTC-date in this format: "2009-08-18
170 11:00:13Z".
171 11:00:13Z".
171 '''
172 '''
172 return util.datestr((util.parsedate(text)[0], 0), '%Y-%m-%d %H:%M:%SZ')
173 return util.datestr((util.parsedate(text)[0], 0), '%Y-%m-%d %H:%M:%SZ')
173
174
174 # make keyword tools accessible
175 # make keyword tools accessible
175 kwtools = {'hgcmd': ''}
176 kwtools = {'hgcmd': ''}
176
177
177 def _defaultkwmaps(ui):
178 def _defaultkwmaps(ui):
178 '''Returns default keywordmaps according to keywordset configuration.'''
179 '''Returns default keywordmaps according to keywordset configuration.'''
179 templates = {
180 templates = {
180 'Revision': '{node|short}',
181 'Revision': '{node|short}',
181 'Author': '{author|user}',
182 'Author': '{author|user}',
182 }
183 }
183 kwsets = ({
184 kwsets = ({
184 'Date': '{date|utcdate}',
185 'Date': '{date|utcdate}',
185 'RCSfile': '{file|basename},v',
186 'RCSfile': '{file|basename},v',
186 'RCSFile': '{file|basename},v', # kept for backwards compatibility
187 'RCSFile': '{file|basename},v', # kept for backwards compatibility
187 # with hg-keyword
188 # with hg-keyword
188 'Source': '{root}/{file},v',
189 'Source': '{root}/{file},v',
189 'Id': '{file|basename},v {node|short} {date|utcdate} {author|user}',
190 'Id': '{file|basename},v {node|short} {date|utcdate} {author|user}',
190 'Header': '{root}/{file},v {node|short} {date|utcdate} {author|user}',
191 'Header': '{root}/{file},v {node|short} {date|utcdate} {author|user}',
191 }, {
192 }, {
192 'Date': '{date|svnisodate}',
193 'Date': '{date|svnisodate}',
193 'Id': '{file|basename},v {node|short} {date|svnutcdate} {author|user}',
194 'Id': '{file|basename},v {node|short} {date|svnutcdate} {author|user}',
194 'LastChangedRevision': '{node|short}',
195 'LastChangedRevision': '{node|short}',
195 'LastChangedBy': '{author|user}',
196 'LastChangedBy': '{author|user}',
196 'LastChangedDate': '{date|svnisodate}',
197 'LastChangedDate': '{date|svnisodate}',
197 })
198 })
198 templates.update(kwsets[ui.configbool('keywordset', 'svn')])
199 templates.update(kwsets[ui.configbool('keywordset', 'svn')])
199 return templates
200 return templates
200
201
201 def _shrinktext(text, subfunc):
202 def _shrinktext(text, subfunc):
202 '''Helper for keyword expansion removal in text.
203 '''Helper for keyword expansion removal in text.
203 Depending on subfunc also returns number of substitutions.'''
204 Depending on subfunc also returns number of substitutions.'''
204 return subfunc(r'$\1$', text)
205 return subfunc(r'$\1$', text)
205
206
206 def _preselect(wstatus, changed):
207 def _preselect(wstatus, changed):
207 '''Retrieves modified and added files from a working directory state
208 '''Retrieves modified and added files from a working directory state
208 and returns the subset of each contained in given changed files
209 and returns the subset of each contained in given changed files
209 retrieved from a change context.'''
210 retrieved from a change context.'''
210 modified = [f for f in wstatus.modified if f in changed]
211 modified = [f for f in wstatus.modified if f in changed]
211 added = [f for f in wstatus.added if f in changed]
212 added = [f for f in wstatus.added if f in changed]
212 return modified, added
213 return modified, added
213
214
214
215
215 class kwtemplater(object):
216 class kwtemplater(object):
216 '''
217 '''
217 Sets up keyword templates, corresponding keyword regex, and
218 Sets up keyword templates, corresponding keyword regex, and
218 provides keyword substitution functions.
219 provides keyword substitution functions.
219 '''
220 '''
220
221
221 def __init__(self, ui, repo, inc, exc):
222 def __init__(self, ui, repo, inc, exc):
222 self.ui = ui
223 self.ui = ui
223 self._repo = weakref.ref(repo)
224 self._repo = weakref.ref(repo)
224 self.match = match.match(repo.root, '', [], inc, exc)
225 self.match = match.match(repo.root, '', [], inc, exc)
225 self.restrict = kwtools['hgcmd'] in restricted.split()
226 self.restrict = kwtools['hgcmd'] in restricted.split()
226 self.postcommit = False
227 self.postcommit = False
227
228
228 kwmaps = self.ui.configitems('keywordmaps')
229 kwmaps = self.ui.configitems('keywordmaps')
229 if kwmaps: # override default templates
230 if kwmaps: # override default templates
230 self.templates = dict(kwmaps)
231 self.templates = dict(kwmaps)
231 else:
232 else:
232 self.templates = _defaultkwmaps(self.ui)
233 self.templates = _defaultkwmaps(self.ui)
233
234
234 @property
235 @property
235 def repo(self):
236 def repo(self):
236 return self._repo()
237 return self._repo()
237
238
238 @util.propertycache
239 @util.propertycache
239 def escape(self):
240 def escape(self):
240 '''Returns bar-separated and escaped keywords.'''
241 '''Returns bar-separated and escaped keywords.'''
241 return '|'.join(map(re.escape, self.templates.keys()))
242 return '|'.join(map(re.escape, self.templates.keys()))
242
243
243 @util.propertycache
244 @util.propertycache
244 def rekw(self):
245 def rekw(self):
245 '''Returns regex for unexpanded keywords.'''
246 '''Returns regex for unexpanded keywords.'''
246 return re.compile(r'\$(%s)\$' % self.escape)
247 return re.compile(r'\$(%s)\$' % self.escape)
247
248
248 @util.propertycache
249 @util.propertycache
249 def rekwexp(self):
250 def rekwexp(self):
250 '''Returns regex for expanded keywords.'''
251 '''Returns regex for expanded keywords.'''
251 return re.compile(r'\$(%s): [^$\n\r]*? \$' % self.escape)
252 return re.compile(r'\$(%s): [^$\n\r]*? \$' % self.escape)
252
253
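With a single 'Id' keyword the two patterns above behave roughly as follows; the hand-built pattern replaces the escaped alternation normally derived from self.templates, and the final line mirrors the `$\1$` replacement used by _shrinktext():

# Illustrative sketch only; not part of keyword.py.
import re

rekw = re.compile(r'\$(Id)\$')                  # unexpanded keyword
rekwexp = re.compile(r'\$(Id): [^$\n\r]*? \$')  # expanded keyword
assert rekw.search('$Id$')
assert rekwexp.search('$Id: demo.txt,v 1a2b3c4d alice $')
assert not rekwexp.search('$Id$')
assert rekwexp.sub(r'$\1$', 'see $Id: demo.txt,v 1a2b3c4d alice $') == 'see $Id$'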
253 def substitute(self, data, path, ctx, subfunc):
254 def substitute(self, data, path, ctx, subfunc):
254 '''Replaces keywords in data with expanded template.'''
255 '''Replaces keywords in data with expanded template.'''
255 def kwsub(mobj):
256 def kwsub(mobj):
256 kw = mobj.group(1)
257 kw = mobj.group(1)
257 ct = cmdutil.makelogtemplater(self.ui, self.repo,
258 ct = logcmdutil.maketemplater(self.ui, self.repo,
258 self.templates[kw])
259 self.templates[kw])
259 self.ui.pushbuffer()
260 self.ui.pushbuffer()
260 ct.show(ctx, root=self.repo.root, file=path)
261 ct.show(ctx, root=self.repo.root, file=path)
261 ekw = templatefilters.firstline(self.ui.popbuffer())
262 ekw = templatefilters.firstline(self.ui.popbuffer())
262 return '$%s: %s $' % (kw, ekw)
263 return '$%s: %s $' % (kw, ekw)
263 return subfunc(kwsub, data)
264 return subfunc(kwsub, data)
264
265
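substitute() passes every keyword match through kwsub(), which renders the configured template for that keyword and wraps the result in '$kw: ... $'. A sketch of the same shape with a plain dict standing in for the changeset templater (the expansion value is invented):

# Illustrative sketch only; a dict stands in for logcmdutil.maketemplater.
import re

expansions = {'Id': 'demo.txt,v 1a2b3c4d alice'}

def kwsub(mobj):
    kw = mobj.group(1)
    return '$%s: %s $' % (kw, expansions[kw])

rekw = re.compile(r'\$(Id)\$')
assert rekw.sub(kwsub, 'see $Id$') == 'see $Id: demo.txt,v 1a2b3c4d alice $'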
265 def linkctx(self, path, fileid):
266 def linkctx(self, path, fileid):
266 '''Similar to filelog.linkrev, but returns a changectx.'''
267 '''Similar to filelog.linkrev, but returns a changectx.'''
267 return self.repo.filectx(path, fileid=fileid).changectx()
268 return self.repo.filectx(path, fileid=fileid).changectx()
268
269
269 def expand(self, path, node, data):
270 def expand(self, path, node, data):
270 '''Returns data with keywords expanded.'''
271 '''Returns data with keywords expanded.'''
271 if not self.restrict and self.match(path) and not util.binary(data):
272 if not self.restrict and self.match(path) and not util.binary(data):
272 ctx = self.linkctx(path, node)
273 ctx = self.linkctx(path, node)
273 return self.substitute(data, path, ctx, self.rekw.sub)
274 return self.substitute(data, path, ctx, self.rekw.sub)
274 return data
275 return data
275
276
276 def iskwfile(self, cand, ctx):
277 def iskwfile(self, cand, ctx):
277 '''Returns subset of candidates which are configured for keyword
278 '''Returns subset of candidates which are configured for keyword
278 expansion but are not symbolic links.'''
279 expansion but are not symbolic links.'''
279 return [f for f in cand if self.match(f) and 'l' not in ctx.flags(f)]
280 return [f for f in cand if self.match(f) and 'l' not in ctx.flags(f)]
280
281
281 def overwrite(self, ctx, candidates, lookup, expand, rekw=False):
282 def overwrite(self, ctx, candidates, lookup, expand, rekw=False):
282 '''Overwrites selected files expanding/shrinking keywords.'''
283 '''Overwrites selected files expanding/shrinking keywords.'''
283 if self.restrict or lookup or self.postcommit: # exclude kw_copy
284 if self.restrict or lookup or self.postcommit: # exclude kw_copy
284 candidates = self.iskwfile(candidates, ctx)
285 candidates = self.iskwfile(candidates, ctx)
285 if not candidates:
286 if not candidates:
286 return
287 return
287 kwcmd = self.restrict and lookup # kwexpand/kwshrink
288 kwcmd = self.restrict and lookup # kwexpand/kwshrink
288 if self.restrict or expand and lookup:
289 if self.restrict or expand and lookup:
289 mf = ctx.manifest()
290 mf = ctx.manifest()
290 if self.restrict or rekw:
291 if self.restrict or rekw:
291 re_kw = self.rekw
292 re_kw = self.rekw
292 else:
293 else:
293 re_kw = self.rekwexp
294 re_kw = self.rekwexp
294 if expand:
295 if expand:
295 msg = _('overwriting %s expanding keywords\n')
296 msg = _('overwriting %s expanding keywords\n')
296 else:
297 else:
297 msg = _('overwriting %s shrinking keywords\n')
298 msg = _('overwriting %s shrinking keywords\n')
298 for f in candidates:
299 for f in candidates:
299 if self.restrict:
300 if self.restrict:
300 data = self.repo.file(f).read(mf[f])
301 data = self.repo.file(f).read(mf[f])
301 else:
302 else:
302 data = self.repo.wread(f)
303 data = self.repo.wread(f)
303 if util.binary(data):
304 if util.binary(data):
304 continue
305 continue
305 if expand:
306 if expand:
306 parents = ctx.parents()
307 parents = ctx.parents()
307 if lookup:
308 if lookup:
308 ctx = self.linkctx(f, mf[f])
309 ctx = self.linkctx(f, mf[f])
309 elif self.restrict and len(parents) > 1:
310 elif self.restrict and len(parents) > 1:
310 # merge commit
311 # merge commit
311 # in case of conflict f is in modified state during
312 # in case of conflict f is in modified state during
312 # merge, even if f does not differ from f in parent
313 # merge, even if f does not differ from f in parent
313 for p in parents:
314 for p in parents:
314 if f in p and not p[f].cmp(ctx[f]):
315 if f in p and not p[f].cmp(ctx[f]):
315 ctx = p[f].changectx()
316 ctx = p[f].changectx()
316 break
317 break
317 data, found = self.substitute(data, f, ctx, re_kw.subn)
318 data, found = self.substitute(data, f, ctx, re_kw.subn)
318 elif self.restrict:
319 elif self.restrict:
319 found = re_kw.search(data)
320 found = re_kw.search(data)
320 else:
321 else:
321 data, found = _shrinktext(data, re_kw.subn)
322 data, found = _shrinktext(data, re_kw.subn)
322 if found:
323 if found:
323 self.ui.note(msg % f)
324 self.ui.note(msg % f)
324 fp = self.repo.wvfs(f, "wb", atomictemp=True)
325 fp = self.repo.wvfs(f, "wb", atomictemp=True)
325 fp.write(data)
326 fp.write(data)
326 fp.close()
327 fp.close()
327 if kwcmd:
328 if kwcmd:
328 self.repo.dirstate.normal(f)
329 self.repo.dirstate.normal(f)
329 elif self.postcommit:
330 elif self.postcommit:
330 self.repo.dirstate.normallookup(f)
331 self.repo.dirstate.normallookup(f)
331
332
332 def shrink(self, fname, text):
333 def shrink(self, fname, text):
333 '''Returns text with all keyword substitutions removed.'''
334 '''Returns text with all keyword substitutions removed.'''
334 if self.match(fname) and not util.binary(text):
335 if self.match(fname) and not util.binary(text):
335 return _shrinktext(text, self.rekwexp.sub)
336 return _shrinktext(text, self.rekwexp.sub)
336 return text
337 return text
337
338
338 def shrinklines(self, fname, lines):
339 def shrinklines(self, fname, lines):
339 '''Returns lines with keyword substitutions removed.'''
340 '''Returns lines with keyword substitutions removed.'''
340 if self.match(fname):
341 if self.match(fname):
341 text = ''.join(lines)
342 text = ''.join(lines)
342 if not util.binary(text):
343 if not util.binary(text):
343 return _shrinktext(text, self.rekwexp.sub).splitlines(True)
344 return _shrinktext(text, self.rekwexp.sub).splitlines(True)
344 return lines
345 return lines
345
346
346 def wread(self, fname, data):
347 def wread(self, fname, data):
347 '''If in restricted mode returns data read from wdir with
348 '''If in restricted mode returns data read from wdir with
348 keyword substitutions removed.'''
349 keyword substitutions removed.'''
349 if self.restrict:
350 if self.restrict:
350 return self.shrink(fname, data)
351 return self.shrink(fname, data)
351 return data
352 return data
352
353
353 class kwfilelog(filelog.filelog):
354 class kwfilelog(filelog.filelog):
354 '''
355 '''
355 Subclass of filelog to hook into its read, add, cmp methods.
356 Subclass of filelog to hook into its read, add, cmp methods.
356 Keywords are "stored" unexpanded, and processed on reading.
357 Keywords are "stored" unexpanded, and processed on reading.
357 '''
358 '''
358 def __init__(self, opener, kwt, path):
359 def __init__(self, opener, kwt, path):
359 super(kwfilelog, self).__init__(opener, path)
360 super(kwfilelog, self).__init__(opener, path)
360 self.kwt = kwt
361 self.kwt = kwt
361 self.path = path
362 self.path = path
362
363
363 def read(self, node):
364 def read(self, node):
364 '''Expands keywords when reading filelog.'''
365 '''Expands keywords when reading filelog.'''
365 data = super(kwfilelog, self).read(node)
366 data = super(kwfilelog, self).read(node)
366 if self.renamed(node):
367 if self.renamed(node):
367 return data
368 return data
368 return self.kwt.expand(self.path, node, data)
369 return self.kwt.expand(self.path, node, data)
369
370
370 def add(self, text, meta, tr, link, p1=None, p2=None):
371 def add(self, text, meta, tr, link, p1=None, p2=None):
371 '''Removes keyword substitutions when adding to filelog.'''
372 '''Removes keyword substitutions when adding to filelog.'''
372 text = self.kwt.shrink(self.path, text)
373 text = self.kwt.shrink(self.path, text)
373 return super(kwfilelog, self).add(text, meta, tr, link, p1, p2)
374 return super(kwfilelog, self).add(text, meta, tr, link, p1, p2)
374
375
375 def cmp(self, node, text):
376 def cmp(self, node, text):
376 '''Removes keyword substitutions for comparison.'''
377 '''Removes keyword substitutions for comparison.'''
377 text = self.kwt.shrink(self.path, text)
378 text = self.kwt.shrink(self.path, text)
378 return super(kwfilelog, self).cmp(node, text)
379 return super(kwfilelog, self).cmp(node, text)
379
380
380 def _status(ui, repo, wctx, kwt, *pats, **opts):
381 def _status(ui, repo, wctx, kwt, *pats, **opts):
381 '''Bails out if [keyword] configuration is not active.
382 '''Bails out if [keyword] configuration is not active.
382 Returns status of working directory.'''
383 Returns status of working directory.'''
383 if kwt:
384 if kwt:
384 opts = pycompat.byteskwargs(opts)
385 opts = pycompat.byteskwargs(opts)
385 return repo.status(match=scmutil.match(wctx, pats, opts), clean=True,
386 return repo.status(match=scmutil.match(wctx, pats, opts), clean=True,
386 unknown=opts.get('unknown') or opts.get('all'))
387 unknown=opts.get('unknown') or opts.get('all'))
387 if ui.configitems('keyword'):
388 if ui.configitems('keyword'):
388 raise error.Abort(_('[keyword] patterns cannot match'))
389 raise error.Abort(_('[keyword] patterns cannot match'))
389 raise error.Abort(_('no [keyword] patterns configured'))
390 raise error.Abort(_('no [keyword] patterns configured'))
390
391
391 def _kwfwrite(ui, repo, expand, *pats, **opts):
392 def _kwfwrite(ui, repo, expand, *pats, **opts):
392 '''Selects files and passes them to kwtemplater.overwrite.'''
393 '''Selects files and passes them to kwtemplater.overwrite.'''
393 wctx = repo[None]
394 wctx = repo[None]
394 if len(wctx.parents()) > 1:
395 if len(wctx.parents()) > 1:
395 raise error.Abort(_('outstanding uncommitted merge'))
396 raise error.Abort(_('outstanding uncommitted merge'))
396 kwt = getattr(repo, '_keywordkwt', None)
397 kwt = getattr(repo, '_keywordkwt', None)
397 with repo.wlock():
398 with repo.wlock():
398 status = _status(ui, repo, wctx, kwt, *pats, **opts)
399 status = _status(ui, repo, wctx, kwt, *pats, **opts)
399 if status.modified or status.added or status.removed or status.deleted:
400 if status.modified or status.added or status.removed or status.deleted:
400 raise error.Abort(_('outstanding uncommitted changes'))
401 raise error.Abort(_('outstanding uncommitted changes'))
401 kwt.overwrite(wctx, status.clean, True, expand)
402 kwt.overwrite(wctx, status.clean, True, expand)
402
403
403 @command('kwdemo',
404 @command('kwdemo',
404 [('d', 'default', None, _('show default keyword template maps')),
405 [('d', 'default', None, _('show default keyword template maps')),
405 ('f', 'rcfile', '',
406 ('f', 'rcfile', '',
406 _('read maps from rcfile'), _('FILE'))],
407 _('read maps from rcfile'), _('FILE'))],
407 _('hg kwdemo [-d] [-f RCFILE] [TEMPLATEMAP]...'),
408 _('hg kwdemo [-d] [-f RCFILE] [TEMPLATEMAP]...'),
408 optionalrepo=True)
409 optionalrepo=True)
409 def demo(ui, repo, *args, **opts):
410 def demo(ui, repo, *args, **opts):
410 '''print [keywordmaps] configuration and an expansion example
411 '''print [keywordmaps] configuration and an expansion example
411
412
412 Show current, custom, or default keyword template maps and their
413 Show current, custom, or default keyword template maps and their
413 expansions.
414 expansions.
414
415
415 Extend the current configuration by specifying maps as arguments
416 Extend the current configuration by specifying maps as arguments
416 and using -f/--rcfile to source an external hgrc file.
417 and using -f/--rcfile to source an external hgrc file.
417
418
418 Use -d/--default to disable current configuration.
419 Use -d/--default to disable current configuration.
419
420
420 See :hg:`help templates` for information on templates and filters.
421 See :hg:`help templates` for information on templates and filters.
421 '''
422 '''
422 def demoitems(section, items):
423 def demoitems(section, items):
423 ui.write('[%s]\n' % section)
424 ui.write('[%s]\n' % section)
424 for k, v in sorted(items):
425 for k, v in sorted(items):
425 ui.write('%s = %s\n' % (k, v))
426 ui.write('%s = %s\n' % (k, v))
426
427
427 fn = 'demo.txt'
428 fn = 'demo.txt'
428 tmpdir = tempfile.mkdtemp('', 'kwdemo.')
429 tmpdir = tempfile.mkdtemp('', 'kwdemo.')
429 ui.note(_('creating temporary repository at %s\n') % tmpdir)
430 ui.note(_('creating temporary repository at %s\n') % tmpdir)
430 if repo is None:
431 if repo is None:
431 baseui = ui
432 baseui = ui
432 else:
433 else:
433 baseui = repo.baseui
434 baseui = repo.baseui
434 repo = localrepo.localrepository(baseui, tmpdir, True)
435 repo = localrepo.localrepository(baseui, tmpdir, True)
435 ui.setconfig('keyword', fn, '', 'keyword')
436 ui.setconfig('keyword', fn, '', 'keyword')
436 svn = ui.configbool('keywordset', 'svn')
437 svn = ui.configbool('keywordset', 'svn')
437 # explicitly set keywordset for demo output
438 # explicitly set keywordset for demo output
438 ui.setconfig('keywordset', 'svn', svn, 'keyword')
439 ui.setconfig('keywordset', 'svn', svn, 'keyword')
439
440
440 uikwmaps = ui.configitems('keywordmaps')
441 uikwmaps = ui.configitems('keywordmaps')
441 if args or opts.get(r'rcfile'):
442 if args or opts.get(r'rcfile'):
442 ui.status(_('\n\tconfiguration using custom keyword template maps\n'))
443 ui.status(_('\n\tconfiguration using custom keyword template maps\n'))
443 if uikwmaps:
444 if uikwmaps:
444 ui.status(_('\textending current template maps\n'))
445 ui.status(_('\textending current template maps\n'))
445 if opts.get(r'default') or not uikwmaps:
446 if opts.get(r'default') or not uikwmaps:
446 if svn:
447 if svn:
447 ui.status(_('\toverriding default svn keywordset\n'))
448 ui.status(_('\toverriding default svn keywordset\n'))
448 else:
449 else:
449 ui.status(_('\toverriding default cvs keywordset\n'))
450 ui.status(_('\toverriding default cvs keywordset\n'))
450 if opts.get(r'rcfile'):
451 if opts.get(r'rcfile'):
451 ui.readconfig(opts.get('rcfile'))
452 ui.readconfig(opts.get('rcfile'))
452 if args:
453 if args:
453 # simulate hgrc parsing
454 # simulate hgrc parsing
454 rcmaps = '[keywordmaps]\n%s\n' % '\n'.join(args)
455 rcmaps = '[keywordmaps]\n%s\n' % '\n'.join(args)
455 repo.vfs.write('hgrc', rcmaps)
456 repo.vfs.write('hgrc', rcmaps)
456 ui.readconfig(repo.vfs.join('hgrc'))
457 ui.readconfig(repo.vfs.join('hgrc'))
457 kwmaps = dict(ui.configitems('keywordmaps'))
458 kwmaps = dict(ui.configitems('keywordmaps'))
458 elif opts.get(r'default'):
459 elif opts.get(r'default'):
459 if svn:
460 if svn:
460 ui.status(_('\n\tconfiguration using default svn keywordset\n'))
461 ui.status(_('\n\tconfiguration using default svn keywordset\n'))
461 else:
462 else:
462 ui.status(_('\n\tconfiguration using default cvs keywordset\n'))
463 ui.status(_('\n\tconfiguration using default cvs keywordset\n'))
463 kwmaps = _defaultkwmaps(ui)
464 kwmaps = _defaultkwmaps(ui)
464 if uikwmaps:
465 if uikwmaps:
465 ui.status(_('\tdisabling current template maps\n'))
466 ui.status(_('\tdisabling current template maps\n'))
466 for k, v in kwmaps.iteritems():
467 for k, v in kwmaps.iteritems():
467 ui.setconfig('keywordmaps', k, v, 'keyword')
468 ui.setconfig('keywordmaps', k, v, 'keyword')
468 else:
469 else:
469 ui.status(_('\n\tconfiguration using current keyword template maps\n'))
470 ui.status(_('\n\tconfiguration using current keyword template maps\n'))
470 if uikwmaps:
471 if uikwmaps:
471 kwmaps = dict(uikwmaps)
472 kwmaps = dict(uikwmaps)
472 else:
473 else:
473 kwmaps = _defaultkwmaps(ui)
474 kwmaps = _defaultkwmaps(ui)
474
475
475 uisetup(ui)
476 uisetup(ui)
476 reposetup(ui, repo)
477 reposetup(ui, repo)
477 ui.write(('[extensions]\nkeyword =\n'))
478 ui.write(('[extensions]\nkeyword =\n'))
478 demoitems('keyword', ui.configitems('keyword'))
479 demoitems('keyword', ui.configitems('keyword'))
479 demoitems('keywordset', ui.configitems('keywordset'))
480 demoitems('keywordset', ui.configitems('keywordset'))
480 demoitems('keywordmaps', kwmaps.iteritems())
481 demoitems('keywordmaps', kwmaps.iteritems())
481 keywords = '$' + '$\n$'.join(sorted(kwmaps.keys())) + '$\n'
482 keywords = '$' + '$\n$'.join(sorted(kwmaps.keys())) + '$\n'
482 repo.wvfs.write(fn, keywords)
483 repo.wvfs.write(fn, keywords)
483 repo[None].add([fn])
484 repo[None].add([fn])
484 ui.note(_('\nkeywords written to %s:\n') % fn)
485 ui.note(_('\nkeywords written to %s:\n') % fn)
485 ui.note(keywords)
486 ui.note(keywords)
486 with repo.wlock():
487 with repo.wlock():
487 repo.dirstate.setbranch('demobranch')
488 repo.dirstate.setbranch('demobranch')
488 for name, cmd in ui.configitems('hooks'):
489 for name, cmd in ui.configitems('hooks'):
489 if name.split('.', 1)[0].find('commit') > -1:
490 if name.split('.', 1)[0].find('commit') > -1:
490 repo.ui.setconfig('hooks', name, '', 'keyword')
491 repo.ui.setconfig('hooks', name, '', 'keyword')
491 msg = _('hg keyword configuration and expansion example')
492 msg = _('hg keyword configuration and expansion example')
492 ui.note(("hg ci -m '%s'\n" % msg))
493 ui.note(("hg ci -m '%s'\n" % msg))
493 repo.commit(text=msg)
494 repo.commit(text=msg)
494 ui.status(_('\n\tkeywords expanded\n'))
495 ui.status(_('\n\tkeywords expanded\n'))
495 ui.write(repo.wread(fn))
496 ui.write(repo.wread(fn))
496 repo.wvfs.rmtree(repo.root)
497 repo.wvfs.rmtree(repo.root)
497
498
498 @command('kwexpand',
499 @command('kwexpand',
499 cmdutil.walkopts,
500 cmdutil.walkopts,
500 _('hg kwexpand [OPTION]... [FILE]...'),
501 _('hg kwexpand [OPTION]... [FILE]...'),
501 inferrepo=True)
502 inferrepo=True)
502 def expand(ui, repo, *pats, **opts):
503 def expand(ui, repo, *pats, **opts):
503 '''expand keywords in the working directory
504 '''expand keywords in the working directory
504
505
505 Run after (re)enabling keyword expansion.
506 Run after (re)enabling keyword expansion.
506
507
507 kwexpand refuses to run if given files contain local changes.
508 kwexpand refuses to run if given files contain local changes.
508 '''
509 '''
509 # 3rd argument sets expansion to True
510 # 3rd argument sets expansion to True
510 _kwfwrite(ui, repo, True, *pats, **opts)
511 _kwfwrite(ui, repo, True, *pats, **opts)
511
512
512 @command('kwfiles',
513 @command('kwfiles',
513 [('A', 'all', None, _('show keyword status flags of all files')),
514 [('A', 'all', None, _('show keyword status flags of all files')),
514 ('i', 'ignore', None, _('show files excluded from expansion')),
515 ('i', 'ignore', None, _('show files excluded from expansion')),
515 ('u', 'unknown', None, _('only show unknown (not tracked) files')),
516 ('u', 'unknown', None, _('only show unknown (not tracked) files')),
516 ] + cmdutil.walkopts,
517 ] + cmdutil.walkopts,
517 _('hg kwfiles [OPTION]... [FILE]...'),
518 _('hg kwfiles [OPTION]... [FILE]...'),
518 inferrepo=True)
519 inferrepo=True)
519 def files(ui, repo, *pats, **opts):
520 def files(ui, repo, *pats, **opts):
520 '''show files configured for keyword expansion
521 '''show files configured for keyword expansion
521
522
522 List which files in the working directory are matched by the
523 List which files in the working directory are matched by the
523 [keyword] configuration patterns.
524 [keyword] configuration patterns.
524
525
525 Useful to prevent inadvertent keyword expansion and to speed up
526 Useful to prevent inadvertent keyword expansion and to speed up
526 execution by including only files that are actual candidates for
527 execution by including only files that are actual candidates for
527 expansion.
528 expansion.
528
529
529 See :hg:`help keyword` on how to construct patterns both for
530 See :hg:`help keyword` on how to construct patterns both for
530 inclusion and exclusion of files.
531 inclusion and exclusion of files.
531
532
532 With -A/--all and -v/--verbose the codes used to show the status
533 With -A/--all and -v/--verbose the codes used to show the status
533 of files are::
534 of files are::
534
535
535 K = keyword expansion candidate
536 K = keyword expansion candidate
536 k = keyword expansion candidate (not tracked)
537 k = keyword expansion candidate (not tracked)
537 I = ignored
538 I = ignored
538 i = ignored (not tracked)
539 i = ignored (not tracked)
539 '''
540 '''
540 kwt = getattr(repo, '_keywordkwt', None)
541 kwt = getattr(repo, '_keywordkwt', None)
541 wctx = repo[None]
542 wctx = repo[None]
542 status = _status(ui, repo, wctx, kwt, *pats, **opts)
543 status = _status(ui, repo, wctx, kwt, *pats, **opts)
543 if pats:
544 if pats:
544 cwd = repo.getcwd()
545 cwd = repo.getcwd()
545 else:
546 else:
546 cwd = ''
547 cwd = ''
547 files = []
548 files = []
548 opts = pycompat.byteskwargs(opts)
549 opts = pycompat.byteskwargs(opts)
549 if not opts.get('unknown') or opts.get('all'):
550 if not opts.get('unknown') or opts.get('all'):
550 files = sorted(status.modified + status.added + status.clean)
551 files = sorted(status.modified + status.added + status.clean)
551 kwfiles = kwt.iskwfile(files, wctx)
552 kwfiles = kwt.iskwfile(files, wctx)
552 kwdeleted = kwt.iskwfile(status.deleted, wctx)
553 kwdeleted = kwt.iskwfile(status.deleted, wctx)
553 kwunknown = kwt.iskwfile(status.unknown, wctx)
554 kwunknown = kwt.iskwfile(status.unknown, wctx)
554 if not opts.get('ignore') or opts.get('all'):
555 if not opts.get('ignore') or opts.get('all'):
555 showfiles = kwfiles, kwdeleted, kwunknown
556 showfiles = kwfiles, kwdeleted, kwunknown
556 else:
557 else:
557 showfiles = [], [], []
558 showfiles = [], [], []
558 if opts.get('all') or opts.get('ignore'):
559 if opts.get('all') or opts.get('ignore'):
559 showfiles += ([f for f in files if f not in kwfiles],
560 showfiles += ([f for f in files if f not in kwfiles],
560 [f for f in status.unknown if f not in kwunknown])
561 [f for f in status.unknown if f not in kwunknown])
561 kwlabels = 'enabled deleted enabledunknown ignored ignoredunknown'.split()
562 kwlabels = 'enabled deleted enabledunknown ignored ignoredunknown'.split()
562 kwstates = zip(kwlabels, 'K!kIi', showfiles)
563 kwstates = zip(kwlabels, 'K!kIi', showfiles)
563 fm = ui.formatter('kwfiles', opts)
564 fm = ui.formatter('kwfiles', opts)
564 fmt = '%.0s%s\n'
565 fmt = '%.0s%s\n'
565 if opts.get('all') or ui.verbose:
566 if opts.get('all') or ui.verbose:
566 fmt = '%s %s\n'
567 fmt = '%s %s\n'
567 for kwstate, char, filenames in kwstates:
568 for kwstate, char, filenames in kwstates:
568 label = 'kwfiles.' + kwstate
569 label = 'kwfiles.' + kwstate
569 for f in filenames:
570 for f in filenames:
570 fm.startitem()
571 fm.startitem()
571 fm.write('kwstatus path', fmt, char,
572 fm.write('kwstatus path', fmt, char,
572 repo.pathto(f, cwd), label=label)
573 repo.pathto(f, cwd), label=label)
573 fm.end()
574 fm.end()
574
575
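# Editorial sketch (not part of the extension): the kwfiles command above
# drives its output through Mercurial's formatter API
# (ui.formatter / startitem / write / end), which keeps plain, verbose and
# templated output consistent. The helper below shows the same pattern in
# isolation; the topic name 'kwexample' and the sample data are
# illustrative assumptions, not code from the keyword extension.
def _kwformatterexample(ui, opts):
    fm = ui.formatter('kwexample', opts)
    for char, path in (('K', 'a.txt'), ('I', 'b.txt')):
        fm.startitem()
        # '%s %s\n' prints the status character before the path; the
        # '%.0s%s\n' format used above would hide it in non-verbose mode
        fm.write('kwstatus path', '%s %s\n', char, path,
                 label='kwfiles.enabled')
    fm.end()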
575 @command('kwshrink',
576 @command('kwshrink',
576 cmdutil.walkopts,
577 cmdutil.walkopts,
577 _('hg kwshrink [OPTION]... [FILE]...'),
578 _('hg kwshrink [OPTION]... [FILE]...'),
578 inferrepo=True)
579 inferrepo=True)
579 def shrink(ui, repo, *pats, **opts):
580 def shrink(ui, repo, *pats, **opts):
580 '''revert expanded keywords in the working directory
581 '''revert expanded keywords in the working directory
581
582
582 Must be run before changing/disabling active keywords.
583 Must be run before changing/disabling active keywords.
583
584
584 kwshrink refuses to run if given files contain local changes.
585 kwshrink refuses to run if given files contain local changes.
585 '''
586 '''
586 # 3rd argument sets expansion to False
587 # 3rd argument sets expansion to False
587 _kwfwrite(ui, repo, False, *pats, **opts)
588 _kwfwrite(ui, repo, False, *pats, **opts)
588
589
589 # monkeypatches
590 # monkeypatches
590
591
591 def kwpatchfile_init(orig, self, ui, gp, backend, store, eolmode=None):
592 def kwpatchfile_init(orig, self, ui, gp, backend, store, eolmode=None):
592 '''Monkeypatch/wrap patch.patchfile.__init__ to avoid
593 '''Monkeypatch/wrap patch.patchfile.__init__ to avoid
593 rejects or conflicts due to expanded keywords in working dir.'''
594 rejects or conflicts due to expanded keywords in working dir.'''
594 orig(self, ui, gp, backend, store, eolmode)
595 orig(self, ui, gp, backend, store, eolmode)
595 kwt = getattr(getattr(backend, 'repo', None), '_keywordkwt', None)
596 kwt = getattr(getattr(backend, 'repo', None), '_keywordkwt', None)
596 if kwt:
597 if kwt:
597 # shrink keywords read from working dir
598 # shrink keywords read from working dir
598 self.lines = kwt.shrinklines(self.fname, self.lines)
599 self.lines = kwt.shrinklines(self.fname, self.lines)
599
600
600 def kwdiff(orig, repo, *args, **kwargs):
601 def kwdiff(orig, repo, *args, **kwargs):
601 '''Monkeypatch patch.diff to avoid expansion.'''
602 '''Monkeypatch patch.diff to avoid expansion.'''
602 kwt = getattr(repo, '_keywordkwt', None)
603 kwt = getattr(repo, '_keywordkwt', None)
603 if kwt:
604 if kwt:
604 restrict = kwt.restrict
605 restrict = kwt.restrict
605 kwt.restrict = True
606 kwt.restrict = True
606 try:
607 try:
607 for chunk in orig(repo, *args, **kwargs):
608 for chunk in orig(repo, *args, **kwargs):
608 yield chunk
609 yield chunk
609 finally:
610 finally:
610 if kwt:
611 if kwt:
611 kwt.restrict = restrict
612 kwt.restrict = restrict
612
613
613 def kwweb_skip(orig, web, req, tmpl):
614 def kwweb_skip(orig, web, req, tmpl):
614 '''Wraps webcommands.x turning off keyword expansion.'''
615 '''Wraps webcommands.x turning off keyword expansion.'''
615 kwt = getattr(web.repo, '_keywordkwt', None)
616 kwt = getattr(web.repo, '_keywordkwt', None)
616 if kwt:
617 if kwt:
617 origmatch = kwt.match
618 origmatch = kwt.match
618 kwt.match = util.never
619 kwt.match = util.never
619 try:
620 try:
620 for chunk in orig(web, req, tmpl):
621 for chunk in orig(web, req, tmpl):
621 yield chunk
622 yield chunk
622 finally:
623 finally:
623 if kwt:
624 if kwt:
624 kwt.match = origmatch
625 kwt.match = origmatch
625
626
626 def kw_amend(orig, ui, repo, old, extra, pats, opts):
627 def kw_amend(orig, ui, repo, old, extra, pats, opts):
627 '''Wraps cmdutil.amend expanding keywords after amend.'''
628 '''Wraps cmdutil.amend expanding keywords after amend.'''
628 kwt = getattr(repo, '_keywordkwt', None)
629 kwt = getattr(repo, '_keywordkwt', None)
629 if kwt is None:
630 if kwt is None:
630 return orig(ui, repo, old, extra, pats, opts)
631 return orig(ui, repo, old, extra, pats, opts)
631 with repo.wlock():
632 with repo.wlock():
632 kwt.postcommit = True
633 kwt.postcommit = True
633 newid = orig(ui, repo, old, extra, pats, opts)
634 newid = orig(ui, repo, old, extra, pats, opts)
634 if newid != old.node():
635 if newid != old.node():
635 ctx = repo[newid]
636 ctx = repo[newid]
636 kwt.restrict = True
637 kwt.restrict = True
637 kwt.overwrite(ctx, ctx.files(), False, True)
638 kwt.overwrite(ctx, ctx.files(), False, True)
638 kwt.restrict = False
639 kwt.restrict = False
639 return newid
640 return newid
640
641
641 def kw_copy(orig, ui, repo, pats, opts, rename=False):
642 def kw_copy(orig, ui, repo, pats, opts, rename=False):
642 '''Wraps cmdutil.copy so that copy/rename destinations do not
643 '''Wraps cmdutil.copy so that copy/rename destinations do not
643 contain expanded keywords.
644 contain expanded keywords.
644 Note that the source of a regular file destination may also be a
645 Note that the source of a regular file destination may also be a
645 symlink:
646 symlink:
646 hg cp sym x -> x is symlink
647 hg cp sym x -> x is symlink
647 cp sym x; hg cp -A sym x -> x is file (maybe expanded keywords)
648 cp sym x; hg cp -A sym x -> x is file (maybe expanded keywords)
648 For the latter we have to follow the symlink to find out whether its
649 For the latter we have to follow the symlink to find out whether its
649 target is configured for expansion and we therefore must unexpand the
650 target is configured for expansion and we therefore must unexpand the
650 keywords in the destination.'''
651 keywords in the destination.'''
651 kwt = getattr(repo, '_keywordkwt', None)
652 kwt = getattr(repo, '_keywordkwt', None)
652 if kwt is None:
653 if kwt is None:
653 return orig(ui, repo, pats, opts, rename)
654 return orig(ui, repo, pats, opts, rename)
654 with repo.wlock():
655 with repo.wlock():
655 orig(ui, repo, pats, opts, rename)
656 orig(ui, repo, pats, opts, rename)
656 if opts.get('dry_run'):
657 if opts.get('dry_run'):
657 return
658 return
658 wctx = repo[None]
659 wctx = repo[None]
659 cwd = repo.getcwd()
660 cwd = repo.getcwd()
660
661
661 def haskwsource(dest):
662 def haskwsource(dest):
662 '''Returns true if dest is a regular file and configured for
663 '''Returns true if dest is a regular file and configured for
663 expansion or a symlink which points to a file configured for
664 expansion or a symlink which points to a file configured for
664 expansion. '''
665 expansion. '''
665 source = repo.dirstate.copied(dest)
666 source = repo.dirstate.copied(dest)
666 if 'l' in wctx.flags(source):
667 if 'l' in wctx.flags(source):
667 source = pathutil.canonpath(repo.root, cwd,
668 source = pathutil.canonpath(repo.root, cwd,
668 os.path.realpath(source))
669 os.path.realpath(source))
669 return kwt.match(source)
670 return kwt.match(source)
670
671
671 candidates = [f for f in repo.dirstate.copies() if
672 candidates = [f for f in repo.dirstate.copies() if
672 'l' not in wctx.flags(f) and haskwsource(f)]
673 'l' not in wctx.flags(f) and haskwsource(f)]
673 kwt.overwrite(wctx, candidates, False, False)
674 kwt.overwrite(wctx, candidates, False, False)
674
675
675 def kw_dorecord(orig, ui, repo, commitfunc, *pats, **opts):
676 def kw_dorecord(orig, ui, repo, commitfunc, *pats, **opts):
676 '''Wraps record.dorecord expanding keywords after recording.'''
677 '''Wraps record.dorecord expanding keywords after recording.'''
677 kwt = getattr(repo, '_keywordkwt', None)
678 kwt = getattr(repo, '_keywordkwt', None)
678 if kwt is None:
679 if kwt is None:
679 return orig(ui, repo, commitfunc, *pats, **opts)
680 return orig(ui, repo, commitfunc, *pats, **opts)
680 with repo.wlock():
681 with repo.wlock():
681 # record returns 0 even when nothing has changed
682 # record returns 0 even when nothing has changed
682 # therefore compare nodes before and after
683 # therefore compare nodes before and after
683 kwt.postcommit = True
684 kwt.postcommit = True
684 ctx = repo['.']
685 ctx = repo['.']
685 wstatus = ctx.status()
686 wstatus = ctx.status()
686 ret = orig(ui, repo, commitfunc, *pats, **opts)
687 ret = orig(ui, repo, commitfunc, *pats, **opts)
687 recctx = repo['.']
688 recctx = repo['.']
688 if ctx != recctx:
689 if ctx != recctx:
689 modified, added = _preselect(wstatus, recctx.files())
690 modified, added = _preselect(wstatus, recctx.files())
690 kwt.restrict = False
691 kwt.restrict = False
691 kwt.overwrite(recctx, modified, False, True)
692 kwt.overwrite(recctx, modified, False, True)
692 kwt.overwrite(recctx, added, False, True, True)
693 kwt.overwrite(recctx, added, False, True, True)
693 kwt.restrict = True
694 kwt.restrict = True
694 return ret
695 return ret
695
696
696 def kwfilectx_cmp(orig, self, fctx):
697 def kwfilectx_cmp(orig, self, fctx):
697 if fctx._customcmp:
698 if fctx._customcmp:
698 return fctx.cmp(self)
699 return fctx.cmp(self)
699 kwt = getattr(self._repo, '_keywordkwt', None)
700 kwt = getattr(self._repo, '_keywordkwt', None)
700 if kwt is None:
701 if kwt is None:
701 return orig(self, fctx)
702 return orig(self, fctx)
702 # keyword affects data size, comparing wdir and filelog size does
703 # keyword affects data size, comparing wdir and filelog size does
703 # not make sense
704 # not make sense
704 if (fctx._filenode is None and
705 if (fctx._filenode is None and
705 (self._repo._encodefilterpats or
706 (self._repo._encodefilterpats or
706 kwt.match(fctx.path()) and 'l' not in fctx.flags() or
707 kwt.match(fctx.path()) and 'l' not in fctx.flags() or
707 self.size() - 4 == fctx.size()) or
708 self.size() - 4 == fctx.size()) or
708 self.size() == fctx.size()):
709 self.size() == fctx.size()):
709 return self._filelog.cmp(self._filenode, fctx.data())
710 return self._filelog.cmp(self._filenode, fctx.data())
710 return True
711 return True
711
712
712 def uisetup(ui):
713 def uisetup(ui):
713 ''' Monkeypatches dispatch._parse to retrieve user command.
714 ''' Monkeypatches dispatch._parse to retrieve user command.
714 Overrides file method to return kwfilelog instead of filelog
715 Overrides file method to return kwfilelog instead of filelog
715 if file matches user configuration.
716 if file matches user configuration.
716 Wraps commit to overwrite configured files with updated
717 Wraps commit to overwrite configured files with updated
717 keyword substitutions.
718 keyword substitutions.
718 Monkeypatches patch and webcommands.'''
719 Monkeypatches patch and webcommands.'''
719
720
720 def kwdispatch_parse(orig, ui, args):
721 def kwdispatch_parse(orig, ui, args):
721 '''Monkeypatch dispatch._parse to obtain running hg command.'''
722 '''Monkeypatch dispatch._parse to obtain running hg command.'''
722 cmd, func, args, options, cmdoptions = orig(ui, args)
723 cmd, func, args, options, cmdoptions = orig(ui, args)
723 kwtools['hgcmd'] = cmd
724 kwtools['hgcmd'] = cmd
724 return cmd, func, args, options, cmdoptions
725 return cmd, func, args, options, cmdoptions
725
726
726 extensions.wrapfunction(dispatch, '_parse', kwdispatch_parse)
727 extensions.wrapfunction(dispatch, '_parse', kwdispatch_parse)
727
728
728 extensions.wrapfunction(context.filectx, 'cmp', kwfilectx_cmp)
729 extensions.wrapfunction(context.filectx, 'cmp', kwfilectx_cmp)
729 extensions.wrapfunction(patch.patchfile, '__init__', kwpatchfile_init)
730 extensions.wrapfunction(patch.patchfile, '__init__', kwpatchfile_init)
730 extensions.wrapfunction(patch, 'diff', kwdiff)
731 extensions.wrapfunction(patch, 'diff', kwdiff)
731 extensions.wrapfunction(cmdutil, 'amend', kw_amend)
732 extensions.wrapfunction(cmdutil, 'amend', kw_amend)
732 extensions.wrapfunction(cmdutil, 'copy', kw_copy)
733 extensions.wrapfunction(cmdutil, 'copy', kw_copy)
733 extensions.wrapfunction(cmdutil, 'dorecord', kw_dorecord)
734 extensions.wrapfunction(cmdutil, 'dorecord', kw_dorecord)
734 for c in nokwwebcommands.split():
735 for c in nokwwebcommands.split():
735 extensions.wrapfunction(webcommands, c, kwweb_skip)
736 extensions.wrapfunction(webcommands, c, kwweb_skip)
736
737
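# Editorial sketch (not part of the extension): uisetup above relies on
# extensions.wrapfunction, which replaces a module or class attribute with
# a wrapper that receives the original callable as its first argument.
# The target module and function name below are hypothetical placeholders.
def _wrapfrobnicate(orig, *args, **kwargs):
    # do any pre-processing here, then delegate to the wrapped original
    return orig(*args, **kwargs)

# extensions.wrapfunction(somemod, 'frobnicate', _wrapfrobnicate)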
737 def reposetup(ui, repo):
738 def reposetup(ui, repo):
738 '''Sets up repo as kwrepo for keyword substitution.'''
739 '''Sets up repo as kwrepo for keyword substitution.'''
739
740
740 try:
741 try:
741 if (not repo.local() or kwtools['hgcmd'] in nokwcommands.split()
742 if (not repo.local() or kwtools['hgcmd'] in nokwcommands.split()
742 or '.hg' in util.splitpath(repo.root)
743 or '.hg' in util.splitpath(repo.root)
743 or repo._url.startswith('bundle:')):
744 or repo._url.startswith('bundle:')):
744 return
745 return
745 except AttributeError:
746 except AttributeError:
746 pass
747 pass
747
748
748 inc, exc = [], ['.hg*']
749 inc, exc = [], ['.hg*']
749 for pat, opt in ui.configitems('keyword'):
750 for pat, opt in ui.configitems('keyword'):
750 if opt != 'ignore':
751 if opt != 'ignore':
751 inc.append(pat)
752 inc.append(pat)
752 else:
753 else:
753 exc.append(pat)
754 exc.append(pat)
754 if not inc:
755 if not inc:
755 return
756 return
756
757
757 kwt = kwtemplater(ui, repo, inc, exc)
758 kwt = kwtemplater(ui, repo, inc, exc)
758
759
759 class kwrepo(repo.__class__):
760 class kwrepo(repo.__class__):
760 def file(self, f):
761 def file(self, f):
761 if f[0] == '/':
762 if f[0] == '/':
762 f = f[1:]
763 f = f[1:]
763 return kwfilelog(self.svfs, kwt, f)
764 return kwfilelog(self.svfs, kwt, f)
764
765
765 def wread(self, filename):
766 def wread(self, filename):
766 data = super(kwrepo, self).wread(filename)
767 data = super(kwrepo, self).wread(filename)
767 return kwt.wread(filename, data)
768 return kwt.wread(filename, data)
768
769
769 def commit(self, *args, **opts):
770 def commit(self, *args, **opts):
770 # use custom commitctx for user commands
771 # use custom commitctx for user commands
771 # other extensions can still wrap repo.commitctx directly
772 # other extensions can still wrap repo.commitctx directly
772 self.commitctx = self.kwcommitctx
773 self.commitctx = self.kwcommitctx
773 try:
774 try:
774 return super(kwrepo, self).commit(*args, **opts)
775 return super(kwrepo, self).commit(*args, **opts)
775 finally:
776 finally:
776 del self.commitctx
777 del self.commitctx
777
778
778 def kwcommitctx(self, ctx, error=False):
779 def kwcommitctx(self, ctx, error=False):
779 n = super(kwrepo, self).commitctx(ctx, error)
780 n = super(kwrepo, self).commitctx(ctx, error)
780 # no lock needed, only called from repo.commit() which already locks
781 # no lock needed, only called from repo.commit() which already locks
781 if not kwt.postcommit:
782 if not kwt.postcommit:
782 restrict = kwt.restrict
783 restrict = kwt.restrict
783 kwt.restrict = True
784 kwt.restrict = True
784 kwt.overwrite(self[n], sorted(ctx.added() + ctx.modified()),
785 kwt.overwrite(self[n], sorted(ctx.added() + ctx.modified()),
785 False, True)
786 False, True)
786 kwt.restrict = restrict
787 kwt.restrict = restrict
787 return n
788 return n
788
789
789 def rollback(self, dryrun=False, force=False):
790 def rollback(self, dryrun=False, force=False):
790 with self.wlock():
791 with self.wlock():
791 origrestrict = kwt.restrict
792 origrestrict = kwt.restrict
792 try:
793 try:
793 if not dryrun:
794 if not dryrun:
794 changed = self['.'].files()
795 changed = self['.'].files()
795 ret = super(kwrepo, self).rollback(dryrun, force)
796 ret = super(kwrepo, self).rollback(dryrun, force)
796 if not dryrun:
797 if not dryrun:
797 ctx = self['.']
798 ctx = self['.']
798 modified, added = _preselect(ctx.status(), changed)
799 modified, added = _preselect(ctx.status(), changed)
799 kwt.restrict = False
800 kwt.restrict = False
800 kwt.overwrite(ctx, modified, True, True)
801 kwt.overwrite(ctx, modified, True, True)
801 kwt.overwrite(ctx, added, True, False)
802 kwt.overwrite(ctx, added, True, False)
802 return ret
803 return ret
803 finally:
804 finally:
804 kwt.restrict = origrestrict
805 kwt.restrict = origrestrict
805
806
806 repo.__class__ = kwrepo
807 repo.__class__ = kwrepo
807 repo._keywordkwt = kwt
808 repo._keywordkwt = kwt
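# Editorial sketch (not part of the extension): reposetup above uses the
# standard extension idiom of swapping in a repository subclass at runtime
# to override selected methods. A minimal version of that pattern follows;
# the ui.note message is an illustrative assumption.
def _reposetupexample(ui, repo):
    if not repo.local():
        return

    class examplerepo(repo.__class__):
        def commit(self, *args, **opts):
            ui.note('example: about to commit\n')
            return super(examplerepo, self).commit(*args, **opts)

    repo.__class__ = examplerepo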
@@ -1,3655 +1,3656 b''
1 # mq.py - patch queues for mercurial
1 # mq.py - patch queues for mercurial
2 #
2 #
3 # Copyright 2005, 2006 Chris Mason <mason@suse.com>
3 # Copyright 2005, 2006 Chris Mason <mason@suse.com>
4 #
4 #
5 # This software may be used and distributed according to the terms of the
5 # This software may be used and distributed according to the terms of the
6 # GNU General Public License version 2 or any later version.
6 # GNU General Public License version 2 or any later version.
7
7
8 '''manage a stack of patches
8 '''manage a stack of patches
9
9
10 This extension lets you work with a stack of patches in a Mercurial
10 This extension lets you work with a stack of patches in a Mercurial
11 repository. It manages two stacks of patches - all known patches, and
11 repository. It manages two stacks of patches - all known patches, and
12 applied patches (subset of known patches).
12 applied patches (subset of known patches).
13
13
14 Known patches are represented as patch files in the .hg/patches
14 Known patches are represented as patch files in the .hg/patches
15 directory. Applied patches are both patch files and changesets.
15 directory. Applied patches are both patch files and changesets.
16
16
17 Common tasks (use :hg:`help COMMAND` for more details)::
17 Common tasks (use :hg:`help COMMAND` for more details)::
18
18
19 create new patch qnew
19 create new patch qnew
20 import existing patch qimport
20 import existing patch qimport
21
21
22 print patch series qseries
22 print patch series qseries
23 print applied patches qapplied
23 print applied patches qapplied
24
24
25 add known patch to applied stack qpush
25 add known patch to applied stack qpush
26 remove patch from applied stack qpop
26 remove patch from applied stack qpop
27 refresh contents of top applied patch qrefresh
27 refresh contents of top applied patch qrefresh
28
28
29 By default, mq will automatically use git patches when required to
29 By default, mq will automatically use git patches when required to
30 avoid losing file mode changes, copy records, binary files or empty
30 avoid losing file mode changes, copy records, binary files or empty
31 file creations or deletions. This behavior can be configured with::
31 file creations or deletions. This behavior can be configured with::
32
32
33 [mq]
33 [mq]
34 git = auto/keep/yes/no
34 git = auto/keep/yes/no
35
35
36 If set to 'keep', mq will obey the [diff] section configuration while
36 If set to 'keep', mq will obey the [diff] section configuration while
37 preserving existing git patches upon qrefresh. If set to 'yes' or
37 preserving existing git patches upon qrefresh. If set to 'yes' or
38 'no', mq will override the [diff] section and always generate git or
38 'no', mq will override the [diff] section and always generate git or
39 regular patches, possibly losing data in the second case.
39 regular patches, possibly losing data in the second case.
40
40
41 It may be desirable for mq changesets to be kept in the secret phase (see
41 It may be desirable for mq changesets to be kept in the secret phase (see
42 :hg:`help phases`), which can be enabled with the following setting::
42 :hg:`help phases`), which can be enabled with the following setting::
43
43
44 [mq]
44 [mq]
45 secret = True
45 secret = True
46
46
47 You will by default be managing a patch queue named "patches". You can
47 You will by default be managing a patch queue named "patches". You can
48 create other, independent patch queues with the :hg:`qqueue` command.
48 create other, independent patch queues with the :hg:`qqueue` command.
49
49
50 If the working directory contains uncommitted files, qpush, qpop and
50 If the working directory contains uncommitted files, qpush, qpop and
51 qgoto abort immediately. If -f/--force is used, the changes are
51 qgoto abort immediately. If -f/--force is used, the changes are
52 discarded. Setting::
52 discarded. Setting::
53
53
54 [mq]
54 [mq]
55 keepchanges = True
55 keepchanges = True
56
56
57 makes them behave as if --keep-changes were passed, and non-conflicting
57 makes them behave as if --keep-changes were passed, and non-conflicting
58 local changes will be tolerated and preserved. If incompatible options
58 local changes will be tolerated and preserved. If incompatible options
59 such as -f/--force or --exact are passed, this setting is ignored.
59 such as -f/--force or --exact are passed, this setting is ignored.
60
60
61 This extension used to provide a strip command. This command now lives
61 This extension used to provide a strip command. This command now lives
62 in the strip extension.
62 in the strip extension.
63 '''
63 '''
64
64
65 from __future__ import absolute_import, print_function
65 from __future__ import absolute_import, print_function
66
66
67 import errno
67 import errno
68 import os
68 import os
69 import re
69 import re
70 import shutil
70 import shutil
71 from mercurial.i18n import _
71 from mercurial.i18n import _
72 from mercurial.node import (
72 from mercurial.node import (
73 bin,
73 bin,
74 hex,
74 hex,
75 nullid,
75 nullid,
76 nullrev,
76 nullrev,
77 short,
77 short,
78 )
78 )
79 from mercurial import (
79 from mercurial import (
80 cmdutil,
80 cmdutil,
81 commands,
81 commands,
82 dirstateguard,
82 dirstateguard,
83 encoding,
83 encoding,
84 error,
84 error,
85 extensions,
85 extensions,
86 hg,
86 hg,
87 localrepo,
87 localrepo,
88 lock as lockmod,
88 lock as lockmod,
89 logcmdutil,
89 patch as patchmod,
90 patch as patchmod,
90 phases,
91 phases,
91 pycompat,
92 pycompat,
92 registrar,
93 registrar,
93 revsetlang,
94 revsetlang,
94 scmutil,
95 scmutil,
95 smartset,
96 smartset,
96 subrepo,
97 subrepo,
97 util,
98 util,
98 vfs as vfsmod,
99 vfs as vfsmod,
99 )
100 )
100
101
101 release = lockmod.release
102 release = lockmod.release
102 seriesopts = [('s', 'summary', None, _('print first line of patch header'))]
103 seriesopts = [('s', 'summary', None, _('print first line of patch header'))]
103
104
104 cmdtable = {}
105 cmdtable = {}
105 command = registrar.command(cmdtable)
106 command = registrar.command(cmdtable)
106 # Note for extension authors: ONLY specify testedwith = 'ships-with-hg-core' for
107 # Note for extension authors: ONLY specify testedwith = 'ships-with-hg-core' for
107 # extensions which SHIP WITH MERCURIAL. Non-mainline extensions should
108 # extensions which SHIP WITH MERCURIAL. Non-mainline extensions should
108 # be specifying the version(s) of Mercurial they are tested with, or
109 # be specifying the version(s) of Mercurial they are tested with, or
109 # leave the attribute unspecified.
110 # leave the attribute unspecified.
110 testedwith = 'ships-with-hg-core'
111 testedwith = 'ships-with-hg-core'
111
112
112 configtable = {}
113 configtable = {}
113 configitem = registrar.configitem(configtable)
114 configitem = registrar.configitem(configtable)
114
115
115 configitem('mq', 'git',
116 configitem('mq', 'git',
116 default='auto',
117 default='auto',
117 )
118 )
118 configitem('mq', 'keepchanges',
119 configitem('mq', 'keepchanges',
119 default=False,
120 default=False,
120 )
121 )
121 configitem('mq', 'plain',
122 configitem('mq', 'plain',
122 default=False,
123 default=False,
123 )
124 )
124 configitem('mq', 'secret',
125 configitem('mq', 'secret',
125 default=False,
126 default=False,
126 )
127 )
127
128
128 # force load strip extension formerly included in mq and import some utility
129 # force load strip extension formerly included in mq and import some utility
129 try:
130 try:
130 stripext = extensions.find('strip')
131 stripext = extensions.find('strip')
131 except KeyError:
132 except KeyError:
132 # note: load is lazy so we could avoid the try-except,
133 # note: load is lazy so we could avoid the try-except,
133 # but I (marmoute) prefer this explicit code.
134 # but I (marmoute) prefer this explicit code.
134 class dummyui(object):
135 class dummyui(object):
135 def debug(self, msg):
136 def debug(self, msg):
136 pass
137 pass
137 stripext = extensions.load(dummyui(), 'strip', '')
138 stripext = extensions.load(dummyui(), 'strip', '')
138
139
139 strip = stripext.strip
140 strip = stripext.strip
140 checksubstate = stripext.checksubstate
141 checksubstate = stripext.checksubstate
141 checklocalchanges = stripext.checklocalchanges
142 checklocalchanges = stripext.checklocalchanges
142
143
143
144
144 # Patch names look like unix-file names.
145 # Patch names look like unix-file names.
145 # They must be joinable with the queue directory and result in the patch path.
146 # They must be joinable with the queue directory and result in the patch path.
146 normname = util.normpath
147 normname = util.normpath
147
148
148 class statusentry(object):
149 class statusentry(object):
149 def __init__(self, node, name):
150 def __init__(self, node, name):
150 self.node, self.name = node, name
151 self.node, self.name = node, name
151
152
152 def __bytes__(self):
153 def __bytes__(self):
153 return hex(self.node) + ':' + self.name
154 return hex(self.node) + ':' + self.name
154
155
155 __str__ = encoding.strmethod(__bytes__)
156 __str__ = encoding.strmethod(__bytes__)
156 __repr__ = encoding.strmethod(__bytes__)
157 __repr__ = encoding.strmethod(__bytes__)
157
158
158 # The order of the headers in 'hg export' HG patches:
159 # The order of the headers in 'hg export' HG patches:
159 HGHEADERS = [
160 HGHEADERS = [
160 # '# HG changeset patch',
161 # '# HG changeset patch',
161 '# User ',
162 '# User ',
162 '# Date ',
163 '# Date ',
163 '# ',
164 '# ',
164 '# Branch ',
165 '# Branch ',
165 '# Node ID ',
166 '# Node ID ',
166 '# Parent ', # can occur twice for merges - but that is not relevant for mq
167 '# Parent ', # can occur twice for merges - but that is not relevant for mq
167 ]
168 ]
168 # The order of headers in plain 'mail style' patches:
169 # The order of headers in plain 'mail style' patches:
169 PLAINHEADERS = {
170 PLAINHEADERS = {
170 'from': 0,
171 'from': 0,
171 'date': 1,
172 'date': 1,
172 'subject': 2,
173 'subject': 2,
173 }
174 }
174
175
175 def inserthgheader(lines, header, value):
176 def inserthgheader(lines, header, value):
176 """Assuming lines contains a HG patch header, add a header line with value.
177 """Assuming lines contains a HG patch header, add a header line with value.
177 >>> try: inserthgheader([], b'# Date ', b'z')
178 >>> try: inserthgheader([], b'# Date ', b'z')
178 ... except ValueError as inst: print("oops")
179 ... except ValueError as inst: print("oops")
179 oops
180 oops
180 >>> inserthgheader([b'# HG changeset patch'], b'# Date ', b'z')
181 >>> inserthgheader([b'# HG changeset patch'], b'# Date ', b'z')
181 ['# HG changeset patch', '# Date z']
182 ['# HG changeset patch', '# Date z']
182 >>> inserthgheader([b'# HG changeset patch', b''], b'# Date ', b'z')
183 >>> inserthgheader([b'# HG changeset patch', b''], b'# Date ', b'z')
183 ['# HG changeset patch', '# Date z', '']
184 ['# HG changeset patch', '# Date z', '']
184 >>> inserthgheader([b'# HG changeset patch', b'# User y'], b'# Date ', b'z')
185 >>> inserthgheader([b'# HG changeset patch', b'# User y'], b'# Date ', b'z')
185 ['# HG changeset patch', '# User y', '# Date z']
186 ['# HG changeset patch', '# User y', '# Date z']
186 >>> inserthgheader([b'# HG changeset patch', b'# Date x', b'# User y'],
187 >>> inserthgheader([b'# HG changeset patch', b'# Date x', b'# User y'],
187 ... b'# User ', b'z')
188 ... b'# User ', b'z')
188 ['# HG changeset patch', '# Date x', '# User z']
189 ['# HG changeset patch', '# Date x', '# User z']
189 >>> inserthgheader([b'# HG changeset patch', b'# Date y'], b'# Date ', b'z')
190 >>> inserthgheader([b'# HG changeset patch', b'# Date y'], b'# Date ', b'z')
190 ['# HG changeset patch', '# Date z']
191 ['# HG changeset patch', '# Date z']
191 >>> inserthgheader([b'# HG changeset patch', b'', b'# Date y'],
192 >>> inserthgheader([b'# HG changeset patch', b'', b'# Date y'],
192 ... b'# Date ', b'z')
193 ... b'# Date ', b'z')
193 ['# HG changeset patch', '# Date z', '', '# Date y']
194 ['# HG changeset patch', '# Date z', '', '# Date y']
194 >>> inserthgheader([b'# HG changeset patch', b'# Parent y'],
195 >>> inserthgheader([b'# HG changeset patch', b'# Parent y'],
195 ... b'# Date ', b'z')
196 ... b'# Date ', b'z')
196 ['# HG changeset patch', '# Date z', '# Parent y']
197 ['# HG changeset patch', '# Date z', '# Parent y']
197 """
198 """
198 start = lines.index('# HG changeset patch') + 1
199 start = lines.index('# HG changeset patch') + 1
199 newindex = HGHEADERS.index(header)
200 newindex = HGHEADERS.index(header)
200 bestpos = len(lines)
201 bestpos = len(lines)
201 for i in range(start, len(lines)):
202 for i in range(start, len(lines)):
202 line = lines[i]
203 line = lines[i]
203 if not line.startswith('# '):
204 if not line.startswith('# '):
204 bestpos = min(bestpos, i)
205 bestpos = min(bestpos, i)
205 break
206 break
206 for lineindex, h in enumerate(HGHEADERS):
207 for lineindex, h in enumerate(HGHEADERS):
207 if line.startswith(h):
208 if line.startswith(h):
208 if lineindex == newindex:
209 if lineindex == newindex:
209 lines[i] = header + value
210 lines[i] = header + value
210 return lines
211 return lines
211 if lineindex > newindex:
212 if lineindex > newindex:
212 bestpos = min(bestpos, i)
213 bestpos = min(bestpos, i)
213 break # next line
214 break # next line
214 lines.insert(bestpos, header + value)
215 lines.insert(bestpos, header + value)
215 return lines
216 return lines
216
217
217 def insertplainheader(lines, header, value):
218 def insertplainheader(lines, header, value):
218 """For lines containing a plain patch header, add a header line with value.
219 """For lines containing a plain patch header, add a header line with value.
219 >>> insertplainheader([], b'Date', b'z')
220 >>> insertplainheader([], b'Date', b'z')
220 ['Date: z']
221 ['Date: z']
221 >>> insertplainheader([b''], b'Date', b'z')
222 >>> insertplainheader([b''], b'Date', b'z')
222 ['Date: z', '']
223 ['Date: z', '']
223 >>> insertplainheader([b'x'], b'Date', b'z')
224 >>> insertplainheader([b'x'], b'Date', b'z')
224 ['Date: z', '', 'x']
225 ['Date: z', '', 'x']
225 >>> insertplainheader([b'From: y', b'x'], b'Date', b'z')
226 >>> insertplainheader([b'From: y', b'x'], b'Date', b'z')
226 ['From: y', 'Date: z', '', 'x']
227 ['From: y', 'Date: z', '', 'x']
227 >>> insertplainheader([b' date : x', b' from : y', b''], b'From', b'z')
228 >>> insertplainheader([b' date : x', b' from : y', b''], b'From', b'z')
228 [' date : x', 'From: z', '']
229 [' date : x', 'From: z', '']
229 >>> insertplainheader([b'', b'Date: y'], b'Date', b'z')
230 >>> insertplainheader([b'', b'Date: y'], b'Date', b'z')
230 ['Date: z', '', 'Date: y']
231 ['Date: z', '', 'Date: y']
231 >>> insertplainheader([b'foo: bar', b'DATE: z', b'x'], b'From', b'y')
232 >>> insertplainheader([b'foo: bar', b'DATE: z', b'x'], b'From', b'y')
232 ['From: y', 'foo: bar', 'DATE: z', '', 'x']
233 ['From: y', 'foo: bar', 'DATE: z', '', 'x']
233 """
234 """
234 newprio = PLAINHEADERS[header.lower()]
235 newprio = PLAINHEADERS[header.lower()]
235 bestpos = len(lines)
236 bestpos = len(lines)
236 for i, line in enumerate(lines):
237 for i, line in enumerate(lines):
237 if ':' in line:
238 if ':' in line:
238 lheader = line.split(':', 1)[0].strip().lower()
239 lheader = line.split(':', 1)[0].strip().lower()
239 lprio = PLAINHEADERS.get(lheader, newprio + 1)
240 lprio = PLAINHEADERS.get(lheader, newprio + 1)
240 if lprio == newprio:
241 if lprio == newprio:
241 lines[i] = '%s: %s' % (header, value)
242 lines[i] = '%s: %s' % (header, value)
242 return lines
243 return lines
243 if lprio > newprio and i < bestpos:
244 if lprio > newprio and i < bestpos:
244 bestpos = i
245 bestpos = i
245 else:
246 else:
246 if line:
247 if line:
247 lines.insert(i, '')
248 lines.insert(i, '')
248 if i < bestpos:
249 if i < bestpos:
249 bestpos = i
250 bestpos = i
250 break
251 break
251 lines.insert(bestpos, '%s: %s' % (header, value))
252 lines.insert(bestpos, '%s: %s' % (header, value))
252 return lines
253 return lines
253
254
254 class patchheader(object):
255 class patchheader(object):
255 def __init__(self, pf, plainmode=False):
256 def __init__(self, pf, plainmode=False):
256 def eatdiff(lines):
257 def eatdiff(lines):
257 while lines:
258 while lines:
258 l = lines[-1]
259 l = lines[-1]
259 if (l.startswith("diff -") or
260 if (l.startswith("diff -") or
260 l.startswith("Index:") or
261 l.startswith("Index:") or
261 l.startswith("===========")):
262 l.startswith("===========")):
262 del lines[-1]
263 del lines[-1]
263 else:
264 else:
264 break
265 break
265 def eatempty(lines):
266 def eatempty(lines):
266 while lines:
267 while lines:
267 if not lines[-1].strip():
268 if not lines[-1].strip():
268 del lines[-1]
269 del lines[-1]
269 else:
270 else:
270 break
271 break
271
272
272 message = []
273 message = []
273 comments = []
274 comments = []
274 user = None
275 user = None
275 date = None
276 date = None
276 parent = None
277 parent = None
277 format = None
278 format = None
278 subject = None
279 subject = None
279 branch = None
280 branch = None
280 nodeid = None
281 nodeid = None
281 diffstart = 0
282 diffstart = 0
282
283
283 for line in file(pf):
284 for line in file(pf):
284 line = line.rstrip()
285 line = line.rstrip()
285 if (line.startswith('diff --git')
286 if (line.startswith('diff --git')
286 or (diffstart and line.startswith('+++ '))):
287 or (diffstart and line.startswith('+++ '))):
287 diffstart = 2
288 diffstart = 2
288 break
289 break
289 diffstart = 0 # reset
290 diffstart = 0 # reset
290 if line.startswith("--- "):
291 if line.startswith("--- "):
291 diffstart = 1
292 diffstart = 1
292 continue
293 continue
293 elif format == "hgpatch":
294 elif format == "hgpatch":
294 # parse values when importing the result of an hg export
295 # parse values when importing the result of an hg export
295 if line.startswith("# User "):
296 if line.startswith("# User "):
296 user = line[7:]
297 user = line[7:]
297 elif line.startswith("# Date "):
298 elif line.startswith("# Date "):
298 date = line[7:]
299 date = line[7:]
299 elif line.startswith("# Parent "):
300 elif line.startswith("# Parent "):
300 parent = line[9:].lstrip() # handle double trailing space
301 parent = line[9:].lstrip() # handle double trailing space
301 elif line.startswith("# Branch "):
302 elif line.startswith("# Branch "):
302 branch = line[9:]
303 branch = line[9:]
303 elif line.startswith("# Node ID "):
304 elif line.startswith("# Node ID "):
304 nodeid = line[10:]
305 nodeid = line[10:]
305 elif not line.startswith("# ") and line:
306 elif not line.startswith("# ") and line:
306 message.append(line)
307 message.append(line)
307 format = None
308 format = None
308 elif line == '# HG changeset patch':
309 elif line == '# HG changeset patch':
309 message = []
310 message = []
310 format = "hgpatch"
311 format = "hgpatch"
311 elif (format != "tagdone" and (line.startswith("Subject: ") or
312 elif (format != "tagdone" and (line.startswith("Subject: ") or
312 line.startswith("subject: "))):
313 line.startswith("subject: "))):
313 subject = line[9:]
314 subject = line[9:]
314 format = "tag"
315 format = "tag"
315 elif (format != "tagdone" and (line.startswith("From: ") or
316 elif (format != "tagdone" and (line.startswith("From: ") or
316 line.startswith("from: "))):
317 line.startswith("from: "))):
317 user = line[6:]
318 user = line[6:]
318 format = "tag"
319 format = "tag"
319 elif (format != "tagdone" and (line.startswith("Date: ") or
320 elif (format != "tagdone" and (line.startswith("Date: ") or
320 line.startswith("date: "))):
321 line.startswith("date: "))):
321 date = line[6:]
322 date = line[6:]
322 format = "tag"
323 format = "tag"
323 elif format == "tag" and line == "":
324 elif format == "tag" and line == "":
324 # when looking for tags (subject: from: etc) they
325 # when looking for tags (subject: from: etc) they
325 # end once you find a blank line in the source
326 # end once you find a blank line in the source
326 format = "tagdone"
327 format = "tagdone"
327 elif message or line:
328 elif message or line:
328 message.append(line)
329 message.append(line)
329 comments.append(line)
330 comments.append(line)
330
331
331 eatdiff(message)
332 eatdiff(message)
332 eatdiff(comments)
333 eatdiff(comments)
333 # Remember the exact starting line of the patch diffs before consuming
334 # Remember the exact starting line of the patch diffs before consuming
334 # empty lines, for external use by TortoiseHg and others
335 # empty lines, for external use by TortoiseHg and others
335 self.diffstartline = len(comments)
336 self.diffstartline = len(comments)
336 eatempty(message)
337 eatempty(message)
337 eatempty(comments)
338 eatempty(comments)
338
339
339 # make sure message isn't empty
340 # make sure message isn't empty
340 if format and format.startswith("tag") and subject:
341 if format and format.startswith("tag") and subject:
341 message.insert(0, subject)
342 message.insert(0, subject)
342
343
343 self.message = message
344 self.message = message
344 self.comments = comments
345 self.comments = comments
345 self.user = user
346 self.user = user
346 self.date = date
347 self.date = date
347 self.parent = parent
348 self.parent = parent
348 # nodeid and branch are for external use by TortoiseHg and others
349 # nodeid and branch are for external use by TortoiseHg and others
349 self.nodeid = nodeid
350 self.nodeid = nodeid
350 self.branch = branch
351 self.branch = branch
351 self.haspatch = diffstart > 1
352 self.haspatch = diffstart > 1
352 self.plainmode = (plainmode or
353 self.plainmode = (plainmode or
353 '# HG changeset patch' not in self.comments and
354 '# HG changeset patch' not in self.comments and
354 any(c.startswith('Date: ') or
355 any(c.startswith('Date: ') or
355 c.startswith('From: ')
356 c.startswith('From: ')
356 for c in self.comments))
357 for c in self.comments))
357
358
358 def setuser(self, user):
359 def setuser(self, user):
359 try:
360 try:
360 inserthgheader(self.comments, '# User ', user)
361 inserthgheader(self.comments, '# User ', user)
361 except ValueError:
362 except ValueError:
362 if self.plainmode:
363 if self.plainmode:
363 insertplainheader(self.comments, 'From', user)
364 insertplainheader(self.comments, 'From', user)
364 else:
365 else:
365 tmp = ['# HG changeset patch', '# User ' + user]
366 tmp = ['# HG changeset patch', '# User ' + user]
366 self.comments = tmp + self.comments
367 self.comments = tmp + self.comments
367 self.user = user
368 self.user = user
368
369
369 def setdate(self, date):
370 def setdate(self, date):
370 try:
371 try:
371 inserthgheader(self.comments, '# Date ', date)
372 inserthgheader(self.comments, '# Date ', date)
372 except ValueError:
373 except ValueError:
373 if self.plainmode:
374 if self.plainmode:
374 insertplainheader(self.comments, 'Date', date)
375 insertplainheader(self.comments, 'Date', date)
375 else:
376 else:
376 tmp = ['# HG changeset patch', '# Date ' + date]
377 tmp = ['# HG changeset patch', '# Date ' + date]
377 self.comments = tmp + self.comments
378 self.comments = tmp + self.comments
378 self.date = date
379 self.date = date
379
380
380 def setparent(self, parent):
381 def setparent(self, parent):
381 try:
382 try:
382 inserthgheader(self.comments, '# Parent ', parent)
383 inserthgheader(self.comments, '# Parent ', parent)
383 except ValueError:
384 except ValueError:
384 if not self.plainmode:
385 if not self.plainmode:
385 tmp = ['# HG changeset patch', '# Parent ' + parent]
386 tmp = ['# HG changeset patch', '# Parent ' + parent]
386 self.comments = tmp + self.comments
387 self.comments = tmp + self.comments
387 self.parent = parent
388 self.parent = parent
388
389
389 def setmessage(self, message):
390 def setmessage(self, message):
390 if self.comments:
391 if self.comments:
391 self._delmsg()
392 self._delmsg()
392 self.message = [message]
393 self.message = [message]
393 if message:
394 if message:
394 if self.plainmode and self.comments and self.comments[-1]:
395 if self.plainmode and self.comments and self.comments[-1]:
395 self.comments.append('')
396 self.comments.append('')
396 self.comments.append(message)
397 self.comments.append(message)
397
398
398 def __str__(self):
399 def __str__(self):
399 s = '\n'.join(self.comments).rstrip()
400 s = '\n'.join(self.comments).rstrip()
400 if not s:
401 if not s:
401 return ''
402 return ''
402 return s + '\n\n'
403 return s + '\n\n'
403
404
404 def _delmsg(self):
405 def _delmsg(self):
405 '''Remove existing message, keeping the rest of the comments fields.
406 '''Remove existing message, keeping the rest of the comments fields.
406 If comments contains 'subject: ', message will prepend
407 If comments contains 'subject: ', message will prepend
407 the field and a blank line.'''
408 the field and a blank line.'''
408 if self.message:
409 if self.message:
409 subj = 'subject: ' + self.message[0].lower()
410 subj = 'subject: ' + self.message[0].lower()
410 for i in xrange(len(self.comments)):
411 for i in xrange(len(self.comments)):
411 if subj == self.comments[i].lower():
412 if subj == self.comments[i].lower():
412 del self.comments[i]
413 del self.comments[i]
413 self.message = self.message[2:]
414 self.message = self.message[2:]
414 break
415 break
415 ci = 0
416 ci = 0
416 for mi in self.message:
417 for mi in self.message:
417 while mi != self.comments[ci]:
418 while mi != self.comments[ci]:
418 ci += 1
419 ci += 1
419 del self.comments[ci]
420 del self.comments[ci]
420
421
421 def newcommit(repo, phase, *args, **kwargs):
422 def newcommit(repo, phase, *args, **kwargs):
422 """helper dedicated to ensure a commit respect mq.secret setting
423 """helper dedicated to ensure a commit respect mq.secret setting
423
424
424 It should be used instead of repo.commit inside the mq source for operations
425 It should be used instead of repo.commit inside the mq source for operations
425 creating new changesets.
426 creating new changesets.
426 """
427 """
427 repo = repo.unfiltered()
428 repo = repo.unfiltered()
428 if phase is None:
429 if phase is None:
429 if repo.ui.configbool('mq', 'secret'):
430 if repo.ui.configbool('mq', 'secret'):
430 phase = phases.secret
431 phase = phases.secret
431 overrides = {('ui', 'allowemptycommit'): True}
432 overrides = {('ui', 'allowemptycommit'): True}
432 if phase is not None:
433 if phase is not None:
433 overrides[('phases', 'new-commit')] = phase
434 overrides[('phases', 'new-commit')] = phase
434 with repo.ui.configoverride(overrides, 'mq'):
435 with repo.ui.configoverride(overrides, 'mq'):
435 repo.ui.setconfig('ui', 'allowemptycommit', True)
436 repo.ui.setconfig('ui', 'allowemptycommit', True)
436 return repo.commit(*args, **kwargs)
437 return repo.commit(*args, **kwargs)
437
438
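# Editorial sketch (not part of the extension): newcommit above scopes its
# tweaks with ui.configoverride, so the phase and allowemptycommit settings
# apply only while the commit runs and are restored afterwards. A minimal
# version of that context-manager pattern; the helper name and commit text
# are illustrative assumptions.
def _secretcommitexample(repo, text):
    overrides = {('phases', 'new-commit'): phases.secret,
                 ('ui', 'allowemptycommit'): True}
    with repo.ui.configoverride(overrides, 'mq'):
        return repo.commit(text=text)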
438 class AbortNoCleanup(error.Abort):
439 class AbortNoCleanup(error.Abort):
439 pass
440 pass
440
441
441 class queue(object):
442 class queue(object):
442 def __init__(self, ui, baseui, path, patchdir=None):
443 def __init__(self, ui, baseui, path, patchdir=None):
443 self.basepath = path
444 self.basepath = path
444 try:
445 try:
445 fh = open(os.path.join(path, 'patches.queue'))
446 fh = open(os.path.join(path, 'patches.queue'))
446 cur = fh.read().rstrip()
447 cur = fh.read().rstrip()
447 fh.close()
448 fh.close()
448 if not cur:
449 if not cur:
449 curpath = os.path.join(path, 'patches')
450 curpath = os.path.join(path, 'patches')
450 else:
451 else:
451 curpath = os.path.join(path, 'patches-' + cur)
452 curpath = os.path.join(path, 'patches-' + cur)
452 except IOError:
453 except IOError:
453 curpath = os.path.join(path, 'patches')
454 curpath = os.path.join(path, 'patches')
454 self.path = patchdir or curpath
455 self.path = patchdir or curpath
455 self.opener = vfsmod.vfs(self.path)
456 self.opener = vfsmod.vfs(self.path)
456 self.ui = ui
457 self.ui = ui
457 self.baseui = baseui
458 self.baseui = baseui
458 self.applieddirty = False
459 self.applieddirty = False
459 self.seriesdirty = False
460 self.seriesdirty = False
460 self.added = []
461 self.added = []
461 self.seriespath = "series"
462 self.seriespath = "series"
462 self.statuspath = "status"
463 self.statuspath = "status"
463 self.guardspath = "guards"
464 self.guardspath = "guards"
464 self.activeguards = None
465 self.activeguards = None
465 self.guardsdirty = False
466 self.guardsdirty = False
466 # Handle mq.git as a bool with extended values
467 # Handle mq.git as a bool with extended values
467 gitmode = ui.config('mq', 'git').lower()
468 gitmode = ui.config('mq', 'git').lower()
468 boolmode = util.parsebool(gitmode)
469 boolmode = util.parsebool(gitmode)
469 if boolmode is not None:
470 if boolmode is not None:
470 if boolmode:
471 if boolmode:
471 gitmode = 'yes'
472 gitmode = 'yes'
472 else:
473 else:
473 gitmode = 'no'
474 gitmode = 'no'
474 self.gitmode = gitmode
475 self.gitmode = gitmode
475 # deprecated config: mq.plain
476 # deprecated config: mq.plain
476 self.plainmode = ui.configbool('mq', 'plain')
477 self.plainmode = ui.configbool('mq', 'plain')
477 self.checkapplied = True
478 self.checkapplied = True
478
479
479 @util.propertycache
480 @util.propertycache
480 def applied(self):
481 def applied(self):
481 def parselines(lines):
482 def parselines(lines):
482 for l in lines:
483 for l in lines:
483 entry = l.split(':', 1)
484 entry = l.split(':', 1)
484 if len(entry) > 1:
485 if len(entry) > 1:
485 n, name = entry
486 n, name = entry
486 yield statusentry(bin(n), name)
487 yield statusentry(bin(n), name)
487 elif l.strip():
488 elif l.strip():
488 self.ui.warn(_('malformed mq status line: %s\n') % entry)
489 self.ui.warn(_('malformed mq status line: %s\n') % entry)
489 # else we ignore empty lines
490 # else we ignore empty lines
490 try:
491 try:
491 lines = self.opener.read(self.statuspath).splitlines()
492 lines = self.opener.read(self.statuspath).splitlines()
492 return list(parselines(lines))
493 return list(parselines(lines))
493 except IOError as e:
494 except IOError as e:
494 if e.errno == errno.ENOENT:
495 if e.errno == errno.ENOENT:
495 return []
496 return []
496 raise
497 raise
497
498
498 @util.propertycache
499 @util.propertycache
499 def fullseries(self):
500 def fullseries(self):
500 try:
501 try:
501 return self.opener.read(self.seriespath).splitlines()
502 return self.opener.read(self.seriespath).splitlines()
502 except IOError as e:
503 except IOError as e:
503 if e.errno == errno.ENOENT:
504 if e.errno == errno.ENOENT:
504 return []
505 return []
505 raise
506 raise
506
507
507 @util.propertycache
508 @util.propertycache
508 def series(self):
509 def series(self):
509 self.parseseries()
510 self.parseseries()
510 return self.series
511 return self.series
511
512
512 @util.propertycache
513 @util.propertycache
513 def seriesguards(self):
514 def seriesguards(self):
514 self.parseseries()
515 self.parseseries()
515 return self.seriesguards
516 return self.seriesguards
516
517
517 def invalidate(self):
518 def invalidate(self):
518 for a in 'applied fullseries series seriesguards'.split():
519 for a in 'applied fullseries series seriesguards'.split():
519 if a in self.__dict__:
520 if a in self.__dict__:
520 delattr(self, a)
521 delattr(self, a)
521 self.applieddirty = False
522 self.applieddirty = False
522 self.seriesdirty = False
523 self.seriesdirty = False
523 self.guardsdirty = False
524 self.guardsdirty = False
524 self.activeguards = None
525 self.activeguards = None
525
526
526 def diffopts(self, opts=None, patchfn=None, plain=False):
527 def diffopts(self, opts=None, patchfn=None, plain=False):
527 """Return diff options tweaked for this mq use, possibly upgrading to
528 """Return diff options tweaked for this mq use, possibly upgrading to
528 git format, and possibly plain and without lossy options."""
529 git format, and possibly plain and without lossy options."""
529 diffopts = patchmod.difffeatureopts(self.ui, opts,
530 diffopts = patchmod.difffeatureopts(self.ui, opts,
530 git=True, whitespace=not plain, formatchanging=not plain)
531 git=True, whitespace=not plain, formatchanging=not plain)
531 if self.gitmode == 'auto':
532 if self.gitmode == 'auto':
532 diffopts.upgrade = True
533 diffopts.upgrade = True
533 elif self.gitmode == 'keep':
534 elif self.gitmode == 'keep':
534 pass
535 pass
535 elif self.gitmode in ('yes', 'no'):
536 elif self.gitmode in ('yes', 'no'):
536 diffopts.git = self.gitmode == 'yes'
537 diffopts.git = self.gitmode == 'yes'
537 else:
538 else:
538 raise error.Abort(_('mq.git option can be auto/keep/yes/no'
539 raise error.Abort(_('mq.git option can be auto/keep/yes/no'
539 ' got %s') % self.gitmode)
540 ' got %s') % self.gitmode)
540 if patchfn:
541 if patchfn:
541 diffopts = self.patchopts(diffopts, patchfn)
542 diffopts = self.patchopts(diffopts, patchfn)
542 return diffopts
543 return diffopts
543
544
544 def patchopts(self, diffopts, *patches):
545 def patchopts(self, diffopts, *patches):
545 """Return a copy of input diff options with git set to true if
546 """Return a copy of input diff options with git set to true if
546 referenced patch is a git patch and should be preserved as such.
547 referenced patch is a git patch and should be preserved as such.
547 """
548 """
548 diffopts = diffopts.copy()
549 diffopts = diffopts.copy()
549 if not diffopts.git and self.gitmode == 'keep':
550 if not diffopts.git and self.gitmode == 'keep':
550 for patchfn in patches:
551 for patchfn in patches:
551 patchf = self.opener(patchfn, 'r')
552 patchf = self.opener(patchfn, 'r')
552 # if the patch was a git patch, refresh it as a git patch
553 # if the patch was a git patch, refresh it as a git patch
553 for line in patchf:
554 for line in patchf:
554 if line.startswith('diff --git'):
555 if line.startswith('diff --git'):
555 diffopts.git = True
556 diffopts.git = True
556 break
557 break
557 patchf.close()
558 patchf.close()
558 return diffopts
559 return diffopts
559
560
560 def join(self, *p):
561 def join(self, *p):
561 return os.path.join(self.path, *p)
562 return os.path.join(self.path, *p)
562
563
563 def findseries(self, patch):
564 def findseries(self, patch):
564 def matchpatch(l):
565 def matchpatch(l):
565 l = l.split('#', 1)[0]
566 l = l.split('#', 1)[0]
566 return l.strip() == patch
567 return l.strip() == patch
567 for index, l in enumerate(self.fullseries):
568 for index, l in enumerate(self.fullseries):
568 if matchpatch(l):
569 if matchpatch(l):
569 return index
570 return index
570 return None
571 return None
571
572
572 guard_re = re.compile(br'\s?#([-+][^-+# \t\r\n\f][^# \t\r\n\f]*)')
573 guard_re = re.compile(br'\s?#([-+][^-+# \t\r\n\f][^# \t\r\n\f]*)')
573
574
574 def parseseries(self):
575 def parseseries(self):
575 self.series = []
576 self.series = []
576 self.seriesguards = []
577 self.seriesguards = []
577 for l in self.fullseries:
578 for l in self.fullseries:
578 h = l.find('#')
579 h = l.find('#')
579 if h == -1:
580 if h == -1:
580 patch = l
581 patch = l
581 comment = ''
582 comment = ''
582 elif h == 0:
583 elif h == 0:
583 continue
584 continue
584 else:
585 else:
585 patch = l[:h]
586 patch = l[:h]
586 comment = l[h:]
587 comment = l[h:]
587 patch = patch.strip()
588 patch = patch.strip()
588 if patch:
589 if patch:
589 if patch in self.series:
590 if patch in self.series:
590 raise error.Abort(_('%s appears more than once in %s') %
591 raise error.Abort(_('%s appears more than once in %s') %
591 (patch, self.join(self.seriespath)))
592 (patch, self.join(self.seriespath)))
592 self.series.append(patch)
593 self.series.append(patch)
593 self.seriesguards.append(self.guard_re.findall(comment))
594 self.seriesguards.append(self.guard_re.findall(comment))
594
595
595 def checkguard(self, guard):
596 def checkguard(self, guard):
596 if not guard:
597 if not guard:
597 return _('guard cannot be an empty string')
598 return _('guard cannot be an empty string')
598 bad_chars = '# \t\r\n\f'
599 bad_chars = '# \t\r\n\f'
599 first = guard[0]
600 first = guard[0]
600 if first in '-+':
601 if first in '-+':
601 return (_('guard %r starts with invalid character: %r') %
602 return (_('guard %r starts with invalid character: %r') %
602 (guard, first))
603 (guard, first))
603 for c in bad_chars:
604 for c in bad_chars:
604 if c in guard:
605 if c in guard:
605 return _('invalid character in guard %r: %r') % (guard, c)
606 return _('invalid character in guard %r: %r') % (guard, c)
606
607
607 def setactive(self, guards):
608 def setactive(self, guards):
608 for guard in guards:
609 for guard in guards:
609 bad = self.checkguard(guard)
610 bad = self.checkguard(guard)
610 if bad:
611 if bad:
611 raise error.Abort(bad)
612 raise error.Abort(bad)
612 guards = sorted(set(guards))
613 guards = sorted(set(guards))
613 self.ui.debug('active guards: %s\n' % ' '.join(guards))
614 self.ui.debug('active guards: %s\n' % ' '.join(guards))
614 self.activeguards = guards
615 self.activeguards = guards
615 self.guardsdirty = True
616 self.guardsdirty = True
616
617
617 def active(self):
618 def active(self):
618 if self.activeguards is None:
619 if self.activeguards is None:
619 self.activeguards = []
620 self.activeguards = []
620 try:
621 try:
621 guards = self.opener.read(self.guardspath).split()
622 guards = self.opener.read(self.guardspath).split()
622 except IOError as err:
623 except IOError as err:
623 if err.errno != errno.ENOENT:
624 if err.errno != errno.ENOENT:
624 raise
625 raise
625 guards = []
626 guards = []
626 for i, guard in enumerate(guards):
627 for i, guard in enumerate(guards):
627 bad = self.checkguard(guard)
628 bad = self.checkguard(guard)
628 if bad:
629 if bad:
629 self.ui.warn('%s:%d: %s\n' %
630 self.ui.warn('%s:%d: %s\n' %
630 (self.join(self.guardspath), i + 1, bad))
631 (self.join(self.guardspath), i + 1, bad))
631 else:
632 else:
632 self.activeguards.append(guard)
633 self.activeguards.append(guard)
633 return self.activeguards
634 return self.activeguards
634
635
635 def setguards(self, idx, guards):
636 def setguards(self, idx, guards):
636 for g in guards:
637 for g in guards:
637 if len(g) < 2:
638 if len(g) < 2:
638 raise error.Abort(_('guard %r too short') % g)
639 raise error.Abort(_('guard %r too short') % g)
639 if g[0] not in '-+':
640 if g[0] not in '-+':
640 raise error.Abort(_('guard %r starts with invalid char') % g)
641 raise error.Abort(_('guard %r starts with invalid char') % g)
641 bad = self.checkguard(g[1:])
642 bad = self.checkguard(g[1:])
642 if bad:
643 if bad:
643 raise error.Abort(bad)
644 raise error.Abort(bad)
644 drop = self.guard_re.sub('', self.fullseries[idx])
645 drop = self.guard_re.sub('', self.fullseries[idx])
645 self.fullseries[idx] = drop + ''.join([' #' + g for g in guards])
646 self.fullseries[idx] = drop + ''.join([' #' + g for g in guards])
646 self.parseseries()
647 self.parseseries()
647 self.seriesdirty = True
648 self.seriesdirty = True
648
649
649 def pushable(self, idx):
650 def pushable(self, idx):
650 if isinstance(idx, str):
651 if isinstance(idx, str):
651 idx = self.series.index(idx)
652 idx = self.series.index(idx)
652 patchguards = self.seriesguards[idx]
653 patchguards = self.seriesguards[idx]
653 if not patchguards:
654 if not patchguards:
654 return True, None
655 return True, None
655 guards = self.active()
656 guards = self.active()
656 exactneg = [g for g in patchguards if g[0] == '-' and g[1:] in guards]
657 exactneg = [g for g in patchguards if g[0] == '-' and g[1:] in guards]
657 if exactneg:
658 if exactneg:
658 return False, repr(exactneg[0])
659 return False, repr(exactneg[0])
659 pos = [g for g in patchguards if g[0] == '+']
660 pos = [g for g in patchguards if g[0] == '+']
660 exactpos = [g for g in pos if g[1:] in guards]
661 exactpos = [g for g in pos if g[1:] in guards]
661 if pos:
662 if pos:
662 if exactpos:
663 if exactpos:
663 return True, repr(exactpos[0])
664 return True, repr(exactpos[0])
664 return False, ' '.join(map(repr, pos))
665 return False, ' '.join(map(repr, pos))
665 return True, ''
666 return True, ''
666
667
667 def explainpushable(self, idx, all_patches=False):
668 def explainpushable(self, idx, all_patches=False):
668 if all_patches:
669 if all_patches:
669 write = self.ui.write
670 write = self.ui.write
670 else:
671 else:
671 write = self.ui.warn
672 write = self.ui.warn
672
673
673 if all_patches or self.ui.verbose:
674 if all_patches or self.ui.verbose:
674 if isinstance(idx, str):
675 if isinstance(idx, str):
675 idx = self.series.index(idx)
676 idx = self.series.index(idx)
676 pushable, why = self.pushable(idx)
677 pushable, why = self.pushable(idx)
677 if all_patches and pushable:
678 if all_patches and pushable:
678 if why is None:
679 if why is None:
679 write(_('allowing %s - no guards in effect\n') %
680 write(_('allowing %s - no guards in effect\n') %
680 self.series[idx])
681 self.series[idx])
681 else:
682 else:
682 if not why:
683 if not why:
683 write(_('allowing %s - no matching negative guards\n') %
684 write(_('allowing %s - no matching negative guards\n') %
684 self.series[idx])
685 self.series[idx])
685 else:
686 else:
686 write(_('allowing %s - guarded by %s\n') %
687 write(_('allowing %s - guarded by %s\n') %
687 (self.series[idx], why))
688 (self.series[idx], why))
688 if not pushable:
689 if not pushable:
689 if why:
690 if why:
690 write(_('skipping %s - guarded by %s\n') %
691 write(_('skipping %s - guarded by %s\n') %
691 (self.series[idx], why))
692 (self.series[idx], why))
692 else:
693 else:
693 write(_('skipping %s - no matching guards\n') %
694 write(_('skipping %s - no matching guards\n') %
694 self.series[idx])
695 self.series[idx])
695
696
696 def savedirty(self):
697 def savedirty(self):
697 def writelist(items, path):
698 def writelist(items, path):
698 fp = self.opener(path, 'wb')
699 fp = self.opener(path, 'wb')
699 for i in items:
700 for i in items:
700 fp.write("%s\n" % i)
701 fp.write("%s\n" % i)
701 fp.close()
702 fp.close()
702 if self.applieddirty:
703 if self.applieddirty:
703 writelist(map(bytes, self.applied), self.statuspath)
704 writelist(map(bytes, self.applied), self.statuspath)
704 self.applieddirty = False
705 self.applieddirty = False
705 if self.seriesdirty:
706 if self.seriesdirty:
706 writelist(self.fullseries, self.seriespath)
707 writelist(self.fullseries, self.seriespath)
707 self.seriesdirty = False
708 self.seriesdirty = False
708 if self.guardsdirty:
709 if self.guardsdirty:
709 writelist(self.activeguards, self.guardspath)
710 writelist(self.activeguards, self.guardspath)
710 self.guardsdirty = False
711 self.guardsdirty = False
711 if self.added:
712 if self.added:
712 qrepo = self.qrepo()
713 qrepo = self.qrepo()
713 if qrepo:
714 if qrepo:
714 qrepo[None].add(f for f in self.added if f not in qrepo[None])
715 qrepo[None].add(f for f in self.added if f not in qrepo[None])
715 self.added = []
716 self.added = []
716
717
717 def removeundo(self, repo):
718 def removeundo(self, repo):
718 undo = repo.sjoin('undo')
719 undo = repo.sjoin('undo')
719 if not os.path.exists(undo):
720 if not os.path.exists(undo):
720 return
721 return
721 try:
722 try:
722 os.unlink(undo)
723 os.unlink(undo)
723 except OSError as inst:
724 except OSError as inst:
724 self.ui.warn(_('error removing undo: %s\n') % str(inst))
725 self.ui.warn(_('error removing undo: %s\n') % str(inst))
725
726
726 def backup(self, repo, files, copy=False):
727 def backup(self, repo, files, copy=False):
727 # backup local changes in --force case
728 # backup local changes in --force case
728 for f in sorted(files):
729 for f in sorted(files):
729 absf = repo.wjoin(f)
730 absf = repo.wjoin(f)
730 if os.path.lexists(absf):
731 if os.path.lexists(absf):
731 self.ui.note(_('saving current version of %s as %s\n') %
732 self.ui.note(_('saving current version of %s as %s\n') %
732 (f, scmutil.origpath(self.ui, repo, f)))
733 (f, scmutil.origpath(self.ui, repo, f)))
733
734
734 absorig = scmutil.origpath(self.ui, repo, absf)
735 absorig = scmutil.origpath(self.ui, repo, absf)
735 if copy:
736 if copy:
736 util.copyfile(absf, absorig)
737 util.copyfile(absf, absorig)
737 else:
738 else:
738 util.rename(absf, absorig)
739 util.rename(absf, absorig)
739
740
740 def printdiff(self, repo, diffopts, node1, node2=None, files=None,
741 def printdiff(self, repo, diffopts, node1, node2=None, files=None,
741 fp=None, changes=None, opts=None):
742 fp=None, changes=None, opts=None):
742 if opts is None:
743 if opts is None:
743 opts = {}
744 opts = {}
744 stat = opts.get('stat')
745 stat = opts.get('stat')
745 m = scmutil.match(repo[node1], files, opts)
746 m = scmutil.match(repo[node1], files, opts)
746 cmdutil.diffordiffstat(self.ui, repo, diffopts, node1, node2, m,
747 logcmdutil.diffordiffstat(self.ui, repo, diffopts, node1, node2, m,
747 changes, stat, fp)
748 changes, stat, fp)
748
749
749 def mergeone(self, repo, mergeq, head, patch, rev, diffopts):
750 def mergeone(self, repo, mergeq, head, patch, rev, diffopts):
750 # first try just applying the patch
751 # first try just applying the patch
751 (err, n) = self.apply(repo, [patch], update_status=False,
752 (err, n) = self.apply(repo, [patch], update_status=False,
752 strict=True, merge=rev)
753 strict=True, merge=rev)
753
754
754 if err == 0:
755 if err == 0:
755 return (err, n)
756 return (err, n)
756
757
757 if n is None:
758 if n is None:
758 raise error.Abort(_("apply failed for patch %s") % patch)
759 raise error.Abort(_("apply failed for patch %s") % patch)
759
760
760 self.ui.warn(_("patch didn't work out, merging %s\n") % patch)
761 self.ui.warn(_("patch didn't work out, merging %s\n") % patch)
761
762
762 # apply failed, strip away that rev and merge.
763 # apply failed, strip away that rev and merge.
763 hg.clean(repo, head)
764 hg.clean(repo, head)
764 strip(self.ui, repo, [n], update=False, backup=False)
765 strip(self.ui, repo, [n], update=False, backup=False)
765
766
766 ctx = repo[rev]
767 ctx = repo[rev]
767 ret = hg.merge(repo, rev)
768 ret = hg.merge(repo, rev)
768 if ret:
769 if ret:
769 raise error.Abort(_("update returned %d") % ret)
770 raise error.Abort(_("update returned %d") % ret)
770 n = newcommit(repo, None, ctx.description(), ctx.user(), force=True)
771 n = newcommit(repo, None, ctx.description(), ctx.user(), force=True)
771 if n is None:
772 if n is None:
772 raise error.Abort(_("repo commit failed"))
773 raise error.Abort(_("repo commit failed"))
773 try:
774 try:
774 ph = patchheader(mergeq.join(patch), self.plainmode)
775 ph = patchheader(mergeq.join(patch), self.plainmode)
775 except Exception:
776 except Exception:
776 raise error.Abort(_("unable to read %s") % patch)
777 raise error.Abort(_("unable to read %s") % patch)
777
778
778 diffopts = self.patchopts(diffopts, patch)
779 diffopts = self.patchopts(diffopts, patch)
779 patchf = self.opener(patch, "w")
780 patchf = self.opener(patch, "w")
780 comments = str(ph)
781 comments = str(ph)
781 if comments:
782 if comments:
782 patchf.write(comments)
783 patchf.write(comments)
783 self.printdiff(repo, diffopts, head, n, fp=patchf)
784 self.printdiff(repo, diffopts, head, n, fp=patchf)
784 patchf.close()
785 patchf.close()
785 self.removeundo(repo)
786 self.removeundo(repo)
786 return (0, n)
787 return (0, n)
787
788
788 def qparents(self, repo, rev=None):
789 def qparents(self, repo, rev=None):
789 """return the mq-handled parent or p1
790 """return the mq-handled parent or p1
790 
791 
791 In some cases where mq ends up being the parent of a merge, the
792 In some cases where mq ends up being the parent of a merge, the
792 appropriate parent may be p2
793 appropriate parent may be p2
793 (e.g. an in-progress merge started with mq disabled).
794 (e.g. an in-progress merge started with mq disabled).
794 
795 
795 If no parent is managed by mq, p1 is returned.
796 If no parent is managed by mq, p1 is returned.
796 """
797 """
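# A short illustration of the cases above (descriptive only):
# - rev is None and the working directory is not a merge: dirstate p1
# - rev is None and a merge is in progress: the last applied patch's node
#   (or None when no patch is applied)
# - rev given and its p2 is an applied mq patch: p2 is preferred over p1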
797 if rev is None:
798 if rev is None:
798 (p1, p2) = repo.dirstate.parents()
799 (p1, p2) = repo.dirstate.parents()
799 if p2 == nullid:
800 if p2 == nullid:
800 return p1
801 return p1
801 if not self.applied:
802 if not self.applied:
802 return None
803 return None
803 return self.applied[-1].node
804 return self.applied[-1].node
804 p1, p2 = repo.changelog.parents(rev)
805 p1, p2 = repo.changelog.parents(rev)
805 if p2 != nullid and p2 in [x.node for x in self.applied]:
806 if p2 != nullid and p2 in [x.node for x in self.applied]:
806 return p2
807 return p2
807 return p1
808 return p1
808
809
809 def mergepatch(self, repo, mergeq, series, diffopts):
810 def mergepatch(self, repo, mergeq, series, diffopts):
810 if not self.applied:
811 if not self.applied:
811 # each of the patches merged in will have two parents. This
812 # each of the patches merged in will have two parents. This
812 # can confuse the qrefresh, qdiff, and strip code because it
813 # can confuse the qrefresh, qdiff, and strip code because it
813 # needs to know which parent is actually in the patch queue.
814 # needs to know which parent is actually in the patch queue.
814 # So, we insert a merge marker with only one parent. This way
815 # So, we insert a merge marker with only one parent. This way
815 # the first patch in the queue is never a merge patch
816 # the first patch in the queue is never a merge patch
816 #
817 #
817 pname = ".hg.patches.merge.marker"
818 pname = ".hg.patches.merge.marker"
818 n = newcommit(repo, None, '[mq]: merge marker', force=True)
819 n = newcommit(repo, None, '[mq]: merge marker', force=True)
819 self.removeundo(repo)
820 self.removeundo(repo)
820 self.applied.append(statusentry(n, pname))
821 self.applied.append(statusentry(n, pname))
821 self.applieddirty = True
822 self.applieddirty = True
822
823
823 head = self.qparents(repo)
824 head = self.qparents(repo)
824
825
825 for patch in series:
826 for patch in series:
826 patch = mergeq.lookup(patch, strict=True)
827 patch = mergeq.lookup(patch, strict=True)
827 if not patch:
828 if not patch:
828 self.ui.warn(_("patch %s does not exist\n") % patch)
829 self.ui.warn(_("patch %s does not exist\n") % patch)
829 return (1, None)
830 return (1, None)
830 pushable, reason = self.pushable(patch)
831 pushable, reason = self.pushable(patch)
831 if not pushable:
832 if not pushable:
832 self.explainpushable(patch, all_patches=True)
833 self.explainpushable(patch, all_patches=True)
833 continue
834 continue
834 info = mergeq.isapplied(patch)
835 info = mergeq.isapplied(patch)
835 if not info:
836 if not info:
836 self.ui.warn(_("patch %s is not applied\n") % patch)
837 self.ui.warn(_("patch %s is not applied\n") % patch)
837 return (1, None)
838 return (1, None)
838 rev = info[1]
839 rev = info[1]
839 err, head = self.mergeone(repo, mergeq, head, patch, rev, diffopts)
840 err, head = self.mergeone(repo, mergeq, head, patch, rev, diffopts)
840 if head:
841 if head:
841 self.applied.append(statusentry(head, patch))
842 self.applied.append(statusentry(head, patch))
842 self.applieddirty = True
843 self.applieddirty = True
843 if err:
844 if err:
844 return (err, head)
845 return (err, head)
845 self.savedirty()
846 self.savedirty()
846 return (0, head)
847 return (0, head)
847
848
848 def patch(self, repo, patchfile):
849 def patch(self, repo, patchfile):
849 '''Apply patchfile to the working directory.
850 '''Apply patchfile to the working directory.
850 patchfile: name of patch file'''
851 patchfile: name of patch file'''
851 files = set()
852 files = set()
852 try:
853 try:
853 fuzz = patchmod.patch(self.ui, repo, patchfile, strip=1,
854 fuzz = patchmod.patch(self.ui, repo, patchfile, strip=1,
854 files=files, eolmode=None)
855 files=files, eolmode=None)
855 return (True, list(files), fuzz)
856 return (True, list(files), fuzz)
856 except Exception as inst:
857 except Exception as inst:
857 self.ui.note(str(inst) + '\n')
858 self.ui.note(str(inst) + '\n')
858 if not self.ui.verbose:
859 if not self.ui.verbose:
859 self.ui.warn(_("patch failed, unable to continue (try -v)\n"))
860 self.ui.warn(_("patch failed, unable to continue (try -v)\n"))
860 self.ui.traceback()
861 self.ui.traceback()
861 return (False, list(files), False)
862 return (False, list(files), False)
862
863
863 def apply(self, repo, series, list=False, update_status=True,
864 def apply(self, repo, series, list=False, update_status=True,
864 strict=False, patchdir=None, merge=None, all_files=None,
865 strict=False, patchdir=None, merge=None, all_files=None,
865 tobackup=None, keepchanges=False):
866 tobackup=None, keepchanges=False):
866 wlock = lock = tr = None
867 wlock = lock = tr = None
867 try:
868 try:
868 wlock = repo.wlock()
869 wlock = repo.wlock()
869 lock = repo.lock()
870 lock = repo.lock()
870 tr = repo.transaction("qpush")
871 tr = repo.transaction("qpush")
871 try:
872 try:
872 ret = self._apply(repo, series, list, update_status,
873 ret = self._apply(repo, series, list, update_status,
873 strict, patchdir, merge, all_files=all_files,
874 strict, patchdir, merge, all_files=all_files,
874 tobackup=tobackup, keepchanges=keepchanges)
875 tobackup=tobackup, keepchanges=keepchanges)
875 tr.close()
876 tr.close()
876 self.savedirty()
877 self.savedirty()
877 return ret
878 return ret
878 except AbortNoCleanup:
879 except AbortNoCleanup:
879 tr.close()
880 tr.close()
880 self.savedirty()
881 self.savedirty()
881 raise
882 raise
882 except: # re-raises
883 except: # re-raises
883 try:
884 try:
884 tr.abort()
885 tr.abort()
885 finally:
886 finally:
886 self.invalidate()
887 self.invalidate()
887 raise
888 raise
888 finally:
889 finally:
889 release(tr, lock, wlock)
890 release(tr, lock, wlock)
890 self.removeundo(repo)
891 self.removeundo(repo)
891
892
892 def _apply(self, repo, series, list=False, update_status=True,
893 def _apply(self, repo, series, list=False, update_status=True,
893 strict=False, patchdir=None, merge=None, all_files=None,
894 strict=False, patchdir=None, merge=None, all_files=None,
894 tobackup=None, keepchanges=False):
895 tobackup=None, keepchanges=False):
895 """returns (error, hash)
896 """returns (error, hash)
896
897
897 error = 1 for unable to read, 2 for patch failed, 3 for patch
898 error = 1 for unable to read, 2 for patch failed, 3 for patch
898 fuzz. tobackup is None or a set of files to backup before they
899 fuzz. tobackup is None or a set of files to backup before they
899 are modified by a patch.
900 are modified by a patch.
900 """
901 """
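# Sketch of how callers typically use the return value (illustrative
# only): err, node = self._apply(repo, series); err == 0 means every
# patch was applied and committed, 1 that a patch file could not be
# read, 2 that a patch failed to apply, and 3 that it applied with fuzz
# while strict mode was requested.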
901 # TODO unify with commands.py
902 # TODO unify with commands.py
902 if not patchdir:
903 if not patchdir:
903 patchdir = self.path
904 patchdir = self.path
904 err = 0
905 err = 0
905 n = None
906 n = None
906 for patchname in series:
907 for patchname in series:
907 pushable, reason = self.pushable(patchname)
908 pushable, reason = self.pushable(patchname)
908 if not pushable:
909 if not pushable:
909 self.explainpushable(patchname, all_patches=True)
910 self.explainpushable(patchname, all_patches=True)
910 continue
911 continue
911 self.ui.status(_("applying %s\n") % patchname)
912 self.ui.status(_("applying %s\n") % patchname)
912 pf = os.path.join(patchdir, patchname)
913 pf = os.path.join(patchdir, patchname)
913
914
914 try:
915 try:
915 ph = patchheader(self.join(patchname), self.plainmode)
916 ph = patchheader(self.join(patchname), self.plainmode)
916 except IOError:
917 except IOError:
917 self.ui.warn(_("unable to read %s\n") % patchname)
918 self.ui.warn(_("unable to read %s\n") % patchname)
918 err = 1
919 err = 1
919 break
920 break
920
921
921 message = ph.message
922 message = ph.message
922 if not message:
923 if not message:
923 # The commit message should not be translated
924 # The commit message should not be translated
924 message = "imported patch %s\n" % patchname
925 message = "imported patch %s\n" % patchname
925 else:
926 else:
926 if list:
927 if list:
927 # The commit message should not be translated
928 # The commit message should not be translated
928 message.append("\nimported patch %s" % patchname)
929 message.append("\nimported patch %s" % patchname)
929 message = '\n'.join(message)
930 message = '\n'.join(message)
930
931
931 if ph.haspatch:
932 if ph.haspatch:
932 if tobackup:
933 if tobackup:
933 touched = patchmod.changedfiles(self.ui, repo, pf)
934 touched = patchmod.changedfiles(self.ui, repo, pf)
934 touched = set(touched) & tobackup
935 touched = set(touched) & tobackup
935 if touched and keepchanges:
936 if touched and keepchanges:
936 raise AbortNoCleanup(
937 raise AbortNoCleanup(
937 _("conflicting local changes found"),
938 _("conflicting local changes found"),
938 hint=_("did you forget to qrefresh?"))
939 hint=_("did you forget to qrefresh?"))
939 self.backup(repo, touched, copy=True)
940 self.backup(repo, touched, copy=True)
940 tobackup = tobackup - touched
941 tobackup = tobackup - touched
941 (patcherr, files, fuzz) = self.patch(repo, pf)
942 (patcherr, files, fuzz) = self.patch(repo, pf)
942 if all_files is not None:
943 if all_files is not None:
943 all_files.update(files)
944 all_files.update(files)
944 patcherr = not patcherr
945 patcherr = not patcherr
945 else:
946 else:
946 self.ui.warn(_("patch %s is empty\n") % patchname)
947 self.ui.warn(_("patch %s is empty\n") % patchname)
947 patcherr, files, fuzz = 0, [], 0
948 patcherr, files, fuzz = 0, [], 0
948
949
949 if merge and files:
950 if merge and files:
950 # Mark as removed/merged and update dirstate parent info
951 # Mark as removed/merged and update dirstate parent info
951 removed = []
952 removed = []
952 merged = []
953 merged = []
953 for f in files:
954 for f in files:
954 if os.path.lexists(repo.wjoin(f)):
955 if os.path.lexists(repo.wjoin(f)):
955 merged.append(f)
956 merged.append(f)
956 else:
957 else:
957 removed.append(f)
958 removed.append(f)
958 with repo.dirstate.parentchange():
959 with repo.dirstate.parentchange():
959 for f in removed:
960 for f in removed:
960 repo.dirstate.remove(f)
961 repo.dirstate.remove(f)
961 for f in merged:
962 for f in merged:
962 repo.dirstate.merge(f)
963 repo.dirstate.merge(f)
963 p1, p2 = repo.dirstate.parents()
964 p1, p2 = repo.dirstate.parents()
964 repo.setparents(p1, merge)
965 repo.setparents(p1, merge)
965
966
966 if all_files and '.hgsubstate' in all_files:
967 if all_files and '.hgsubstate' in all_files:
967 wctx = repo[None]
968 wctx = repo[None]
968 pctx = repo['.']
969 pctx = repo['.']
969 overwrite = False
970 overwrite = False
970 mergedsubstate = subrepo.submerge(repo, pctx, wctx, wctx,
971 mergedsubstate = subrepo.submerge(repo, pctx, wctx, wctx,
971 overwrite)
972 overwrite)
972 files += mergedsubstate.keys()
973 files += mergedsubstate.keys()
973
974
974 match = scmutil.matchfiles(repo, files or [])
975 match = scmutil.matchfiles(repo, files or [])
975 oldtip = repo['tip']
976 oldtip = repo['tip']
976 n = newcommit(repo, None, message, ph.user, ph.date, match=match,
977 n = newcommit(repo, None, message, ph.user, ph.date, match=match,
977 force=True)
978 force=True)
978 if repo['tip'] == oldtip:
979 if repo['tip'] == oldtip:
979 raise error.Abort(_("qpush exactly duplicates child changeset"))
980 raise error.Abort(_("qpush exactly duplicates child changeset"))
980 if n is None:
981 if n is None:
981 raise error.Abort(_("repository commit failed"))
982 raise error.Abort(_("repository commit failed"))
982
983
983 if update_status:
984 if update_status:
984 self.applied.append(statusentry(n, patchname))
985 self.applied.append(statusentry(n, patchname))
985
986
986 if patcherr:
987 if patcherr:
987 self.ui.warn(_("patch failed, rejects left in working "
988 self.ui.warn(_("patch failed, rejects left in working "
988 "directory\n"))
989 "directory\n"))
989 err = 2
990 err = 2
990 break
991 break
991
992
992 if fuzz and strict:
993 if fuzz and strict:
993 self.ui.warn(_("fuzz found when applying patch, stopping\n"))
994 self.ui.warn(_("fuzz found when applying patch, stopping\n"))
994 err = 3
995 err = 3
995 break
996 break
996 return (err, n)
997 return (err, n)
997
998
998 def _cleanup(self, patches, numrevs, keep=False):
999 def _cleanup(self, patches, numrevs, keep=False):
999 if not keep:
1000 if not keep:
1000 r = self.qrepo()
1001 r = self.qrepo()
1001 if r:
1002 if r:
1002 r[None].forget(patches)
1003 r[None].forget(patches)
1003 for p in patches:
1004 for p in patches:
1004 try:
1005 try:
1005 os.unlink(self.join(p))
1006 os.unlink(self.join(p))
1006 except OSError as inst:
1007 except OSError as inst:
1007 if inst.errno != errno.ENOENT:
1008 if inst.errno != errno.ENOENT:
1008 raise
1009 raise
1009
1010
1010 qfinished = []
1011 qfinished = []
1011 if numrevs:
1012 if numrevs:
1012 qfinished = self.applied[:numrevs]
1013 qfinished = self.applied[:numrevs]
1013 del self.applied[:numrevs]
1014 del self.applied[:numrevs]
1014 self.applieddirty = True
1015 self.applieddirty = True
1015
1016
1016 unknown = []
1017 unknown = []
1017
1018
1018 for (i, p) in sorted([(self.findseries(p), p) for p in patches],
1019 for (i, p) in sorted([(self.findseries(p), p) for p in patches],
1019 reverse=True):
1020 reverse=True):
1020 if i is not None:
1021 if i is not None:
1021 del self.fullseries[i]
1022 del self.fullseries[i]
1022 else:
1023 else:
1023 unknown.append(p)
1024 unknown.append(p)
1024
1025
1025 if unknown:
1026 if unknown:
1026 if numrevs:
1027 if numrevs:
1027 rev = dict((entry.name, entry.node) for entry in qfinished)
1028 rev = dict((entry.name, entry.node) for entry in qfinished)
1028 for p in unknown:
1029 for p in unknown:
1029 msg = _('revision %s refers to unknown patches: %s\n')
1030 msg = _('revision %s refers to unknown patches: %s\n')
1030 self.ui.warn(msg % (short(rev[p]), p))
1031 self.ui.warn(msg % (short(rev[p]), p))
1031 else:
1032 else:
1032 msg = _('unknown patches: %s\n')
1033 msg = _('unknown patches: %s\n')
1033 raise error.Abort(''.join(msg % p for p in unknown))
1034 raise error.Abort(''.join(msg % p for p in unknown))
1034
1035
1035 self.parseseries()
1036 self.parseseries()
1036 self.seriesdirty = True
1037 self.seriesdirty = True
1037 return [entry.node for entry in qfinished]
1038 return [entry.node for entry in qfinished]
1038
1039
1039 def _revpatches(self, repo, revs):
1040 def _revpatches(self, repo, revs):
1040 firstrev = repo[self.applied[0].node].rev()
1041 firstrev = repo[self.applied[0].node].rev()
1041 patches = []
1042 patches = []
1042 for i, rev in enumerate(revs):
1043 for i, rev in enumerate(revs):
1043
1044
1044 if rev < firstrev:
1045 if rev < firstrev:
1045 raise error.Abort(_('revision %d is not managed') % rev)
1046 raise error.Abort(_('revision %d is not managed') % rev)
1046
1047
1047 ctx = repo[rev]
1048 ctx = repo[rev]
1048 base = self.applied[i].node
1049 base = self.applied[i].node
1049 if ctx.node() != base:
1050 if ctx.node() != base:
1050 msg = _('cannot delete revision %d above applied patches')
1051 msg = _('cannot delete revision %d above applied patches')
1051 raise error.Abort(msg % rev)
1052 raise error.Abort(msg % rev)
1052
1053
1053 patch = self.applied[i].name
1054 patch = self.applied[i].name
1054 for fmt in ('[mq]: %s', 'imported patch %s'):
1055 for fmt in ('[mq]: %s', 'imported patch %s'):
1055 if ctx.description() == fmt % patch:
1056 if ctx.description() == fmt % patch:
1056 msg = _('patch %s finalized without changeset message\n')
1057 msg = _('patch %s finalized without changeset message\n')
1057 repo.ui.status(msg % patch)
1058 repo.ui.status(msg % patch)
1058 break
1059 break
1059
1060
1060 patches.append(patch)
1061 patches.append(patch)
1061 return patches
1062 return patches
1062
1063
1063 def finish(self, repo, revs):
1064 def finish(self, repo, revs):
1064 # Manually trigger phase computation to ensure phasedefaults is
1065 # Manually trigger phase computation to ensure phasedefaults is
1065 # executed before we remove the patches.
1066 # executed before we remove the patches.
1066 repo._phasecache
1067 repo._phasecache
1067 patches = self._revpatches(repo, sorted(revs))
1068 patches = self._revpatches(repo, sorted(revs))
1068 qfinished = self._cleanup(patches, len(patches))
1069 qfinished = self._cleanup(patches, len(patches))
1069 if qfinished and repo.ui.configbool('mq', 'secret'):
1070 if qfinished and repo.ui.configbool('mq', 'secret'):
1070 # only use this logic when the secret option is added
1071 # only use this logic when the secret option is added
1071 oldqbase = repo[qfinished[0]]
1072 oldqbase = repo[qfinished[0]]
1072 tphase = phases.newcommitphase(repo.ui)
1073 tphase = phases.newcommitphase(repo.ui)
1073 if oldqbase.phase() > tphase and oldqbase.p1().phase() <= tphase:
1074 if oldqbase.phase() > tphase and oldqbase.p1().phase() <= tphase:
1074 with repo.transaction('qfinish') as tr:
1075 with repo.transaction('qfinish') as tr:
1075 phases.advanceboundary(repo, tr, tphase, qfinished)
1076 phases.advanceboundary(repo, tr, tphase, qfinished)
1076
1077
1077 def delete(self, repo, patches, opts):
1078 def delete(self, repo, patches, opts):
1078 if not patches and not opts.get('rev'):
1079 if not patches and not opts.get('rev'):
1079 raise error.Abort(_('qdelete requires at least one revision or '
1080 raise error.Abort(_('qdelete requires at least one revision or '
1080 'patch name'))
1081 'patch name'))
1081
1082
1082 realpatches = []
1083 realpatches = []
1083 for patch in patches:
1084 for patch in patches:
1084 patch = self.lookup(patch, strict=True)
1085 patch = self.lookup(patch, strict=True)
1085 info = self.isapplied(patch)
1086 info = self.isapplied(patch)
1086 if info:
1087 if info:
1087 raise error.Abort(_("cannot delete applied patch %s") % patch)
1088 raise error.Abort(_("cannot delete applied patch %s") % patch)
1088 if patch not in self.series:
1089 if patch not in self.series:
1089 raise error.Abort(_("patch %s not in series file") % patch)
1090 raise error.Abort(_("patch %s not in series file") % patch)
1090 if patch not in realpatches:
1091 if patch not in realpatches:
1091 realpatches.append(patch)
1092 realpatches.append(patch)
1092
1093
1093 numrevs = 0
1094 numrevs = 0
1094 if opts.get('rev'):
1095 if opts.get('rev'):
1095 if not self.applied:
1096 if not self.applied:
1096 raise error.Abort(_('no patches applied'))
1097 raise error.Abort(_('no patches applied'))
1097 revs = scmutil.revrange(repo, opts.get('rev'))
1098 revs = scmutil.revrange(repo, opts.get('rev'))
1098 revs.sort()
1099 revs.sort()
1099 revpatches = self._revpatches(repo, revs)
1100 revpatches = self._revpatches(repo, revs)
1100 realpatches += revpatches
1101 realpatches += revpatches
1101 numrevs = len(revpatches)
1102 numrevs = len(revpatches)
1102
1103
1103 self._cleanup(realpatches, numrevs, opts.get('keep'))
1104 self._cleanup(realpatches, numrevs, opts.get('keep'))
1104
1105
1105 def checktoppatch(self, repo):
1106 def checktoppatch(self, repo):
1106 '''check that working directory is at qtip'''
1107 '''check that working directory is at qtip'''
1107 if self.applied:
1108 if self.applied:
1108 top = self.applied[-1].node
1109 top = self.applied[-1].node
1109 patch = self.applied[-1].name
1110 patch = self.applied[-1].name
1110 if repo.dirstate.p1() != top:
1111 if repo.dirstate.p1() != top:
1111 raise error.Abort(_("working directory revision is not qtip"))
1112 raise error.Abort(_("working directory revision is not qtip"))
1112 return top, patch
1113 return top, patch
1113 return None, None
1114 return None, None
1114
1115
1115 def putsubstate2changes(self, substatestate, changes):
1116 def putsubstate2changes(self, substatestate, changes):
1116 for files in changes[:3]:
1117 for files in changes[:3]:
1117 if '.hgsubstate' in files:
1118 if '.hgsubstate' in files:
1118 return # already listed
1119 return # already listed
1119 # not yet listed
1120 # not yet listed
1120 if substatestate in 'a?':
1121 if substatestate in 'a?':
1121 changes[1].append('.hgsubstate')
1122 changes[1].append('.hgsubstate')
1122 elif substatestate in 'r':
1123 elif substatestate in 'r':
1123 changes[2].append('.hgsubstate')
1124 changes[2].append('.hgsubstate')
1124 else: # modified
1125 else: # modified
1125 changes[0].append('.hgsubstate')
1126 changes[0].append('.hgsubstate')
1126
1127
1127 def checklocalchanges(self, repo, force=False, refresh=True):
1128 def checklocalchanges(self, repo, force=False, refresh=True):
1128 excsuffix = ''
1129 excsuffix = ''
1129 if refresh:
1130 if refresh:
1130 excsuffix = ', qrefresh first'
1131 excsuffix = ', qrefresh first'
1131 # plain versions for i18n tool to detect them
1132 # plain versions for i18n tool to detect them
1132 _("local changes found, qrefresh first")
1133 _("local changes found, qrefresh first")
1133 _("local changed subrepos found, qrefresh first")
1134 _("local changed subrepos found, qrefresh first")
1134 return checklocalchanges(repo, force, excsuffix)
1135 return checklocalchanges(repo, force, excsuffix)
1135
1136
1136 _reserved = ('series', 'status', 'guards', '.', '..')
1137 _reserved = ('series', 'status', 'guards', '.', '..')
1137 def checkreservedname(self, name):
1138 def checkreservedname(self, name):
1138 if name in self._reserved:
1139 if name in self._reserved:
1139 raise error.Abort(_('"%s" cannot be used as the name of a patch')
1140 raise error.Abort(_('"%s" cannot be used as the name of a patch')
1140 % name)
1141 % name)
1141 if name != name.strip():
1142 if name != name.strip():
1142 # whitespace is stripped by parseseries()
1143 # whitespace is stripped by parseseries()
1143 raise error.Abort(_('patch name cannot begin or end with '
1144 raise error.Abort(_('patch name cannot begin or end with '
1144 'whitespace'))
1145 'whitespace'))
1145 for prefix in ('.hg', '.mq'):
1146 for prefix in ('.hg', '.mq'):
1146 if name.startswith(prefix):
1147 if name.startswith(prefix):
1147 raise error.Abort(_('patch name cannot begin with "%s"')
1148 raise error.Abort(_('patch name cannot begin with "%s"')
1148 % prefix)
1149 % prefix)
1149 for c in ('#', ':', '\r', '\n'):
1150 for c in ('#', ':', '\r', '\n'):
1150 if c in name:
1151 if c in name:
1151 raise error.Abort(_('%r cannot be used in the name of a patch')
1152 raise error.Abort(_('%r cannot be used in the name of a patch')
1152 % c)
1153 % c)
1153
1154
1154 def checkpatchname(self, name, force=False):
1155 def checkpatchname(self, name, force=False):
1155 self.checkreservedname(name)
1156 self.checkreservedname(name)
1156 if not force and os.path.exists(self.join(name)):
1157 if not force and os.path.exists(self.join(name)):
1157 if os.path.isdir(self.join(name)):
1158 if os.path.isdir(self.join(name)):
1158 raise error.Abort(_('"%s" already exists as a directory')
1159 raise error.Abort(_('"%s" already exists as a directory')
1159 % name)
1160 % name)
1160 else:
1161 else:
1161 raise error.Abort(_('patch "%s" already exists') % name)
1162 raise error.Abort(_('patch "%s" already exists') % name)
1162
1163
1163 def makepatchname(self, title, fallbackname):
1164 def makepatchname(self, title, fallbackname):
1164 """Return a suitable filename for title, adding a suffix to make
1165 """Return a suitable filename for title, adding a suffix to make
1165 it unique in the existing list"""
1166 it unique in the existing list"""
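# Worked example (hypothetical title, illustrative only): a title of
# "Fix: bug #123!" is reduced to "fix_bug_123"; if that name is already
# taken in the series, "fix_bug_123__1", "fix_bug_123__2", ... are tried
# until an unused, valid patch name is found.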
1166 namebase = re.sub('[\s\W_]+', '_', title.lower()).strip('_')
1167 namebase = re.sub('[\s\W_]+', '_', title.lower()).strip('_')
1167 namebase = namebase[:75] # avoid too long name (issue5117)
1168 namebase = namebase[:75] # avoid too long name (issue5117)
1168 if namebase:
1169 if namebase:
1169 try:
1170 try:
1170 self.checkreservedname(namebase)
1171 self.checkreservedname(namebase)
1171 except error.Abort:
1172 except error.Abort:
1172 namebase = fallbackname
1173 namebase = fallbackname
1173 else:
1174 else:
1174 namebase = fallbackname
1175 namebase = fallbackname
1175 name = namebase
1176 name = namebase
1176 i = 0
1177 i = 0
1177 while True:
1178 while True:
1178 if name not in self.fullseries:
1179 if name not in self.fullseries:
1179 try:
1180 try:
1180 self.checkpatchname(name)
1181 self.checkpatchname(name)
1181 break
1182 break
1182 except error.Abort:
1183 except error.Abort:
1183 pass
1184 pass
1184 i += 1
1185 i += 1
1185 name = '%s__%s' % (namebase, i)
1186 name = '%s__%s' % (namebase, i)
1186 return name
1187 return name
1187
1188
1188 def checkkeepchanges(self, keepchanges, force):
1189 def checkkeepchanges(self, keepchanges, force):
1189 if force and keepchanges:
1190 if force and keepchanges:
1190 raise error.Abort(_('cannot use both --force and --keep-changes'))
1191 raise error.Abort(_('cannot use both --force and --keep-changes'))
1191
1192
1192 def new(self, repo, patchfn, *pats, **opts):
1193 def new(self, repo, patchfn, *pats, **opts):
1193 """options:
1194 """options:
1194 msg: a string or a no-argument function returning a string
1195 msg: a string or a no-argument function returning a string
1195 """
1196 """
1196 msg = opts.get('msg')
1197 msg = opts.get('msg')
1197 edit = opts.get('edit')
1198 edit = opts.get('edit')
1198 editform = opts.get('editform', 'mq.qnew')
1199 editform = opts.get('editform', 'mq.qnew')
1199 user = opts.get('user')
1200 user = opts.get('user')
1200 date = opts.get('date')
1201 date = opts.get('date')
1201 if date:
1202 if date:
1202 date = util.parsedate(date)
1203 date = util.parsedate(date)
1203 diffopts = self.diffopts({'git': opts.get('git')}, plain=True)
1204 diffopts = self.diffopts({'git': opts.get('git')}, plain=True)
1204 if opts.get('checkname', True):
1205 if opts.get('checkname', True):
1205 self.checkpatchname(patchfn)
1206 self.checkpatchname(patchfn)
1206 inclsubs = checksubstate(repo)
1207 inclsubs = checksubstate(repo)
1207 if inclsubs:
1208 if inclsubs:
1208 substatestate = repo.dirstate['.hgsubstate']
1209 substatestate = repo.dirstate['.hgsubstate']
1209 if opts.get('include') or opts.get('exclude') or pats:
1210 if opts.get('include') or opts.get('exclude') or pats:
1210 # detect missing files in pats
1211 # detect missing files in pats
1211 def badfn(f, msg):
1212 def badfn(f, msg):
1212 if f != '.hgsubstate': # .hgsubstate is auto-created
1213 if f != '.hgsubstate': # .hgsubstate is auto-created
1213 raise error.Abort('%s: %s' % (f, msg))
1214 raise error.Abort('%s: %s' % (f, msg))
1214 match = scmutil.match(repo[None], pats, opts, badfn=badfn)
1215 match = scmutil.match(repo[None], pats, opts, badfn=badfn)
1215 changes = repo.status(match=match)
1216 changes = repo.status(match=match)
1216 else:
1217 else:
1217 changes = self.checklocalchanges(repo, force=True)
1218 changes = self.checklocalchanges(repo, force=True)
1218 commitfiles = list(inclsubs)
1219 commitfiles = list(inclsubs)
1219 for files in changes[:3]:
1220 for files in changes[:3]:
1220 commitfiles.extend(files)
1221 commitfiles.extend(files)
1221 match = scmutil.matchfiles(repo, commitfiles)
1222 match = scmutil.matchfiles(repo, commitfiles)
1222 if len(repo[None].parents()) > 1:
1223 if len(repo[None].parents()) > 1:
1223 raise error.Abort(_('cannot manage merge changesets'))
1224 raise error.Abort(_('cannot manage merge changesets'))
1224 self.checktoppatch(repo)
1225 self.checktoppatch(repo)
1225 insert = self.fullseriesend()
1226 insert = self.fullseriesend()
1226 with repo.wlock():
1227 with repo.wlock():
1227 try:
1228 try:
1228 # if patch file write fails, abort early
1229 # if patch file write fails, abort early
1229 p = self.opener(patchfn, "w")
1230 p = self.opener(patchfn, "w")
1230 except IOError as e:
1231 except IOError as e:
1231 raise error.Abort(_('cannot write patch "%s": %s')
1232 raise error.Abort(_('cannot write patch "%s": %s')
1232 % (patchfn, encoding.strtolocal(e.strerror)))
1233 % (patchfn, encoding.strtolocal(e.strerror)))
1233 try:
1234 try:
1234 defaultmsg = "[mq]: %s" % patchfn
1235 defaultmsg = "[mq]: %s" % patchfn
1235 editor = cmdutil.getcommiteditor(editform=editform)
1236 editor = cmdutil.getcommiteditor(editform=editform)
1236 if edit:
1237 if edit:
1237 def finishdesc(desc):
1238 def finishdesc(desc):
1238 if desc.rstrip():
1239 if desc.rstrip():
1239 return desc
1240 return desc
1240 else:
1241 else:
1241 return defaultmsg
1242 return defaultmsg
1242 # i18n: this message is shown in editor with "HG: " prefix
1243 # i18n: this message is shown in editor with "HG: " prefix
1243 extramsg = _('Leave message empty to use default message.')
1244 extramsg = _('Leave message empty to use default message.')
1244 editor = cmdutil.getcommiteditor(finishdesc=finishdesc,
1245 editor = cmdutil.getcommiteditor(finishdesc=finishdesc,
1245 extramsg=extramsg,
1246 extramsg=extramsg,
1246 editform=editform)
1247 editform=editform)
1247 commitmsg = msg
1248 commitmsg = msg
1248 else:
1249 else:
1249 commitmsg = msg or defaultmsg
1250 commitmsg = msg or defaultmsg
1250
1251
1251 n = newcommit(repo, None, commitmsg, user, date, match=match,
1252 n = newcommit(repo, None, commitmsg, user, date, match=match,
1252 force=True, editor=editor)
1253 force=True, editor=editor)
1253 if n is None:
1254 if n is None:
1254 raise error.Abort(_("repo commit failed"))
1255 raise error.Abort(_("repo commit failed"))
1255 try:
1256 try:
1256 self.fullseries[insert:insert] = [patchfn]
1257 self.fullseries[insert:insert] = [patchfn]
1257 self.applied.append(statusentry(n, patchfn))
1258 self.applied.append(statusentry(n, patchfn))
1258 self.parseseries()
1259 self.parseseries()
1259 self.seriesdirty = True
1260 self.seriesdirty = True
1260 self.applieddirty = True
1261 self.applieddirty = True
1261 nctx = repo[n]
1262 nctx = repo[n]
1262 ph = patchheader(self.join(patchfn), self.plainmode)
1263 ph = patchheader(self.join(patchfn), self.plainmode)
1263 if user:
1264 if user:
1264 ph.setuser(user)
1265 ph.setuser(user)
1265 if date:
1266 if date:
1266 ph.setdate('%s %s' % date)
1267 ph.setdate('%s %s' % date)
1267 ph.setparent(hex(nctx.p1().node()))
1268 ph.setparent(hex(nctx.p1().node()))
1268 msg = nctx.description().strip()
1269 msg = nctx.description().strip()
1269 if msg == defaultmsg.strip():
1270 if msg == defaultmsg.strip():
1270 msg = ''
1271 msg = ''
1271 ph.setmessage(msg)
1272 ph.setmessage(msg)
1272 p.write(str(ph))
1273 p.write(str(ph))
1273 if commitfiles:
1274 if commitfiles:
1274 parent = self.qparents(repo, n)
1275 parent = self.qparents(repo, n)
1275 if inclsubs:
1276 if inclsubs:
1276 self.putsubstate2changes(substatestate, changes)
1277 self.putsubstate2changes(substatestate, changes)
1277 chunks = patchmod.diff(repo, node1=parent, node2=n,
1278 chunks = patchmod.diff(repo, node1=parent, node2=n,
1278 changes=changes, opts=diffopts)
1279 changes=changes, opts=diffopts)
1279 for chunk in chunks:
1280 for chunk in chunks:
1280 p.write(chunk)
1281 p.write(chunk)
1281 p.close()
1282 p.close()
1282 r = self.qrepo()
1283 r = self.qrepo()
1283 if r:
1284 if r:
1284 r[None].add([patchfn])
1285 r[None].add([patchfn])
1285 except: # re-raises
1286 except: # re-raises
1286 repo.rollback()
1287 repo.rollback()
1287 raise
1288 raise
1288 except Exception:
1289 except Exception:
1289 patchpath = self.join(patchfn)
1290 patchpath = self.join(patchfn)
1290 try:
1291 try:
1291 os.unlink(patchpath)
1292 os.unlink(patchpath)
1292 except OSError:
1293 except OSError:
1293 self.ui.warn(_('error unlinking %s\n') % patchpath)
1294 self.ui.warn(_('error unlinking %s\n') % patchpath)
1294 raise
1295 raise
1295 self.removeundo(repo)
1296 self.removeundo(repo)
1296
1297
1297 def isapplied(self, patch):
1298 def isapplied(self, patch):
1298 """returns (index, rev, patch)"""
1299 """returns (index, rev, patch)"""
1299 for i, a in enumerate(self.applied):
1300 for i, a in enumerate(self.applied):
1300 if a.name == patch:
1301 if a.name == patch:
1301 return (i, a.node, a.name)
1302 return (i, a.node, a.name)
1302 return None
1303 return None
1303
1304
1304 # if the exact patch name does not exist, we try a few
1305 # if the exact patch name does not exist, we try a few
1305 # variations. If strict is passed, we try only #1
1306 # variations. If strict is passed, we try only #1
1306 #
1307 #
1307 # 1) a number (as string) to indicate an offset in the series file
1308 # 1) a number (as string) to indicate an offset in the series file
1308 # 2) a unique substring of the patch name was given
1309 # 2) a unique substring of the patch name was given
1309 # 3) patchname[-+]num to indicate an offset in the series file
1310 # 3) patchname[-+]num to indicate an offset in the series file
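# Illustrative examples for the three forms (hypothetical patch names):
# "0" selects the first entry in the series file and "2" the third;
# "fix-encoding" matches a patch whose name contains that substring, as
# long as the match is unambiguous; "fix-encoding-1" and "fix-encoding+1"
# select the series entries just before and just after that patch.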
1310 def lookup(self, patch, strict=False):
1311 def lookup(self, patch, strict=False):
1311 def partialname(s):
1312 def partialname(s):
1312 if s in self.series:
1313 if s in self.series:
1313 return s
1314 return s
1314 matches = [x for x in self.series if s in x]
1315 matches = [x for x in self.series if s in x]
1315 if len(matches) > 1:
1316 if len(matches) > 1:
1316 self.ui.warn(_('patch name "%s" is ambiguous:\n') % s)
1317 self.ui.warn(_('patch name "%s" is ambiguous:\n') % s)
1317 for m in matches:
1318 for m in matches:
1318 self.ui.warn(' %s\n' % m)
1319 self.ui.warn(' %s\n' % m)
1319 return None
1320 return None
1320 if matches:
1321 if matches:
1321 return matches[0]
1322 return matches[0]
1322 if self.series and self.applied:
1323 if self.series and self.applied:
1323 if s == 'qtip':
1324 if s == 'qtip':
1324 return self.series[self.seriesend(True) - 1]
1325 return self.series[self.seriesend(True) - 1]
1325 if s == 'qbase':
1326 if s == 'qbase':
1326 return self.series[0]
1327 return self.series[0]
1327 return None
1328 return None
1328
1329
1329 if patch in self.series:
1330 if patch in self.series:
1330 return patch
1331 return patch
1331
1332
1332 if not os.path.isfile(self.join(patch)):
1333 if not os.path.isfile(self.join(patch)):
1333 try:
1334 try:
1334 sno = int(patch)
1335 sno = int(patch)
1335 except (ValueError, OverflowError):
1336 except (ValueError, OverflowError):
1336 pass
1337 pass
1337 else:
1338 else:
1338 if -len(self.series) <= sno < len(self.series):
1339 if -len(self.series) <= sno < len(self.series):
1339 return self.series[sno]
1340 return self.series[sno]
1340
1341
1341 if not strict:
1342 if not strict:
1342 res = partialname(patch)
1343 res = partialname(patch)
1343 if res:
1344 if res:
1344 return res
1345 return res
1345 minus = patch.rfind('-')
1346 minus = patch.rfind('-')
1346 if minus >= 0:
1347 if minus >= 0:
1347 res = partialname(patch[:minus])
1348 res = partialname(patch[:minus])
1348 if res:
1349 if res:
1349 i = self.series.index(res)
1350 i = self.series.index(res)
1350 try:
1351 try:
1351 off = int(patch[minus + 1:] or 1)
1352 off = int(patch[minus + 1:] or 1)
1352 except (ValueError, OverflowError):
1353 except (ValueError, OverflowError):
1353 pass
1354 pass
1354 else:
1355 else:
1355 if i - off >= 0:
1356 if i - off >= 0:
1356 return self.series[i - off]
1357 return self.series[i - off]
1357 plus = patch.rfind('+')
1358 plus = patch.rfind('+')
1358 if plus >= 0:
1359 if plus >= 0:
1359 res = partialname(patch[:plus])
1360 res = partialname(patch[:plus])
1360 if res:
1361 if res:
1361 i = self.series.index(res)
1362 i = self.series.index(res)
1362 try:
1363 try:
1363 off = int(patch[plus + 1:] or 1)
1364 off = int(patch[plus + 1:] or 1)
1364 except (ValueError, OverflowError):
1365 except (ValueError, OverflowError):
1365 pass
1366 pass
1366 else:
1367 else:
1367 if i + off < len(self.series):
1368 if i + off < len(self.series):
1368 return self.series[i + off]
1369 return self.series[i + off]
1369 raise error.Abort(_("patch %s not in series") % patch)
1370 raise error.Abort(_("patch %s not in series") % patch)
1370
1371
1371 def push(self, repo, patch=None, force=False, list=False, mergeq=None,
1372 def push(self, repo, patch=None, force=False, list=False, mergeq=None,
1372 all=False, move=False, exact=False, nobackup=False,
1373 all=False, move=False, exact=False, nobackup=False,
1373 keepchanges=False):
1374 keepchanges=False):
1374 self.checkkeepchanges(keepchanges, force)
1375 self.checkkeepchanges(keepchanges, force)
1375 diffopts = self.diffopts()
1376 diffopts = self.diffopts()
1376 with repo.wlock():
1377 with repo.wlock():
1377 heads = []
1378 heads = []
1378 for hs in repo.branchmap().itervalues():
1379 for hs in repo.branchmap().itervalues():
1379 heads.extend(hs)
1380 heads.extend(hs)
1380 if not heads:
1381 if not heads:
1381 heads = [nullid]
1382 heads = [nullid]
1382 if repo.dirstate.p1() not in heads and not exact:
1383 if repo.dirstate.p1() not in heads and not exact:
1383 self.ui.status(_("(working directory not at a head)\n"))
1384 self.ui.status(_("(working directory not at a head)\n"))
1384
1385
1385 if not self.series:
1386 if not self.series:
1386 self.ui.warn(_('no patches in series\n'))
1387 self.ui.warn(_('no patches in series\n'))
1387 return 0
1388 return 0
1388
1389
1389 # Suppose our series file is: A B C and the current 'top'
1390 # Suppose our series file is: A B C and the current 'top'
1390 # patch is B. qpush C should be performed (moving forward)
1391 # patch is B. qpush C should be performed (moving forward)
1391 # qpush B is a NOP (no change) qpush A is an error (can't
1392 # qpush B is a NOP (no change) qpush A is an error (can't
1392 # go backwards with qpush)
1393 # go backwards with qpush)
1393 if patch:
1394 if patch:
1394 patch = self.lookup(patch)
1395 patch = self.lookup(patch)
1395 info = self.isapplied(patch)
1396 info = self.isapplied(patch)
1396 if info and info[0] >= len(self.applied) - 1:
1397 if info and info[0] >= len(self.applied) - 1:
1397 self.ui.warn(
1398 self.ui.warn(
1398 _('qpush: %s is already at the top\n') % patch)
1399 _('qpush: %s is already at the top\n') % patch)
1399 return 0
1400 return 0
1400
1401
1401 pushable, reason = self.pushable(patch)
1402 pushable, reason = self.pushable(patch)
1402 if pushable:
1403 if pushable:
1403 if self.series.index(patch) < self.seriesend():
1404 if self.series.index(patch) < self.seriesend():
1404 raise error.Abort(
1405 raise error.Abort(
1405 _("cannot push to a previous patch: %s") % patch)
1406 _("cannot push to a previous patch: %s") % patch)
1406 else:
1407 else:
1407 if reason:
1408 if reason:
1408 reason = _('guarded by %s') % reason
1409 reason = _('guarded by %s') % reason
1409 else:
1410 else:
1410 reason = _('no matching guards')
1411 reason = _('no matching guards')
1411 self.ui.warn(_("cannot push '%s' - %s\n") % (patch, reason))
1412 self.ui.warn(_("cannot push '%s' - %s\n") % (patch, reason))
1412 return 1
1413 return 1
1413 elif all:
1414 elif all:
1414 patch = self.series[-1]
1415 patch = self.series[-1]
1415 if self.isapplied(patch):
1416 if self.isapplied(patch):
1416 self.ui.warn(_('all patches are currently applied\n'))
1417 self.ui.warn(_('all patches are currently applied\n'))
1417 return 0
1418 return 0
1418
1419
1419 # Following the above example, starting at 'top' of B:
1420 # Following the above example, starting at 'top' of B:
1420 # qpush should be performed (pushes C), but a subsequent
1421 # qpush should be performed (pushes C), but a subsequent
1421 # qpush without an argument is an error (nothing to
1422 # qpush without an argument is an error (nothing to
1422 # apply). This allows a loop of "...while hg qpush..." to
1423 # apply). This allows a loop of "...while hg qpush..." to
1423 # work as it detects an error when done
1424 # work as it detects an error when done
1424 start = self.seriesend()
1425 start = self.seriesend()
1425 if start == len(self.series):
1426 if start == len(self.series):
1426 self.ui.warn(_('patch series already fully applied\n'))
1427 self.ui.warn(_('patch series already fully applied\n'))
1427 return 1
1428 return 1
1428 if not force and not keepchanges:
1429 if not force and not keepchanges:
1429 self.checklocalchanges(repo, refresh=self.applied)
1430 self.checklocalchanges(repo, refresh=self.applied)
1430
1431
1431 if exact:
1432 if exact:
1432 if keepchanges:
1433 if keepchanges:
1433 raise error.Abort(
1434 raise error.Abort(
1434 _("cannot use --exact and --keep-changes together"))
1435 _("cannot use --exact and --keep-changes together"))
1435 if move:
1436 if move:
1436 raise error.Abort(_('cannot use --exact and --move '
1437 raise error.Abort(_('cannot use --exact and --move '
1437 'together'))
1438 'together'))
1438 if self.applied:
1439 if self.applied:
1439 raise error.Abort(_('cannot push --exact with applied '
1440 raise error.Abort(_('cannot push --exact with applied '
1440 'patches'))
1441 'patches'))
1441 root = self.series[start]
1442 root = self.series[start]
1442 target = patchheader(self.join(root), self.plainmode).parent
1443 target = patchheader(self.join(root), self.plainmode).parent
1443 if not target:
1444 if not target:
1444 raise error.Abort(
1445 raise error.Abort(
1445 _("%s does not have a parent recorded") % root)
1446 _("%s does not have a parent recorded") % root)
1446 if not repo[target] == repo['.']:
1447 if not repo[target] == repo['.']:
1447 hg.update(repo, target)
1448 hg.update(repo, target)
1448
1449
1449 if move:
1450 if move:
1450 if not patch:
1451 if not patch:
1451 raise error.Abort(_("please specify the patch to move"))
1452 raise error.Abort(_("please specify the patch to move"))
1452 for fullstart, rpn in enumerate(self.fullseries):
1453 for fullstart, rpn in enumerate(self.fullseries):
1453 # strip markers for patch guards
1454 # strip markers for patch guards
1454 if self.guard_re.split(rpn, 1)[0] == self.series[start]:
1455 if self.guard_re.split(rpn, 1)[0] == self.series[start]:
1455 break
1456 break
1456 for i, rpn in enumerate(self.fullseries[fullstart:]):
1457 for i, rpn in enumerate(self.fullseries[fullstart:]):
1457 # strip markers for patch guards
1458 # strip markers for patch guards
1458 if self.guard_re.split(rpn, 1)[0] == patch:
1459 if self.guard_re.split(rpn, 1)[0] == patch:
1459 break
1460 break
1460 index = fullstart + i
1461 index = fullstart + i
1461 assert index < len(self.fullseries)
1462 assert index < len(self.fullseries)
1462 fullpatch = self.fullseries[index]
1463 fullpatch = self.fullseries[index]
1463 del self.fullseries[index]
1464 del self.fullseries[index]
1464 self.fullseries.insert(fullstart, fullpatch)
1465 self.fullseries.insert(fullstart, fullpatch)
1465 self.parseseries()
1466 self.parseseries()
1466 self.seriesdirty = True
1467 self.seriesdirty = True
1467
1468
1468 self.applieddirty = True
1469 self.applieddirty = True
1469 if start > 0:
1470 if start > 0:
1470 self.checktoppatch(repo)
1471 self.checktoppatch(repo)
1471 if not patch:
1472 if not patch:
1472 patch = self.series[start]
1473 patch = self.series[start]
1473 end = start + 1
1474 end = start + 1
1474 else:
1475 else:
1475 end = self.series.index(patch, start) + 1
1476 end = self.series.index(patch, start) + 1
1476
1477
1477 tobackup = set()
1478 tobackup = set()
1478 if (not nobackup and force) or keepchanges:
1479 if (not nobackup and force) or keepchanges:
1479 status = self.checklocalchanges(repo, force=True)
1480 status = self.checklocalchanges(repo, force=True)
1480 if keepchanges:
1481 if keepchanges:
1481 tobackup.update(status.modified + status.added +
1482 tobackup.update(status.modified + status.added +
1482 status.removed + status.deleted)
1483 status.removed + status.deleted)
1483 else:
1484 else:
1484 tobackup.update(status.modified + status.added)
1485 tobackup.update(status.modified + status.added)
1485
1486
1486 s = self.series[start:end]
1487 s = self.series[start:end]
1487 all_files = set()
1488 all_files = set()
1488 try:
1489 try:
1489 if mergeq:
1490 if mergeq:
1490 ret = self.mergepatch(repo, mergeq, s, diffopts)
1491 ret = self.mergepatch(repo, mergeq, s, diffopts)
1491 else:
1492 else:
1492 ret = self.apply(repo, s, list, all_files=all_files,
1493 ret = self.apply(repo, s, list, all_files=all_files,
1493 tobackup=tobackup, keepchanges=keepchanges)
1494 tobackup=tobackup, keepchanges=keepchanges)
1494 except AbortNoCleanup:
1495 except AbortNoCleanup:
1495 raise
1496 raise
1496 except: # re-raises
1497 except: # re-raises
1497 self.ui.warn(_('cleaning up working directory...\n'))
1498 self.ui.warn(_('cleaning up working directory...\n'))
1498 cmdutil.revert(self.ui, repo, repo['.'],
1499 cmdutil.revert(self.ui, repo, repo['.'],
1499 repo.dirstate.parents(), no_backup=True)
1500 repo.dirstate.parents(), no_backup=True)
1500 # only remove unknown files that we know we touched or
1501 # only remove unknown files that we know we touched or
1501 # created while patching
1502 # created while patching
1502 for f in all_files:
1503 for f in all_files:
1503 if f not in repo.dirstate:
1504 if f not in repo.dirstate:
1504 repo.wvfs.unlinkpath(f, ignoremissing=True)
1505 repo.wvfs.unlinkpath(f, ignoremissing=True)
1505 self.ui.warn(_('done\n'))
1506 self.ui.warn(_('done\n'))
1506 raise
1507 raise
1507
1508
1508 if not self.applied:
1509 if not self.applied:
1509 return ret[0]
1510 return ret[0]
1510 top = self.applied[-1].name
1511 top = self.applied[-1].name
1511 if ret[0] and ret[0] > 1:
1512 if ret[0] and ret[0] > 1:
1512 msg = _("errors during apply, please fix and qrefresh %s\n")
1513 msg = _("errors during apply, please fix and qrefresh %s\n")
1513 self.ui.write(msg % top)
1514 self.ui.write(msg % top)
1514 else:
1515 else:
1515 self.ui.write(_("now at: %s\n") % top)
1516 self.ui.write(_("now at: %s\n") % top)
1516 return ret[0]
1517 return ret[0]
1517
1518
1518 def pop(self, repo, patch=None, force=False, update=True, all=False,
1519 def pop(self, repo, patch=None, force=False, update=True, all=False,
1519 nobackup=False, keepchanges=False):
1520 nobackup=False, keepchanges=False):
1520 self.checkkeepchanges(keepchanges, force)
1521 self.checkkeepchanges(keepchanges, force)
1521 with repo.wlock():
1522 with repo.wlock():
1522 if patch:
1523 if patch:
1523 # index, rev, patch
1524 # index, rev, patch
1524 info = self.isapplied(patch)
1525 info = self.isapplied(patch)
1525 if not info:
1526 if not info:
1526 patch = self.lookup(patch)
1527 patch = self.lookup(patch)
1527 info = self.isapplied(patch)
1528 info = self.isapplied(patch)
1528 if not info:
1529 if not info:
1529 raise error.Abort(_("patch %s is not applied") % patch)
1530 raise error.Abort(_("patch %s is not applied") % patch)
1530
1531
1531 if not self.applied:
1532 if not self.applied:
1532 # Allow qpop -a to work repeatedly,
1533 # Allow qpop -a to work repeatedly,
1533 # but not qpop without an argument
1534 # but not qpop without an argument
1534 self.ui.warn(_("no patches applied\n"))
1535 self.ui.warn(_("no patches applied\n"))
1535 return not all
1536 return not all
1536
1537
1537 if all:
1538 if all:
1538 start = 0
1539 start = 0
1539 elif patch:
1540 elif patch:
1540 start = info[0] + 1
1541 start = info[0] + 1
            else:
                start = len(self.applied) - 1

            if start >= len(self.applied):
                self.ui.warn(_("qpop: %s is already at the top\n") % patch)
                return

            if not update:
                parents = repo.dirstate.parents()
                rr = [x.node for x in self.applied]
                for p in parents:
                    if p in rr:
                        self.ui.warn(_("qpop: forcing dirstate update\n"))
                        update = True
            else:
                parents = [p.node() for p in repo[None].parents()]
                needupdate = False
                for entry in self.applied[start:]:
                    if entry.node in parents:
                        needupdate = True
                        break
                update = needupdate

            tobackup = set()
            if update:
                s = self.checklocalchanges(repo, force=force or keepchanges)
                if force:
                    if not nobackup:
                        tobackup.update(s.modified + s.added)
                elif keepchanges:
                    tobackup.update(s.modified + s.added +
                                    s.removed + s.deleted)

            self.applieddirty = True
            end = len(self.applied)
            rev = self.applied[start].node

            try:
                heads = repo.changelog.heads(rev)
            except error.LookupError:
                node = short(rev)
                raise error.Abort(_('trying to pop unknown node %s') % node)

            if heads != [self.applied[-1].node]:
                raise error.Abort(_("popping would remove a revision not "
                                    "managed by this patch queue"))
            if not repo[self.applied[-1].node].mutable():
                raise error.Abort(
                    _("popping would remove a public revision"),
                    hint=_("see 'hg help phases' for details"))

            # we know there are no local changes, so we can make a simplified
            # form of hg.update.
            if update:
                qp = self.qparents(repo, rev)
                ctx = repo[qp]
                m, a, r, d = repo.status(qp, '.')[:4]
                if d:
                    raise error.Abort(_("deletions found between repo revs"))

                tobackup = set(a + m + r) & tobackup
                if keepchanges and tobackup:
                    raise error.Abort(_("local changes found, qrefresh first"))
                self.backup(repo, tobackup)
                with repo.dirstate.parentchange():
                    for f in a:
                        repo.wvfs.unlinkpath(f, ignoremissing=True)
                        repo.dirstate.drop(f)
                    for f in m + r:
                        fctx = ctx[f]
                        repo.wwrite(f, fctx.data(), fctx.flags())
                        repo.dirstate.normal(f)
                    repo.setparents(qp, nullid)
            for patch in reversed(self.applied[start:end]):
                self.ui.status(_("popping %s\n") % patch.name)
            del self.applied[start:end]
            strip(self.ui, repo, [rev], update=False, backup=False)
            for s, state in repo['.'].substate.items():
                repo['.'].sub(s).get(state)
            if self.applied:
                self.ui.write(_("now at: %s\n") % self.applied[-1].name)
            else:
                self.ui.write(_("patch queue now empty\n"))

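    # Illustrative note (added commentary, not part of the original source):
    # the pop() code path above backs the qpop command. A typical session,
    # assuming a queue with the hypothetical patches p1.diff and p2.diff
    # applied, might look like:
    #
    #   $ hg qpop              # pop the topmost patch (p2.diff)
    #   $ hg qpop p1.diff      # pop patches until p1.diff is the new top
    #   $ hg qpop --all        # pop every applied patch
    #
    # The working copy is rewound by the simplified update above and the
    # popped revisions are then removed with strip().
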
    def diff(self, repo, pats, opts):
        top, patch = self.checktoppatch(repo)
        if not top:
            self.ui.write(_("no patches applied\n"))
            return
        qp = self.qparents(repo, top)
        if opts.get('reverse'):
            node1, node2 = None, qp
        else:
            node1, node2 = qp, None
        diffopts = self.diffopts(opts, patch)
        self.printdiff(repo, diffopts, node1, node2, files=pats, opts=opts)

    def refresh(self, repo, pats=None, **opts):
        if not self.applied:
            self.ui.write(_("no patches applied\n"))
            return 1
        msg = opts.get('msg', '').rstrip()
        edit = opts.get('edit')
        editform = opts.get('editform', 'mq.qrefresh')
        newuser = opts.get('user')
        newdate = opts.get('date')
        if newdate:
            newdate = '%d %d' % util.parsedate(newdate)
        wlock = repo.wlock()

        try:
            self.checktoppatch(repo)
            (top, patchfn) = (self.applied[-1].node, self.applied[-1].name)
            if repo.changelog.heads(top) != [top]:
                raise error.Abort(_("cannot qrefresh a revision with children"))
            if not repo[top].mutable():
                raise error.Abort(_("cannot qrefresh public revision"),
                                  hint=_("see 'hg help phases' for details"))

            cparents = repo.changelog.parents(top)
            patchparent = self.qparents(repo, top)

            inclsubs = checksubstate(repo, hex(patchparent))
            if inclsubs:
                substatestate = repo.dirstate['.hgsubstate']

            ph = patchheader(self.join(patchfn), self.plainmode)
            diffopts = self.diffopts({'git': opts.get('git')}, patchfn,
                                     plain=True)
            if newuser:
                ph.setuser(newuser)
            if newdate:
                ph.setdate(newdate)
            ph.setparent(hex(patchparent))

            # only commit new patch when write is complete
            patchf = self.opener(patchfn, 'w', atomictemp=True)

            # update the dirstate in place, strip off the qtip commit
            # and then commit.
            #
            # this should really read:
            #   mm, dd, aa = repo.status(top, patchparent)[:3]
            # but we do it backwards to take advantage of manifest/changelog
            # caching against the next repo.status call
            mm, aa, dd = repo.status(patchparent, top)[:3]
            changes = repo.changelog.read(top)
            man = repo.manifestlog[changes[0]].read()
            aaa = aa[:]
            match1 = scmutil.match(repo[None], pats, opts)
            # in short mode, we only diff the files included in the
            # patch already plus specified files
            if opts.get('short'):
                # if amending a patch, we start with existing
                # files plus specified files - unfiltered
                match = scmutil.matchfiles(repo, mm + aa + dd + match1.files())
                # filter with include/exclude options
                match1 = scmutil.match(repo[None], opts=opts)
            else:
                match = scmutil.matchall(repo)
            m, a, r, d = repo.status(match=match)[:4]
            mm = set(mm)
            aa = set(aa)
            dd = set(dd)

            # we might end up with files that were added between
            # qtip and the dirstate parent, but then changed in the
            # local dirstate. in this case, we want them to only
            # show up in the added section
            for x in m:
                if x not in aa:
                    mm.add(x)
            # we might end up with files added by the local dirstate that
            # were deleted by the patch. In this case, they should only
            # show up in the changed section.
            for x in a:
                if x in dd:
                    dd.remove(x)
                    mm.add(x)
                else:
                    aa.add(x)
            # make sure any files deleted in the local dirstate
            # are not in the add or change column of the patch
            forget = []
            for x in d + r:
                if x in aa:
                    aa.remove(x)
                    forget.append(x)
                    continue
                else:
                    mm.discard(x)
                dd.add(x)

            m = list(mm)
            r = list(dd)
            a = list(aa)

            # create 'match' that includes the files to be recommitted.
            # apply match1 via repo.status to ensure correct case handling.
            cm, ca, cr, cd = repo.status(patchparent, match=match1)[:4]
            allmatches = set(cm + ca + cr + cd)
            refreshchanges = [x.intersection(allmatches) for x in (mm, aa, dd)]

            files = set(inclsubs)
            for x in refreshchanges:
                files.update(x)
            match = scmutil.matchfiles(repo, files)

            bmlist = repo[top].bookmarks()

            dsguard = None
            try:
                dsguard = dirstateguard.dirstateguard(repo, 'mq.refresh')
                if diffopts.git or diffopts.upgrade:
                    copies = {}
                    for dst in a:
                        src = repo.dirstate.copied(dst)
                        # during qfold, the source file for copies may
                        # be removed. Treat this as a simple add.
                        if src is not None and src in repo.dirstate:
                            copies.setdefault(src, []).append(dst)
                        repo.dirstate.add(dst)
                    # remember the copies between patchparent and qtip
                    for dst in aaa:
                        f = repo.file(dst)
                        src = f.renamed(man[dst])
                        if src:
                            copies.setdefault(src[0], []).extend(
                                copies.get(dst, []))
                            if dst in a:
                                copies[src[0]].append(dst)
                        # we can't copy a file created by the patch itself
                        if dst in copies:
                            del copies[dst]
                    for src, dsts in copies.iteritems():
                        for dst in dsts:
                            repo.dirstate.copy(src, dst)
                else:
                    for dst in a:
                        repo.dirstate.add(dst)
                    # Drop useless copy information
                    for f in list(repo.dirstate.copies()):
                        repo.dirstate.copy(None, f)
                for f in r:
                    repo.dirstate.remove(f)
                # if the patch excludes a modified file, mark that
                # file with mtime=0 so status can see it.
                mm = []
                for i in xrange(len(m) - 1, -1, -1):
                    if not match1(m[i]):
                        mm.append(m[i])
                        del m[i]
                for f in m:
                    repo.dirstate.normal(f)
                for f in mm:
                    repo.dirstate.normallookup(f)
                for f in forget:
                    repo.dirstate.drop(f)

                user = ph.user or changes[1]

                oldphase = repo[top].phase()

                # assumes strip can roll itself back if interrupted
                repo.setparents(*cparents)
                self.applied.pop()
                self.applieddirty = True
                strip(self.ui, repo, [top], update=False, backup=False)
                dsguard.close()
            finally:
                release(dsguard)

            try:
                # might be nice to attempt to roll back strip after this

                defaultmsg = "[mq]: %s" % patchfn
                editor = cmdutil.getcommiteditor(editform=editform)
                if edit:
                    def finishdesc(desc):
                        if desc.rstrip():
                            ph.setmessage(desc)
                            return desc
                        return defaultmsg
                    # i18n: this message is shown in editor with "HG: " prefix
                    extramsg = _('Leave message empty to use default message.')
                    editor = cmdutil.getcommiteditor(finishdesc=finishdesc,
                                                     extramsg=extramsg,
                                                     editform=editform)
                    message = msg or "\n".join(ph.message)
                elif not msg:
                    if not ph.message:
                        message = defaultmsg
                    else:
                        message = "\n".join(ph.message)
                else:
                    message = msg
                    ph.setmessage(msg)

                # Ensure we create a new changeset in the same phase as
                # the old one.
                lock = tr = None
                try:
                    lock = repo.lock()
                    tr = repo.transaction('mq')
                    n = newcommit(repo, oldphase, message, user, ph.date,
                                  match=match, force=True, editor=editor)
                    # only write patch after a successful commit
                    c = [list(x) for x in refreshchanges]
                    if inclsubs:
                        self.putsubstate2changes(substatestate, c)
                    chunks = patchmod.diff(repo, patchparent,
                                           changes=c, opts=diffopts)
                    comments = str(ph)
                    if comments:
                        patchf.write(comments)
                    for chunk in chunks:
                        patchf.write(chunk)
                    patchf.close()

                    marks = repo._bookmarks
                    marks.applychanges(repo, tr, [(bm, n) for bm in bmlist])
                    tr.close()

                    self.applied.append(statusentry(n, patchfn))
                finally:
                    lockmod.release(tr, lock)
            except: # re-raises
                ctx = repo[cparents[0]]
                repo.dirstate.rebuild(ctx.node(), ctx.manifest())
                self.savedirty()
                self.ui.warn(_('qrefresh interrupted while patch was popped! '
                               '(revert --all, qpush to recover)\n'))
                raise
        finally:
            wlock.release()
            self.removeundo(repo)

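    # Illustrative summary (added commentary, not part of the original
    # source): refresh() backs the qrefresh command. Roughly, it (1) computes
    # the files touched between the patch parent and qtip plus any files named
    # on the command line, (2) folds the working-directory state into the
    # dirstate, (3) strips the old qtip and recommits it, and (4) rewrites the
    # on-disk patch file. A minimal example session:
    #
    #   $ echo fix >> somefile.c
    #   $ hg qrefresh -m "better fix"   # fold the edit into the current patch
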
    def init(self, repo, create=False):
        if not create and os.path.isdir(self.path):
            raise error.Abort(_("patch queue directory already exists"))
        try:
            os.mkdir(self.path)
        except OSError as inst:
            if inst.errno != errno.EEXIST or not create:
                raise
        if create:
            return self.qrepo(create=True)

    def unapplied(self, repo, patch=None):
        if patch and patch not in self.series:
            raise error.Abort(_("patch %s is not in series file") % patch)
        if not patch:
            start = self.seriesend()
        else:
            start = self.series.index(patch) + 1
        unapplied = []
        for i in xrange(start, len(self.series)):
            pushable, reason = self.pushable(i)
            if pushable:
                unapplied.append((i, self.series[i]))
            self.explainpushable(i)
        return unapplied

    def qseries(self, repo, missing=None, start=0, length=None, status=None,
                summary=False):
        def displayname(pfx, patchname, state):
            if pfx:
                self.ui.write(pfx)
            if summary:
                ph = patchheader(self.join(patchname), self.plainmode)
                if ph.message:
                    msg = ph.message[0]
                else:
                    msg = ''

                if self.ui.formatted():
                    width = self.ui.termwidth() - len(pfx) - len(patchname) - 2
                    if width > 0:
                        msg = util.ellipsis(msg, width)
                    else:
                        msg = ''
                self.ui.write(patchname, label='qseries.' + state)
                self.ui.write(': ')
                self.ui.write(msg, label='qseries.message.' + state)
            else:
                self.ui.write(patchname, label='qseries.' + state)
            self.ui.write('\n')

        applied = set([p.name for p in self.applied])
        if length is None:
            length = len(self.series) - start
        if not missing:
            if self.ui.verbose:
                idxwidth = len(str(start + length - 1))
            for i in xrange(start, start + length):
                patch = self.series[i]
                if patch in applied:
                    char, state = 'A', 'applied'
                elif self.pushable(i)[0]:
                    char, state = 'U', 'unapplied'
                else:
                    char, state = 'G', 'guarded'
                pfx = ''
                if self.ui.verbose:
                    pfx = '%*d %s ' % (idxwidth, i, char)
                elif status and status != char:
                    continue
                displayname(pfx, patch, state)
        else:
            msng_list = []
            for root, dirs, files in os.walk(self.path):
                d = root[len(self.path) + 1:]
                for f in files:
                    fl = os.path.join(d, f)
                    if (fl not in self.series and
                        fl not in (self.statuspath, self.seriespath,
                                   self.guardspath)
                        and not fl.startswith('.')):
                        msng_list.append(fl)
            for x in sorted(msng_list):
                pfx = self.ui.verbose and ('D ') or ''
                displayname(pfx, x, 'missing')

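    # Illustrative note (added commentary): qseries() renders one line per
    # patch. With --verbose and --summary, hypothetical output produced by the
    # code above is roughly "<index> <status-char> <name>: <first message
    # line>", for example:
    #
    #   0 A first-fix.diff: fix the frobnicator
    #   1 U second-fix.diff: speed up the whatsit
    #
    # where 'A' marks applied, 'U' unapplied and 'G' guarded patches.
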
    def issaveline(self, l):
        if l.name == '.hg.patches.save.line':
            return True

    def qrepo(self, create=False):
        ui = self.baseui.copy()
        # copy back attributes set by ui.pager()
        if self.ui.pageractive and not ui.pageractive:
            ui.pageractive = self.ui.pageractive
            # internal config: ui.formatted
            ui.setconfig('ui', 'formatted',
                         self.ui.config('ui', 'formatted'), 'mqpager')
            ui.setconfig('ui', 'interactive',
                         self.ui.config('ui', 'interactive'), 'mqpager')
        if create or os.path.isdir(self.join(".hg")):
            return hg.repository(ui, path=self.path, create=create)

    def restore(self, repo, rev, delete=None, qupdate=None):
        desc = repo[rev].description().strip()
        lines = desc.splitlines()
        i = 0
        datastart = None
        series = []
        applied = []
        qpp = None
        for i, line in enumerate(lines):
            if line == 'Patch Data:':
                datastart = i + 1
            elif line.startswith('Dirstate:'):
                l = line.rstrip()
                l = l[10:].split(' ')
                qpp = [bin(x) for x in l]
            elif datastart is not None:
                l = line.rstrip()
                n, name = l.split(':', 1)
                if n:
                    applied.append(statusentry(bin(n), name))
                else:
                    series.append(l)
        if datastart is None:
            self.ui.warn(_("no saved patch data found\n"))
            return 1
        self.ui.warn(_("restoring status: %s\n") % lines[0])
        self.fullseries = series
        self.applied = applied
        self.parseseries()
        self.seriesdirty = True
        self.applieddirty = True
        heads = repo.changelog.heads()
        if delete:
            if rev not in heads:
                self.ui.warn(_("save entry has children, leaving it alone\n"))
            else:
                self.ui.warn(_("removing save entry %s\n") % short(rev))
                pp = repo.dirstate.parents()
                if rev in pp:
                    update = True
                else:
                    update = False
                strip(self.ui, repo, [rev], update=update, backup=False)
        if qpp:
            self.ui.warn(_("saved queue repository parents: %s %s\n") %
                         (short(qpp[0]), short(qpp[1])))
            if qupdate:
                self.ui.status(_("updating queue directory\n"))
                r = self.qrepo()
                if not r:
                    self.ui.warn(_("unable to load queue repository\n"))
                    return 1
                hg.clean(r, qpp[0])

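    # Illustrative note (added commentary): restore() parses the description
    # of a "save entry" changeset created by save() below. A hypothetical
    # description it can parse looks like:
    #
    #   hg patches saved state
    #   Dirstate: <hex-parent1> <hex-parent2>
    #
    #   Patch Data:
    #   <node-hex>:applied-patch.diff
    #   :unapplied-patch.diff
    #
    # Lines after "Patch Data:" that begin with a node hash become applied
    # entries; the remaining ":name" lines rebuild the series file.
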
    def save(self, repo, msg=None):
        if not self.applied:
            self.ui.warn(_("save: no patches applied, exiting\n"))
            return 1
        if self.issaveline(self.applied[-1]):
            self.ui.warn(_("status is already saved\n"))
            return 1

        if not msg:
            msg = _("hg patches saved state")
        else:
            msg = "hg patches: " + msg.rstrip('\r\n')
        r = self.qrepo()
        if r:
            pp = r.dirstate.parents()
            msg += "\nDirstate: %s %s" % (hex(pp[0]), hex(pp[1]))
        msg += "\n\nPatch Data:\n"
        msg += ''.join('%s\n' % x for x in self.applied)
        msg += ''.join(':%s\n' % x for x in self.fullseries)
        n = repo.commit(msg, force=True)
        if not n:
            self.ui.warn(_("repo commit failed\n"))
            return 1
        self.applied.append(statusentry(n, '.hg.patches.save.line'))
        self.applieddirty = True
        self.removeundo(repo)

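    # Illustrative usage (added commentary): save() backs the deprecated qsave
    # command, which records the queue state in a dummy commit so it can later
    # be recovered by restore() via qrestore, e.g.:
    #
    #   $ hg qsave -m "before risky rework"
    #   $ hg qrestore <rev-of-save-entry>
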
    def fullseriesend(self):
        if self.applied:
            p = self.applied[-1].name
            end = self.findseries(p)
            if end is None:
                return len(self.fullseries)
            return end + 1
        return 0

    def seriesend(self, all_patches=False):
        """If all_patches is False, return the index of the next pushable patch
        in the series, or the series length. If all_patches is True, return the
        index of the first patch past the last applied one.
        """
        end = 0
        def nextpatch(start):
            if all_patches or start >= len(self.series):
                return start
            for i in xrange(start, len(self.series)):
                p, reason = self.pushable(i)
                if p:
                    return i
                self.explainpushable(i)
            return len(self.series)
        if self.applied:
            p = self.applied[-1].name
            try:
                end = self.series.index(p)
            except ValueError:
                return 0
            return nextpatch(end + 1)
        return nextpatch(end)

    def appliedname(self, index):
        pname = self.applied[index].name
        if not self.ui.verbose:
            p = pname
        else:
            p = str(self.series.index(pname)) + " " + pname
        return p

    def qimport(self, repo, files, patchname=None, rev=None, existing=None,
                force=None, git=False):
        def checkseries(patchname):
            if patchname in self.series:
                raise error.Abort(_('patch %s is already in the series file')
                                  % patchname)

        if rev:
            if files:
                raise error.Abort(_('option "-r" not valid when importing '
                                    'files'))
            rev = scmutil.revrange(repo, rev)
            rev.sort(reverse=True)
        elif not files:
            raise error.Abort(_('no files or revisions specified'))
        if (len(files) > 1 or len(rev) > 1) and patchname:
            raise error.Abort(_('option "-n" not valid when importing multiple '
                                'patches'))
        imported = []
        if rev:
            # If mq patches are applied, we can only import revisions
            # that form a linear path to qbase.
            # Otherwise, they should form a linear path to a head.
            heads = repo.changelog.heads(repo.changelog.node(rev.first()))
            if len(heads) > 1:
                raise error.Abort(_('revision %d is the root of more than one '
                                    'branch') % rev.last())
            if self.applied:
                base = repo.changelog.node(rev.first())
                if base in [n.node for n in self.applied]:
                    raise error.Abort(_('revision %d is already managed')
                                      % rev.first())
                if heads != [self.applied[-1].node]:
                    raise error.Abort(_('revision %d is not the parent of '
                                        'the queue') % rev.first())
                base = repo.changelog.rev(self.applied[0].node)
                lastparent = repo.changelog.parentrevs(base)[0]
            else:
                if heads != [repo.changelog.node(rev.first())]:
                    raise error.Abort(_('revision %d has unmanaged children')
                                      % rev.first())
                lastparent = None

            diffopts = self.diffopts({'git': git})
            with repo.transaction('qimport') as tr:
                for r in rev:
                    if not repo[r].mutable():
                        raise error.Abort(_('revision %d is not mutable') % r,
                                          hint=_("see 'hg help phases' "
                                                 'for details'))
                    p1, p2 = repo.changelog.parentrevs(r)
                    n = repo.changelog.node(r)
                    if p2 != nullrev:
                        raise error.Abort(_('cannot import merge revision %d')
                                          % r)
                    if lastparent and lastparent != r:
                        raise error.Abort(_('revision %d is not the parent of '
                                            '%d')
                                          % (r, lastparent))
                    lastparent = p1

                    if not patchname:
                        patchname = self.makepatchname(
                            repo[r].description().split('\n', 1)[0],
                            '%d.diff' % r)
                    checkseries(patchname)
                    self.checkpatchname(patchname, force)
                    self.fullseries.insert(0, patchname)

                    patchf = self.opener(patchname, "w")
                    cmdutil.export(repo, [n], fp=patchf, opts=diffopts)
                    patchf.close()

                    se = statusentry(n, patchname)
                    self.applied.insert(0, se)

                    self.added.append(patchname)
                    imported.append(patchname)
                    patchname = None
                    if rev and repo.ui.configbool('mq', 'secret'):
                        # if we added anything with --rev, move the secret root
                        phases.retractboundary(repo, tr, phases.secret, [n])
                self.parseseries()
                self.applieddirty = True
                self.seriesdirty = True

        for i, filename in enumerate(files):
            if existing:
                if filename == '-':
                    raise error.Abort(_('-e is incompatible with import from -')
                                      )
                filename = normname(filename)
                self.checkreservedname(filename)
                if util.url(filename).islocal():
                    originpath = self.join(filename)
                    if not os.path.isfile(originpath):
                        raise error.Abort(
                            _("patch %s does not exist") % filename)

                if patchname:
                    self.checkpatchname(patchname, force)

                    self.ui.write(_('renaming %s to %s\n')
                                  % (filename, patchname))
                    util.rename(originpath, self.join(patchname))
                else:
                    patchname = filename

            else:
                if filename == '-' and not patchname:
                    raise error.Abort(_('need --name to import a patch from -'))
                elif not patchname:
                    patchname = normname(os.path.basename(filename.rstrip('/')))
                self.checkpatchname(patchname, force)
                try:
                    if filename == '-':
                        text = self.ui.fin.read()
                    else:
                        fp = hg.openpath(self.ui, filename)
                        text = fp.read()
                        fp.close()
                except (OSError, IOError):
                    raise error.Abort(_("unable to read file %s") % filename)
                patchf = self.opener(patchname, "w")
                patchf.write(text)
                patchf.close()
            if not force:
                checkseries(patchname)
            if patchname not in self.series:
                index = self.fullseriesend() + i
                self.fullseries[index:index] = [patchname]
                self.parseseries()
                self.seriesdirty = True
            self.ui.warn(_("adding %s to series file\n") % patchname)
            self.added.append(patchname)
            imported.append(patchname)
            patchname = None

        self.removeundo(repo)
        return imported

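    # Illustrative usage (added commentary): qimport() above handles both code
    # paths of the qimport command - turning existing revisions into mq
    # patches with --rev, and copying patch files (or stdin) into the queue:
    #
    #   $ hg qimport -r . -n current.diff   # manage the working parent
    #   $ hg qimport ../fix.patch           # copy a patch file into the queue
    #   $ hg qimport -n fix.diff -          # read the patch from stdin
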
def fixkeepchangesopts(ui, opts):
    if (not ui.configbool('mq', 'keepchanges') or opts.get('force')
        or opts.get('exact')):
        return opts
    opts = dict(opts)
    opts['keep_changes'] = True
    return opts

2252 @command("qdelete|qremove|qrm",
2253 @command("qdelete|qremove|qrm",
2253 [('k', 'keep', None, _('keep patch file')),
2254 [('k', 'keep', None, _('keep patch file')),
2254 ('r', 'rev', [],
2255 ('r', 'rev', [],
2255 _('stop managing a revision (DEPRECATED)'), _('REV'))],
2256 _('stop managing a revision (DEPRECATED)'), _('REV'))],
2256 _('hg qdelete [-k] [PATCH]...'))
2257 _('hg qdelete [-k] [PATCH]...'))
2257 def delete(ui, repo, *patches, **opts):
2258 def delete(ui, repo, *patches, **opts):
2258 """remove patches from queue
2259 """remove patches from queue
2259
2260
2260 The patches must not be applied, and at least one patch is required. Exact
2261 The patches must not be applied, and at least one patch is required. Exact
2261 patch identifiers must be given. With -k/--keep, the patch files are
2262 patch identifiers must be given. With -k/--keep, the patch files are
2262 preserved in the patch directory.
2263 preserved in the patch directory.
2263
2264
2264 To stop managing a patch and move it into permanent history,
2265 To stop managing a patch and move it into permanent history,
2265 use the :hg:`qfinish` command."""
2266 use the :hg:`qfinish` command."""
2266 q = repo.mq
2267 q = repo.mq
2267 q.delete(repo, patches, opts)
2268 q.delete(repo, patches, opts)
2268 q.savedirty()
2269 q.savedirty()
2269 return 0
2270 return 0
2270
2271
2271 @command("qapplied",
2272 @command("qapplied",
2272 [('1', 'last', None, _('show only the preceding applied patch'))
2273 [('1', 'last', None, _('show only the preceding applied patch'))
2273 ] + seriesopts,
2274 ] + seriesopts,
2274 _('hg qapplied [-1] [-s] [PATCH]'))
2275 _('hg qapplied [-1] [-s] [PATCH]'))
2275 def applied(ui, repo, patch=None, **opts):
2276 def applied(ui, repo, patch=None, **opts):
2276 """print the patches already applied
2277 """print the patches already applied
2277
2278
2278 Returns 0 on success."""
2279 Returns 0 on success."""
2279
2280
2280 q = repo.mq
2281 q = repo.mq
2281 opts = pycompat.byteskwargs(opts)
2282 opts = pycompat.byteskwargs(opts)
2282
2283
2283 if patch:
2284 if patch:
2284 if patch not in q.series:
2285 if patch not in q.series:
2285 raise error.Abort(_("patch %s is not in series file") % patch)
2286 raise error.Abort(_("patch %s is not in series file") % patch)
2286 end = q.series.index(patch) + 1
2287 end = q.series.index(patch) + 1
2287 else:
2288 else:
2288 end = q.seriesend(True)
2289 end = q.seriesend(True)
2289
2290
2290 if opts.get('last') and not end:
2291 if opts.get('last') and not end:
2291 ui.write(_("no patches applied\n"))
2292 ui.write(_("no patches applied\n"))
2292 return 1
2293 return 1
2293 elif opts.get('last') and end == 1:
2294 elif opts.get('last') and end == 1:
2294 ui.write(_("only one patch applied\n"))
2295 ui.write(_("only one patch applied\n"))
2295 return 1
2296 return 1
2296 elif opts.get('last'):
2297 elif opts.get('last'):
2297 start = end - 2
2298 start = end - 2
2298 end = 1
2299 end = 1
2299 else:
2300 else:
2300 start = 0
2301 start = 0
2301
2302
2302 q.qseries(repo, length=end, start=start, status='A',
2303 q.qseries(repo, length=end, start=start, status='A',
2303 summary=opts.get('summary'))
2304 summary=opts.get('summary'))
2304
2305
2305
2306
2306 @command("qunapplied",
2307 @command("qunapplied",
2307 [('1', 'first', None, _('show only the first patch'))] + seriesopts,
2308 [('1', 'first', None, _('show only the first patch'))] + seriesopts,
2308 _('hg qunapplied [-1] [-s] [PATCH]'))
2309 _('hg qunapplied [-1] [-s] [PATCH]'))
2309 def unapplied(ui, repo, patch=None, **opts):
2310 def unapplied(ui, repo, patch=None, **opts):
2310 """print the patches not yet applied
2311 """print the patches not yet applied
2311
2312
2312 Returns 0 on success."""
2313 Returns 0 on success."""
2313
2314
2314 q = repo.mq
2315 q = repo.mq
2315 opts = pycompat.byteskwargs(opts)
2316 opts = pycompat.byteskwargs(opts)
2316 if patch:
2317 if patch:
2317 if patch not in q.series:
2318 if patch not in q.series:
2318 raise error.Abort(_("patch %s is not in series file") % patch)
2319 raise error.Abort(_("patch %s is not in series file") % patch)
2319 start = q.series.index(patch) + 1
2320 start = q.series.index(patch) + 1
2320 else:
2321 else:
2321 start = q.seriesend(True)
2322 start = q.seriesend(True)
2322
2323
2323 if start == len(q.series) and opts.get('first'):
2324 if start == len(q.series) and opts.get('first'):
2324 ui.write(_("all patches applied\n"))
2325 ui.write(_("all patches applied\n"))
2325 return 1
2326 return 1
2326
2327
2327 if opts.get('first'):
2328 if opts.get('first'):
2328 length = 1
2329 length = 1
2329 else:
2330 else:
2330 length = None
2331 length = None
2331 q.qseries(repo, start=start, length=length, status='U',
2332 q.qseries(repo, start=start, length=length, status='U',
2332 summary=opts.get('summary'))
2333 summary=opts.get('summary'))
2333
2334
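# Illustrative usage (added commentary): the two commands above only differ in
# which end of the series they list:
#
#   $ hg qapplied            # patches already applied, oldest first
#   $ hg qunapplied --first  # just the next patch that qpush would apply
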
2334 @command("qimport",
2335 @command("qimport",
2335 [('e', 'existing', None, _('import file in patch directory')),
2336 [('e', 'existing', None, _('import file in patch directory')),
2336 ('n', 'name', '',
2337 ('n', 'name', '',
2337 _('name of patch file'), _('NAME')),
2338 _('name of patch file'), _('NAME')),
2338 ('f', 'force', None, _('overwrite existing files')),
2339 ('f', 'force', None, _('overwrite existing files')),
2339 ('r', 'rev', [],
2340 ('r', 'rev', [],
2340 _('place existing revisions under mq control'), _('REV')),
2341 _('place existing revisions under mq control'), _('REV')),
2341 ('g', 'git', None, _('use git extended diff format')),
2342 ('g', 'git', None, _('use git extended diff format')),
2342 ('P', 'push', None, _('qpush after importing'))],
2343 ('P', 'push', None, _('qpush after importing'))],
2343 _('hg qimport [-e] [-n NAME] [-f] [-g] [-P] [-r REV]... [FILE]...'))
2344 _('hg qimport [-e] [-n NAME] [-f] [-g] [-P] [-r REV]... [FILE]...'))
2344 def qimport(ui, repo, *filename, **opts):
2345 def qimport(ui, repo, *filename, **opts):
2345 """import a patch or existing changeset
2346 """import a patch or existing changeset
2346
2347
2347 The patch is inserted into the series after the last applied
2348 The patch is inserted into the series after the last applied
2348 patch. If no patches have been applied, qimport prepends the patch
2349 patch. If no patches have been applied, qimport prepends the patch
2349 to the series.
2350 to the series.
2350
2351
2351 The patch will have the same name as its source file unless you
2352 The patch will have the same name as its source file unless you
2352 give it a new one with -n/--name.
2353 give it a new one with -n/--name.
2353
2354
2354 You can register an existing patch inside the patch directory with
2355 You can register an existing patch inside the patch directory with
2355 the -e/--existing flag.
2356 the -e/--existing flag.
2356
2357
2357 With -f/--force, an existing patch of the same name will be
2358 With -f/--force, an existing patch of the same name will be
2358 overwritten.
2359 overwritten.
2359
2360
2360 An existing changeset may be placed under mq control with -r/--rev
2361 An existing changeset may be placed under mq control with -r/--rev
2361 (e.g. qimport --rev . -n patch will place the current revision
2362 (e.g. qimport --rev . -n patch will place the current revision
2362 under mq control). With -g/--git, patches imported with --rev will
2363 under mq control). With -g/--git, patches imported with --rev will
2363 use the git diff format. See the diffs help topic for information
2364 use the git diff format. See the diffs help topic for information
2364 on why this is important for preserving rename/copy information
2365 on why this is important for preserving rename/copy information
2365 and permission changes. Use :hg:`qfinish` to remove changesets
2366 and permission changes. Use :hg:`qfinish` to remove changesets
2366 from mq control.
2367 from mq control.
2367
2368
2368 To import a patch from standard input, pass - as the patch file.
2369 To import a patch from standard input, pass - as the patch file.
2369 When importing from standard input, a patch name must be specified
2370 When importing from standard input, a patch name must be specified
2370 using the --name flag.
2371 using the --name flag.
2371
2372
2372 To import an existing patch while renaming it::
2373 To import an existing patch while renaming it::
2373
2374
2374 hg qimport -e existing-patch -n new-name
2375 hg qimport -e existing-patch -n new-name
2375
2376
2376 Returns 0 if import succeeded.
2377 Returns 0 if import succeeded.
2377 """
2378 """
2378 opts = pycompat.byteskwargs(opts)
2379 opts = pycompat.byteskwargs(opts)
2379 with repo.lock(): # cause this may move phase
2380 with repo.lock(): # cause this may move phase
2380 q = repo.mq
2381 q = repo.mq
2381 try:
2382 try:
2382 imported = q.qimport(
2383 imported = q.qimport(
2383 repo, filename, patchname=opts.get('name'),
2384 repo, filename, patchname=opts.get('name'),
2384 existing=opts.get('existing'), force=opts.get('force'),
2385 existing=opts.get('existing'), force=opts.get('force'),
2385 rev=opts.get('rev'), git=opts.get('git'))
2386 rev=opts.get('rev'), git=opts.get('git'))
2386 finally:
2387 finally:
2387 q.savedirty()
2388 q.savedirty()
2388
2389
2389 if imported and opts.get('push') and not opts.get('rev'):
2390 if imported and opts.get('push') and not opts.get('rev'):
2390 return q.push(repo, imported[-1])
2391 return q.push(repo, imported[-1])
2391 return 0
2392 return 0
2392
2393
def qinit(ui, repo, create):
    """initialize a new queue repository

    This command also creates a series file for ordering patches, and
    an mq-specific .hgignore file in the queue repository, to exclude
    the status and guards files (these contain mostly transient state).

    Returns 0 if initialization succeeded."""
    q = repo.mq
    r = q.init(repo, create)
    q.savedirty()
    if r:
        if not os.path.exists(r.wjoin('.hgignore')):
            fp = r.wvfs('.hgignore', 'w')
            fp.write('^\\.hg\n')
            fp.write('^\\.mq\n')
            fp.write('syntax: glob\n')
            fp.write('status\n')
            fp.write('guards\n')
            fp.close()
        if not os.path.exists(r.wjoin('series')):
            r.wvfs('series', 'w').close()
        r[None].add(['.hgignore', 'series'])
        commands.add(ui, r)
    return 0

2419 @command("^qinit",
2420 @command("^qinit",
2420 [('c', 'create-repo', None, _('create queue repository'))],
2421 [('c', 'create-repo', None, _('create queue repository'))],
2421 _('hg qinit [-c]'))
2422 _('hg qinit [-c]'))
2422 def init(ui, repo, **opts):
2423 def init(ui, repo, **opts):
2423 """init a new queue repository (DEPRECATED)
2424 """init a new queue repository (DEPRECATED)
2424
2425
2425 The queue repository is unversioned by default. If
2426 The queue repository is unversioned by default. If
2426 -c/--create-repo is specified, qinit will create a separate nested
2427 -c/--create-repo is specified, qinit will create a separate nested
2427 repository for patches (qinit -c may also be run later to convert
2428 repository for patches (qinit -c may also be run later to convert
2428 an unversioned patch repository into a versioned one). You can use
2429 an unversioned patch repository into a versioned one). You can use
2429 qcommit to commit changes to this queue repository.
2430 qcommit to commit changes to this queue repository.
2430
2431
2431 This command is deprecated. Without -c, it's implied by other relevant
2432 This command is deprecated. Without -c, it's implied by other relevant
2432 commands. With -c, use :hg:`init --mq` instead."""
2433 commands. With -c, use :hg:`init --mq` instead."""
2433 return qinit(ui, repo, create=opts.get(r'create_repo'))
2434 return qinit(ui, repo, create=opts.get(r'create_repo'))
2434
2435
2435 @command("qclone",
2436 @command("qclone",
2436 [('', 'pull', None, _('use pull protocol to copy metadata')),
2437 [('', 'pull', None, _('use pull protocol to copy metadata')),
2437 ('U', 'noupdate', None,
2438 ('U', 'noupdate', None,
2438 _('do not update the new working directories')),
2439 _('do not update the new working directories')),
2439 ('', 'uncompressed', None,
2440 ('', 'uncompressed', None,
2440 _('use uncompressed transfer (fast over LAN)')),
2441 _('use uncompressed transfer (fast over LAN)')),
2441 ('p', 'patches', '',
2442 ('p', 'patches', '',
2442 _('location of source patch repository'), _('REPO')),
2443 _('location of source patch repository'), _('REPO')),
2443 ] + cmdutil.remoteopts,
2444 ] + cmdutil.remoteopts,
2444 _('hg qclone [OPTION]... SOURCE [DEST]'),
2445 _('hg qclone [OPTION]... SOURCE [DEST]'),
2445 norepo=True)
2446 norepo=True)
2446 def clone(ui, source, dest=None, **opts):
2447 def clone(ui, source, dest=None, **opts):
2447 '''clone main and patch repository at same time
2448 '''clone main and patch repository at same time
2448
2449
2449 If source is local, destination will have no patches applied. If
2450 If source is local, destination will have no patches applied. If
2450 source is remote, this command can not check if patches are
2451 source is remote, this command can not check if patches are
2451 applied in source, so cannot guarantee that patches are not
2452 applied in source, so cannot guarantee that patches are not
2452 applied in destination. If you clone remote repository, be sure
2453 applied in destination. If you clone remote repository, be sure
2453 before that it has no patches applied.
2454 before that it has no patches applied.
2454
2455
2455 Source patch repository is looked for in <src>/.hg/patches by
2456 Source patch repository is looked for in <src>/.hg/patches by
2456 default. Use -p <url> to change.
2457 default. Use -p <url> to change.
2457
2458
2458 The patch directory must be a nested Mercurial repository, as
2459 The patch directory must be a nested Mercurial repository, as
2459 would be created by :hg:`init --mq`.
2460 would be created by :hg:`init --mq`.
2460
2461
2461 Return 0 on success.
2462 Return 0 on success.
2462 '''
2463 '''
2463 opts = pycompat.byteskwargs(opts)
2464 opts = pycompat.byteskwargs(opts)
2464 def patchdir(repo):
2465 def patchdir(repo):
2465 """compute a patch repo url from a repo object"""
2466 """compute a patch repo url from a repo object"""
2466 url = repo.url()
2467 url = repo.url()
2467 if url.endswith('/'):
2468 if url.endswith('/'):
2468 url = url[:-1]
2469 url = url[:-1]
2469 return url + '/.hg/patches'
2470 return url + '/.hg/patches'
2470
2471
2471 # main repo (destination and sources)
2472 # main repo (destination and sources)
2472 if dest is None:
2473 if dest is None:
2473 dest = hg.defaultdest(source)
2474 dest = hg.defaultdest(source)
2474 sr = hg.peer(ui, opts, ui.expandpath(source))
2475 sr = hg.peer(ui, opts, ui.expandpath(source))
2475
2476
2476 # patches repo (source only)
2477 # patches repo (source only)
2477 if opts.get('patches'):
2478 if opts.get('patches'):
2478 patchespath = ui.expandpath(opts.get('patches'))
2479 patchespath = ui.expandpath(opts.get('patches'))
2479 else:
2480 else:
2480 patchespath = patchdir(sr)
2481 patchespath = patchdir(sr)
2481 try:
2482 try:
2482 hg.peer(ui, opts, patchespath)
2483 hg.peer(ui, opts, patchespath)
2483 except error.RepoError:
2484 except error.RepoError:
2484 raise error.Abort(_('versioned patch repository not found'
2485 raise error.Abort(_('versioned patch repository not found'
2485 ' (see init --mq)'))
2486 ' (see init --mq)'))
2486 qbase, destrev = None, None
2487 qbase, destrev = None, None
2487 if sr.local():
2488 if sr.local():
2488 repo = sr.local()
2489 repo = sr.local()
2489 if repo.mq.applied and repo[qbase].phase() != phases.secret:
2490 if repo.mq.applied and repo[qbase].phase() != phases.secret:
2490 qbase = repo.mq.applied[0].node
2491 qbase = repo.mq.applied[0].node
2491 if not hg.islocal(dest):
2492 if not hg.islocal(dest):
2492 heads = set(repo.heads())
2493 heads = set(repo.heads())
2493 destrev = list(heads.difference(repo.heads(qbase)))
2494 destrev = list(heads.difference(repo.heads(qbase)))
2494 destrev.append(repo.changelog.parents(qbase)[0])
2495 destrev.append(repo.changelog.parents(qbase)[0])
2495 elif sr.capable('lookup'):
2496 elif sr.capable('lookup'):
2496 try:
2497 try:
2497 qbase = sr.lookup('qbase')
2498 qbase = sr.lookup('qbase')
2498 except error.RepoError:
2499 except error.RepoError:
2499 pass
2500 pass
2500
2501
2501 ui.note(_('cloning main repository\n'))
2502 ui.note(_('cloning main repository\n'))
2502 sr, dr = hg.clone(ui, opts, sr.url(), dest,
2503 sr, dr = hg.clone(ui, opts, sr.url(), dest,
2503 pull=opts.get('pull'),
2504 pull=opts.get('pull'),
2504 rev=destrev,
2505 rev=destrev,
2505 update=False,
2506 update=False,
2506 stream=opts.get('uncompressed'))
2507 stream=opts.get('uncompressed'))
2507
2508
2508 ui.note(_('cloning patch repository\n'))
2509 ui.note(_('cloning patch repository\n'))
2509 hg.clone(ui, opts, opts.get('patches') or patchdir(sr), patchdir(dr),
2510 hg.clone(ui, opts, opts.get('patches') or patchdir(sr), patchdir(dr),
2510 pull=opts.get('pull'), update=not opts.get('noupdate'),
2511 pull=opts.get('pull'), update=not opts.get('noupdate'),
2511 stream=opts.get('uncompressed'))
2512 stream=opts.get('uncompressed'))
2512
2513
2513 if dr.local():
2514 if dr.local():
2514 repo = dr.local()
2515 repo = dr.local()
2515 if qbase:
2516 if qbase:
2516 ui.note(_('stripping applied patches from destination '
2517 ui.note(_('stripping applied patches from destination '
2517 'repository\n'))
2518 'repository\n'))
2518 strip(ui, repo, [qbase], update=False, backup=None)
2519 strip(ui, repo, [qbase], update=False, backup=None)
2519 if not opts.get('noupdate'):
2520 if not opts.get('noupdate'):
2520 ui.note(_('updating destination repository\n'))
2521 ui.note(_('updating destination repository\n'))
2521 hg.update(repo, repo.changelog.tip())
2522 hg.update(repo, repo.changelog.tip())
2522
2523
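# A minimal qclone session, sketched from the docstring above; the URL and
# directory name are hypothetical.
#
#   hg qclone http://example.com/project project-mq
#   cd project-mq
#   hg qpush -a                     # apply the cloned patch series
#
# Use -p/--patches when the source patch repository is not in the default
# <src>/.hg/patches location.
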
2523 @command("qcommit|qci",
2524 @command("qcommit|qci",
2524 commands.table["^commit|ci"][1],
2525 commands.table["^commit|ci"][1],
2525 _('hg qcommit [OPTION]... [FILE]...'),
2526 _('hg qcommit [OPTION]... [FILE]...'),
2526 inferrepo=True)
2527 inferrepo=True)
2527 def commit(ui, repo, *pats, **opts):
2528 def commit(ui, repo, *pats, **opts):
2528 """commit changes in the queue repository (DEPRECATED)
2529 """commit changes in the queue repository (DEPRECATED)
2529
2530
2530 This command is deprecated; use :hg:`commit --mq` instead."""
2531 This command is deprecated; use :hg:`commit --mq` instead."""
2531 q = repo.mq
2532 q = repo.mq
2532 r = q.qrepo()
2533 r = q.qrepo()
2533 if not r:
2534 if not r:
2534 raise error.Abort('no queue repository')
2535 raise error.Abort('no queue repository')
2535 commands.commit(r.ui, r, *pats, **opts)
2536 commands.commit(r.ui, r, *pats, **opts)
2536
2537
2537 @command("qseries",
2538 @command("qseries",
2538 [('m', 'missing', None, _('print patches not in series')),
2539 [('m', 'missing', None, _('print patches not in series')),
2539 ] + seriesopts,
2540 ] + seriesopts,
2540 _('hg qseries [-ms]'))
2541 _('hg qseries [-ms]'))
2541 def series(ui, repo, **opts):
2542 def series(ui, repo, **opts):
2542 """print the entire series file
2543 """print the entire series file
2543
2544
2544 Returns 0 on success."""
2545 Returns 0 on success."""
2545 repo.mq.qseries(repo, missing=opts.get(r'missing'),
2546 repo.mq.qseries(repo, missing=opts.get(r'missing'),
2546 summary=opts.get(r'summary'))
2547 summary=opts.get(r'summary'))
2547 return 0
2548 return 0
2548
2549
2549 @command("qtop", seriesopts, _('hg qtop [-s]'))
2550 @command("qtop", seriesopts, _('hg qtop [-s]'))
2550 def top(ui, repo, **opts):
2551 def top(ui, repo, **opts):
2551 """print the name of the current patch
2552 """print the name of the current patch
2552
2553
2553 Returns 0 on success."""
2554 Returns 0 on success."""
2554 q = repo.mq
2555 q = repo.mq
2555 if q.applied:
2556 if q.applied:
2556 t = q.seriesend(True)
2557 t = q.seriesend(True)
2557 else:
2558 else:
2558 t = 0
2559 t = 0
2559
2560
2560 if t:
2561 if t:
2561 q.qseries(repo, start=t - 1, length=1, status='A',
2562 q.qseries(repo, start=t - 1, length=1, status='A',
2562 summary=opts.get(r'summary'))
2563 summary=opts.get(r'summary'))
2563 else:
2564 else:
2564 ui.write(_("no patches applied\n"))
2565 ui.write(_("no patches applied\n"))
2565 return 1
2566 return 1
2566
2567
2567 @command("qnext", seriesopts, _('hg qnext [-s]'))
2568 @command("qnext", seriesopts, _('hg qnext [-s]'))
2568 def next(ui, repo, **opts):
2569 def next(ui, repo, **opts):
2569 """print the name of the next pushable patch
2570 """print the name of the next pushable patch
2570
2571
2571 Returns 0 on success."""
2572 Returns 0 on success."""
2572 q = repo.mq
2573 q = repo.mq
2573 end = q.seriesend()
2574 end = q.seriesend()
2574 if end == len(q.series):
2575 if end == len(q.series):
2575 ui.write(_("all patches applied\n"))
2576 ui.write(_("all patches applied\n"))
2576 return 1
2577 return 1
2577 q.qseries(repo, start=end, length=1, summary=opts.get(r'summary'))
2578 q.qseries(repo, start=end, length=1, summary=opts.get(r'summary'))
2578
2579
2579 @command("qprev", seriesopts, _('hg qprev [-s]'))
2580 @command("qprev", seriesopts, _('hg qprev [-s]'))
2580 def prev(ui, repo, **opts):
2581 def prev(ui, repo, **opts):
2581 """print the name of the preceding applied patch
2582 """print the name of the preceding applied patch
2582
2583
2583 Returns 0 on success."""
2584 Returns 0 on success."""
2584 q = repo.mq
2585 q = repo.mq
2585 l = len(q.applied)
2586 l = len(q.applied)
2586 if l == 1:
2587 if l == 1:
2587 ui.write(_("only one patch applied\n"))
2588 ui.write(_("only one patch applied\n"))
2588 return 1
2589 return 1
2589 if not l:
2590 if not l:
2590 ui.write(_("no patches applied\n"))
2591 ui.write(_("no patches applied\n"))
2591 return 1
2592 return 1
2592 idx = q.series.index(q.applied[-2].name)
2593 idx = q.series.index(q.applied[-2].name)
2593 q.qseries(repo, start=idx, length=1, status='A',
2594 q.qseries(repo, start=idx, length=1, status='A',
2594 summary=opts.get(r'summary'))
2595 summary=opts.get(r'summary'))
2595
2596
2596 def setupheaderopts(ui, opts):
2597 def setupheaderopts(ui, opts):
2597 if not opts.get('user') and opts.get('currentuser'):
2598 if not opts.get('user') and opts.get('currentuser'):
2598 opts['user'] = ui.username()
2599 opts['user'] = ui.username()
2599 if not opts.get('date') and opts.get('currentdate'):
2600 if not opts.get('date') and opts.get('currentdate'):
2600 opts['date'] = "%d %d" % util.makedate()
2601 opts['date'] = "%d %d" % util.makedate()
2601
2602
2602 @command("^qnew",
2603 @command("^qnew",
2603 [('e', 'edit', None, _('invoke editor on commit messages')),
2604 [('e', 'edit', None, _('invoke editor on commit messages')),
2604 ('f', 'force', None, _('import uncommitted changes (DEPRECATED)')),
2605 ('f', 'force', None, _('import uncommitted changes (DEPRECATED)')),
2605 ('g', 'git', None, _('use git extended diff format')),
2606 ('g', 'git', None, _('use git extended diff format')),
2606 ('U', 'currentuser', None, _('add "From: <current user>" to patch')),
2607 ('U', 'currentuser', None, _('add "From: <current user>" to patch')),
2607 ('u', 'user', '',
2608 ('u', 'user', '',
2608 _('add "From: <USER>" to patch'), _('USER')),
2609 _('add "From: <USER>" to patch'), _('USER')),
2609 ('D', 'currentdate', None, _('add "Date: <current date>" to patch')),
2610 ('D', 'currentdate', None, _('add "Date: <current date>" to patch')),
2610 ('d', 'date', '',
2611 ('d', 'date', '',
2611 _('add "Date: <DATE>" to patch'), _('DATE'))
2612 _('add "Date: <DATE>" to patch'), _('DATE'))
2612 ] + cmdutil.walkopts + cmdutil.commitopts,
2613 ] + cmdutil.walkopts + cmdutil.commitopts,
2613 _('hg qnew [-e] [-m TEXT] [-l FILE] PATCH [FILE]...'),
2614 _('hg qnew [-e] [-m TEXT] [-l FILE] PATCH [FILE]...'),
2614 inferrepo=True)
2615 inferrepo=True)
2615 def new(ui, repo, patch, *args, **opts):
2616 def new(ui, repo, patch, *args, **opts):
2616 """create a new patch
2617 """create a new patch
2617
2618
2618 qnew creates a new patch on top of the currently-applied patch (if
2619 qnew creates a new patch on top of the currently-applied patch (if
2619 any). The patch will be initialized with any outstanding changes
2620 any). The patch will be initialized with any outstanding changes
2620 in the working directory. You may also use -I/--include,
2621 in the working directory. You may also use -I/--include,
2621 -X/--exclude, and/or a list of files after the patch name to add
2622 -X/--exclude, and/or a list of files after the patch name to add
2622 only changes to matching files to the new patch, leaving the rest
2623 only changes to matching files to the new patch, leaving the rest
2623 as uncommitted modifications.
2624 as uncommitted modifications.
2624
2625
2625 -u/--user and -d/--date can be used to set the (given) user and
2626 -u/--user and -d/--date can be used to set the (given) user and
2626 date, respectively. -U/--currentuser and -D/--currentdate set user
2627 date, respectively. -U/--currentuser and -D/--currentdate set user
2627 to current user and date to current date.
2628 to current user and date to current date.
2628
2629
2629 -e/--edit, -m/--message or -l/--logfile set the patch header as
2630 -e/--edit, -m/--message or -l/--logfile set the patch header as
2630 well as the commit message. If none is specified, the header is
2631 well as the commit message. If none is specified, the header is
2631 empty and the commit message is '[mq]: PATCH'.
2632 empty and the commit message is '[mq]: PATCH'.
2632
2633
2633 Use the -g/--git option to keep the patch in the git extended diff
2634 Use the -g/--git option to keep the patch in the git extended diff
2634 format. Read the diffs help topic for more information on why this
2635 format. Read the diffs help topic for more information on why this
2635 is important for preserving permission changes and copy/rename
2636 is important for preserving permission changes and copy/rename
2636 information.
2637 information.
2637
2638
2638 Returns 0 on successful creation of a new patch.
2639 Returns 0 on successful creation of a new patch.
2639 """
2640 """
2640 opts = pycompat.byteskwargs(opts)
2641 opts = pycompat.byteskwargs(opts)
2641 msg = cmdutil.logmessage(ui, opts)
2642 msg = cmdutil.logmessage(ui, opts)
2642 q = repo.mq
2643 q = repo.mq
2643 opts['msg'] = msg
2644 opts['msg'] = msg
2644 setupheaderopts(ui, opts)
2645 setupheaderopts(ui, opts)
2645 q.new(repo, patch, *args, **pycompat.strkwargs(opts))
2646 q.new(repo, patch, *args, **pycompat.strkwargs(opts))
2646 q.savedirty()
2647 q.savedirty()
2647 return 0
2648 return 0
2648
2649
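# A typical qnew workflow, sketched from the docstring above; patch names
# and the commit message are hypothetical.
#
#   hg qnew -m "fix overflow in parser" fix-parser.patch
#   # ... edit files ...
#   hg qrefresh                     # fold the new edits into the patch
#   hg qnew -U -D cleanup.patch     # stamp current user and date headers
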
2649 @command("^qrefresh",
2650 @command("^qrefresh",
2650 [('e', 'edit', None, _('invoke editor on commit messages')),
2651 [('e', 'edit', None, _('invoke editor on commit messages')),
2651 ('g', 'git', None, _('use git extended diff format')),
2652 ('g', 'git', None, _('use git extended diff format')),
2652 ('s', 'short', None,
2653 ('s', 'short', None,
2653 _('refresh only files already in the patch and specified files')),
2654 _('refresh only files already in the patch and specified files')),
2654 ('U', 'currentuser', None,
2655 ('U', 'currentuser', None,
2655 _('add/update author field in patch with current user')),
2656 _('add/update author field in patch with current user')),
2656 ('u', 'user', '',
2657 ('u', 'user', '',
2657 _('add/update author field in patch with given user'), _('USER')),
2658 _('add/update author field in patch with given user'), _('USER')),
2658 ('D', 'currentdate', None,
2659 ('D', 'currentdate', None,
2659 _('add/update date field in patch with current date')),
2660 _('add/update date field in patch with current date')),
2660 ('d', 'date', '',
2661 ('d', 'date', '',
2661 _('add/update date field in patch with given date'), _('DATE'))
2662 _('add/update date field in patch with given date'), _('DATE'))
2662 ] + cmdutil.walkopts + cmdutil.commitopts,
2663 ] + cmdutil.walkopts + cmdutil.commitopts,
2663 _('hg qrefresh [-I] [-X] [-e] [-m TEXT] [-l FILE] [-s] [FILE]...'),
2664 _('hg qrefresh [-I] [-X] [-e] [-m TEXT] [-l FILE] [-s] [FILE]...'),
2664 inferrepo=True)
2665 inferrepo=True)
2665 def refresh(ui, repo, *pats, **opts):
2666 def refresh(ui, repo, *pats, **opts):
2666 """update the current patch
2667 """update the current patch
2667
2668
2668 If any file patterns are provided, the refreshed patch will
2669 If any file patterns are provided, the refreshed patch will
2669 contain only the modifications that match those patterns; the
2670 contain only the modifications that match those patterns; the
2670 remaining modifications will remain in the working directory.
2671 remaining modifications will remain in the working directory.
2671
2672
2672 If -s/--short is specified, files currently included in the patch
2673 If -s/--short is specified, files currently included in the patch
2673 will be refreshed just like matched files and remain in the patch.
2674 will be refreshed just like matched files and remain in the patch.
2674
2675
2675 If -e/--edit is specified, Mercurial will start your configured editor for
2676 If -e/--edit is specified, Mercurial will start your configured editor for
2676 you to enter a message. In case qrefresh fails, you will find a backup of
2677 you to enter a message. In case qrefresh fails, you will find a backup of
2677 your message in ``.hg/last-message.txt``.
2678 your message in ``.hg/last-message.txt``.
2678
2679
2679 hg add/remove/copy/rename work as usual, though you might want to
2680 hg add/remove/copy/rename work as usual, though you might want to
2680 use git-style patches (-g/--git or [diff] git=1) to track copies
2681 use git-style patches (-g/--git or [diff] git=1) to track copies
2681 and renames. See the diffs help topic for more information on the
2682 and renames. See the diffs help topic for more information on the
2682 git diff format.
2683 git diff format.
2683
2684
2684 Returns 0 on success.
2685 Returns 0 on success.
2685 """
2686 """
2686 opts = pycompat.byteskwargs(opts)
2687 opts = pycompat.byteskwargs(opts)
2687 q = repo.mq
2688 q = repo.mq
2688 message = cmdutil.logmessage(ui, opts)
2689 message = cmdutil.logmessage(ui, opts)
2689 setupheaderopts(ui, opts)
2690 setupheaderopts(ui, opts)
2690 with repo.wlock():
2691 with repo.wlock():
2691 ret = q.refresh(repo, pats, msg=message, **pycompat.strkwargs(opts))
2692 ret = q.refresh(repo, pats, msg=message, **pycompat.strkwargs(opts))
2692 q.savedirty()
2693 q.savedirty()
2693 return ret
2694 return ret
2694
2695
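# Sketch of partial refreshes, per the docstring above; the file name is
# hypothetical.
#
#   hg qrefresh util.py             # take only util.py changes into the patch
#   hg qrefresh -s                  # refresh only files already in the patch
#   hg qrefresh -e                  # also re-edit the patch message
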
2695 @command("^qdiff",
2696 @command("^qdiff",
2696 cmdutil.diffopts + cmdutil.diffopts2 + cmdutil.walkopts,
2697 cmdutil.diffopts + cmdutil.diffopts2 + cmdutil.walkopts,
2697 _('hg qdiff [OPTION]... [FILE]...'),
2698 _('hg qdiff [OPTION]... [FILE]...'),
2698 inferrepo=True)
2699 inferrepo=True)
2699 def diff(ui, repo, *pats, **opts):
2700 def diff(ui, repo, *pats, **opts):
2700 """diff of the current patch and subsequent modifications
2701 """diff of the current patch and subsequent modifications
2701
2702
2702 Shows a diff which includes the current patch as well as any
2703 Shows a diff which includes the current patch as well as any
2703 changes which have been made in the working directory since the
2704 changes which have been made in the working directory since the
2704 last refresh (thus showing what the current patch would become
2705 last refresh (thus showing what the current patch would become
2705 after a qrefresh).
2706 after a qrefresh).
2706
2707
2707 Use :hg:`diff` if you only want to see the changes made since the
2708 Use :hg:`diff` if you only want to see the changes made since the
2708 last qrefresh, or :hg:`export qtip` if you want to see changes
2709 last qrefresh, or :hg:`export qtip` if you want to see changes
2709 made by the current patch without including changes made since the
2710 made by the current patch without including changes made since the
2710 qrefresh.
2711 qrefresh.
2711
2712
2712 Returns 0 on success.
2713 Returns 0 on success.
2713 """
2714 """
2714 ui.pager('qdiff')
2715 ui.pager('qdiff')
2715 repo.mq.diff(repo, pats, pycompat.byteskwargs(opts))
2716 repo.mq.diff(repo, pats, pycompat.byteskwargs(opts))
2716 return 0
2717 return 0
2717
2718
2718 @command('qfold',
2719 @command('qfold',
2719 [('e', 'edit', None, _('invoke editor on commit messages')),
2720 [('e', 'edit', None, _('invoke editor on commit messages')),
2720 ('k', 'keep', None, _('keep folded patch files')),
2721 ('k', 'keep', None, _('keep folded patch files')),
2721 ] + cmdutil.commitopts,
2722 ] + cmdutil.commitopts,
2722 _('hg qfold [-e] [-k] [-m TEXT] [-l FILE] PATCH...'))
2723 _('hg qfold [-e] [-k] [-m TEXT] [-l FILE] PATCH...'))
2723 def fold(ui, repo, *files, **opts):
2724 def fold(ui, repo, *files, **opts):
2724 """fold the named patches into the current patch
2725 """fold the named patches into the current patch
2725
2726
2726 Patches must not yet be applied. Each patch will be successively
2727 Patches must not yet be applied. Each patch will be successively
2727 applied to the current patch in the order given. If all the
2728 applied to the current patch in the order given. If all the
2728 patches apply successfully, the current patch will be refreshed
2729 patches apply successfully, the current patch will be refreshed
2729 with the new cumulative patch, and the folded patches will be
2730 with the new cumulative patch, and the folded patches will be
2730 deleted. With -k/--keep, the folded patch files will not be
2731 deleted. With -k/--keep, the folded patch files will not be
2731 removed afterwards.
2732 removed afterwards.
2732
2733
2733 The header for each folded patch will be concatenated with the
2734 The header for each folded patch will be concatenated with the
2734 current patch header, separated by a line of ``* * *``.
2735 current patch header, separated by a line of ``* * *``.
2735
2736
2736 Returns 0 on success."""
2737 Returns 0 on success."""
2737 opts = pycompat.byteskwargs(opts)
2738 opts = pycompat.byteskwargs(opts)
2738 q = repo.mq
2739 q = repo.mq
2739 if not files:
2740 if not files:
2740 raise error.Abort(_('qfold requires at least one patch name'))
2741 raise error.Abort(_('qfold requires at least one patch name'))
2741 if not q.checktoppatch(repo)[0]:
2742 if not q.checktoppatch(repo)[0]:
2742 raise error.Abort(_('no patches applied'))
2743 raise error.Abort(_('no patches applied'))
2743 q.checklocalchanges(repo)
2744 q.checklocalchanges(repo)
2744
2745
2745 message = cmdutil.logmessage(ui, opts)
2746 message = cmdutil.logmessage(ui, opts)
2746
2747
2747 parent = q.lookup('qtip')
2748 parent = q.lookup('qtip')
2748 patches = []
2749 patches = []
2749 messages = []
2750 messages = []
2750 for f in files:
2751 for f in files:
2751 p = q.lookup(f)
2752 p = q.lookup(f)
2752 if p in patches or p == parent:
2753 if p in patches or p == parent:
2753 ui.warn(_('skipping already folded patch %s\n') % p)
2754 ui.warn(_('skipping already folded patch %s\n') % p)
2754 if q.isapplied(p):
2755 if q.isapplied(p):
2755 raise error.Abort(_('qfold cannot fold already applied patch %s')
2756 raise error.Abort(_('qfold cannot fold already applied patch %s')
2756 % p)
2757 % p)
2757 patches.append(p)
2758 patches.append(p)
2758
2759
2759 for p in patches:
2760 for p in patches:
2760 if not message:
2761 if not message:
2761 ph = patchheader(q.join(p), q.plainmode)
2762 ph = patchheader(q.join(p), q.plainmode)
2762 if ph.message:
2763 if ph.message:
2763 messages.append(ph.message)
2764 messages.append(ph.message)
2764 pf = q.join(p)
2765 pf = q.join(p)
2765 (patchsuccess, files, fuzz) = q.patch(repo, pf)
2766 (patchsuccess, files, fuzz) = q.patch(repo, pf)
2766 if not patchsuccess:
2767 if not patchsuccess:
2767 raise error.Abort(_('error folding patch %s') % p)
2768 raise error.Abort(_('error folding patch %s') % p)
2768
2769
2769 if not message:
2770 if not message:
2770 ph = patchheader(q.join(parent), q.plainmode)
2771 ph = patchheader(q.join(parent), q.plainmode)
2771 message = ph.message
2772 message = ph.message
2772 for msg in messages:
2773 for msg in messages:
2773 if msg:
2774 if msg:
2774 if message:
2775 if message:
2775 message.append('* * *')
2776 message.append('* * *')
2776 message.extend(msg)
2777 message.extend(msg)
2777 message = '\n'.join(message)
2778 message = '\n'.join(message)
2778
2779
2779 diffopts = q.patchopts(q.diffopts(), *patches)
2780 diffopts = q.patchopts(q.diffopts(), *patches)
2780 with repo.wlock():
2781 with repo.wlock():
2781 q.refresh(repo, msg=message, git=diffopts.git, edit=opts.get('edit'),
2782 q.refresh(repo, msg=message, git=diffopts.git, edit=opts.get('edit'),
2782 editform='mq.qfold')
2783 editform='mq.qfold')
2783 q.delete(repo, patches, opts)
2784 q.delete(repo, patches, opts)
2784 q.savedirty()
2785 q.savedirty()
2785
2786
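# Sketch of folding two unapplied patches into the current one; patch names
# are hypothetical.
#
#   hg qfold -e part2.patch part3.patch
#
# The folded headers are appended to the current patch header, separated by
# '* * *' lines, and the folded patch files are deleted unless -k/--keep is
# given.
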
2786 @command("qgoto",
2787 @command("qgoto",
2787 [('', 'keep-changes', None,
2788 [('', 'keep-changes', None,
2788 _('tolerate non-conflicting local changes')),
2789 _('tolerate non-conflicting local changes')),
2789 ('f', 'force', None, _('overwrite any local changes')),
2790 ('f', 'force', None, _('overwrite any local changes')),
2790 ('', 'no-backup', None, _('do not save backup copies of files'))],
2791 ('', 'no-backup', None, _('do not save backup copies of files'))],
2791 _('hg qgoto [OPTION]... PATCH'))
2792 _('hg qgoto [OPTION]... PATCH'))
2792 def goto(ui, repo, patch, **opts):
2793 def goto(ui, repo, patch, **opts):
2793 '''push or pop patches until named patch is at top of stack
2794 '''push or pop patches until named patch is at top of stack
2794
2795
2795 Returns 0 on success.'''
2796 Returns 0 on success.'''
2796 opts = pycompat.byteskwargs(opts)
2797 opts = pycompat.byteskwargs(opts)
2797 opts = fixkeepchangesopts(ui, opts)
2798 opts = fixkeepchangesopts(ui, opts)
2798 q = repo.mq
2799 q = repo.mq
2799 patch = q.lookup(patch)
2800 patch = q.lookup(patch)
2800 nobackup = opts.get('no_backup')
2801 nobackup = opts.get('no_backup')
2801 keepchanges = opts.get('keep_changes')
2802 keepchanges = opts.get('keep_changes')
2802 if q.isapplied(patch):
2803 if q.isapplied(patch):
2803 ret = q.pop(repo, patch, force=opts.get('force'), nobackup=nobackup,
2804 ret = q.pop(repo, patch, force=opts.get('force'), nobackup=nobackup,
2804 keepchanges=keepchanges)
2805 keepchanges=keepchanges)
2805 else:
2806 else:
2806 ret = q.push(repo, patch, force=opts.get('force'), nobackup=nobackup,
2807 ret = q.push(repo, patch, force=opts.get('force'), nobackup=nobackup,
2807 keepchanges=keepchanges)
2808 keepchanges=keepchanges)
2808 q.savedirty()
2809 q.savedirty()
2809 return ret
2810 return ret
2810
2811
2811 @command("qguard",
2812 @command("qguard",
2812 [('l', 'list', None, _('list all patches and guards')),
2813 [('l', 'list', None, _('list all patches and guards')),
2813 ('n', 'none', None, _('drop all guards'))],
2814 ('n', 'none', None, _('drop all guards'))],
2814 _('hg qguard [-l] [-n] [PATCH] [-- [+GUARD]... [-GUARD]...]'))
2815 _('hg qguard [-l] [-n] [PATCH] [-- [+GUARD]... [-GUARD]...]'))
2815 def guard(ui, repo, *args, **opts):
2816 def guard(ui, repo, *args, **opts):
2816 '''set or print guards for a patch
2817 '''set or print guards for a patch
2817
2818
2818 Guards control whether a patch can be pushed. A patch with no
2819 Guards control whether a patch can be pushed. A patch with no
2819 guards is always pushed. A patch with a positive guard ("+foo") is
2820 guards is always pushed. A patch with a positive guard ("+foo") is
2820 pushed only if the :hg:`qselect` command has activated that guard. A patch
2821 pushed only if the :hg:`qselect` command has activated that guard. A patch
2821 with a negative guard ("-foo") is never pushed if the :hg:`qselect`
2822 with a negative guard ("-foo") is never pushed if the :hg:`qselect`
2822 command has activated that guard.
2823 command has activated that guard.
2823
2824
2824 With no arguments, print the currently active guards.
2825 With no arguments, print the currently active guards.
2825 With arguments, set guards for the named patch.
2826 With arguments, set guards for the named patch.
2826
2827
2827 .. note::
2828 .. note::
2828
2829
2829 Specifying negative guards now requires '--'.
2830 Specifying negative guards now requires '--'.
2830
2831
2831 To set guards on another patch::
2832 To set guards on another patch::
2832
2833
2833 hg qguard other.patch -- +2.6.17 -stable
2834 hg qguard other.patch -- +2.6.17 -stable
2834
2835
2835 Returns 0 on success.
2836 Returns 0 on success.
2836 '''
2837 '''
2837 def status(idx):
2838 def status(idx):
2838 guards = q.seriesguards[idx] or ['unguarded']
2839 guards = q.seriesguards[idx] or ['unguarded']
2839 if q.series[idx] in applied:
2840 if q.series[idx] in applied:
2840 state = 'applied'
2841 state = 'applied'
2841 elif q.pushable(idx)[0]:
2842 elif q.pushable(idx)[0]:
2842 state = 'unapplied'
2843 state = 'unapplied'
2843 else:
2844 else:
2844 state = 'guarded'
2845 state = 'guarded'
2845 label = 'qguard.patch qguard.%s qseries.%s' % (state, state)
2846 label = 'qguard.patch qguard.%s qseries.%s' % (state, state)
2846 ui.write('%s: ' % ui.label(q.series[idx], label))
2847 ui.write('%s: ' % ui.label(q.series[idx], label))
2847
2848
2848 for i, guard in enumerate(guards):
2849 for i, guard in enumerate(guards):
2849 if guard.startswith('+'):
2850 if guard.startswith('+'):
2850 ui.write(guard, label='qguard.positive')
2851 ui.write(guard, label='qguard.positive')
2851 elif guard.startswith('-'):
2852 elif guard.startswith('-'):
2852 ui.write(guard, label='qguard.negative')
2853 ui.write(guard, label='qguard.negative')
2853 else:
2854 else:
2854 ui.write(guard, label='qguard.unguarded')
2855 ui.write(guard, label='qguard.unguarded')
2855 if i != len(guards) - 1:
2856 if i != len(guards) - 1:
2856 ui.write(' ')
2857 ui.write(' ')
2857 ui.write('\n')
2858 ui.write('\n')
2858 q = repo.mq
2859 q = repo.mq
2859 applied = set(p.name for p in q.applied)
2860 applied = set(p.name for p in q.applied)
2860 patch = None
2861 patch = None
2861 args = list(args)
2862 args = list(args)
2862 if opts.get(r'list'):
2863 if opts.get(r'list'):
2863 if args or opts.get('none'):
2864 if args or opts.get('none'):
2864 raise error.Abort(_('cannot mix -l/--list with options or '
2865 raise error.Abort(_('cannot mix -l/--list with options or '
2865 'arguments'))
2866 'arguments'))
2866 for i in xrange(len(q.series)):
2867 for i in xrange(len(q.series)):
2867 status(i)
2868 status(i)
2868 return
2869 return
2869 if not args or args[0][0:1] in '-+':
2870 if not args or args[0][0:1] in '-+':
2870 if not q.applied:
2871 if not q.applied:
2871 raise error.Abort(_('no patches applied'))
2872 raise error.Abort(_('no patches applied'))
2872 patch = q.applied[-1].name
2873 patch = q.applied[-1].name
2873 if patch is None and args[0][0:1] not in '-+':
2874 if patch is None and args[0][0:1] not in '-+':
2874 patch = args.pop(0)
2875 patch = args.pop(0)
2875 if patch is None:
2876 if patch is None:
2876 raise error.Abort(_('no patch to work with'))
2877 raise error.Abort(_('no patch to work with'))
2877 if args or opts.get('none'):
2878 if args or opts.get('none'):
2878 idx = q.findseries(patch)
2879 idx = q.findseries(patch)
2879 if idx is None:
2880 if idx is None:
2880 raise error.Abort(_('no patch named %s') % patch)
2881 raise error.Abort(_('no patch named %s') % patch)
2881 q.setguards(idx, args)
2882 q.setguards(idx, args)
2882 q.savedirty()
2883 q.savedirty()
2883 else:
2884 else:
2884 status(q.series.index(q.lookup(patch)))
2885 status(q.series.index(q.lookup(patch)))
2885
2886
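# Sketch of setting and listing guards; patch and guard names are
# hypothetical.
#
#   hg qguard flaky-test.patch -- +experimental -stable
#   hg qguard -l                    # list every patch with its guards
#   hg qselect experimental         # activate the "experimental" guard
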
2886 @command("qheader", [], _('hg qheader [PATCH]'))
2887 @command("qheader", [], _('hg qheader [PATCH]'))
2887 def header(ui, repo, patch=None):
2888 def header(ui, repo, patch=None):
2888 """print the header of the topmost or specified patch
2889 """print the header of the topmost or specified patch
2889
2890
2890 Returns 0 on success."""
2891 Returns 0 on success."""
2891 q = repo.mq
2892 q = repo.mq
2892
2893
2893 if patch:
2894 if patch:
2894 patch = q.lookup(patch)
2895 patch = q.lookup(patch)
2895 else:
2896 else:
2896 if not q.applied:
2897 if not q.applied:
2897 ui.write(_('no patches applied\n'))
2898 ui.write(_('no patches applied\n'))
2898 return 1
2899 return 1
2899 patch = q.lookup('qtip')
2900 patch = q.lookup('qtip')
2900 ph = patchheader(q.join(patch), q.plainmode)
2901 ph = patchheader(q.join(patch), q.plainmode)
2901
2902
2902 ui.write('\n'.join(ph.message) + '\n')
2903 ui.write('\n'.join(ph.message) + '\n')
2903
2904
2904 def lastsavename(path):
2905 def lastsavename(path):
2905 (directory, base) = os.path.split(path)
2906 (directory, base) = os.path.split(path)
2906 names = os.listdir(directory)
2907 names = os.listdir(directory)
2907 namere = re.compile("%s.([0-9]+)" % base)
2908 namere = re.compile("%s.([0-9]+)" % base)
2908 maxindex = None
2909 maxindex = None
2909 maxname = None
2910 maxname = None
2910 for f in names:
2911 for f in names:
2911 m = namere.match(f)
2912 m = namere.match(f)
2912 if m:
2913 if m:
2913 index = int(m.group(1))
2914 index = int(m.group(1))
2914 if maxindex is None or index > maxindex:
2915 if maxindex is None or index > maxindex:
2915 maxindex = index
2916 maxindex = index
2916 maxname = f
2917 maxname = f
2917 if maxname:
2918 if maxname:
2918 return (os.path.join(directory, maxname), maxindex)
2919 return (os.path.join(directory, maxname), maxindex)
2919 return (None, None)
2920 return (None, None)
2920
2921
2921 def savename(path):
2922 def savename(path):
2922 (last, index) = lastsavename(path)
2923 (last, index) = lastsavename(path)
2923 if last is None:
2924 if last is None:
2924 index = 0
2925 index = 0
2925 newpath = path + ".%d" % (index + 1)
2926 newpath = path + ".%d" % (index + 1)
2926 return newpath
2927 return newpath
2927
2928
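# Illustration of the save-name scheme implemented by the two helpers above,
# assuming the queue directory already contains "patches.1" and "patches.3":
#
#   lastsavename('.hg/patches')  ->  ('.hg/patches.3', 3)   # highest numbered copy
#   savename('.hg/patches')      ->  '.hg/patches.4'        # next free suffix
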
2928 @command("^qpush",
2929 @command("^qpush",
2929 [('', 'keep-changes', None,
2930 [('', 'keep-changes', None,
2930 _('tolerate non-conflicting local changes')),
2931 _('tolerate non-conflicting local changes')),
2931 ('f', 'force', None, _('apply on top of local changes')),
2932 ('f', 'force', None, _('apply on top of local changes')),
2932 ('e', 'exact', None,
2933 ('e', 'exact', None,
2933 _('apply the target patch to its recorded parent')),
2934 _('apply the target patch to its recorded parent')),
2934 ('l', 'list', None, _('list patch name in commit text')),
2935 ('l', 'list', None, _('list patch name in commit text')),
2935 ('a', 'all', None, _('apply all patches')),
2936 ('a', 'all', None, _('apply all patches')),
2936 ('m', 'merge', None, _('merge from another queue (DEPRECATED)')),
2937 ('m', 'merge', None, _('merge from another queue (DEPRECATED)')),
2937 ('n', 'name', '',
2938 ('n', 'name', '',
2938 _('merge queue name (DEPRECATED)'), _('NAME')),
2939 _('merge queue name (DEPRECATED)'), _('NAME')),
2939 ('', 'move', None,
2940 ('', 'move', None,
2940 _('reorder patch series and apply only the patch')),
2941 _('reorder patch series and apply only the patch')),
2941 ('', 'no-backup', None, _('do not save backup copies of files'))],
2942 ('', 'no-backup', None, _('do not save backup copies of files'))],
2942 _('hg qpush [-f] [-l] [-a] [--move] [PATCH | INDEX]'))
2943 _('hg qpush [-f] [-l] [-a] [--move] [PATCH | INDEX]'))
2943 def push(ui, repo, patch=None, **opts):
2944 def push(ui, repo, patch=None, **opts):
2944 """push the next patch onto the stack
2945 """push the next patch onto the stack
2945
2946
2946 By default, abort if the working directory contains uncommitted
2947 By default, abort if the working directory contains uncommitted
2947 changes. With --keep-changes, abort only if the uncommitted files
2948 changes. With --keep-changes, abort only if the uncommitted files
2948 overlap with patched files. With -f/--force, backup and patch over
2949 overlap with patched files. With -f/--force, backup and patch over
2949 uncommitted changes.
2950 uncommitted changes.
2950
2951
2951 Return 0 on success.
2952 Return 0 on success.
2952 """
2953 """
2953 q = repo.mq
2954 q = repo.mq
2954 mergeq = None
2955 mergeq = None
2955
2956
2956 opts = pycompat.byteskwargs(opts)
2957 opts = pycompat.byteskwargs(opts)
2957 opts = fixkeepchangesopts(ui, opts)
2958 opts = fixkeepchangesopts(ui, opts)
2958 if opts.get('merge'):
2959 if opts.get('merge'):
2959 if opts.get('name'):
2960 if opts.get('name'):
2960 newpath = repo.vfs.join(opts.get('name'))
2961 newpath = repo.vfs.join(opts.get('name'))
2961 else:
2962 else:
2962 newpath, i = lastsavename(q.path)
2963 newpath, i = lastsavename(q.path)
2963 if not newpath:
2964 if not newpath:
2964 ui.warn(_("no saved queues found, please use -n\n"))
2965 ui.warn(_("no saved queues found, please use -n\n"))
2965 return 1
2966 return 1
2966 mergeq = queue(ui, repo.baseui, repo.path, newpath)
2967 mergeq = queue(ui, repo.baseui, repo.path, newpath)
2967 ui.warn(_("merging with queue at: %s\n") % mergeq.path)
2968 ui.warn(_("merging with queue at: %s\n") % mergeq.path)
2968 ret = q.push(repo, patch, force=opts.get('force'), list=opts.get('list'),
2969 ret = q.push(repo, patch, force=opts.get('force'), list=opts.get('list'),
2969 mergeq=mergeq, all=opts.get('all'), move=opts.get('move'),
2970 mergeq=mergeq, all=opts.get('all'), move=opts.get('move'),
2970 exact=opts.get('exact'), nobackup=opts.get('no_backup'),
2971 exact=opts.get('exact'), nobackup=opts.get('no_backup'),
2971 keepchanges=opts.get('keep_changes'))
2972 keepchanges=opts.get('keep_changes'))
2972 return ret
2973 return ret
2973
2974
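# Common qpush invocations, sketched from the docstring above; the patch
# name is hypothetical.
#
#   hg qpush                        # apply the next patch in the series
#   hg qpush -a                     # apply every remaining patch
#   hg qpush --move urgent.patch    # reorder the series and apply only that patch
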
2974 @command("^qpop",
2975 @command("^qpop",
2975 [('a', 'all', None, _('pop all patches')),
2976 [('a', 'all', None, _('pop all patches')),
2976 ('n', 'name', '',
2977 ('n', 'name', '',
2977 _('queue name to pop (DEPRECATED)'), _('NAME')),
2978 _('queue name to pop (DEPRECATED)'), _('NAME')),
2978 ('', 'keep-changes', None,
2979 ('', 'keep-changes', None,
2979 _('tolerate non-conflicting local changes')),
2980 _('tolerate non-conflicting local changes')),
2980 ('f', 'force', None, _('forget any local changes to patched files')),
2981 ('f', 'force', None, _('forget any local changes to patched files')),
2981 ('', 'no-backup', None, _('do not save backup copies of files'))],
2982 ('', 'no-backup', None, _('do not save backup copies of files'))],
2982 _('hg qpop [-a] [-f] [PATCH | INDEX]'))
2983 _('hg qpop [-a] [-f] [PATCH | INDEX]'))
2983 def pop(ui, repo, patch=None, **opts):
2984 def pop(ui, repo, patch=None, **opts):
2984 """pop the current patch off the stack
2985 """pop the current patch off the stack
2985
2986
2986 Without argument, pops off the top of the patch stack. If given a
2987 Without argument, pops off the top of the patch stack. If given a
2987 patch name, keeps popping off patches until the named patch is at
2988 patch name, keeps popping off patches until the named patch is at
2988 the top of the stack.
2989 the top of the stack.
2989
2990
2990 By default, abort if the working directory contains uncommitted
2991 By default, abort if the working directory contains uncommitted
2991 changes. With --keep-changes, abort only if the uncommitted files
2992 changes. With --keep-changes, abort only if the uncommitted files
2992 overlap with patched files. With -f/--force, backup and discard
2993 overlap with patched files. With -f/--force, backup and discard
2993 changes made to such files.
2994 changes made to such files.
2994
2995
2995 Return 0 on success.
2996 Return 0 on success.
2996 """
2997 """
2997 opts = pycompat.byteskwargs(opts)
2998 opts = pycompat.byteskwargs(opts)
2998 opts = fixkeepchangesopts(ui, opts)
2999 opts = fixkeepchangesopts(ui, opts)
2999 localupdate = True
3000 localupdate = True
3000 if opts.get('name'):
3001 if opts.get('name'):
3001 q = queue(ui, repo.baseui, repo.path, repo.vfs.join(opts.get('name')))
3002 q = queue(ui, repo.baseui, repo.path, repo.vfs.join(opts.get('name')))
3002 ui.warn(_('using patch queue: %s\n') % q.path)
3003 ui.warn(_('using patch queue: %s\n') % q.path)
3003 localupdate = False
3004 localupdate = False
3004 else:
3005 else:
3005 q = repo.mq
3006 q = repo.mq
3006 ret = q.pop(repo, patch, force=opts.get('force'), update=localupdate,
3007 ret = q.pop(repo, patch, force=opts.get('force'), update=localupdate,
3007 all=opts.get('all'), nobackup=opts.get('no_backup'),
3008 all=opts.get('all'), nobackup=opts.get('no_backup'),
3008 keepchanges=opts.get('keep_changes'))
3009 keepchanges=opts.get('keep_changes'))
3009 q.savedirty()
3010 q.savedirty()
3010 return ret
3011 return ret
3011
3012
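# Common qpop invocations; the patch name is hypothetical.
#
#   hg qpop                         # pop the topmost applied patch
#   hg qpop -a                      # pop the whole applied stack
#   hg qpop feature.patch           # pop until feature.patch is at the top
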
3012 @command("qrename|qmv", [], _('hg qrename PATCH1 [PATCH2]'))
3013 @command("qrename|qmv", [], _('hg qrename PATCH1 [PATCH2]'))
3013 def rename(ui, repo, patch, name=None, **opts):
3014 def rename(ui, repo, patch, name=None, **opts):
3014 """rename a patch
3015 """rename a patch
3015
3016
3016 With one argument, renames the current patch to PATCH1.
3017 With one argument, renames the current patch to PATCH1.
3017 With two arguments, renames PATCH1 to PATCH2.
3018 With two arguments, renames PATCH1 to PATCH2.
3018
3019
3019 Returns 0 on success."""
3020 Returns 0 on success."""
3020 q = repo.mq
3021 q = repo.mq
3021 if not name:
3022 if not name:
3022 name = patch
3023 name = patch
3023 patch = None
3024 patch = None
3024
3025
3025 if patch:
3026 if patch:
3026 patch = q.lookup(patch)
3027 patch = q.lookup(patch)
3027 else:
3028 else:
3028 if not q.applied:
3029 if not q.applied:
3029 ui.write(_('no patches applied\n'))
3030 ui.write(_('no patches applied\n'))
3030 return
3031 return
3031 patch = q.lookup('qtip')
3032 patch = q.lookup('qtip')
3032 absdest = q.join(name)
3033 absdest = q.join(name)
3033 if os.path.isdir(absdest):
3034 if os.path.isdir(absdest):
3034 name = normname(os.path.join(name, os.path.basename(patch)))
3035 name = normname(os.path.join(name, os.path.basename(patch)))
3035 absdest = q.join(name)
3036 absdest = q.join(name)
3036 q.checkpatchname(name)
3037 q.checkpatchname(name)
3037
3038
3038 ui.note(_('renaming %s to %s\n') % (patch, name))
3039 ui.note(_('renaming %s to %s\n') % (patch, name))
3039 i = q.findseries(patch)
3040 i = q.findseries(patch)
3040 guards = q.guard_re.findall(q.fullseries[i])
3041 guards = q.guard_re.findall(q.fullseries[i])
3041 q.fullseries[i] = name + ''.join([' #' + g for g in guards])
3042 q.fullseries[i] = name + ''.join([' #' + g for g in guards])
3042 q.parseseries()
3043 q.parseseries()
3043 q.seriesdirty = True
3044 q.seriesdirty = True
3044
3045
3045 info = q.isapplied(patch)
3046 info = q.isapplied(patch)
3046 if info:
3047 if info:
3047 q.applied[info[0]] = statusentry(info[1], name)
3048 q.applied[info[0]] = statusentry(info[1], name)
3048 q.applieddirty = True
3049 q.applieddirty = True
3049
3050
3050 destdir = os.path.dirname(absdest)
3051 destdir = os.path.dirname(absdest)
3051 if not os.path.isdir(destdir):
3052 if not os.path.isdir(destdir):
3052 os.makedirs(destdir)
3053 os.makedirs(destdir)
3053 util.rename(q.join(patch), absdest)
3054 util.rename(q.join(patch), absdest)
3054 r = q.qrepo()
3055 r = q.qrepo()
3055 if r and patch in r.dirstate:
3056 if r and patch in r.dirstate:
3056 wctx = r[None]
3057 wctx = r[None]
3057 with r.wlock():
3058 with r.wlock():
3058 if r.dirstate[patch] == 'a':
3059 if r.dirstate[patch] == 'a':
3059 r.dirstate.drop(patch)
3060 r.dirstate.drop(patch)
3060 r.dirstate.add(name)
3061 r.dirstate.add(name)
3061 else:
3062 else:
3062 wctx.copy(patch, name)
3063 wctx.copy(patch, name)
3063 wctx.forget([patch])
3064 wctx.forget([patch])
3064
3065
3065 q.savedirty()
3066 q.savedirty()
3066
3067
3067 @command("qrestore",
3068 @command("qrestore",
3068 [('d', 'delete', None, _('delete save entry')),
3069 [('d', 'delete', None, _('delete save entry')),
3069 ('u', 'update', None, _('update queue working directory'))],
3070 ('u', 'update', None, _('update queue working directory'))],
3070 _('hg qrestore [-d] [-u] REV'))
3071 _('hg qrestore [-d] [-u] REV'))
3071 def restore(ui, repo, rev, **opts):
3072 def restore(ui, repo, rev, **opts):
3072 """restore the queue state saved by a revision (DEPRECATED)
3073 """restore the queue state saved by a revision (DEPRECATED)
3073
3074
3074 This command is deprecated, use :hg:`rebase` instead."""
3075 This command is deprecated, use :hg:`rebase` instead."""
3075 rev = repo.lookup(rev)
3076 rev = repo.lookup(rev)
3076 q = repo.mq
3077 q = repo.mq
3077 q.restore(repo, rev, delete=opts.get(r'delete'),
3078 q.restore(repo, rev, delete=opts.get(r'delete'),
3078 qupdate=opts.get(r'update'))
3079 qupdate=opts.get(r'update'))
3079 q.savedirty()
3080 q.savedirty()
3080 return 0
3081 return 0
3081
3082
3082 @command("qsave",
3083 @command("qsave",
3083 [('c', 'copy', None, _('copy patch directory')),
3084 [('c', 'copy', None, _('copy patch directory')),
3084 ('n', 'name', '',
3085 ('n', 'name', '',
3085 _('copy directory name'), _('NAME')),
3086 _('copy directory name'), _('NAME')),
3086 ('e', 'empty', None, _('clear queue status file')),
3087 ('e', 'empty', None, _('clear queue status file')),
3087 ('f', 'force', None, _('force copy'))] + cmdutil.commitopts,
3088 ('f', 'force', None, _('force copy'))] + cmdutil.commitopts,
3088 _('hg qsave [-m TEXT] [-l FILE] [-c] [-n NAME] [-e] [-f]'))
3089 _('hg qsave [-m TEXT] [-l FILE] [-c] [-n NAME] [-e] [-f]'))
3089 def save(ui, repo, **opts):
3090 def save(ui, repo, **opts):
3090 """save current queue state (DEPRECATED)
3091 """save current queue state (DEPRECATED)
3091
3092
3092 This command is deprecated, use :hg:`rebase` instead."""
3093 This command is deprecated, use :hg:`rebase` instead."""
3093 q = repo.mq
3094 q = repo.mq
3094 opts = pycompat.byteskwargs(opts)
3095 opts = pycompat.byteskwargs(opts)
3095 message = cmdutil.logmessage(ui, opts)
3096 message = cmdutil.logmessage(ui, opts)
3096 ret = q.save(repo, msg=message)
3097 ret = q.save(repo, msg=message)
3097 if ret:
3098 if ret:
3098 return ret
3099 return ret
3099 q.savedirty() # save to .hg/patches before copying
3100 q.savedirty() # save to .hg/patches before copying
3100 if opts.get('copy'):
3101 if opts.get('copy'):
3101 path = q.path
3102 path = q.path
3102 if opts.get('name'):
3103 if opts.get('name'):
3103 newpath = os.path.join(q.basepath, opts.get('name'))
3104 newpath = os.path.join(q.basepath, opts.get('name'))
3104 if os.path.exists(newpath):
3105 if os.path.exists(newpath):
3105 if not os.path.isdir(newpath):
3106 if not os.path.isdir(newpath):
3106 raise error.Abort(_('destination %s exists and is not '
3107 raise error.Abort(_('destination %s exists and is not '
3107 'a directory') % newpath)
3108 'a directory') % newpath)
3108 if not opts.get('force'):
3109 if not opts.get('force'):
3109 raise error.Abort(_('destination %s exists, '
3110 raise error.Abort(_('destination %s exists, '
3110 'use -f to force') % newpath)
3111 'use -f to force') % newpath)
3111 else:
3112 else:
3112 newpath = savename(path)
3113 newpath = savename(path)
3113 ui.warn(_("copy %s to %s\n") % (path, newpath))
3114 ui.warn(_("copy %s to %s\n") % (path, newpath))
3114 util.copyfiles(path, newpath)
3115 util.copyfiles(path, newpath)
3115 if opts.get('empty'):
3116 if opts.get('empty'):
3116 del q.applied[:]
3117 del q.applied[:]
3117 q.applieddirty = True
3118 q.applieddirty = True
3118 q.savedirty()
3119 q.savedirty()
3119 return 0
3120 return 0
3120
3121
3121
3122
3122 @command("qselect",
3123 @command("qselect",
3123 [('n', 'none', None, _('disable all guards')),
3124 [('n', 'none', None, _('disable all guards')),
3124 ('s', 'series', None, _('list all guards in series file')),
3125 ('s', 'series', None, _('list all guards in series file')),
3125 ('', 'pop', None, _('pop to before first guarded applied patch')),
3126 ('', 'pop', None, _('pop to before first guarded applied patch')),
3126 ('', 'reapply', None, _('pop, then reapply patches'))],
3127 ('', 'reapply', None, _('pop, then reapply patches'))],
3127 _('hg qselect [OPTION]... [GUARD]...'))
3128 _('hg qselect [OPTION]... [GUARD]...'))
3128 def select(ui, repo, *args, **opts):
3129 def select(ui, repo, *args, **opts):
3129 '''set or print guarded patches to push
3130 '''set or print guarded patches to push
3130
3131
3131 Use the :hg:`qguard` command to set or print guards on a patch, then use
3132 Use the :hg:`qguard` command to set or print guards on a patch, then use
3132 qselect to tell mq which guards to use. A patch will be pushed if
3133 qselect to tell mq which guards to use. A patch will be pushed if
3133 it has no guards or any positive guards match the currently
3134 it has no guards or any positive guards match the currently
3134 selected guard, but will not be pushed if any negative guards
3135 selected guard, but will not be pushed if any negative guards
3135 match the current guard. For example::
3136 match the current guard. For example::
3136
3137
3137 qguard foo.patch -- -stable (negative guard)
3138 qguard foo.patch -- -stable (negative guard)
3138 qguard bar.patch +stable (positive guard)
3139 qguard bar.patch +stable (positive guard)
3139 qselect stable
3140 qselect stable
3140
3141
3141 This activates the "stable" guard. mq will skip foo.patch (because
3142 This activates the "stable" guard. mq will skip foo.patch (because
3142 it has a negative match) but push bar.patch (because it has a
3143 it has a negative match) but push bar.patch (because it has a
3143 positive match).
3144 positive match).
3144
3145
3145 With no arguments, prints the currently active guards.
3146 With no arguments, prints the currently active guards.
3146 With one argument, sets the active guard.
3147 With one argument, sets the active guard.
3147
3148
3148 Use -n/--none to deactivate guards (no other arguments needed).
3149 Use -n/--none to deactivate guards (no other arguments needed).
3149 When no guards are active, patches with positive guards are
3150 When no guards are active, patches with positive guards are
3150 skipped and patches with negative guards are pushed.
3151 skipped and patches with negative guards are pushed.
3151
3152
3152 qselect can change the guards on applied patches. It does not pop
3153 qselect can change the guards on applied patches. It does not pop
3153 guarded patches by default. Use --pop to pop back to the last
3154 guarded patches by default. Use --pop to pop back to the last
3154 applied patch that is not guarded. Use --reapply (which implies
3155 applied patch that is not guarded. Use --reapply (which implies
3155 --pop) to push back to the current patch afterwards, but skip
3156 --pop) to push back to the current patch afterwards, but skip
3156 guarded patches.
3157 guarded patches.
3157
3158
3158 Use -s/--series to print a list of all guards in the series file
3159 Use -s/--series to print a list of all guards in the series file
3159 (no other arguments needed). Use -v for more information.
3160 (no other arguments needed). Use -v for more information.
3160
3161
3161 Returns 0 on success.'''
3162 Returns 0 on success.'''
3162
3163
3163 q = repo.mq
3164 q = repo.mq
3164 opts = pycompat.byteskwargs(opts)
3165 opts = pycompat.byteskwargs(opts)
3165 guards = q.active()
3166 guards = q.active()
3166 pushable = lambda i: q.pushable(q.applied[i].name)[0]
3167 pushable = lambda i: q.pushable(q.applied[i].name)[0]
3167 if args or opts.get('none'):
3168 if args or opts.get('none'):
3168 old_unapplied = q.unapplied(repo)
3169 old_unapplied = q.unapplied(repo)
3169 old_guarded = [i for i in xrange(len(q.applied)) if not pushable(i)]
3170 old_guarded = [i for i in xrange(len(q.applied)) if not pushable(i)]
3170 q.setactive(args)
3171 q.setactive(args)
3171 q.savedirty()
3172 q.savedirty()
3172 if not args:
3173 if not args:
3173 ui.status(_('guards deactivated\n'))
3174 ui.status(_('guards deactivated\n'))
3174 if not opts.get('pop') and not opts.get('reapply'):
3175 if not opts.get('pop') and not opts.get('reapply'):
3175 unapplied = q.unapplied(repo)
3176 unapplied = q.unapplied(repo)
3176 guarded = [i for i in xrange(len(q.applied)) if not pushable(i)]
3177 guarded = [i for i in xrange(len(q.applied)) if not pushable(i)]
3177 if len(unapplied) != len(old_unapplied):
3178 if len(unapplied) != len(old_unapplied):
3178 ui.status(_('number of unguarded, unapplied patches has '
3179 ui.status(_('number of unguarded, unapplied patches has '
3179 'changed from %d to %d\n') %
3180 'changed from %d to %d\n') %
3180 (len(old_unapplied), len(unapplied)))
3181 (len(old_unapplied), len(unapplied)))
3181 if len(guarded) != len(old_guarded):
3182 if len(guarded) != len(old_guarded):
3182 ui.status(_('number of guarded, applied patches has changed '
3183 ui.status(_('number of guarded, applied patches has changed '
3183 'from %d to %d\n') %
3184 'from %d to %d\n') %
3184 (len(old_guarded), len(guarded)))
3185 (len(old_guarded), len(guarded)))
3185 elif opts.get('series'):
3186 elif opts.get('series'):
3186 guards = {}
3187 guards = {}
3187 noguards = 0
3188 noguards = 0
3188 for gs in q.seriesguards:
3189 for gs in q.seriesguards:
3189 if not gs:
3190 if not gs:
3190 noguards += 1
3191 noguards += 1
3191 for g in gs:
3192 for g in gs:
3192 guards.setdefault(g, 0)
3193 guards.setdefault(g, 0)
3193 guards[g] += 1
3194 guards[g] += 1
3194 if ui.verbose:
3195 if ui.verbose:
3195 guards['NONE'] = noguards
3196 guards['NONE'] = noguards
3196 guards = guards.items()
3197 guards = guards.items()
3197 guards.sort(key=lambda x: x[0][1:])
3198 guards.sort(key=lambda x: x[0][1:])
3198 if guards:
3199 if guards:
3199 ui.note(_('guards in series file:\n'))
3200 ui.note(_('guards in series file:\n'))
3200 for guard, count in guards:
3201 for guard, count in guards:
3201 ui.note('%2d ' % count)
3202 ui.note('%2d ' % count)
3202 ui.write(guard, '\n')
3203 ui.write(guard, '\n')
3203 else:
3204 else:
3204 ui.note(_('no guards in series file\n'))
3205 ui.note(_('no guards in series file\n'))
3205 else:
3206 else:
3206 if guards:
3207 if guards:
3207 ui.note(_('active guards:\n'))
3208 ui.note(_('active guards:\n'))
3208 for g in guards:
3209 for g in guards:
3209 ui.write(g, '\n')
3210 ui.write(g, '\n')
3210 else:
3211 else:
3211 ui.write(_('no active guards\n'))
3212 ui.write(_('no active guards\n'))
3212 reapply = opts.get('reapply') and q.applied and q.applied[-1].name
3213 reapply = opts.get('reapply') and q.applied and q.applied[-1].name
3213 popped = False
3214 popped = False
3214 if opts.get('pop') or opts.get('reapply'):
3215 if opts.get('pop') or opts.get('reapply'):
3215 for i in xrange(len(q.applied)):
3216 for i in xrange(len(q.applied)):
3216 if not pushable(i):
3217 if not pushable(i):
3217 ui.status(_('popping guarded patches\n'))
3218 ui.status(_('popping guarded patches\n'))
3218 popped = True
3219 popped = True
3219 if i == 0:
3220 if i == 0:
3220 q.pop(repo, all=True)
3221 q.pop(repo, all=True)
3221 else:
3222 else:
3222 q.pop(repo, q.applied[i - 1].name)
3223 q.pop(repo, q.applied[i - 1].name)
3223 break
3224 break
3224 if popped:
3225 if popped:
3225 try:
3226 try:
3226 if reapply:
3227 if reapply:
3227 ui.status(_('reapplying unguarded patches\n'))
3228 ui.status(_('reapplying unguarded patches\n'))
3228 q.push(repo, reapply)
3229 q.push(repo, reapply)
3229 finally:
3230 finally:
3230 q.savedirty()
3231 q.savedirty()
3231
3232
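# Sketch of qselect with --reapply, per the docstring above; the guard name
# is hypothetical.
#
#   hg qselect --reapply stable
#
# This pops any applied patches that the new selection guards against and
# then pushes back to the previous top patch, skipping guarded patches.
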
3232 @command("qfinish",
3233 @command("qfinish",
3233 [('a', 'applied', None, _('finish all applied changesets'))],
3234 [('a', 'applied', None, _('finish all applied changesets'))],
3234 _('hg qfinish [-a] [REV]...'))
3235 _('hg qfinish [-a] [REV]...'))
3235 def finish(ui, repo, *revrange, **opts):
3236 def finish(ui, repo, *revrange, **opts):
3236 """move applied patches into repository history
3237 """move applied patches into repository history
3237
3238
3238 Finishes the specified revisions (corresponding to applied
3239 Finishes the specified revisions (corresponding to applied
3239 patches) by moving them out of mq control into regular repository
3240 patches) by moving them out of mq control into regular repository
3240 history.
3241 history.
3241
3242
3242 Accepts a revision range or the -a/--applied option. If --applied
3243 Accepts a revision range or the -a/--applied option. If --applied
3243 is specified, all applied mq revisions are removed from mq
3244 is specified, all applied mq revisions are removed from mq
3244 control. Otherwise, the given revisions must be at the base of the
3245 control. Otherwise, the given revisions must be at the base of the
3245 stack of applied patches.
3246 stack of applied patches.
3246
3247
3247 This can be especially useful if your changes have been applied to
3248 This can be especially useful if your changes have been applied to
3248 an upstream repository, or if you are about to push your changes
3249 an upstream repository, or if you are about to push your changes
3249 to upstream.
3250 to upstream.
3250
3251
3251 Returns 0 on success.
3252 Returns 0 on success.
3252 """
3253 """
3253 if not opts.get(r'applied') and not revrange:
3254 if not opts.get(r'applied') and not revrange:
3254 raise error.Abort(_('no revisions specified'))
3255 raise error.Abort(_('no revisions specified'))
3255 elif opts.get(r'applied'):
3256 elif opts.get(r'applied'):
3256 revrange = ('qbase::qtip',) + revrange
3257 revrange = ('qbase::qtip',) + revrange
3257
3258
3258 q = repo.mq
3259 q = repo.mq
3259 if not q.applied:
3260 if not q.applied:
3260 ui.status(_('no patches applied\n'))
3261 ui.status(_('no patches applied\n'))
3261 return 0
3262 return 0
3262
3263
3263 revs = scmutil.revrange(repo, revrange)
3264 revs = scmutil.revrange(repo, revrange)
3264 if repo['.'].rev() in revs and repo[None].files():
3265 if repo['.'].rev() in revs and repo[None].files():
3265 ui.warn(_('warning: uncommitted changes in the working directory\n'))
3266 ui.warn(_('warning: uncommitted changes in the working directory\n'))
3266 # queue.finish may change phases but leaves the responsibility to lock the
3267 # queue.finish may change phases but leaves the responsibility to lock the
3267 # repo to the caller to avoid deadlock with wlock. This command code is
3268 # repo to the caller to avoid deadlock with wlock. This command code is
3268 # responsible for this locking.
3269 # responsible for this locking.
3269 with repo.lock():
3270 with repo.lock():
3270 q.finish(repo, revs)
3271 q.finish(repo, revs)
3271 q.savedirty()
3272 q.savedirty()
3272 return 0
3273 return 0
3273
3274
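# Sketch of moving finished patches into permanent history.
#
#   hg qfinish -a                   # finish every applied patch
#   hg qfinish qbase                # finish only the base of the applied stack
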
3274 @command("qqueue",
3275 @command("qqueue",
3275 [('l', 'list', False, _('list all available queues')),
3276 [('l', 'list', False, _('list all available queues')),
3276 ('', 'active', False, _('print name of active queue')),
3277 ('', 'active', False, _('print name of active queue')),
3277 ('c', 'create', False, _('create new queue')),
3278 ('c', 'create', False, _('create new queue')),
3278 ('', 'rename', False, _('rename active queue')),
3279 ('', 'rename', False, _('rename active queue')),
3279 ('', 'delete', False, _('delete reference to queue')),
3280 ('', 'delete', False, _('delete reference to queue')),
3280 ('', 'purge', False, _('delete queue, and remove patch dir')),
3281 ('', 'purge', False, _('delete queue, and remove patch dir')),
3281 ],
3282 ],
3282 _('[OPTION] [QUEUE]'))
3283 _('[OPTION] [QUEUE]'))
3283 def qqueue(ui, repo, name=None, **opts):
3284 def qqueue(ui, repo, name=None, **opts):
3284 '''manage multiple patch queues
3285 '''manage multiple patch queues
3285
3286
3286 Supports switching between different patch queues, as well as creating
3287 Supports switching between different patch queues, as well as creating
3287 new patch queues and deleting existing ones.
3288 new patch queues and deleting existing ones.
3288
3289
3289 Omitting a queue name or specifying -l/--list will show you the registered
3290 Omitting a queue name or specifying -l/--list will show you the registered
3290 queues - by default the "normal" patches queue is registered. The currently
3291 queues - by default the "normal" patches queue is registered. The currently
3291 active queue will be marked with "(active)". Specifying --active will print
3292 active queue will be marked with "(active)". Specifying --active will print
3292 only the name of the active queue.
3293 only the name of the active queue.
3293
3294
3294 To create a new queue, use -c/--create. The queue is automatically made
3295 To create a new queue, use -c/--create. The queue is automatically made
3295 active, except in the case where there are applied patches from the
3296 active, except in the case where there are applied patches from the
3296 currently active queue in the repository. In that case, the queue will
3297 currently active queue in the repository. In that case, the queue will
3297 only be created, and switching to it will fail.
3298 only be created, and switching to it will fail.
3298
3299
3299 To delete an existing queue, use --delete. You cannot delete the currently
3300 To delete an existing queue, use --delete. You cannot delete the currently
3300 active queue.
3301 active queue.
3301
3302
3302 Returns 0 on success.
3303 Returns 0 on success.
3303 '''
3304 '''
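    # Illustrative usage (a sketch, not part of this changeset; the queue
    # name "refactor" is hypothetical):
    #
    #   hg qqueue --create refactor   # register a new queue and switch to it
    #   hg qqueue --list              # list queues, the active one is marked
    #   hg qqueue patches             # switch back to the default queue
    #   hg qqueue --delete refactor   # drop the reference, keep the patch dir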
3304 q = repo.mq
3305 q = repo.mq
3305 _defaultqueue = 'patches'
3306 _defaultqueue = 'patches'
3306 _allqueues = 'patches.queues'
3307 _allqueues = 'patches.queues'
3307 _activequeue = 'patches.queue'
3308 _activequeue = 'patches.queue'
3308
3309
3309 def _getcurrent():
3310 def _getcurrent():
3310 cur = os.path.basename(q.path)
3311 cur = os.path.basename(q.path)
3311 if cur.startswith('patches-'):
3312 if cur.startswith('patches-'):
3312 cur = cur[8:]
3313 cur = cur[8:]
3313 return cur
3314 return cur
3314
3315
3315 def _noqueues():
3316 def _noqueues():
3316 try:
3317 try:
3317 fh = repo.vfs(_allqueues, 'r')
3318 fh = repo.vfs(_allqueues, 'r')
3318 fh.close()
3319 fh.close()
3319 except IOError:
3320 except IOError:
3320 return True
3321 return True
3321
3322
3322 return False
3323 return False
3323
3324
3324 def _getqueues():
3325 def _getqueues():
3325 current = _getcurrent()
3326 current = _getcurrent()
3326
3327
3327 try:
3328 try:
3328 fh = repo.vfs(_allqueues, 'r')
3329 fh = repo.vfs(_allqueues, 'r')
3329 queues = [queue.strip() for queue in fh if queue.strip()]
3330 queues = [queue.strip() for queue in fh if queue.strip()]
3330 fh.close()
3331 fh.close()
3331 if current not in queues:
3332 if current not in queues:
3332 queues.append(current)
3333 queues.append(current)
3333 except IOError:
3334 except IOError:
3334 queues = [_defaultqueue]
3335 queues = [_defaultqueue]
3335
3336
3336 return sorted(queues)
3337 return sorted(queues)
3337
3338
3338 def _setactive(name):
3339 def _setactive(name):
3339 if q.applied:
3340 if q.applied:
3340 raise error.Abort(_('new queue created, but cannot make active '
3341 raise error.Abort(_('new queue created, but cannot make active '
3341 'as patches are applied'))
3342 'as patches are applied'))
3342 _setactivenocheck(name)
3343 _setactivenocheck(name)
3343
3344
3344 def _setactivenocheck(name):
3345 def _setactivenocheck(name):
3345 fh = repo.vfs(_activequeue, 'w')
3346 fh = repo.vfs(_activequeue, 'w')
3346 if name != 'patches':
3347 if name != 'patches':
3347 fh.write(name)
3348 fh.write(name)
3348 fh.close()
3349 fh.close()
3349
3350
3350 def _addqueue(name):
3351 def _addqueue(name):
3351 fh = repo.vfs(_allqueues, 'a')
3352 fh = repo.vfs(_allqueues, 'a')
3352 fh.write('%s\n' % (name,))
3353 fh.write('%s\n' % (name,))
3353 fh.close()
3354 fh.close()
3354
3355
3355 def _queuedir(name):
3356 def _queuedir(name):
3356 if name == 'patches':
3357 if name == 'patches':
3357 return repo.vfs.join('patches')
3358 return repo.vfs.join('patches')
3358 else:
3359 else:
3359 return repo.vfs.join('patches-' + name)
3360 return repo.vfs.join('patches-' + name)
3360
3361
3361 def _validname(name):
3362 def _validname(name):
3362 for n in name:
3363 for n in name:
3363 if n in ':\\/.':
3364 if n in ':\\/.':
3364 return False
3365 return False
3365 return True
3366 return True
3366
3367
3367 def _delete(name):
3368 def _delete(name):
3368 if name not in existing:
3369 if name not in existing:
3369 raise error.Abort(_('cannot delete queue that does not exist'))
3370 raise error.Abort(_('cannot delete queue that does not exist'))
3370
3371
3371 current = _getcurrent()
3372 current = _getcurrent()
3372
3373
3373 if name == current:
3374 if name == current:
3374 raise error.Abort(_('cannot delete currently active queue'))
3375 raise error.Abort(_('cannot delete currently active queue'))
3375
3376
3376 fh = repo.vfs('patches.queues.new', 'w')
3377 fh = repo.vfs('patches.queues.new', 'w')
3377 for queue in existing:
3378 for queue in existing:
3378 if queue == name:
3379 if queue == name:
3379 continue
3380 continue
3380 fh.write('%s\n' % (queue,))
3381 fh.write('%s\n' % (queue,))
3381 fh.close()
3382 fh.close()
3382 repo.vfs.rename('patches.queues.new', _allqueues)
3383 repo.vfs.rename('patches.queues.new', _allqueues)
3383
3384
3384 opts = pycompat.byteskwargs(opts)
3385 opts = pycompat.byteskwargs(opts)
3385 if not name or opts.get('list') or opts.get('active'):
3386 if not name or opts.get('list') or opts.get('active'):
3386 current = _getcurrent()
3387 current = _getcurrent()
3387 if opts.get('active'):
3388 if opts.get('active'):
3388 ui.write('%s\n' % (current,))
3389 ui.write('%s\n' % (current,))
3389 return
3390 return
3390 for queue in _getqueues():
3391 for queue in _getqueues():
3391 ui.write('%s' % (queue,))
3392 ui.write('%s' % (queue,))
3392 if queue == current and not ui.quiet:
3393 if queue == current and not ui.quiet:
3393 ui.write(_(' (active)\n'))
3394 ui.write(_(' (active)\n'))
3394 else:
3395 else:
3395 ui.write('\n')
3396 ui.write('\n')
3396 return
3397 return
3397
3398
3398 if not _validname(name):
3399 if not _validname(name):
3399 raise error.Abort(
3400 raise error.Abort(
3400 _('invalid queue name, may not contain the characters ":\\/."'))
3401 _('invalid queue name, may not contain the characters ":\\/."'))
3401
3402
3402 with repo.wlock():
3403 with repo.wlock():
3403 existing = _getqueues()
3404 existing = _getqueues()
3404
3405
3405 if opts.get('create'):
3406 if opts.get('create'):
3406 if name in existing:
3407 if name in existing:
3407 raise error.Abort(_('queue "%s" already exists') % name)
3408 raise error.Abort(_('queue "%s" already exists') % name)
3408 if _noqueues():
3409 if _noqueues():
3409 _addqueue(_defaultqueue)
3410 _addqueue(_defaultqueue)
3410 _addqueue(name)
3411 _addqueue(name)
3411 _setactive(name)
3412 _setactive(name)
3412 elif opts.get('rename'):
3413 elif opts.get('rename'):
3413 current = _getcurrent()
3414 current = _getcurrent()
3414 if name == current:
3415 if name == current:
3415 raise error.Abort(_('can\'t rename "%s" to its current name')
3416 raise error.Abort(_('can\'t rename "%s" to its current name')
3416 % name)
3417 % name)
3417 if name in existing:
3418 if name in existing:
3418 raise error.Abort(_('queue "%s" already exists') % name)
3419 raise error.Abort(_('queue "%s" already exists') % name)
3419
3420
3420 olddir = _queuedir(current)
3421 olddir = _queuedir(current)
3421 newdir = _queuedir(name)
3422 newdir = _queuedir(name)
3422
3423
3423 if os.path.exists(newdir):
3424 if os.path.exists(newdir):
3424 raise error.Abort(_('non-queue directory "%s" already exists') %
3425 raise error.Abort(_('non-queue directory "%s" already exists') %
3425 newdir)
3426 newdir)
3426
3427
3427 fh = repo.vfs('patches.queues.new', 'w')
3428 fh = repo.vfs('patches.queues.new', 'w')
3428 for queue in existing:
3429 for queue in existing:
3429 if queue == current:
3430 if queue == current:
3430 fh.write('%s\n' % (name,))
3431 fh.write('%s\n' % (name,))
3431 if os.path.exists(olddir):
3432 if os.path.exists(olddir):
3432 util.rename(olddir, newdir)
3433 util.rename(olddir, newdir)
3433 else:
3434 else:
3434 fh.write('%s\n' % (queue,))
3435 fh.write('%s\n' % (queue,))
3435 fh.close()
3436 fh.close()
3436 repo.vfs.rename('patches.queues.new', _allqueues)
3437 repo.vfs.rename('patches.queues.new', _allqueues)
3437 _setactivenocheck(name)
3438 _setactivenocheck(name)
3438 elif opts.get('delete'):
3439 elif opts.get('delete'):
3439 _delete(name)
3440 _delete(name)
3440 elif opts.get('purge'):
3441 elif opts.get('purge'):
3441 if name in existing:
3442 if name in existing:
3442 _delete(name)
3443 _delete(name)
3443 qdir = _queuedir(name)
3444 qdir = _queuedir(name)
3444 if os.path.exists(qdir):
3445 if os.path.exists(qdir):
3445 shutil.rmtree(qdir)
3446 shutil.rmtree(qdir)
3446 else:
3447 else:
3447 if name not in existing:
3448 if name not in existing:
3448 raise error.Abort(_('use --create to create a new queue'))
3449 raise error.Abort(_('use --create to create a new queue'))
3449 _setactive(name)
3450 _setactive(name)
3450
3451
3451 def mqphasedefaults(repo, roots):
3452 def mqphasedefaults(repo, roots):
3452 """callback used to set mq changeset as secret when no phase data exists"""
3453 """callback used to set mq changeset as secret when no phase data exists"""
3453 if repo.mq.applied:
3454 if repo.mq.applied:
3454 if repo.ui.configbool('mq', 'secret'):
3455 if repo.ui.configbool('mq', 'secret'):
3455 mqphase = phases.secret
3456 mqphase = phases.secret
3456 else:
3457 else:
3457 mqphase = phases.draft
3458 mqphase = phases.draft
3458 qbase = repo[repo.mq.applied[0].node]
3459 qbase = repo[repo.mq.applied[0].node]
3459 roots[mqphase].add(qbase.node())
3460 roots[mqphase].add(qbase.node())
3460 return roots
3461 return roots
3461
3462
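# Example configuration for the phase default applied above (a sketch, not
# part of this changeset): with the following in hgrc, freshly applied mq
# patches default to the secret phase instead of draft, keeping them from
# being exchanged accidentally:
#
#   [mq]
#   secret = True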
3462 def reposetup(ui, repo):
3463 def reposetup(ui, repo):
3463 class mqrepo(repo.__class__):
3464 class mqrepo(repo.__class__):
3464 @localrepo.unfilteredpropertycache
3465 @localrepo.unfilteredpropertycache
3465 def mq(self):
3466 def mq(self):
3466 return queue(self.ui, self.baseui, self.path)
3467 return queue(self.ui, self.baseui, self.path)
3467
3468
3468 def invalidateall(self):
3469 def invalidateall(self):
3469 super(mqrepo, self).invalidateall()
3470 super(mqrepo, self).invalidateall()
3470 if localrepo.hasunfilteredcache(self, 'mq'):
3471 if localrepo.hasunfilteredcache(self, 'mq'):
3471 # recreate mq in case queue path was changed
3472 # recreate mq in case queue path was changed
3472 delattr(self.unfiltered(), 'mq')
3473 delattr(self.unfiltered(), 'mq')
3473
3474
3474 def abortifwdirpatched(self, errmsg, force=False):
3475 def abortifwdirpatched(self, errmsg, force=False):
3475 if self.mq.applied and self.mq.checkapplied and not force:
3476 if self.mq.applied and self.mq.checkapplied and not force:
3476 parents = self.dirstate.parents()
3477 parents = self.dirstate.parents()
3477 patches = [s.node for s in self.mq.applied]
3478 patches = [s.node for s in self.mq.applied]
3478 if parents[0] in patches or parents[1] in patches:
3479 if parents[0] in patches or parents[1] in patches:
3479 raise error.Abort(errmsg)
3480 raise error.Abort(errmsg)
3480
3481
3481 def commit(self, text="", user=None, date=None, match=None,
3482 def commit(self, text="", user=None, date=None, match=None,
3482 force=False, editor=False, extra=None):
3483 force=False, editor=False, extra=None):
3483 if extra is None:
3484 if extra is None:
3484 extra = {}
3485 extra = {}
3485 self.abortifwdirpatched(
3486 self.abortifwdirpatched(
3486 _('cannot commit over an applied mq patch'),
3487 _('cannot commit over an applied mq patch'),
3487 force)
3488 force)
3488
3489
3489 return super(mqrepo, self).commit(text, user, date, match, force,
3490 return super(mqrepo, self).commit(text, user, date, match, force,
3490 editor, extra)
3491 editor, extra)
3491
3492
3492 def checkpush(self, pushop):
3493 def checkpush(self, pushop):
3493 if self.mq.applied and self.mq.checkapplied and not pushop.force:
3494 if self.mq.applied and self.mq.checkapplied and not pushop.force:
3494 outapplied = [e.node for e in self.mq.applied]
3495 outapplied = [e.node for e in self.mq.applied]
3495 if pushop.revs:
3496 if pushop.revs:
3496 # Assume applied patches have no non-patch descendants and
3497 # Assume applied patches have no non-patch descendants and
3497 # are not on the remote already. Filter out any changeset
3498 # are not on the remote already. Filter out any changeset
3498 # not being pushed.
3499 # not being pushed.
3499 heads = set(pushop.revs)
3500 heads = set(pushop.revs)
3500 for node in reversed(outapplied):
3501 for node in reversed(outapplied):
3501 if node in heads:
3502 if node in heads:
3502 break
3503 break
3503 else:
3504 else:
3504 outapplied.pop()
3505 outapplied.pop()
3505 # looking for pushed and shared changeset
3506 # looking for pushed and shared changeset
3506 for node in outapplied:
3507 for node in outapplied:
3507 if self[node].phase() < phases.secret:
3508 if self[node].phase() < phases.secret:
3508 raise error.Abort(_('source has mq patches applied'))
3509 raise error.Abort(_('source has mq patches applied'))
3509 # no non-secret patches pushed
3510 # no non-secret patches pushed
3510 super(mqrepo, self).checkpush(pushop)
3511 super(mqrepo, self).checkpush(pushop)
3511
3512
3512 def _findtags(self):
3513 def _findtags(self):
3513 '''augment tags from base class with patch tags'''
3514 '''augment tags from base class with patch tags'''
3514 result = super(mqrepo, self)._findtags()
3515 result = super(mqrepo, self)._findtags()
3515
3516
3516 q = self.mq
3517 q = self.mq
3517 if not q.applied:
3518 if not q.applied:
3518 return result
3519 return result
3519
3520
3520 mqtags = [(patch.node, patch.name) for patch in q.applied]
3521 mqtags = [(patch.node, patch.name) for patch in q.applied]
3521
3522
3522 try:
3523 try:
3523 # for now ignore filtering business
3524 # for now ignore filtering business
3524 self.unfiltered().changelog.rev(mqtags[-1][0])
3525 self.unfiltered().changelog.rev(mqtags[-1][0])
3525 except error.LookupError:
3526 except error.LookupError:
3526 self.ui.warn(_('mq status file refers to unknown node %s\n')
3527 self.ui.warn(_('mq status file refers to unknown node %s\n')
3527 % short(mqtags[-1][0]))
3528 % short(mqtags[-1][0]))
3528 return result
3529 return result
3529
3530
3530 # do not add fake tags for filtered revisions
3531 # do not add fake tags for filtered revisions
3531 included = self.changelog.hasnode
3532 included = self.changelog.hasnode
3532 mqtags = [mqt for mqt in mqtags if included(mqt[0])]
3533 mqtags = [mqt for mqt in mqtags if included(mqt[0])]
3533 if not mqtags:
3534 if not mqtags:
3534 return result
3535 return result
3535
3536
3536 mqtags.append((mqtags[-1][0], 'qtip'))
3537 mqtags.append((mqtags[-1][0], 'qtip'))
3537 mqtags.append((mqtags[0][0], 'qbase'))
3538 mqtags.append((mqtags[0][0], 'qbase'))
3538 mqtags.append((self.changelog.parents(mqtags[0][0])[0], 'qparent'))
3539 mqtags.append((self.changelog.parents(mqtags[0][0])[0], 'qparent'))
3539 tags = result[0]
3540 tags = result[0]
3540 for patch in mqtags:
3541 for patch in mqtags:
3541 if patch[1] in tags:
3542 if patch[1] in tags:
3542 self.ui.warn(_('tag %s overrides mq patch of the same '
3543 self.ui.warn(_('tag %s overrides mq patch of the same '
3543 'name\n') % patch[1])
3544 'name\n') % patch[1])
3544 else:
3545 else:
3545 tags[patch[1]] = patch[0]
3546 tags[patch[1]] = patch[0]
3546
3547
3547 return result
3548 return result
3548
3549
3549 if repo.local():
3550 if repo.local():
3550 repo.__class__ = mqrepo
3551 repo.__class__ = mqrepo
3551
3552
3552 repo._phasedefaults.append(mqphasedefaults)
3553 repo._phasedefaults.append(mqphasedefaults)
3553
3554
3554 def mqimport(orig, ui, repo, *args, **kwargs):
3555 def mqimport(orig, ui, repo, *args, **kwargs):
3555 if (util.safehasattr(repo, 'abortifwdirpatched')
3556 if (util.safehasattr(repo, 'abortifwdirpatched')
3556 and not kwargs.get(r'no_commit', False)):
3557 and not kwargs.get(r'no_commit', False)):
3557 repo.abortifwdirpatched(_('cannot import over an applied patch'),
3558 repo.abortifwdirpatched(_('cannot import over an applied patch'),
3558 kwargs.get(r'force'))
3559 kwargs.get(r'force'))
3559 return orig(ui, repo, *args, **kwargs)
3560 return orig(ui, repo, *args, **kwargs)
3560
3561
3561 def mqinit(orig, ui, *args, **kwargs):
3562 def mqinit(orig, ui, *args, **kwargs):
3562 mq = kwargs.pop(r'mq', None)
3563 mq = kwargs.pop(r'mq', None)
3563
3564
3564 if not mq:
3565 if not mq:
3565 return orig(ui, *args, **kwargs)
3566 return orig(ui, *args, **kwargs)
3566
3567
3567 if args:
3568 if args:
3568 repopath = args[0]
3569 repopath = args[0]
3569 if not hg.islocal(repopath):
3570 if not hg.islocal(repopath):
3570 raise error.Abort(_('only a local queue repository '
3571 raise error.Abort(_('only a local queue repository '
3571 'may be initialized'))
3572 'may be initialized'))
3572 else:
3573 else:
3573 repopath = cmdutil.findrepo(pycompat.getcwd())
3574 repopath = cmdutil.findrepo(pycompat.getcwd())
3574 if not repopath:
3575 if not repopath:
3575 raise error.Abort(_('there is no Mercurial repository here '
3576 raise error.Abort(_('there is no Mercurial repository here '
3576 '(.hg not found)'))
3577 '(.hg not found)'))
3577 repo = hg.repository(ui, repopath)
3578 repo = hg.repository(ui, repopath)
3578 return qinit(ui, repo, True)
3579 return qinit(ui, repo, True)
3579
3580
3580 def mqcommand(orig, ui, repo, *args, **kwargs):
3581 def mqcommand(orig, ui, repo, *args, **kwargs):
3581 """Add --mq option to operate on patch repository instead of main"""
3582 """Add --mq option to operate on patch repository instead of main"""
3582
3583
3583 # some commands do not like getting unknown options
3584 # some commands do not like getting unknown options
3584 mq = kwargs.pop(r'mq', None)
3585 mq = kwargs.pop(r'mq', None)
3585
3586
3586 if not mq:
3587 if not mq:
3587 return orig(ui, repo, *args, **kwargs)
3588 return orig(ui, repo, *args, **kwargs)
3588
3589
3589 q = repo.mq
3590 q = repo.mq
3590 r = q.qrepo()
3591 r = q.qrepo()
3591 if not r:
3592 if not r:
3592 raise error.Abort(_('no queue repository'))
3593 raise error.Abort(_('no queue repository'))
3593 return orig(r.ui, r, *args, **kwargs)
3594 return orig(r.ui, r, *args, **kwargs)
3594
3595
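# Illustrative usage of the --mq flag handled by mqinit/mqcommand above
# (a sketch, not part of this changeset): wrapped commands can be redirected
# to the patch queue repository, e.g.
#
#   hg init --mq            # create a versioned repository for the queue
#   hg commit --mq -m msg   # commit pending changes to the queue repository
#   hg log --mq             # history of the patch queue itself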
3595 def summaryhook(ui, repo):
3596 def summaryhook(ui, repo):
3596 q = repo.mq
3597 q = repo.mq
3597 m = []
3598 m = []
3598 a, u = len(q.applied), len(q.unapplied(repo))
3599 a, u = len(q.applied), len(q.unapplied(repo))
3599 if a:
3600 if a:
3600 m.append(ui.label(_("%d applied"), 'qseries.applied') % a)
3601 m.append(ui.label(_("%d applied"), 'qseries.applied') % a)
3601 if u:
3602 if u:
3602 m.append(ui.label(_("%d unapplied"), 'qseries.unapplied') % u)
3603 m.append(ui.label(_("%d unapplied"), 'qseries.unapplied') % u)
3603 if m:
3604 if m:
3604 # i18n: column positioning for "hg summary"
3605 # i18n: column positioning for "hg summary"
3605 ui.write(_("mq: %s\n") % ', '.join(m))
3606 ui.write(_("mq: %s\n") % ', '.join(m))
3606 else:
3607 else:
3607 # i18n: column positioning for "hg summary"
3608 # i18n: column positioning for "hg summary"
3608 ui.note(_("mq: (empty queue)\n"))
3609 ui.note(_("mq: (empty queue)\n"))
3609
3610
3610 revsetpredicate = registrar.revsetpredicate()
3611 revsetpredicate = registrar.revsetpredicate()
3611
3612
3612 @revsetpredicate('mq()')
3613 @revsetpredicate('mq()')
3613 def revsetmq(repo, subset, x):
3614 def revsetmq(repo, subset, x):
3614 """Changesets managed by MQ.
3615 """Changesets managed by MQ.
3615 """
3616 """
3616 revsetlang.getargs(x, 0, 0, _("mq takes no arguments"))
3617 revsetlang.getargs(x, 0, 0, _("mq takes no arguments"))
3617 applied = set([repo[r.node].rev() for r in repo.mq.applied])
3618 applied = set([repo[r.node].rev() for r in repo.mq.applied])
3618 return smartset.baseset([r for r in subset if r in applied])
3619 return smartset.baseset([r for r in subset if r in applied])
3619
3620
3620 # tell hggettext to extract docstrings from these functions:
3621 # tell hggettext to extract docstrings from these functions:
3621 i18nfunctions = [revsetmq]
3622 i18nfunctions = [revsetmq]
3622
3623
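# Illustrative usage of the mq() revset defined above (a sketch, not part of
# this changeset):
#
#   hg log -r "mq()"              # changesets currently managed by mq
#   hg log -r "mq() and secret()" # applied patches still in the secret phase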
3623 def extsetup(ui):
3624 def extsetup(ui):
3624 # Ensure mq wrappers are called first, regardless of extension load order by
3625 # Ensure mq wrappers are called first, regardless of extension load order by
3625 # NOT wrapping in uisetup() and instead deferring to init stage two here.
3626 # NOT wrapping in uisetup() and instead deferring to init stage two here.
3626 mqopt = [('', 'mq', None, _("operate on patch repository"))]
3627 mqopt = [('', 'mq', None, _("operate on patch repository"))]
3627
3628
3628 extensions.wrapcommand(commands.table, 'import', mqimport)
3629 extensions.wrapcommand(commands.table, 'import', mqimport)
3629 cmdutil.summaryhooks.add('mq', summaryhook)
3630 cmdutil.summaryhooks.add('mq', summaryhook)
3630
3631
3631 entry = extensions.wrapcommand(commands.table, 'init', mqinit)
3632 entry = extensions.wrapcommand(commands.table, 'init', mqinit)
3632 entry[1].extend(mqopt)
3633 entry[1].extend(mqopt)
3633
3634
3634 def dotable(cmdtable):
3635 def dotable(cmdtable):
3635 for cmd, entry in cmdtable.iteritems():
3636 for cmd, entry in cmdtable.iteritems():
3636 cmd = cmdutil.parsealiases(cmd)[0]
3637 cmd = cmdutil.parsealiases(cmd)[0]
3637 func = entry[0]
3638 func = entry[0]
3638 if func.norepo:
3639 if func.norepo:
3639 continue
3640 continue
3640 entry = extensions.wrapcommand(cmdtable, cmd, mqcommand)
3641 entry = extensions.wrapcommand(cmdtable, cmd, mqcommand)
3641 entry[1].extend(mqopt)
3642 entry[1].extend(mqopt)
3642
3643
3643 dotable(commands.table)
3644 dotable(commands.table)
3644
3645
3645 for extname, extmodule in extensions.extensions():
3646 for extname, extmodule in extensions.extensions():
3646 if extmodule.__file__ != __file__:
3647 if extmodule.__file__ != __file__:
3647 dotable(getattr(extmodule, 'cmdtable', {}))
3648 dotable(getattr(extmodule, 'cmdtable', {}))
3648
3649
3649 colortable = {'qguard.negative': 'red',
3650 colortable = {'qguard.negative': 'red',
3650 'qguard.positive': 'yellow',
3651 'qguard.positive': 'yellow',
3651 'qguard.unguarded': 'green',
3652 'qguard.unguarded': 'green',
3652 'qseries.applied': 'blue bold underline',
3653 'qseries.applied': 'blue bold underline',
3653 'qseries.guarded': 'black bold',
3654 'qseries.guarded': 'black bold',
3654 'qseries.missing': 'red bold',
3655 'qseries.missing': 'red bold',
3655 'qseries.unapplied': 'black bold'}
3656 'qseries.unapplied': 'black bold'}
@@ -1,485 +1,485 b''
1 # notify.py - email notifications for mercurial
1 # notify.py - email notifications for mercurial
2 #
2 #
3 # Copyright 2006 Vadim Gelfer <vadim.gelfer@gmail.com>
3 # Copyright 2006 Vadim Gelfer <vadim.gelfer@gmail.com>
4 #
4 #
5 # This software may be used and distributed according to the terms of the
5 # This software may be used and distributed according to the terms of the
6 # GNU General Public License version 2 or any later version.
6 # GNU General Public License version 2 or any later version.
7
7
8 '''hooks for sending email push notifications
8 '''hooks for sending email push notifications
9
9
10 This extension implements hooks to send email notifications when
10 This extension implements hooks to send email notifications when
11 changesets are sent from or received by the local repository.
11 changesets are sent from or received by the local repository.
12
12
13 First, enable the extension as explained in :hg:`help extensions`, and
13 First, enable the extension as explained in :hg:`help extensions`, and
14 register the hook you want to run. ``incoming`` and ``changegroup`` hooks
14 register the hook you want to run. ``incoming`` and ``changegroup`` hooks
15 are run when changesets are received, while ``outgoing`` hooks are for
15 are run when changesets are received, while ``outgoing`` hooks are for
16 changesets sent to another repository::
16 changesets sent to another repository::
17
17
18 [hooks]
18 [hooks]
19 # one email for each incoming changeset
19 # one email for each incoming changeset
20 incoming.notify = python:hgext.notify.hook
20 incoming.notify = python:hgext.notify.hook
21 # one email for all incoming changesets
21 # one email for all incoming changesets
22 changegroup.notify = python:hgext.notify.hook
22 changegroup.notify = python:hgext.notify.hook
23
23
24 # one email for all outgoing changesets
24 # one email for all outgoing changesets
25 outgoing.notify = python:hgext.notify.hook
25 outgoing.notify = python:hgext.notify.hook
26
26
27 This registers the hooks. To enable notification, subscribers must
27 This registers the hooks. To enable notification, subscribers must
28 be assigned to repositories. The ``[usersubs]`` section maps multiple
28 be assigned to repositories. The ``[usersubs]`` section maps multiple
29 repositories to a given recipient. The ``[reposubs]`` section maps
29 repositories to a given recipient. The ``[reposubs]`` section maps
30 multiple recipients to a single repository::
30 multiple recipients to a single repository::
31
31
32 [usersubs]
32 [usersubs]
33 # key is subscriber email, value is a comma-separated list of repo patterns
33 # key is subscriber email, value is a comma-separated list of repo patterns
34 user@host = pattern
34 user@host = pattern
35
35
36 [reposubs]
36 [reposubs]
37 # key is repo pattern, value is a comma-separated list of subscriber emails
37 # key is repo pattern, value is a comma-separated list of subscriber emails
38 pattern = user@host
38 pattern = user@host
39
39
40 A ``pattern`` is a ``glob`` matching the absolute path to a repository,
40 A ``pattern`` is a ``glob`` matching the absolute path to a repository,
41 optionally combined with a revset expression. A revset expression, if
41 optionally combined with a revset expression. A revset expression, if
42 present, is separated from the glob by a hash. Example::
42 present, is separated from the glob by a hash. Example::
43
43
44 [reposubs]
44 [reposubs]
45 */widgets#branch(release) = qa-team@example.com
45 */widgets#branch(release) = qa-team@example.com
46
46
47 This sends to ``qa-team@example.com`` whenever a changeset on the ``release``
47 This sends to ``qa-team@example.com`` whenever a changeset on the ``release``
48 branch triggers a notification in any repository ending in ``widgets``.
48 branch triggers a notification in any repository ending in ``widgets``.
49
49
50 In order to place them under direct user management, ``[usersubs]`` and
50 In order to place them under direct user management, ``[usersubs]`` and
51 ``[reposubs]`` sections may be placed in a separate ``hgrc`` file and
51 ``[reposubs]`` sections may be placed in a separate ``hgrc`` file and
52 incorporated by reference::
52 incorporated by reference::
53
53
54 [notify]
54 [notify]
55 config = /path/to/subscriptionsfile
55 config = /path/to/subscriptionsfile
56
56
57 Notifications will not be sent until the ``notify.test`` value is set
57 Notifications will not be sent until the ``notify.test`` value is set
58 to ``False``; see below.
58 to ``False``; see below.
59
59
60 Notification content can be tweaked with the following configuration entries:
60 Notification content can be tweaked with the following configuration entries:
61
61
62 notify.test
62 notify.test
63 If ``True``, print messages to stdout instead of sending them. Default: True.
63 If ``True``, print messages to stdout instead of sending them. Default: True.
64
64
65 notify.sources
65 notify.sources
66 Space-separated list of change sources. Notifications are activated only
66 Space-separated list of change sources. Notifications are activated only
67 when a changeset's source is in this list. Sources may be:
67 when a changeset's source is in this list. Sources may be:
68
68
69 :``serve``: changesets received via http or ssh
69 :``serve``: changesets received via http or ssh
70 :``pull``: changesets received via ``hg pull``
70 :``pull``: changesets received via ``hg pull``
71 :``unbundle``: changesets received via ``hg unbundle``
71 :``unbundle``: changesets received via ``hg unbundle``
72 :``push``: changesets sent or received via ``hg push``
72 :``push``: changesets sent or received via ``hg push``
73 :``bundle``: changesets sent via ``hg bundle``
73 :``bundle``: changesets sent via ``hg bundle``
74
74
75 Default: serve.
75 Default: serve.
76
76
77 notify.strip
77 notify.strip
78 Number of leading slashes to strip from url paths. By default, notifications
78 Number of leading slashes to strip from url paths. By default, notifications
79 reference repositories with their absolute path. ``notify.strip`` lets you
79 reference repositories with their absolute path. ``notify.strip`` lets you
80 turn them into relative paths. For example, ``notify.strip=3`` will change
80 turn them into relative paths. For example, ``notify.strip=3`` will change
81 ``/long/path/repository`` into ``repository``. Default: 0.
81 ``/long/path/repository`` into ``repository``. Default: 0.
82
82
83 notify.domain
83 notify.domain
84 Default email domain for sender or recipients with no explicit domain.
84 Default email domain for sender or recipients with no explicit domain.
85
85
86 notify.style
86 notify.style
87 Style file to use when formatting emails.
87 Style file to use when formatting emails.
88
88
89 notify.template
89 notify.template
90 Template to use when formatting emails.
90 Template to use when formatting emails.
91
91
92 notify.incoming
92 notify.incoming
93 Template to use when run as an incoming hook, overriding ``notify.template``.
93 Template to use when run as an incoming hook, overriding ``notify.template``.
94
94
95 notify.outgoing
95 notify.outgoing
96 Template to use when run as an outgoing hook, overriding ``notify.template``.
96 Template to use when run as an outgoing hook, overriding ``notify.template``.
97
97
98 notify.changegroup
98 notify.changegroup
99 Template to use when running as a changegroup hook, overriding
99 Template to use when running as a changegroup hook, overriding
100 ``notify.template``.
100 ``notify.template``.
101
101
102 notify.maxdiff
102 notify.maxdiff
103 Maximum number of diff lines to include in notification email. Set to 0
103 Maximum number of diff lines to include in notification email. Set to 0
104 to disable the diff, or -1 to include all of it. Default: 300.
104 to disable the diff, or -1 to include all of it. Default: 300.
105
105
106 notify.maxsubject
106 notify.maxsubject
107 Maximum number of characters in email's subject line. Default: 67.
107 Maximum number of characters in email's subject line. Default: 67.
108
108
109 notify.diffstat
109 notify.diffstat
110 Set to True to include a diffstat before diff content. Default: True.
110 Set to True to include a diffstat before diff content. Default: True.
111
111
112 notify.merge
112 notify.merge
113 If True, send notifications for merge changesets. Default: True.
113 If True, send notifications for merge changesets. Default: True.
114
114
115 notify.mbox
115 notify.mbox
116 If set, append mails to this mbox file instead of sending. Default: None.
116 If set, append mails to this mbox file instead of sending. Default: None.
117
117
118 notify.fromauthor
118 notify.fromauthor
119 If set, use the committer of the first changeset in a changegroup for
119 If set, use the committer of the first changeset in a changegroup for
120 the "From" field of the notification mail. If not set, take the user
120 the "From" field of the notification mail. If not set, take the user
121 from the pushing repo. Default: False.
121 from the pushing repo. Default: False.
122
122
123 If set, the following entries will also be used to customize the
123 If set, the following entries will also be used to customize the
124 notifications:
124 notifications:
125
125
126 email.from
126 email.from
127 Email ``From`` address to use if none can be found in the generated
127 Email ``From`` address to use if none can be found in the generated
128 email content.
128 email content.
129
129
130 web.baseurl
130 web.baseurl
131 Root repository URL to combine with repository paths when making
131 Root repository URL to combine with repository paths when making
132 references. See also ``notify.strip``.
132 references. See also ``notify.strip``.
133
133
134 '''
134 '''
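# Example configuration (an illustrative sketch only; the subscriber address
# and values are hypothetical): a minimal hgrc that enables real delivery of
# one mail per incoming changeset, using the options documented above:
#
#   [extensions]
#   notify =
#
#   [hooks]
#   incoming.notify = python:hgext.notify.hook
#
#   [notify]
#   test = False
#   sources = serve push
#   maxdiff = 300
#
#   [reposubs]
#   * = qa-team@example.com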
135 from __future__ import absolute_import
135 from __future__ import absolute_import
136
136
137 import email
137 import email
138 import email.parser as emailparser
138 import email.parser as emailparser
139 import fnmatch
139 import fnmatch
140 import socket
140 import socket
141 import time
141 import time
142
142
143 from mercurial.i18n import _
143 from mercurial.i18n import _
144 from mercurial import (
144 from mercurial import (
145 cmdutil,
146 error,
145 error,
146 logcmdutil,
147 mail,
147 mail,
148 patch,
148 patch,
149 registrar,
149 registrar,
150 util,
150 util,
151 )
151 )
152
152
153 # Note for extension authors: ONLY specify testedwith = 'ships-with-hg-core' for
153 # Note for extension authors: ONLY specify testedwith = 'ships-with-hg-core' for
154 # extensions which SHIP WITH MERCURIAL. Non-mainline extensions should
154 # extensions which SHIP WITH MERCURIAL. Non-mainline extensions should
155 # be specifying the version(s) of Mercurial they are tested with, or
155 # be specifying the version(s) of Mercurial they are tested with, or
156 # leave the attribute unspecified.
156 # leave the attribute unspecified.
157 testedwith = 'ships-with-hg-core'
157 testedwith = 'ships-with-hg-core'
158
158
159 configtable = {}
159 configtable = {}
160 configitem = registrar.configitem(configtable)
160 configitem = registrar.configitem(configtable)
161
161
162 configitem('notify', 'changegroup',
162 configitem('notify', 'changegroup',
163 default=None,
163 default=None,
164 )
164 )
165 configitem('notify', 'config',
165 configitem('notify', 'config',
166 default=None,
166 default=None,
167 )
167 )
168 configitem('notify', 'diffstat',
168 configitem('notify', 'diffstat',
169 default=True,
169 default=True,
170 )
170 )
171 configitem('notify', 'domain',
171 configitem('notify', 'domain',
172 default=None,
172 default=None,
173 )
173 )
174 configitem('notify', 'fromauthor',
174 configitem('notify', 'fromauthor',
175 default=None,
175 default=None,
176 )
176 )
177 configitem('notify', 'incoming',
177 configitem('notify', 'incoming',
178 default=None,
178 default=None,
179 )
179 )
180 configitem('notify', 'maxdiff',
180 configitem('notify', 'maxdiff',
181 default=300,
181 default=300,
182 )
182 )
183 configitem('notify', 'maxsubject',
183 configitem('notify', 'maxsubject',
184 default=67,
184 default=67,
185 )
185 )
186 configitem('notify', 'mbox',
186 configitem('notify', 'mbox',
187 default=None,
187 default=None,
188 )
188 )
189 configitem('notify', 'merge',
189 configitem('notify', 'merge',
190 default=True,
190 default=True,
191 )
191 )
192 configitem('notify', 'outgoing',
192 configitem('notify', 'outgoing',
193 default=None,
193 default=None,
194 )
194 )
195 configitem('notify', 'sources',
195 configitem('notify', 'sources',
196 default='serve',
196 default='serve',
197 )
197 )
198 configitem('notify', 'strip',
198 configitem('notify', 'strip',
199 default=0,
199 default=0,
200 )
200 )
201 configitem('notify', 'style',
201 configitem('notify', 'style',
202 default=None,
202 default=None,
203 )
203 )
204 configitem('notify', 'template',
204 configitem('notify', 'template',
205 default=None,
205 default=None,
206 )
206 )
207 configitem('notify', 'test',
207 configitem('notify', 'test',
208 default=True,
208 default=True,
209 )
209 )
210
210
211 # template for single changeset can include email headers.
211 # template for single changeset can include email headers.
212 single_template = '''
212 single_template = '''
213 Subject: changeset in {webroot}: {desc|firstline|strip}
213 Subject: changeset in {webroot}: {desc|firstline|strip}
214 From: {author}
214 From: {author}
215
215
216 changeset {node|short} in {root}
216 changeset {node|short} in {root}
217 details: {baseurl}{webroot}?cmd=changeset;node={node|short}
217 details: {baseurl}{webroot}?cmd=changeset;node={node|short}
218 description:
218 description:
219 \t{desc|tabindent|strip}
219 \t{desc|tabindent|strip}
220 '''.lstrip()
220 '''.lstrip()
221
221
222 # template for multiple changesets should not contain email headers,
222 # template for multiple changesets should not contain email headers,
223 # because only the first set of headers will be used and the result will
223 # because only the first set of headers will be used and the result will
224 # look strange.
224 # look strange.
225 multiple_template = '''
225 multiple_template = '''
226 changeset {node|short} in {root}
226 changeset {node|short} in {root}
227 details: {baseurl}{webroot}?cmd=changeset;node={node|short}
227 details: {baseurl}{webroot}?cmd=changeset;node={node|short}
228 summary: {desc|firstline}
228 summary: {desc|firstline}
229 '''
229 '''
230
230
231 deftemplates = {
231 deftemplates = {
232 'changegroup': multiple_template,
232 'changegroup': multiple_template,
233 }
233 }
234
234
235 class notifier(object):
235 class notifier(object):
236 '''email notification class.'''
236 '''email notification class.'''
237
237
238 def __init__(self, ui, repo, hooktype):
238 def __init__(self, ui, repo, hooktype):
239 self.ui = ui
239 self.ui = ui
240 cfg = self.ui.config('notify', 'config')
240 cfg = self.ui.config('notify', 'config')
241 if cfg:
241 if cfg:
242 self.ui.readconfig(cfg, sections=['usersubs', 'reposubs'])
242 self.ui.readconfig(cfg, sections=['usersubs', 'reposubs'])
243 self.repo = repo
243 self.repo = repo
244 self.stripcount = int(self.ui.config('notify', 'strip'))
244 self.stripcount = int(self.ui.config('notify', 'strip'))
245 self.root = self.strip(self.repo.root)
245 self.root = self.strip(self.repo.root)
246 self.domain = self.ui.config('notify', 'domain')
246 self.domain = self.ui.config('notify', 'domain')
247 self.mbox = self.ui.config('notify', 'mbox')
247 self.mbox = self.ui.config('notify', 'mbox')
248 self.test = self.ui.configbool('notify', 'test')
248 self.test = self.ui.configbool('notify', 'test')
249 self.charsets = mail._charsets(self.ui)
249 self.charsets = mail._charsets(self.ui)
250 self.subs = self.subscribers()
250 self.subs = self.subscribers()
251 self.merge = self.ui.configbool('notify', 'merge')
251 self.merge = self.ui.configbool('notify', 'merge')
252
252
253 mapfile = None
253 mapfile = None
254 template = (self.ui.config('notify', hooktype) or
254 template = (self.ui.config('notify', hooktype) or
255 self.ui.config('notify', 'template'))
255 self.ui.config('notify', 'template'))
256 if not template:
256 if not template:
257 mapfile = self.ui.config('notify', 'style')
257 mapfile = self.ui.config('notify', 'style')
258 if not mapfile and not template:
258 if not mapfile and not template:
259 template = deftemplates.get(hooktype) or single_template
259 template = deftemplates.get(hooktype) or single_template
260 spec = cmdutil.logtemplatespec(template, mapfile)
260 spec = logcmdutil.templatespec(template, mapfile)
261 self.t = cmdutil.changeset_templater(self.ui, self.repo, spec,
261 self.t = logcmdutil.changesettemplater(self.ui, self.repo, spec,
262 False, None, False)
262 False, None, False)
263
263
264 def strip(self, path):
264 def strip(self, path):
265 '''strip leading slashes from local path, turn into web-safe path.'''
265 '''strip leading slashes from local path, turn into web-safe path.'''
266
266
267 path = util.pconvert(path)
267 path = util.pconvert(path)
268 count = self.stripcount
268 count = self.stripcount
269 while count > 0:
269 while count > 0:
270 c = path.find('/')
270 c = path.find('/')
271 if c == -1:
271 if c == -1:
272 break
272 break
273 path = path[c + 1:]
273 path = path[c + 1:]
274 count -= 1
274 count -= 1
275 return path
275 return path
276
276
277 def fixmail(self, addr):
277 def fixmail(self, addr):
278 '''try to clean up email addresses.'''
278 '''try to clean up email addresses.'''
279
279
280 addr = util.email(addr.strip())
280 addr = util.email(addr.strip())
281 if self.domain:
281 if self.domain:
282 a = addr.find('@localhost')
282 a = addr.find('@localhost')
283 if a != -1:
283 if a != -1:
284 addr = addr[:a]
284 addr = addr[:a]
285 if '@' not in addr:
285 if '@' not in addr:
286 return addr + '@' + self.domain
286 return addr + '@' + self.domain
287 return addr
287 return addr
288
288
289 def subscribers(self):
289 def subscribers(self):
290 '''return list of email addresses of subscribers to this repo.'''
290 '''return list of email addresses of subscribers to this repo.'''
291 subs = set()
291 subs = set()
292 for user, pats in self.ui.configitems('usersubs'):
292 for user, pats in self.ui.configitems('usersubs'):
293 for pat in pats.split(','):
293 for pat in pats.split(','):
294 if '#' in pat:
294 if '#' in pat:
295 pat, revs = pat.split('#', 1)
295 pat, revs = pat.split('#', 1)
296 else:
296 else:
297 revs = None
297 revs = None
298 if fnmatch.fnmatch(self.repo.root, pat.strip()):
298 if fnmatch.fnmatch(self.repo.root, pat.strip()):
299 subs.add((self.fixmail(user), revs))
299 subs.add((self.fixmail(user), revs))
300 for pat, users in self.ui.configitems('reposubs'):
300 for pat, users in self.ui.configitems('reposubs'):
301 if '#' in pat:
301 if '#' in pat:
302 pat, revs = pat.split('#', 1)
302 pat, revs = pat.split('#', 1)
303 else:
303 else:
304 revs = None
304 revs = None
305 if fnmatch.fnmatch(self.repo.root, pat):
305 if fnmatch.fnmatch(self.repo.root, pat):
306 for user in users.split(','):
306 for user in users.split(','):
307 subs.add((self.fixmail(user), revs))
307 subs.add((self.fixmail(user), revs))
308 return [(mail.addressencode(self.ui, s, self.charsets, self.test), r)
308 return [(mail.addressencode(self.ui, s, self.charsets, self.test), r)
309 for s, r in sorted(subs)]
309 for s, r in sorted(subs)]
310
310
311 def node(self, ctx, **props):
311 def node(self, ctx, **props):
312 '''format one changeset, unless it is a suppressed merge.'''
312 '''format one changeset, unless it is a suppressed merge.'''
313 if not self.merge and len(ctx.parents()) > 1:
313 if not self.merge and len(ctx.parents()) > 1:
314 return False
314 return False
315 self.t.show(ctx, changes=ctx.changeset(),
315 self.t.show(ctx, changes=ctx.changeset(),
316 baseurl=self.ui.config('web', 'baseurl'),
316 baseurl=self.ui.config('web', 'baseurl'),
317 root=self.repo.root, webroot=self.root, **props)
317 root=self.repo.root, webroot=self.root, **props)
318 return True
318 return True
319
319
320 def skipsource(self, source):
320 def skipsource(self, source):
321 '''true if incoming changes from this source should be skipped.'''
321 '''true if incoming changes from this source should be skipped.'''
322 ok_sources = self.ui.config('notify', 'sources').split()
322 ok_sources = self.ui.config('notify', 'sources').split()
323 return source not in ok_sources
323 return source not in ok_sources
324
324
325 def send(self, ctx, count, data):
325 def send(self, ctx, count, data):
326 '''send message.'''
326 '''send message.'''
327
327
328 # Select subscribers by revset
328 # Select subscribers by revset
329 subs = set()
329 subs = set()
330 for sub, spec in self.subs:
330 for sub, spec in self.subs:
331 if spec is None:
331 if spec is None:
332 subs.add(sub)
332 subs.add(sub)
333 continue
333 continue
334 revs = self.repo.revs('%r and %d:', spec, ctx.rev())
334 revs = self.repo.revs('%r and %d:', spec, ctx.rev())
335 if len(revs):
335 if len(revs):
336 subs.add(sub)
336 subs.add(sub)
337 continue
337 continue
338 if len(subs) == 0:
338 if len(subs) == 0:
339 self.ui.debug('notify: no subscribers to selected repo '
339 self.ui.debug('notify: no subscribers to selected repo '
340 'and revset\n')
340 'and revset\n')
341 return
341 return
342
342
343 p = emailparser.Parser()
343 p = emailparser.Parser()
344 try:
344 try:
345 msg = p.parsestr(data)
345 msg = p.parsestr(data)
346 except email.Errors.MessageParseError as inst:
346 except email.Errors.MessageParseError as inst:
347 raise error.Abort(inst)
347 raise error.Abort(inst)
348
348
349 # store sender and subject
349 # store sender and subject
350 sender, subject = msg['From'], msg['Subject']
350 sender, subject = msg['From'], msg['Subject']
351 del msg['From'], msg['Subject']
351 del msg['From'], msg['Subject']
352
352
353 if not msg.is_multipart():
353 if not msg.is_multipart():
354 # create fresh mime message from scratch
354 # create fresh mime message from scratch
355 # (multipart templates must take care of this themselves)
355 # (multipart templates must take care of this themselves)
356 headers = msg.items()
356 headers = msg.items()
357 payload = msg.get_payload()
357 payload = msg.get_payload()
358 # for notification prefer readability over data precision
358 # for notification prefer readability over data precision
359 msg = mail.mimeencode(self.ui, payload, self.charsets, self.test)
359 msg = mail.mimeencode(self.ui, payload, self.charsets, self.test)
360 # reinstate custom headers
360 # reinstate custom headers
361 for k, v in headers:
361 for k, v in headers:
362 msg[k] = v
362 msg[k] = v
363
363
364 msg['Date'] = util.datestr(format="%a, %d %b %Y %H:%M:%S %1%2")
364 msg['Date'] = util.datestr(format="%a, %d %b %Y %H:%M:%S %1%2")
365
365
366 # try to make subject line exist and be useful
366 # try to make subject line exist and be useful
367 if not subject:
367 if not subject:
368 if count > 1:
368 if count > 1:
369 subject = _('%s: %d new changesets') % (self.root, count)
369 subject = _('%s: %d new changesets') % (self.root, count)
370 else:
370 else:
371 s = ctx.description().lstrip().split('\n', 1)[0].rstrip()
371 s = ctx.description().lstrip().split('\n', 1)[0].rstrip()
372 subject = '%s: %s' % (self.root, s)
372 subject = '%s: %s' % (self.root, s)
373 maxsubject = int(self.ui.config('notify', 'maxsubject'))
373 maxsubject = int(self.ui.config('notify', 'maxsubject'))
374 if maxsubject:
374 if maxsubject:
375 subject = util.ellipsis(subject, maxsubject)
375 subject = util.ellipsis(subject, maxsubject)
376 msg['Subject'] = mail.headencode(self.ui, subject,
376 msg['Subject'] = mail.headencode(self.ui, subject,
377 self.charsets, self.test)
377 self.charsets, self.test)
378
378
379 # try to make message have proper sender
379 # try to make message have proper sender
380 if not sender:
380 if not sender:
381 sender = self.ui.config('email', 'from') or self.ui.username()
381 sender = self.ui.config('email', 'from') or self.ui.username()
382 if '@' not in sender or '@localhost' in sender:
382 if '@' not in sender or '@localhost' in sender:
383 sender = self.fixmail(sender)
383 sender = self.fixmail(sender)
384 msg['From'] = mail.addressencode(self.ui, sender,
384 msg['From'] = mail.addressencode(self.ui, sender,
385 self.charsets, self.test)
385 self.charsets, self.test)
386
386
387 msg['X-Hg-Notification'] = 'changeset %s' % ctx
387 msg['X-Hg-Notification'] = 'changeset %s' % ctx
388 if not msg['Message-Id']:
388 if not msg['Message-Id']:
389 msg['Message-Id'] = ('<hg.%s.%s.%s@%s>' %
389 msg['Message-Id'] = ('<hg.%s.%s.%s@%s>' %
390 (ctx, int(time.time()),
390 (ctx, int(time.time()),
391 hash(self.repo.root), socket.getfqdn()))
391 hash(self.repo.root), socket.getfqdn()))
392 msg['To'] = ', '.join(sorted(subs))
392 msg['To'] = ', '.join(sorted(subs))
393
393
394 msgtext = msg.as_string()
394 msgtext = msg.as_string()
395 if self.test:
395 if self.test:
396 self.ui.write(msgtext)
396 self.ui.write(msgtext)
397 if not msgtext.endswith('\n'):
397 if not msgtext.endswith('\n'):
398 self.ui.write('\n')
398 self.ui.write('\n')
399 else:
399 else:
400 self.ui.status(_('notify: sending %d subscribers %d changes\n') %
400 self.ui.status(_('notify: sending %d subscribers %d changes\n') %
401 (len(subs), count))
401 (len(subs), count))
402 mail.sendmail(self.ui, util.email(msg['From']),
402 mail.sendmail(self.ui, util.email(msg['From']),
403 subs, msgtext, mbox=self.mbox)
403 subs, msgtext, mbox=self.mbox)
404
404
405 def diff(self, ctx, ref=None):
405 def diff(self, ctx, ref=None):
406
406
407 maxdiff = int(self.ui.config('notify', 'maxdiff'))
407 maxdiff = int(self.ui.config('notify', 'maxdiff'))
408 prev = ctx.p1().node()
408 prev = ctx.p1().node()
409 if ref:
409 if ref:
410 ref = ref.node()
410 ref = ref.node()
411 else:
411 else:
412 ref = ctx.node()
412 ref = ctx.node()
413 chunks = patch.diff(self.repo, prev, ref,
413 chunks = patch.diff(self.repo, prev, ref,
414 opts=patch.diffallopts(self.ui))
414 opts=patch.diffallopts(self.ui))
415 difflines = ''.join(chunks).splitlines()
415 difflines = ''.join(chunks).splitlines()
416
416
417 if self.ui.configbool('notify', 'diffstat'):
417 if self.ui.configbool('notify', 'diffstat'):
418 s = patch.diffstat(difflines)
418 s = patch.diffstat(difflines)
419 # s may be nil, don't include the header if it is
419 # s may be nil, don't include the header if it is
420 if s:
420 if s:
421 self.ui.write(_('\ndiffstat:\n\n%s') % s)
421 self.ui.write(_('\ndiffstat:\n\n%s') % s)
422
422
423 if maxdiff == 0:
423 if maxdiff == 0:
424 return
424 return
425 elif maxdiff > 0 and len(difflines) > maxdiff:
425 elif maxdiff > 0 and len(difflines) > maxdiff:
426 msg = _('\ndiffs (truncated from %d to %d lines):\n\n')
426 msg = _('\ndiffs (truncated from %d to %d lines):\n\n')
427 self.ui.write(msg % (len(difflines), maxdiff))
427 self.ui.write(msg % (len(difflines), maxdiff))
428 difflines = difflines[:maxdiff]
428 difflines = difflines[:maxdiff]
429 elif difflines:
429 elif difflines:
430 self.ui.write(_('\ndiffs (%d lines):\n\n') % len(difflines))
430 self.ui.write(_('\ndiffs (%d lines):\n\n') % len(difflines))
431
431
432 self.ui.write("\n".join(difflines))
432 self.ui.write("\n".join(difflines))
433
433
434 def hook(ui, repo, hooktype, node=None, source=None, **kwargs):
434 def hook(ui, repo, hooktype, node=None, source=None, **kwargs):
435 '''send email notifications to interested subscribers.
435 '''send email notifications to interested subscribers.
436
436
437 If used as a changegroup hook, send one email for all changesets in the
437 If used as a changegroup hook, send one email for all changesets in the
438 changegroup; otherwise send one email per changeset.'''
438 changegroup; otherwise send one email per changeset.'''
439
439
440 n = notifier(ui, repo, hooktype)
440 n = notifier(ui, repo, hooktype)
441 ctx = repo[node]
441 ctx = repo[node]
442
442
443 if not n.subs:
443 if not n.subs:
444 ui.debug('notify: no subscribers to repository %s\n' % n.root)
444 ui.debug('notify: no subscribers to repository %s\n' % n.root)
445 return
445 return
446 if n.skipsource(source):
446 if n.skipsource(source):
447 ui.debug('notify: changes have source "%s" - skipping\n' % source)
447 ui.debug('notify: changes have source "%s" - skipping\n' % source)
448 return
448 return
449
449
450 ui.pushbuffer()
450 ui.pushbuffer()
451 data = ''
451 data = ''
452 count = 0
452 count = 0
453 author = ''
453 author = ''
454 if hooktype == 'changegroup' or hooktype == 'outgoing':
454 if hooktype == 'changegroup' or hooktype == 'outgoing':
455 start, end = ctx.rev(), len(repo)
455 start, end = ctx.rev(), len(repo)
456 for rev in xrange(start, end):
456 for rev in xrange(start, end):
457 if n.node(repo[rev]):
457 if n.node(repo[rev]):
458 count += 1
458 count += 1
459 if not author:
459 if not author:
460 author = repo[rev].user()
460 author = repo[rev].user()
461 else:
461 else:
462 data += ui.popbuffer()
462 data += ui.popbuffer()
463 ui.note(_('notify: suppressing notification for merge %d:%s\n')
463 ui.note(_('notify: suppressing notification for merge %d:%s\n')
464 % (rev, repo[rev].hex()[:12]))
464 % (rev, repo[rev].hex()[:12]))
465 ui.pushbuffer()
465 ui.pushbuffer()
466 if count:
466 if count:
467 n.diff(ctx, repo['tip'])
467 n.diff(ctx, repo['tip'])
468 else:
468 else:
469 if not n.node(ctx):
469 if not n.node(ctx):
470 ui.popbuffer()
470 ui.popbuffer()
471 ui.note(_('notify: suppressing notification for merge %d:%s\n') %
471 ui.note(_('notify: suppressing notification for merge %d:%s\n') %
472 (ctx.rev(), ctx.hex()[:12]))
472 (ctx.rev(), ctx.hex()[:12]))
473 return
473 return
474 count += 1
474 count += 1
475 n.diff(ctx)
475 n.diff(ctx)
476 if not author:
476 if not author:
477 author = ctx.user()
477 author = ctx.user()
478
478
479 data += ui.popbuffer()
479 data += ui.popbuffer()
480 fromauthor = ui.config('notify', 'fromauthor')
480 fromauthor = ui.config('notify', 'fromauthor')
481 if author and fromauthor:
481 if author and fromauthor:
482 data = '\n'.join(['From: %s' % author, data])
482 data = '\n'.join(['From: %s' % author, data])
483
483
484 if count:
484 if count:
485 n.send(ctx, count, data)
485 n.send(ctx, count, data)
@@ -1,468 +1,469 b''
1 # show.py - Extension implementing `hg show`
1 # show.py - Extension implementing `hg show`
2 #
2 #
3 # Copyright 2017 Gregory Szorc <gregory.szorc@gmail.com>
3 # Copyright 2017 Gregory Szorc <gregory.szorc@gmail.com>
4 #
4 #
5 # This software may be used and distributed according to the terms of the
5 # This software may be used and distributed according to the terms of the
6 # GNU General Public License version 2 or any later version.
6 # GNU General Public License version 2 or any later version.
7
7
8 """unified command to show various repository information (EXPERIMENTAL)
8 """unified command to show various repository information (EXPERIMENTAL)
9
9
10 This extension provides the :hg:`show` command, which provides a central
10 This extension provides the :hg:`show` command, which provides a central
11 command for displaying commonly-accessed repository data and views of that
11 command for displaying commonly-accessed repository data and views of that
12 data.
12 data.
13
13
14 The following config options can influence operation.
14 The following config options can influence operation.
15
15
16 ``commands``
16 ``commands``
17 ------------
17 ------------
18
18
19 ``show.aliasprefix``
19 ``show.aliasprefix``
20 List of strings that will register aliases for views. For example, ``s``
20 List of strings that will register aliases for views. For example, ``s``
21 will effectively set config options ``alias.s<view> = show <view>`` for
21 will effectively set config options ``alias.s<view> = show <view>`` for
22 all views, so ``hg swork`` would execute ``hg show work``.
22 all views, so ``hg swork`` would execute ``hg show work``.
23
23
24 Aliases that would conflict with existing registrations will not be
24 Aliases that would conflict with existing registrations will not be
25 performed.
25 performed.
26 """
26 """
27
27
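# Example configuration (a sketch, not part of this changeset; the prefix
# value is hypothetical): with
#
#   [commands]
#   show.aliasprefix = s
#
# in hgrc, each registered view gains an ``s``-prefixed alias, so ``hg swork``
# runs ``hg show work`` unless an alias of that name already exists.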
28 from __future__ import absolute_import
28 from __future__ import absolute_import
29
29
30 from mercurial.i18n import _
30 from mercurial.i18n import _
31 from mercurial.node import (
31 from mercurial.node import (
32 hex,
32 hex,
33 nullrev,
33 nullrev,
34 )
34 )
35 from mercurial import (
35 from mercurial import (
36 cmdutil,
36 cmdutil,
37 commands,
37 commands,
38 destutil,
38 destutil,
39 error,
39 error,
40 formatter,
40 formatter,
41 graphmod,
41 graphmod,
42 logcmdutil,
42 phases,
43 phases,
43 pycompat,
44 pycompat,
44 registrar,
45 registrar,
45 revset,
46 revset,
46 revsetlang,
47 revsetlang,
47 )
48 )
48
49
49 # Note for extension authors: ONLY specify testedwith = 'ships-with-hg-core' for
50 # Note for extension authors: ONLY specify testedwith = 'ships-with-hg-core' for
50 # extensions which SHIP WITH MERCURIAL. Non-mainline extensions should
51 # extensions which SHIP WITH MERCURIAL. Non-mainline extensions should
51 # be specifying the version(s) of Mercurial they are tested with, or
52 # be specifying the version(s) of Mercurial they are tested with, or
52 # leave the attribute unspecified.
53 # leave the attribute unspecified.
53 testedwith = 'ships-with-hg-core'
54 testedwith = 'ships-with-hg-core'
54
55
55 cmdtable = {}
56 cmdtable = {}
56 command = registrar.command(cmdtable)
57 command = registrar.command(cmdtable)
57
58
58 revsetpredicate = registrar.revsetpredicate()
59 revsetpredicate = registrar.revsetpredicate()
59
60
60 class showcmdfunc(registrar._funcregistrarbase):
61 class showcmdfunc(registrar._funcregistrarbase):
61 """Register a function to be invoked for an `hg show <thing>`."""
62 """Register a function to be invoked for an `hg show <thing>`."""
62
63
63 # Used by _formatdoc().
64 # Used by _formatdoc().
64 _docformat = '%s -- %s'
65 _docformat = '%s -- %s'
65
66
66 def _extrasetup(self, name, func, fmtopic=None, csettopic=None):
67 def _extrasetup(self, name, func, fmtopic=None, csettopic=None):
67 """Called with decorator arguments to register a show view.
68 """Called with decorator arguments to register a show view.
68
69
69 ``name`` is the sub-command name.
70 ``name`` is the sub-command name.
70
71
71 ``func`` is the function being decorated.
72 ``func`` is the function being decorated.
72
73
73 ``fmtopic`` is the topic in the style that will be rendered for
74 ``fmtopic`` is the topic in the style that will be rendered for
74 this view.
75 this view.
75
76
76 ``csettopic`` is the topic in the style to be used for a changeset
77 ``csettopic`` is the topic in the style to be used for a changeset
77 printer.
78 printer.
78
79
79 If ``fmtopic`` is specified, the view function will receive a
80 If ``fmtopic`` is specified, the view function will receive a
80 formatter instance. If ``csettopic`` is specified, the view
81 formatter instance. If ``csettopic`` is specified, the view
81 function will receive a changeset printer.
82 function will receive a changeset printer.
82 """
83 """
83 func._fmtopic = fmtopic
84 func._fmtopic = fmtopic
84 func._csettopic = csettopic
85 func._csettopic = csettopic
85
86
86 showview = showcmdfunc()
87 showview = showcmdfunc()
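As a sketch of the registration contract spelled out in ``_extrasetup`` above, a view registered with a formatter topic might look roughly like this; the ``drafts`` name, its template topic, and the body are illustrative and not part of this change:

    @showview('drafts', fmtopic='drafts')
    def showdrafts(ui, repo, fm):
        """draft changesets"""
        for rev in repo.revs('draft()'):
            ctx = repo[rev]
            fm.startitem()
            fm.context(ctx=ctx)
            fm.write('node', '%s', fm.hexfunc(ctx.node()))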
87
88
88 @command('show', [
89 @command('show', [
89 # TODO: Switch this template flag to use cmdutil.formatteropts if
90 # TODO: Switch this template flag to use cmdutil.formatteropts if
90 # 'hg show' becomes stable before --template/-T is stable. For now,
91 # 'hg show' becomes stable before --template/-T is stable. For now,
91 # we are putting it here without the '(EXPERIMENTAL)' flag because it
92 # we are putting it here without the '(EXPERIMENTAL)' flag because it
92 # is an important part of the 'hg show' user experience and the entire
93 # is an important part of the 'hg show' user experience and the entire
93 # 'hg show' experience is experimental.
94 # 'hg show' experience is experimental.
94 ('T', 'template', '', ('display with template'), _('TEMPLATE')),
95 ('T', 'template', '', ('display with template'), _('TEMPLATE')),
95 ], _('VIEW'))
96 ], _('VIEW'))
96 def show(ui, repo, view=None, template=None):
97 def show(ui, repo, view=None, template=None):
97 """show various repository information
98 """show various repository information
98
99
99 A requested view of repository data is displayed.
100 A requested view of repository data is displayed.
100
101
101 If no view is requested, the list of available views is shown and the
102 If no view is requested, the list of available views is shown and the
102 command aborts.
103 command aborts.
103
104
104 .. note::
105 .. note::
105
106
106 There are no backwards compatibility guarantees for the output of this
107 There are no backwards compatibility guarantees for the output of this
107 command. Output may change in any future Mercurial release.
108 command. Output may change in any future Mercurial release.
108
109
109 Consumers wanting stable command output should specify a template via
110 Consumers wanting stable command output should specify a template via
110 ``-T/--template``.
111 ``-T/--template``.
111
112
112 List of available views:
113 List of available views:
113 """
114 """
114 if ui.plain() and not template:
115 if ui.plain() and not template:
115 hint = _('invoke with -T/--template to control output format')
116 hint = _('invoke with -T/--template to control output format')
116 raise error.Abort(_('must specify a template in plain mode'), hint=hint)
117 raise error.Abort(_('must specify a template in plain mode'), hint=hint)
117
118
118 views = showview._table
119 views = showview._table
119
120
120 if not view:
121 if not view:
121 ui.pager('show')
122 ui.pager('show')
122 # TODO consider using formatter here so available views can be
123 # TODO consider using formatter here so available views can be
123 # rendered to custom format.
124 # rendered to custom format.
124 ui.write(_('available views:\n'))
125 ui.write(_('available views:\n'))
125 ui.write('\n')
126 ui.write('\n')
126
127
127 for name, func in sorted(views.items()):
128 for name, func in sorted(views.items()):
128 ui.write(('%s\n') % func.__doc__)
129 ui.write(('%s\n') % func.__doc__)
129
130
130 ui.write('\n')
131 ui.write('\n')
131 raise error.Abort(_('no view requested'),
132 raise error.Abort(_('no view requested'),
132 hint=_('use "hg show VIEW" to choose a view'))
133 hint=_('use "hg show VIEW" to choose a view'))
133
134
134 # TODO use same logic as dispatch to perform prefix matching.
135 # TODO use same logic as dispatch to perform prefix matching.
135 if view not in views:
136 if view not in views:
136 raise error.Abort(_('unknown view: %s') % view,
137 raise error.Abort(_('unknown view: %s') % view,
137 hint=_('run "hg show" to see available views'))
138 hint=_('run "hg show" to see available views'))
138
139
139 template = template or 'show'
140 template = template or 'show'
140
141
141 fn = views[view]
142 fn = views[view]
142 ui.pager('show')
143 ui.pager('show')
143
144
144 if fn._fmtopic:
145 if fn._fmtopic:
145 fmtopic = 'show%s' % fn._fmtopic
146 fmtopic = 'show%s' % fn._fmtopic
146 with ui.formatter(fmtopic, {'template': template}) as fm:
147 with ui.formatter(fmtopic, {'template': template}) as fm:
147 return fn(ui, repo, fm)
148 return fn(ui, repo, fm)
148 elif fn._csettopic:
149 elif fn._csettopic:
149 ref = 'show%s' % fn._csettopic
150 ref = 'show%s' % fn._csettopic
150 spec = formatter.lookuptemplate(ui, ref, template)
151 spec = formatter.lookuptemplate(ui, ref, template)
151 displayer = cmdutil.changeset_templater(ui, repo, spec, buffered=True)
152 displayer = logcmdutil.changesettemplater(ui, repo, spec, buffered=True)
152 return fn(ui, repo, displayer)
153 return fn(ui, repo, displayer)
153 else:
154 else:
154 return fn(ui, repo)
155 return fn(ui, repo)
155
156
156 @showview('bookmarks', fmtopic='bookmarks')
157 @showview('bookmarks', fmtopic='bookmarks')
157 def showbookmarks(ui, repo, fm):
158 def showbookmarks(ui, repo, fm):
158 """bookmarks and their associated changeset"""
159 """bookmarks and their associated changeset"""
159 marks = repo._bookmarks
160 marks = repo._bookmarks
160 if not len(marks):
161 if not len(marks):
161 # This is a bit hacky. Ideally, templates would have a way to
162 # This is a bit hacky. Ideally, templates would have a way to
162 # specify an empty output, but we shouldn't corrupt JSON while
163 # specify an empty output, but we shouldn't corrupt JSON while
163 # waiting for this functionality.
164 # waiting for this functionality.
164 if not isinstance(fm, formatter.jsonformatter):
165 if not isinstance(fm, formatter.jsonformatter):
165 ui.write(_('(no bookmarks set)\n'))
166 ui.write(_('(no bookmarks set)\n'))
166 return
167 return
167
168
168 revs = [repo[node].rev() for node in marks.values()]
169 revs = [repo[node].rev() for node in marks.values()]
169 active = repo._activebookmark
170 active = repo._activebookmark
170 longestname = max(len(b) for b in marks)
171 longestname = max(len(b) for b in marks)
171 nodelen = longestshortest(repo, revs)
172 nodelen = longestshortest(repo, revs)
172
173
173 for bm, node in sorted(marks.items()):
174 for bm, node in sorted(marks.items()):
174 fm.startitem()
175 fm.startitem()
175 fm.context(ctx=repo[node])
176 fm.context(ctx=repo[node])
176 fm.write('bookmark', '%s', bm)
177 fm.write('bookmark', '%s', bm)
177 fm.write('node', fm.hexfunc(node), fm.hexfunc(node))
178 fm.write('node', fm.hexfunc(node), fm.hexfunc(node))
178 fm.data(active=bm == active,
179 fm.data(active=bm == active,
179 longestbookmarklen=longestname,
180 longestbookmarklen=longestname,
180 nodelen=nodelen)
181 nodelen=nodelen)
181
182
182 @showview('stack', csettopic='stack')
183 @showview('stack', csettopic='stack')
183 def showstack(ui, repo, displayer):
184 def showstack(ui, repo, displayer):
184 """current line of work"""
185 """current line of work"""
185 wdirctx = repo['.']
186 wdirctx = repo['.']
186 if wdirctx.rev() == nullrev:
187 if wdirctx.rev() == nullrev:
187 raise error.Abort(_('stack view only available when there is a '
188 raise error.Abort(_('stack view only available when there is a '
188 'working directory'))
189 'working directory'))
189
190
190 if wdirctx.phase() == phases.public:
191 if wdirctx.phase() == phases.public:
191 ui.write(_('(empty stack; working directory parent is a published '
192 ui.write(_('(empty stack; working directory parent is a published '
192 'changeset)\n'))
193 'changeset)\n'))
193 return
194 return
194
195
195 # TODO extract "find stack" into a function to facilitate
196 # TODO extract "find stack" into a function to facilitate
196 # customization and reuse.
197 # customization and reuse.
197
198
198 baserev = destutil.stackbase(ui, repo)
199 baserev = destutil.stackbase(ui, repo)
199 basectx = None
200 basectx = None
200
201
201 if baserev is None:
202 if baserev is None:
202 baserev = wdirctx.rev()
203 baserev = wdirctx.rev()
203 stackrevs = {wdirctx.rev()}
204 stackrevs = {wdirctx.rev()}
204 else:
205 else:
205 stackrevs = set(repo.revs('%d::.', baserev))
206 stackrevs = set(repo.revs('%d::.', baserev))
206
207
207 ctx = repo[baserev]
208 ctx = repo[baserev]
208 if ctx.p1().rev() != nullrev:
209 if ctx.p1().rev() != nullrev:
209 basectx = ctx.p1()
210 basectx = ctx.p1()
210
211
211 # And relevant descendants.
212 # And relevant descendants.
212 branchpointattip = False
213 branchpointattip = False
213 cl = repo.changelog
214 cl = repo.changelog
214
215
215 for rev in cl.descendants([wdirctx.rev()]):
216 for rev in cl.descendants([wdirctx.rev()]):
216 ctx = repo[rev]
217 ctx = repo[rev]
217
218
218 # Will only happen if . is public.
219 # Will only happen if . is public.
219 if ctx.phase() == phases.public:
220 if ctx.phase() == phases.public:
220 break
221 break
221
222
222 stackrevs.add(ctx.rev())
223 stackrevs.add(ctx.rev())
223
224
224 # ctx.children() within a function iterating on descendants
225 # ctx.children() within a function iterating on descendants
225 # potentially has severe performance concerns because revlog.children()
226 # potentially has severe performance concerns because revlog.children()
226 # iterates over all revisions after ctx's node. However, the number of
227 # iterates over all revisions after ctx's node. However, the number of
227 # draft changesets should be a reasonably small number. So even if
228 # draft changesets should be a reasonably small number. So even if
228 # this is quadratic, the perf impact should be minimal.
229 # this is quadratic, the perf impact should be minimal.
229 if len(ctx.children()) > 1:
230 if len(ctx.children()) > 1:
230 branchpointattip = True
231 branchpointattip = True
231 break
232 break
232
233
233 stackrevs = list(sorted(stackrevs, reverse=True))
234 stackrevs = list(sorted(stackrevs, reverse=True))
234
235
235 # Find likely target heads for the current stack. These are likely
236 # Find likely target heads for the current stack. These are likely
236 # merge or rebase targets.
237 # merge or rebase targets.
237 if basectx:
238 if basectx:
238 # TODO make this customizable?
239 # TODO make this customizable?
239 newheads = set(repo.revs('heads(%d::) - %ld - not public()',
240 newheads = set(repo.revs('heads(%d::) - %ld - not public()',
240 basectx.rev(), stackrevs))
241 basectx.rev(), stackrevs))
241 else:
242 else:
242 newheads = set()
243 newheads = set()
243
244
244 allrevs = set(stackrevs) | newheads | set([baserev])
245 allrevs = set(stackrevs) | newheads | set([baserev])
245 nodelen = longestshortest(repo, allrevs)
246 nodelen = longestshortest(repo, allrevs)
246
247
247 try:
248 try:
248 cmdutil.findcmd('rebase', commands.table)
249 cmdutil.findcmd('rebase', commands.table)
249 haverebase = True
250 haverebase = True
250 except (error.AmbiguousCommand, error.UnknownCommand):
251 except (error.AmbiguousCommand, error.UnknownCommand):
251 haverebase = False
252 haverebase = False
252
253
253 # TODO use templating.
254 # TODO use templating.
254 # TODO consider using graphmod. But it may not be necessary given
255 # TODO consider using graphmod. But it may not be necessary given
255 # our simplicity and the customizations required.
256 # our simplicity and the customizations required.
256 # TODO use proper graph symbols from graphmod
257 # TODO use proper graph symbols from graphmod
257
258
258 tres = formatter.templateresources(ui, repo)
259 tres = formatter.templateresources(ui, repo)
259 shortesttmpl = formatter.maketemplater(ui, '{shortest(node, %d)}' % nodelen,
260 shortesttmpl = formatter.maketemplater(ui, '{shortest(node, %d)}' % nodelen,
260 resources=tres)
261 resources=tres)
261 def shortest(ctx):
262 def shortest(ctx):
262 return shortesttmpl.render({'ctx': ctx, 'node': ctx.hex()})
263 return shortesttmpl.render({'ctx': ctx, 'node': ctx.hex()})
263
264
264 # We write out new heads to aid in DAG awareness and to help with decision
265 # We write out new heads to aid in DAG awareness and to help with decision
265 # making on how the stack should be reconciled with commits made since the
266 # making on how the stack should be reconciled with commits made since the
266 # branch point.
267 # branch point.
267 if newheads:
268 if newheads:
268 # Calculate distance from base so we can render the count and so we can
269 # Calculate distance from base so we can render the count and so we can
269 # sort display order by commit distance.
270 # sort display order by commit distance.
270 revdistance = {}
271 revdistance = {}
271 for head in newheads:
272 for head in newheads:
272 # There is some redundancy in DAG traversal here and therefore
273 # There is some redundancy in DAG traversal here and therefore
273 # room to optimize.
274 # room to optimize.
274 ancestors = cl.ancestors([head], stoprev=basectx.rev())
275 ancestors = cl.ancestors([head], stoprev=basectx.rev())
275 revdistance[head] = len(list(ancestors))
276 revdistance[head] = len(list(ancestors))
276
277
277 sourcectx = repo[stackrevs[-1]]
278 sourcectx = repo[stackrevs[-1]]
278
279
279 sortedheads = sorted(newheads, key=lambda x: revdistance[x],
280 sortedheads = sorted(newheads, key=lambda x: revdistance[x],
280 reverse=True)
281 reverse=True)
281
282
282 for i, rev in enumerate(sortedheads):
283 for i, rev in enumerate(sortedheads):
283 ctx = repo[rev]
284 ctx = repo[rev]
284
285
285 if i:
286 if i:
286 ui.write(': ')
287 ui.write(': ')
287 else:
288 else:
288 ui.write(' ')
289 ui.write(' ')
289
290
290 ui.write(('o '))
291 ui.write(('o '))
291 displayer.show(ctx, nodelen=nodelen)
292 displayer.show(ctx, nodelen=nodelen)
292 displayer.flush(ctx)
293 displayer.flush(ctx)
293 ui.write('\n')
294 ui.write('\n')
294
295
295 if i:
296 if i:
296 ui.write(':/')
297 ui.write(':/')
297 else:
298 else:
298 ui.write(' /')
299 ui.write(' /')
299
300
300 ui.write(' (')
301 ui.write(' (')
301 ui.write(_('%d commits ahead') % revdistance[rev],
302 ui.write(_('%d commits ahead') % revdistance[rev],
302 label='stack.commitdistance')
303 label='stack.commitdistance')
303
304
304 if haverebase:
305 if haverebase:
305 # TODO may be able to omit --source in some scenarios
306 # TODO may be able to omit --source in some scenarios
306 ui.write('; ')
307 ui.write('; ')
307 ui.write(('hg rebase --source %s --dest %s' % (
308 ui.write(('hg rebase --source %s --dest %s' % (
308 shortest(sourcectx), shortest(ctx))),
309 shortest(sourcectx), shortest(ctx))),
309 label='stack.rebasehint')
310 label='stack.rebasehint')
310
311
311 ui.write(')\n')
312 ui.write(')\n')
312
313
313 ui.write(':\n: ')
314 ui.write(':\n: ')
314 ui.write(_('(stack head)\n'), label='stack.label')
315 ui.write(_('(stack head)\n'), label='stack.label')
315
316
316 if branchpointattip:
317 if branchpointattip:
317 ui.write(' \\ / ')
318 ui.write(' \\ / ')
318 ui.write(_('(multiple children)\n'), label='stack.label')
319 ui.write(_('(multiple children)\n'), label='stack.label')
319 ui.write(' |\n')
320 ui.write(' |\n')
320
321
321 for rev in stackrevs:
322 for rev in stackrevs:
322 ctx = repo[rev]
323 ctx = repo[rev]
323 symbol = '@' if rev == wdirctx.rev() else 'o'
324 symbol = '@' if rev == wdirctx.rev() else 'o'
324
325
325 if newheads:
326 if newheads:
326 ui.write(': ')
327 ui.write(': ')
327 else:
328 else:
328 ui.write(' ')
329 ui.write(' ')
329
330
330 ui.write(symbol, ' ')
331 ui.write(symbol, ' ')
331 displayer.show(ctx, nodelen=nodelen)
332 displayer.show(ctx, nodelen=nodelen)
332 displayer.flush(ctx)
333 displayer.flush(ctx)
333 ui.write('\n')
334 ui.write('\n')
334
335
335 # TODO display histedit hint?
336 # TODO display histedit hint?
336
337
337 if basectx:
338 if basectx:
338 # Vertically and horizontally separate stack base from parent
339 # Vertically and horizontally separate stack base from parent
339 # to reinforce stack boundary.
340 # to reinforce stack boundary.
340 if newheads:
341 if newheads:
341 ui.write(':/ ')
342 ui.write(':/ ')
342 else:
343 else:
343 ui.write(' / ')
344 ui.write(' / ')
344
345
345 ui.write(_('(stack base)'), '\n', label='stack.label')
346 ui.write(_('(stack base)'), '\n', label='stack.label')
346 ui.write(('o '))
347 ui.write(('o '))
347
348
348 displayer.show(basectx, nodelen=nodelen)
349 displayer.show(basectx, nodelen=nodelen)
349 displayer.flush(basectx)
350 displayer.flush(basectx)
350 ui.write('\n')
351 ui.write('\n')
351
352
352 @revsetpredicate('_underway([commitage[, headage]])')
353 @revsetpredicate('_underway([commitage[, headage]])')
353 def underwayrevset(repo, subset, x):
354 def underwayrevset(repo, subset, x):
354 args = revset.getargsdict(x, 'underway', 'commitage headage')
355 args = revset.getargsdict(x, 'underway', 'commitage headage')
355 if 'commitage' not in args:
356 if 'commitage' not in args:
356 args['commitage'] = None
357 args['commitage'] = None
357 if 'headage' not in args:
358 if 'headage' not in args:
358 args['headage'] = None
359 args['headage'] = None
359
360
360 # We assume callers of this revset add a topological sort on the
361 # We assume callers of this revset add a topological sort on the
361 # result. This means there is no benefit to making the revset lazy
362 # result. This means there is no benefit to making the revset lazy
362 # since the topological sort needs to consume all revs.
363 # since the topological sort needs to consume all revs.
363 #
364 #
364 # With this in mind, we build up the set manually instead of constructing
365 # With this in mind, we build up the set manually instead of constructing
365 # a complex revset. This enables faster execution.
366 # a complex revset. This enables faster execution.
366
367
367 # Mutable changesets (non-public) are the most important changesets
368 # Mutable changesets (non-public) are the most important changesets
368 # to return. ``not public()`` will also pull in obsolete changesets if
369 # to return. ``not public()`` will also pull in obsolete changesets if
369 # there is a non-obsolete changeset with obsolete ancestors. This is
370 # there is a non-obsolete changeset with obsolete ancestors. This is
370 # why we exclude obsolete changesets from this query.
371 # why we exclude obsolete changesets from this query.
371 rs = 'not public() and not obsolete()'
372 rs = 'not public() and not obsolete()'
372 rsargs = []
373 rsargs = []
373 if args['commitage']:
374 if args['commitage']:
374 rs += ' and date(%s)'
375 rs += ' and date(%s)'
375 rsargs.append(revsetlang.getstring(args['commitage'],
376 rsargs.append(revsetlang.getstring(args['commitage'],
376 _('commitage requires a string')))
377 _('commitage requires a string')))
377
378
378 mutable = repo.revs(rs, *rsargs)
379 mutable = repo.revs(rs, *rsargs)
379 relevant = revset.baseset(mutable)
380 relevant = revset.baseset(mutable)
380
381
381 # Add parents of mutable changesets to provide context.
382 # Add parents of mutable changesets to provide context.
382 relevant += repo.revs('parents(%ld)', mutable)
383 relevant += repo.revs('parents(%ld)', mutable)
383
384
384 # We also pull in (public) heads if they a) aren't closing a branch
385 # We also pull in (public) heads if they a) aren't closing a branch
385 # and b) are recent.
386 # and b) are recent.
386 rs = 'head() and not closed()'
387 rs = 'head() and not closed()'
387 rsargs = []
388 rsargs = []
388 if args['headage']:
389 if args['headage']:
389 rs += ' and date(%s)'
390 rs += ' and date(%s)'
390 rsargs.append(revsetlang.getstring(args['headage'],
391 rsargs.append(revsetlang.getstring(args['headage'],
391 _('headage requires a string')))
392 _('headage requires a string')))
392
393
393 relevant += repo.revs(rs, *rsargs)
394 relevant += repo.revs(rs, *rsargs)
394
395
395 # Add working directory parent.
396 # Add working directory parent.
396 wdirrev = repo['.'].rev()
397 wdirrev = repo['.'].rev()
397 if wdirrev != nullrev:
398 if wdirrev != nullrev:
398 relevant += revset.baseset({wdirrev})
399 relevant += revset.baseset({wdirrev})
399
400
400 return subset & relevant
401 return subset & relevant
401
402
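A hypothetical command-line probe of the predicate above, mirroring how the ``work`` view below consumes it (this extension must be enabled, and ``_underway`` is an internal, experimental predicate):

    $ hg log -G -r 'sort(_underway(), topo)'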
402 @showview('work', csettopic='work')
403 @showview('work', csettopic='work')
403 def showwork(ui, repo, displayer):
404 def showwork(ui, repo, displayer):
404 """changesets that aren't finished"""
405 """changesets that aren't finished"""
405 # TODO support date-based limiting when calling revset.
406 # TODO support date-based limiting when calling revset.
406 revs = repo.revs('sort(_underway(), topo)')
407 revs = repo.revs('sort(_underway(), topo)')
407 nodelen = longestshortest(repo, revs)
408 nodelen = longestshortest(repo, revs)
408
409
409 revdag = graphmod.dagwalker(repo, revs)
410 revdag = graphmod.dagwalker(repo, revs)
410
411
411 ui.setconfig('experimental', 'graphshorten', True)
412 ui.setconfig('experimental', 'graphshorten', True)
412 cmdutil.displaygraph(ui, repo, revdag, displayer, graphmod.asciiedges,
413 logcmdutil.displaygraph(ui, repo, revdag, displayer, graphmod.asciiedges,
413 props={'nodelen': nodelen})
414 props={'nodelen': nodelen})
414
415
415 def extsetup(ui):
416 def extsetup(ui):
416 # Alias `hg <prefix><view>` to `hg show <view>`.
417 # Alias `hg <prefix><view>` to `hg show <view>`.
417 for prefix in ui.configlist('commands', 'show.aliasprefix'):
418 for prefix in ui.configlist('commands', 'show.aliasprefix'):
418 for view in showview._table:
419 for view in showview._table:
419 name = '%s%s' % (prefix, view)
420 name = '%s%s' % (prefix, view)
420
421
421 choice, allcommands = cmdutil.findpossible(name, commands.table,
422 choice, allcommands = cmdutil.findpossible(name, commands.table,
422 strict=True)
423 strict=True)
423
424
424 # This alias is already a command name. Don't set it.
425 # This alias is already a command name. Don't set it.
425 if name in choice:
426 if name in choice:
426 continue
427 continue
427
428
428 # Same for aliases.
429 # Same for aliases.
429 if ui.config('alias', name):
430 if ui.config('alias', name):
430 continue
431 continue
431
432
432 ui.setconfig('alias', name, 'show %s' % view, source='show')
433 ui.setconfig('alias', name, 'show %s' % view, source='show')
433
434
434 def longestshortest(repo, revs, minlen=4):
435 def longestshortest(repo, revs, minlen=4):
435 """Return the length of the longest shortest node to identify revisions.
436 """Return the length of the longest shortest node to identify revisions.
436
437
437 The result of this function can be used with the ``shortest()`` template
438 The result of this function can be used with the ``shortest()`` template
438 function to ensure that a value is unique and unambiguous for a given
439 function to ensure that a value is unique and unambiguous for a given
439 set of nodes.
440 set of nodes.
440
441
441 The number of revisions in the repo is taken into account to prevent
442 The number of revisions in the repo is taken into account to prevent
442 a numeric node prefix from conflicting with an integer revision number.
443 a numeric node prefix from conflicting with an integer revision number.
443 If we fail to do this, a value of e.g. ``10023`` could mean either
444 If we fail to do this, a value of e.g. ``10023`` could mean either
444 revision 10023 or node ``10023abc...``.
445 revision 10023 or node ``10023abc...``.
445 """
446 """
446 if not revs:
447 if not revs:
447 return minlen
448 return minlen
448 # don't use filtered repo because it's slow. see templater.shortest().
449 # don't use filtered repo because it's slow. see templater.shortest().
449 cl = repo.unfiltered().changelog
450 cl = repo.unfiltered().changelog
450 return max(len(cl.shortest(hex(cl.node(r)), minlen)) for r in revs)
451 return max(len(cl.shortest(hex(cl.node(r)), minlen)) for r in revs)
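To restate the pattern already used by the stack view above, a caller would typically pair this helper with the ``shortest()`` template function roughly as follows; ``revs`` is any iterable of revision numbers and ``ctx`` one of their changectxs, and this is a sketch rather than code introduced by this change:

    tres = formatter.templateresources(ui, repo)
    nodelen = longestshortest(repo, revs)
    tmpl = formatter.maketemplater(ui, '{shortest(node, %d)}' % nodelen,
                                   resources=tres)
    # yields the shortest prefix that is unambiguous for this repo
    shortnode = tmpl.render({'ctx': ctx, 'node': ctx.hex()})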
451
452
452 # Adjust the docstring of the show command so it shows all registered views.
453 # Adjust the docstring of the show command so it shows all registered views.
453 # This is a bit hacky because it runs at the end of module load. When moved
454 # This is a bit hacky because it runs at the end of module load. When moved
454 # into core or when another extension wants to provide a view, we'll need
455 # into core or when another extension wants to provide a view, we'll need
455 # to do this more robustly.
456 # to do this more robustly.
456 # TODO make this more robust.
457 # TODO make this more robust.
457 def _updatedocstring():
458 def _updatedocstring():
458 longest = max(map(len, showview._table.keys()))
459 longest = max(map(len, showview._table.keys()))
459 entries = []
460 entries = []
460 for key in sorted(showview._table.keys()):
461 for key in sorted(showview._table.keys()):
461 entries.append(pycompat.sysstr(' %s %s' % (
462 entries.append(pycompat.sysstr(' %s %s' % (
462 key.ljust(longest), showview._table[key]._origdoc)))
463 key.ljust(longest), showview._table[key]._origdoc)))
463
464
464 cmdtable['show'][0].__doc__ = pycompat.sysstr('%s\n\n%s\n ') % (
465 cmdtable['show'][0].__doc__ = pycompat.sysstr('%s\n\n%s\n ') % (
465 cmdtable['show'][0].__doc__.rstrip(),
466 cmdtable['show'][0].__doc__.rstrip(),
466 pycompat.sysstr('\n\n').join(entries))
467 pycompat.sysstr('\n\n').join(entries))
467
468
468 _updatedocstring()
469 _updatedocstring()
@@ -1,757 +1,758 b''
1 # Patch transplanting extension for Mercurial
1 # Patch transplanting extension for Mercurial
2 #
2 #
3 # Copyright 2006, 2007 Brendan Cully <brendan@kublai.com>
3 # Copyright 2006, 2007 Brendan Cully <brendan@kublai.com>
4 #
4 #
5 # This software may be used and distributed according to the terms of the
5 # This software may be used and distributed according to the terms of the
6 # GNU General Public License version 2 or any later version.
6 # GNU General Public License version 2 or any later version.
7
7
8 '''command to transplant changesets from another branch
8 '''command to transplant changesets from another branch
9
9
10 This extension allows you to transplant changes to another parent revision,
10 This extension allows you to transplant changes to another parent revision,
11 possibly in another repository. The transplant is done using 'diff' patches.
11 possibly in another repository. The transplant is done using 'diff' patches.
12
12
13 Transplanted patches are recorded in .hg/transplant/transplants, as a
13 Transplanted patches are recorded in .hg/transplant/transplants, as a
14 map from a changeset hash to its hash in the source repository.
14 map from a changeset hash to its hash in the source repository.
15 '''
15 '''
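For illustration only, the transplants map mentioned above is a plain text file in which each line pairs the hex node of the local (transplanted) changeset with the hex node it came from in the source repository, separated by a colon:

    <hex-of-local-changeset>:<hex-of-source-changeset>

The placeholders stand in for full 40-character hashes; see ``transplants.read()`` and ``transplants.write()`` below.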
16 from __future__ import absolute_import
16 from __future__ import absolute_import
17
17
18 import os
18 import os
19 import tempfile
19 import tempfile
20 from mercurial.i18n import _
20 from mercurial.i18n import _
21 from mercurial import (
21 from mercurial import (
22 bundlerepo,
22 bundlerepo,
23 cmdutil,
23 cmdutil,
24 error,
24 error,
25 exchange,
25 exchange,
26 hg,
26 hg,
27 logcmdutil,
27 match,
28 match,
28 merge,
29 merge,
29 node as nodemod,
30 node as nodemod,
30 patch,
31 patch,
31 pycompat,
32 pycompat,
32 registrar,
33 registrar,
33 revlog,
34 revlog,
34 revset,
35 revset,
35 scmutil,
36 scmutil,
36 smartset,
37 smartset,
37 util,
38 util,
38 vfs as vfsmod,
39 vfs as vfsmod,
39 )
40 )
40
41
41 class TransplantError(error.Abort):
42 class TransplantError(error.Abort):
42 pass
43 pass
43
44
44 cmdtable = {}
45 cmdtable = {}
45 command = registrar.command(cmdtable)
46 command = registrar.command(cmdtable)
46 # Note for extension authors: ONLY specify testedwith = 'ships-with-hg-core' for
47 # Note for extension authors: ONLY specify testedwith = 'ships-with-hg-core' for
47 # extensions which SHIP WITH MERCURIAL. Non-mainline extensions should
48 # extensions which SHIP WITH MERCURIAL. Non-mainline extensions should
48 # be specifying the version(s) of Mercurial they are tested with, or
49 # be specifying the version(s) of Mercurial they are tested with, or
49 # leave the attribute unspecified.
50 # leave the attribute unspecified.
50 testedwith = 'ships-with-hg-core'
51 testedwith = 'ships-with-hg-core'
51
52
52 configtable = {}
53 configtable = {}
53 configitem = registrar.configitem(configtable)
54 configitem = registrar.configitem(configtable)
54
55
55 configitem('transplant', 'filter',
56 configitem('transplant', 'filter',
56 default=None,
57 default=None,
57 )
58 )
58 configitem('transplant', 'log',
59 configitem('transplant', 'log',
59 default=None,
60 default=None,
60 )
61 )
61
62
62 class transplantentry(object):
63 class transplantentry(object):
63 def __init__(self, lnode, rnode):
64 def __init__(self, lnode, rnode):
64 self.lnode = lnode
65 self.lnode = lnode
65 self.rnode = rnode
66 self.rnode = rnode
66
67
67 class transplants(object):
68 class transplants(object):
68 def __init__(self, path=None, transplantfile=None, opener=None):
69 def __init__(self, path=None, transplantfile=None, opener=None):
69 self.path = path
70 self.path = path
70 self.transplantfile = transplantfile
71 self.transplantfile = transplantfile
71 self.opener = opener
72 self.opener = opener
72
73
73 if not opener:
74 if not opener:
74 self.opener = vfsmod.vfs(self.path)
75 self.opener = vfsmod.vfs(self.path)
75 self.transplants = {}
76 self.transplants = {}
76 self.dirty = False
77 self.dirty = False
77 self.read()
78 self.read()
78
79
79 def read(self):
80 def read(self):
80 abspath = os.path.join(self.path, self.transplantfile)
81 abspath = os.path.join(self.path, self.transplantfile)
81 if self.transplantfile and os.path.exists(abspath):
82 if self.transplantfile and os.path.exists(abspath):
82 for line in self.opener.read(self.transplantfile).splitlines():
83 for line in self.opener.read(self.transplantfile).splitlines():
83 lnode, rnode = map(revlog.bin, line.split(':'))
84 lnode, rnode = map(revlog.bin, line.split(':'))
84 list = self.transplants.setdefault(rnode, [])
85 list = self.transplants.setdefault(rnode, [])
85 list.append(transplantentry(lnode, rnode))
86 list.append(transplantentry(lnode, rnode))
86
87
87 def write(self):
88 def write(self):
88 if self.dirty and self.transplantfile:
89 if self.dirty and self.transplantfile:
89 if not os.path.isdir(self.path):
90 if not os.path.isdir(self.path):
90 os.mkdir(self.path)
91 os.mkdir(self.path)
91 fp = self.opener(self.transplantfile, 'w')
92 fp = self.opener(self.transplantfile, 'w')
92 for list in self.transplants.itervalues():
93 for list in self.transplants.itervalues():
93 for t in list:
94 for t in list:
94 l, r = map(nodemod.hex, (t.lnode, t.rnode))
95 l, r = map(nodemod.hex, (t.lnode, t.rnode))
95 fp.write(l + ':' + r + '\n')
96 fp.write(l + ':' + r + '\n')
96 fp.close()
97 fp.close()
97 self.dirty = False
98 self.dirty = False
98
99
99 def get(self, rnode):
100 def get(self, rnode):
100 return self.transplants.get(rnode) or []
101 return self.transplants.get(rnode) or []
101
102
102 def set(self, lnode, rnode):
103 def set(self, lnode, rnode):
103 list = self.transplants.setdefault(rnode, [])
104 list = self.transplants.setdefault(rnode, [])
104 list.append(transplantentry(lnode, rnode))
105 list.append(transplantentry(lnode, rnode))
105 self.dirty = True
106 self.dirty = True
106
107
107 def remove(self, transplant):
108 def remove(self, transplant):
108 list = self.transplants.get(transplant.rnode)
109 list = self.transplants.get(transplant.rnode)
109 if list:
110 if list:
110 del list[list.index(transplant)]
111 del list[list.index(transplant)]
111 self.dirty = True
112 self.dirty = True
112
113
113 class transplanter(object):
114 class transplanter(object):
114 def __init__(self, ui, repo, opts):
115 def __init__(self, ui, repo, opts):
115 self.ui = ui
116 self.ui = ui
116 self.path = repo.vfs.join('transplant')
117 self.path = repo.vfs.join('transplant')
117 self.opener = vfsmod.vfs(self.path)
118 self.opener = vfsmod.vfs(self.path)
118 self.transplants = transplants(self.path, 'transplants',
119 self.transplants = transplants(self.path, 'transplants',
119 opener=self.opener)
120 opener=self.opener)
120 def getcommiteditor():
121 def getcommiteditor():
121 editform = cmdutil.mergeeditform(repo[None], 'transplant')
122 editform = cmdutil.mergeeditform(repo[None], 'transplant')
122 return cmdutil.getcommiteditor(editform=editform, **opts)
123 return cmdutil.getcommiteditor(editform=editform, **opts)
123 self.getcommiteditor = getcommiteditor
124 self.getcommiteditor = getcommiteditor
124
125
125 def applied(self, repo, node, parent):
126 def applied(self, repo, node, parent):
126 '''returns True if a node is already an ancestor of parent,
127 '''returns True if a node is already an ancestor of parent,
127 is parent itself, or has already been transplanted'''
128 is parent itself, or has already been transplanted'''
128 if hasnode(repo, parent):
129 if hasnode(repo, parent):
129 parentrev = repo.changelog.rev(parent)
130 parentrev = repo.changelog.rev(parent)
130 if hasnode(repo, node):
131 if hasnode(repo, node):
131 rev = repo.changelog.rev(node)
132 rev = repo.changelog.rev(node)
132 reachable = repo.changelog.ancestors([parentrev], rev,
133 reachable = repo.changelog.ancestors([parentrev], rev,
133 inclusive=True)
134 inclusive=True)
134 if rev in reachable:
135 if rev in reachable:
135 return True
136 return True
136 for t in self.transplants.get(node):
137 for t in self.transplants.get(node):
137 # it might have been stripped
138 # it might have been stripped
138 if not hasnode(repo, t.lnode):
139 if not hasnode(repo, t.lnode):
139 self.transplants.remove(t)
140 self.transplants.remove(t)
140 return False
141 return False
141 lnoderev = repo.changelog.rev(t.lnode)
142 lnoderev = repo.changelog.rev(t.lnode)
142 if lnoderev in repo.changelog.ancestors([parentrev], lnoderev,
143 if lnoderev in repo.changelog.ancestors([parentrev], lnoderev,
143 inclusive=True):
144 inclusive=True):
144 return True
145 return True
145 return False
146 return False
146
147
147 def apply(self, repo, source, revmap, merges, opts=None):
148 def apply(self, repo, source, revmap, merges, opts=None):
148 '''apply the revisions in revmap one by one in revision order'''
149 '''apply the revisions in revmap one by one in revision order'''
149 if opts is None:
150 if opts is None:
150 opts = {}
151 opts = {}
151 revs = sorted(revmap)
152 revs = sorted(revmap)
152 p1, p2 = repo.dirstate.parents()
153 p1, p2 = repo.dirstate.parents()
153 pulls = []
154 pulls = []
154 diffopts = patch.difffeatureopts(self.ui, opts)
155 diffopts = patch.difffeatureopts(self.ui, opts)
155 diffopts.git = True
156 diffopts.git = True
156
157
157 lock = tr = None
158 lock = tr = None
158 try:
159 try:
159 lock = repo.lock()
160 lock = repo.lock()
160 tr = repo.transaction('transplant')
161 tr = repo.transaction('transplant')
161 for rev in revs:
162 for rev in revs:
162 node = revmap[rev]
163 node = revmap[rev]
163 revstr = '%s:%s' % (rev, nodemod.short(node))
164 revstr = '%s:%s' % (rev, nodemod.short(node))
164
165
165 if self.applied(repo, node, p1):
166 if self.applied(repo, node, p1):
166 self.ui.warn(_('skipping already applied revision %s\n') %
167 self.ui.warn(_('skipping already applied revision %s\n') %
167 revstr)
168 revstr)
168 continue
169 continue
169
170
170 parents = source.changelog.parents(node)
171 parents = source.changelog.parents(node)
171 if not (opts.get('filter') or opts.get('log')):
172 if not (opts.get('filter') or opts.get('log')):
172 # If the changeset parent is the same as the
173 # If the changeset parent is the same as the
173 # wdir's parent, just pull it.
174 # wdir's parent, just pull it.
174 if parents[0] == p1:
175 if parents[0] == p1:
175 pulls.append(node)
176 pulls.append(node)
176 p1 = node
177 p1 = node
177 continue
178 continue
178 if pulls:
179 if pulls:
179 if source != repo:
180 if source != repo:
180 exchange.pull(repo, source.peer(), heads=pulls)
181 exchange.pull(repo, source.peer(), heads=pulls)
181 merge.update(repo, pulls[-1], False, False)
182 merge.update(repo, pulls[-1], False, False)
182 p1, p2 = repo.dirstate.parents()
183 p1, p2 = repo.dirstate.parents()
183 pulls = []
184 pulls = []
184
185
185 domerge = False
186 domerge = False
186 if node in merges:
187 if node in merges:
187 # pulling all the merge revs at once would mean we
188 # pulling all the merge revs at once would mean we
188 # couldn't transplant after the latest even if
189 # couldn't transplant after the latest even if
189 # transplants before them fail.
190 # transplants before them fail.
190 domerge = True
191 domerge = True
191 if not hasnode(repo, node):
192 if not hasnode(repo, node):
192 exchange.pull(repo, source.peer(), heads=[node])
193 exchange.pull(repo, source.peer(), heads=[node])
193
194
194 skipmerge = False
195 skipmerge = False
195 if parents[1] != revlog.nullid:
196 if parents[1] != revlog.nullid:
196 if not opts.get('parent'):
197 if not opts.get('parent'):
197 self.ui.note(_('skipping merge changeset %s:%s\n')
198 self.ui.note(_('skipping merge changeset %s:%s\n')
198 % (rev, nodemod.short(node)))
199 % (rev, nodemod.short(node)))
199 skipmerge = True
200 skipmerge = True
200 else:
201 else:
201 parent = source.lookup(opts['parent'])
202 parent = source.lookup(opts['parent'])
202 if parent not in parents:
203 if parent not in parents:
203 raise error.Abort(_('%s is not a parent of %s') %
204 raise error.Abort(_('%s is not a parent of %s') %
204 (nodemod.short(parent),
205 (nodemod.short(parent),
205 nodemod.short(node)))
206 nodemod.short(node)))
206 else:
207 else:
207 parent = parents[0]
208 parent = parents[0]
208
209
209 if skipmerge:
210 if skipmerge:
210 patchfile = None
211 patchfile = None
211 else:
212 else:
212 fd, patchfile = tempfile.mkstemp(prefix='hg-transplant-')
213 fd, patchfile = tempfile.mkstemp(prefix='hg-transplant-')
213 fp = os.fdopen(fd, pycompat.sysstr('w'))
214 fp = os.fdopen(fd, pycompat.sysstr('w'))
214 gen = patch.diff(source, parent, node, opts=diffopts)
215 gen = patch.diff(source, parent, node, opts=diffopts)
215 for chunk in gen:
216 for chunk in gen:
216 fp.write(chunk)
217 fp.write(chunk)
217 fp.close()
218 fp.close()
218
219
219 del revmap[rev]
220 del revmap[rev]
220 if patchfile or domerge:
221 if patchfile or domerge:
221 try:
222 try:
222 try:
223 try:
223 n = self.applyone(repo, node,
224 n = self.applyone(repo, node,
224 source.changelog.read(node),
225 source.changelog.read(node),
225 patchfile, merge=domerge,
226 patchfile, merge=domerge,
226 log=opts.get('log'),
227 log=opts.get('log'),
227 filter=opts.get('filter'))
228 filter=opts.get('filter'))
228 except TransplantError:
229 except TransplantError:
229 # Do not rollback, it is up to the user to
230 # Do not rollback, it is up to the user to
230 # fix the merge or cancel everything
231 # fix the merge or cancel everything
231 tr.close()
232 tr.close()
232 raise
233 raise
233 if n and domerge:
234 if n and domerge:
234 self.ui.status(_('%s merged at %s\n') % (revstr,
235 self.ui.status(_('%s merged at %s\n') % (revstr,
235 nodemod.short(n)))
236 nodemod.short(n)))
236 elif n:
237 elif n:
237 self.ui.status(_('%s transplanted to %s\n')
238 self.ui.status(_('%s transplanted to %s\n')
238 % (nodemod.short(node),
239 % (nodemod.short(node),
239 nodemod.short(n)))
240 nodemod.short(n)))
240 finally:
241 finally:
241 if patchfile:
242 if patchfile:
242 os.unlink(patchfile)
243 os.unlink(patchfile)
243 tr.close()
244 tr.close()
244 if pulls:
245 if pulls:
245 exchange.pull(repo, source.peer(), heads=pulls)
246 exchange.pull(repo, source.peer(), heads=pulls)
246 merge.update(repo, pulls[-1], False, False)
247 merge.update(repo, pulls[-1], False, False)
247 finally:
248 finally:
248 self.saveseries(revmap, merges)
249 self.saveseries(revmap, merges)
249 self.transplants.write()
250 self.transplants.write()
250 if tr:
251 if tr:
251 tr.release()
252 tr.release()
252 if lock:
253 if lock:
253 lock.release()
254 lock.release()
254
255
255 def filter(self, filter, node, changelog, patchfile):
256 def filter(self, filter, node, changelog, patchfile):
256 '''arbitrarily rewrite changeset before applying it'''
257 '''arbitrarily rewrite changeset before applying it'''
257
258
258 self.ui.status(_('filtering %s\n') % patchfile)
259 self.ui.status(_('filtering %s\n') % patchfile)
259 user, date, msg = (changelog[1], changelog[2], changelog[4])
260 user, date, msg = (changelog[1], changelog[2], changelog[4])
260 fd, headerfile = tempfile.mkstemp(prefix='hg-transplant-')
261 fd, headerfile = tempfile.mkstemp(prefix='hg-transplant-')
261 fp = os.fdopen(fd, pycompat.sysstr('w'))
262 fp = os.fdopen(fd, pycompat.sysstr('w'))
262 fp.write("# HG changeset patch\n")
263 fp.write("# HG changeset patch\n")
263 fp.write("# User %s\n" % user)
264 fp.write("# User %s\n" % user)
264 fp.write("# Date %d %d\n" % date)
265 fp.write("# Date %d %d\n" % date)
265 fp.write(msg + '\n')
266 fp.write(msg + '\n')
266 fp.close()
267 fp.close()
267
268
268 try:
269 try:
269 self.ui.system('%s %s %s' % (filter, util.shellquote(headerfile),
270 self.ui.system('%s %s %s' % (filter, util.shellquote(headerfile),
270 util.shellquote(patchfile)),
271 util.shellquote(patchfile)),
271 environ={'HGUSER': changelog[1],
272 environ={'HGUSER': changelog[1],
272 'HGREVISION': nodemod.hex(node),
273 'HGREVISION': nodemod.hex(node),
273 },
274 },
274 onerr=error.Abort, errprefix=_('filter failed'),
275 onerr=error.Abort, errprefix=_('filter failed'),
275 blockedtag='transplant_filter')
276 blockedtag='transplant_filter')
276 user, date, msg = self.parselog(file(headerfile))[1:4]
277 user, date, msg = self.parselog(file(headerfile))[1:4]
277 finally:
278 finally:
278 os.unlink(headerfile)
279 os.unlink(headerfile)
279
280
280 return (user, date, msg)
281 return (user, date, msg)
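A sketch of a script usable with ``--filter``, following the calling convention above: it is invoked with the header file and the patch file as its two arguments, with HGUSER and HGREVISION in the environment, and may rewrite either file in place. The script itself is hypothetical:

    #!/usr/bin/env python
    # append a note to the commit message kept in the header file
    import os, sys
    headerfile, patchfile = sys.argv[1:3]
    with open(headerfile, 'a') as fp:
        fp.write('\n(filtered from %s)\n' % os.environ.get('HGREVISION', ''))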
281
282
282 def applyone(self, repo, node, cl, patchfile, merge=False, log=False,
283 def applyone(self, repo, node, cl, patchfile, merge=False, log=False,
283 filter=None):
284 filter=None):
284 '''apply the patch in patchfile to the repository as a transplant'''
285 '''apply the patch in patchfile to the repository as a transplant'''
285 (manifest, user, (time, timezone), files, message) = cl[:5]
286 (manifest, user, (time, timezone), files, message) = cl[:5]
286 date = "%d %d" % (time, timezone)
287 date = "%d %d" % (time, timezone)
287 extra = {'transplant_source': node}
288 extra = {'transplant_source': node}
288 if filter:
289 if filter:
289 (user, date, message) = self.filter(filter, node, cl, patchfile)
290 (user, date, message) = self.filter(filter, node, cl, patchfile)
290
291
291 if log:
292 if log:
292 # we don't translate messages inserted into commits
293 # we don't translate messages inserted into commits
293 message += '\n(transplanted from %s)' % nodemod.hex(node)
294 message += '\n(transplanted from %s)' % nodemod.hex(node)
294
295
295 self.ui.status(_('applying %s\n') % nodemod.short(node))
296 self.ui.status(_('applying %s\n') % nodemod.short(node))
296 self.ui.note('%s %s\n%s\n' % (user, date, message))
297 self.ui.note('%s %s\n%s\n' % (user, date, message))
297
298
298 if not patchfile and not merge:
299 if not patchfile and not merge:
299 raise error.Abort(_('can only omit patchfile if merging'))
300 raise error.Abort(_('can only omit patchfile if merging'))
300 if patchfile:
301 if patchfile:
301 try:
302 try:
302 files = set()
303 files = set()
303 patch.patch(self.ui, repo, patchfile, files=files, eolmode=None)
304 patch.patch(self.ui, repo, patchfile, files=files, eolmode=None)
304 files = list(files)
305 files = list(files)
305 except Exception as inst:
306 except Exception as inst:
306 seriespath = os.path.join(self.path, 'series')
307 seriespath = os.path.join(self.path, 'series')
307 if os.path.exists(seriespath):
308 if os.path.exists(seriespath):
308 os.unlink(seriespath)
309 os.unlink(seriespath)
309 p1 = repo.dirstate.p1()
310 p1 = repo.dirstate.p1()
310 p2 = node
311 p2 = node
311 self.log(user, date, message, p1, p2, merge=merge)
312 self.log(user, date, message, p1, p2, merge=merge)
312 self.ui.write(str(inst) + '\n')
313 self.ui.write(str(inst) + '\n')
313 raise TransplantError(_('fix up the working directory and run '
314 raise TransplantError(_('fix up the working directory and run '
314 'hg transplant --continue'))
315 'hg transplant --continue'))
315 else:
316 else:
316 files = None
317 files = None
317 if merge:
318 if merge:
318 p1, p2 = repo.dirstate.parents()
319 p1, p2 = repo.dirstate.parents()
319 repo.setparents(p1, node)
320 repo.setparents(p1, node)
320 m = match.always(repo.root, '')
321 m = match.always(repo.root, '')
321 else:
322 else:
322 m = match.exact(repo.root, '', files)
323 m = match.exact(repo.root, '', files)
323
324
324 n = repo.commit(message, user, date, extra=extra, match=m,
325 n = repo.commit(message, user, date, extra=extra, match=m,
325 editor=self.getcommiteditor())
326 editor=self.getcommiteditor())
326 if not n:
327 if not n:
327 self.ui.warn(_('skipping emptied changeset %s\n') %
328 self.ui.warn(_('skipping emptied changeset %s\n') %
328 nodemod.short(node))
329 nodemod.short(node))
329 return None
330 return None
330 if not merge:
331 if not merge:
331 self.transplants.set(n, node)
332 self.transplants.set(n, node)
332
333
333 return n
334 return n
334
335
335 def canresume(self):
336 def canresume(self):
336 return os.path.exists(os.path.join(self.path, 'journal'))
337 return os.path.exists(os.path.join(self.path, 'journal'))
337
338
338 def resume(self, repo, source, opts):
339 def resume(self, repo, source, opts):
339 '''recover last transaction and apply remaining changesets'''
340 '''recover last transaction and apply remaining changesets'''
340 if os.path.exists(os.path.join(self.path, 'journal')):
341 if os.path.exists(os.path.join(self.path, 'journal')):
341 n, node = self.recover(repo, source, opts)
342 n, node = self.recover(repo, source, opts)
342 if n:
343 if n:
343 self.ui.status(_('%s transplanted as %s\n') %
344 self.ui.status(_('%s transplanted as %s\n') %
344 (nodemod.short(node),
345 (nodemod.short(node),
345 nodemod.short(n)))
346 nodemod.short(n)))
346 else:
347 else:
347 self.ui.status(_('%s skipped due to empty diff\n')
348 self.ui.status(_('%s skipped due to empty diff\n')
348 % (nodemod.short(node),))
349 % (nodemod.short(node),))
349 seriespath = os.path.join(self.path, 'series')
350 seriespath = os.path.join(self.path, 'series')
350 if not os.path.exists(seriespath):
351 if not os.path.exists(seriespath):
351 self.transplants.write()
352 self.transplants.write()
352 return
353 return
353 nodes, merges = self.readseries()
354 nodes, merges = self.readseries()
354 revmap = {}
355 revmap = {}
355 for n in nodes:
356 for n in nodes:
356 revmap[source.changelog.rev(n)] = n
357 revmap[source.changelog.rev(n)] = n
357 os.unlink(seriespath)
358 os.unlink(seriespath)
358
359
359 self.apply(repo, source, revmap, merges, opts)
360 self.apply(repo, source, revmap, merges, opts)
360
361
361 def recover(self, repo, source, opts):
362 def recover(self, repo, source, opts):
362 '''commit working directory using journal metadata'''
363 '''commit working directory using journal metadata'''
363 node, user, date, message, parents = self.readlog()
364 node, user, date, message, parents = self.readlog()
364 merge = False
365 merge = False
365
366
366 if not user or not date or not message or not parents[0]:
367 if not user or not date or not message or not parents[0]:
367 raise error.Abort(_('transplant log file is corrupt'))
368 raise error.Abort(_('transplant log file is corrupt'))
368
369
369 parent = parents[0]
370 parent = parents[0]
370 if len(parents) > 1:
371 if len(parents) > 1:
371 if opts.get('parent'):
372 if opts.get('parent'):
372 parent = source.lookup(opts['parent'])
373 parent = source.lookup(opts['parent'])
373 if parent not in parents:
374 if parent not in parents:
374 raise error.Abort(_('%s is not a parent of %s') %
375 raise error.Abort(_('%s is not a parent of %s') %
375 (nodemod.short(parent),
376 (nodemod.short(parent),
376 nodemod.short(node)))
377 nodemod.short(node)))
377 else:
378 else:
378 merge = True
379 merge = True
379
380
380 extra = {'transplant_source': node}
381 extra = {'transplant_source': node}
381 try:
382 try:
382 p1, p2 = repo.dirstate.parents()
383 p1, p2 = repo.dirstate.parents()
383 if p1 != parent:
384 if p1 != parent:
384 raise error.Abort(_('working directory not at transplant '
385 raise error.Abort(_('working directory not at transplant '
385 'parent %s') % nodemod.hex(parent))
386 'parent %s') % nodemod.hex(parent))
386 if merge:
387 if merge:
387 repo.setparents(p1, parents[1])
388 repo.setparents(p1, parents[1])
388 modified, added, removed, deleted = repo.status()[:4]
389 modified, added, removed, deleted = repo.status()[:4]
389 if merge or modified or added or removed or deleted:
390 if merge or modified or added or removed or deleted:
390 n = repo.commit(message, user, date, extra=extra,
391 n = repo.commit(message, user, date, extra=extra,
391 editor=self.getcommiteditor())
392 editor=self.getcommiteditor())
392 if not n:
393 if not n:
393 raise error.Abort(_('commit failed'))
394 raise error.Abort(_('commit failed'))
394 if not merge:
395 if not merge:
395 self.transplants.set(n, node)
396 self.transplants.set(n, node)
396 else:
397 else:
397 n = None
398 n = None
398 self.unlog()
399 self.unlog()
399
400
400 return n, node
401 return n, node
401 finally:
402 finally:
402 # TODO: get rid of this meaningless try/finally enclosing.
403 # TODO: get rid of this meaningless try/finally enclosing.
403 # this is kept only to reduce changes in a patch.
404 # this is kept only to reduce changes in a patch.
404 pass
405 pass
405
406
406 def readseries(self):
407 def readseries(self):
407 nodes = []
408 nodes = []
408 merges = []
409 merges = []
409 cur = nodes
410 cur = nodes
410 for line in self.opener.read('series').splitlines():
411 for line in self.opener.read('series').splitlines():
411 if line.startswith('# Merges'):
412 if line.startswith('# Merges'):
412 cur = merges
413 cur = merges
413 continue
414 continue
414 cur.append(revlog.bin(line))
415 cur.append(revlog.bin(line))
415
416
416 return (nodes, merges)
417 return (nodes, merges)
417
418
418 def saveseries(self, revmap, merges):
419 def saveseries(self, revmap, merges):
419 if not revmap:
420 if not revmap:
420 return
421 return
421
422
422 if not os.path.isdir(self.path):
423 if not os.path.isdir(self.path):
423 os.mkdir(self.path)
424 os.mkdir(self.path)
424 series = self.opener('series', 'w')
425 series = self.opener('series', 'w')
425 for rev in sorted(revmap):
426 for rev in sorted(revmap):
426 series.write(nodemod.hex(revmap[rev]) + '\n')
427 series.write(nodemod.hex(revmap[rev]) + '\n')
427 if merges:
428 if merges:
428 series.write('# Merges\n')
429 series.write('# Merges\n')
429 for m in merges:
430 for m in merges:
430 series.write(nodemod.hex(m) + '\n')
431 series.write(nodemod.hex(m) + '\n')
431 series.close()
432 series.close()
432
433
433 def parselog(self, fp):
434 def parselog(self, fp):
434 parents = []
435 parents = []
435 message = []
436 message = []
436 node = revlog.nullid
437 node = revlog.nullid
437 inmsg = False
438 inmsg = False
438 user = None
439 user = None
439 date = None
440 date = None
440 for line in fp.read().splitlines():
441 for line in fp.read().splitlines():
441 if inmsg:
442 if inmsg:
442 message.append(line)
443 message.append(line)
443 elif line.startswith('# User '):
444 elif line.startswith('# User '):
444 user = line[7:]
445 user = line[7:]
445 elif line.startswith('# Date '):
446 elif line.startswith('# Date '):
446 date = line[7:]
447 date = line[7:]
447 elif line.startswith('# Node ID '):
448 elif line.startswith('# Node ID '):
448 node = revlog.bin(line[10:])
449 node = revlog.bin(line[10:])
449 elif line.startswith('# Parent '):
450 elif line.startswith('# Parent '):
450 parents.append(revlog.bin(line[9:]))
451 parents.append(revlog.bin(line[9:]))
451 elif not line.startswith('# '):
452 elif not line.startswith('# '):
452 inmsg = True
453 inmsg = True
453 message.append(line)
454 message.append(line)
454 if None in (user, date):
455 if None in (user, date):
455 raise error.Abort(_("filter corrupted changeset (no user or date)"))
456 raise error.Abort(_("filter corrupted changeset (no user or date)"))
456 return (node, user, date, '\n'.join(message), parents)
457 return (node, user, date, '\n'.join(message), parents)
457
458
458 def log(self, user, date, message, p1, p2, merge=False):
459 def log(self, user, date, message, p1, p2, merge=False):
459 '''journal changelog metadata for later recover'''
460 '''journal changelog metadata for later recover'''
460
461
461 if not os.path.isdir(self.path):
462 if not os.path.isdir(self.path):
462 os.mkdir(self.path)
463 os.mkdir(self.path)
463 fp = self.opener('journal', 'w')
464 fp = self.opener('journal', 'w')
464 fp.write('# User %s\n' % user)
465 fp.write('# User %s\n' % user)
465 fp.write('# Date %s\n' % date)
466 fp.write('# Date %s\n' % date)
466 fp.write('# Node ID %s\n' % nodemod.hex(p2))
467 fp.write('# Node ID %s\n' % nodemod.hex(p2))
467 fp.write('# Parent ' + nodemod.hex(p1) + '\n')
468 fp.write('# Parent ' + nodemod.hex(p1) + '\n')
468 if merge:
469 if merge:
469 fp.write('# Parent ' + nodemod.hex(p2) + '\n')
470 fp.write('# Parent ' + nodemod.hex(p2) + '\n')
470 fp.write(message.rstrip() + '\n')
471 fp.write(message.rstrip() + '\n')
471 fp.close()
472 fp.close()
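For reference, the journal written here (and read back by ``parselog`` above) has the following shape; the values are placeholders, not real data:

    # User <committer name and email>
    # Date <unixtime> <timezone offset>
    # Node ID <hex of the transplanted source node>
    # Parent <hex of the first parent>
    <original commit message>

A second ``# Parent`` line is added when recording a merge.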
472
473
473 def readlog(self):
474 def readlog(self):
474 return self.parselog(self.opener('journal'))
475 return self.parselog(self.opener('journal'))
475
476
476 def unlog(self):
477 def unlog(self):
477 '''remove changelog journal'''
478 '''remove changelog journal'''
478 absdst = os.path.join(self.path, 'journal')
479 absdst = os.path.join(self.path, 'journal')
479 if os.path.exists(absdst):
480 if os.path.exists(absdst):
480 os.unlink(absdst)
481 os.unlink(absdst)
481
482
482 def transplantfilter(self, repo, source, root):
483 def transplantfilter(self, repo, source, root):
483 def matchfn(node):
484 def matchfn(node):
484 if self.applied(repo, node, root):
485 if self.applied(repo, node, root):
485 return False
486 return False
486 if source.changelog.parents(node)[1] != revlog.nullid:
487 if source.changelog.parents(node)[1] != revlog.nullid:
487 return False
488 return False
488 extra = source.changelog.read(node)[5]
489 extra = source.changelog.read(node)[5]
489 cnode = extra.get('transplant_source')
490 cnode = extra.get('transplant_source')
490 if cnode and self.applied(repo, cnode, root):
491 if cnode and self.applied(repo, cnode, root):
491 return False
492 return False
492 return True
493 return True
493
494
494 return matchfn
495 return matchfn
495
496
496 def hasnode(repo, node):
497 def hasnode(repo, node):
497 try:
498 try:
498 return repo.changelog.rev(node) is not None
499 return repo.changelog.rev(node) is not None
499 except error.RevlogError:
500 except error.RevlogError:
500 return False
501 return False
501
502
502 def browserevs(ui, repo, nodes, opts):
503 def browserevs(ui, repo, nodes, opts):
503 '''interactively transplant changesets'''
504 '''interactively transplant changesets'''
504 displayer = cmdutil.show_changeset(ui, repo, opts)
505 displayer = logcmdutil.changesetdisplayer(ui, repo, opts)
505 transplants = []
506 transplants = []
506 merges = []
507 merges = []
507 prompt = _('apply changeset? [ynmpcq?]:'
508 prompt = _('apply changeset? [ynmpcq?]:'
508 '$$ &yes, transplant this changeset'
509 '$$ &yes, transplant this changeset'
509 '$$ &no, skip this changeset'
510 '$$ &no, skip this changeset'
510 '$$ &merge at this changeset'
511 '$$ &merge at this changeset'
511 '$$ show &patch'
512 '$$ show &patch'
512 '$$ &commit selected changesets'
513 '$$ &commit selected changesets'
513 '$$ &quit and cancel transplant'
514 '$$ &quit and cancel transplant'
514 '$$ &? (show this help)')
515 '$$ &? (show this help)')
515 for node in nodes:
516 for node in nodes:
516 displayer.show(repo[node])
517 displayer.show(repo[node])
517 action = None
518 action = None
518 while not action:
519 while not action:
519 action = 'ynmpcq?'[ui.promptchoice(prompt)]
520 action = 'ynmpcq?'[ui.promptchoice(prompt)]
520 if action == '?':
521 if action == '?':
521 for c, t in ui.extractchoices(prompt)[1]:
522 for c, t in ui.extractchoices(prompt)[1]:
522 ui.write('%s: %s\n' % (c, t))
523 ui.write('%s: %s\n' % (c, t))
523 action = None
524 action = None
524 elif action == 'p':
525 elif action == 'p':
525 parent = repo.changelog.parents(node)[0]
526 parent = repo.changelog.parents(node)[0]
526 for chunk in patch.diff(repo, parent, node):
527 for chunk in patch.diff(repo, parent, node):
527 ui.write(chunk)
528 ui.write(chunk)
528 action = None
529 action = None
529 if action == 'y':
530 if action == 'y':
530 transplants.append(node)
531 transplants.append(node)
531 elif action == 'm':
532 elif action == 'm':
532 merges.append(node)
533 merges.append(node)
533 elif action == 'c':
534 elif action == 'c':
534 break
535 break
535 elif action == 'q':
536 elif action == 'q':
536 transplants = ()
537 transplants = ()
537 merges = ()
538 merges = ()
538 break
539 break
539 displayer.close()
540 displayer.close()
540 return (transplants, merges)
541 return (transplants, merges)
541
542
542 @command('transplant',
543 @command('transplant',
543 [('s', 'source', '', _('transplant changesets from REPO'), _('REPO')),
544 [('s', 'source', '', _('transplant changesets from REPO'), _('REPO')),
544 ('b', 'branch', [], _('use this source changeset as head'), _('REV')),
545 ('b', 'branch', [], _('use this source changeset as head'), _('REV')),
545 ('a', 'all', None, _('pull all changesets up to the --branch revisions')),
546 ('a', 'all', None, _('pull all changesets up to the --branch revisions')),
546 ('p', 'prune', [], _('skip over REV'), _('REV')),
547 ('p', 'prune', [], _('skip over REV'), _('REV')),
547 ('m', 'merge', [], _('merge at REV'), _('REV')),
548 ('m', 'merge', [], _('merge at REV'), _('REV')),
548 ('', 'parent', '',
549 ('', 'parent', '',
549 _('parent to choose when transplanting merge'), _('REV')),
550 _('parent to choose when transplanting merge'), _('REV')),
550 ('e', 'edit', False, _('invoke editor on commit messages')),
551 ('e', 'edit', False, _('invoke editor on commit messages')),
551 ('', 'log', None, _('append transplant info to log message')),
552 ('', 'log', None, _('append transplant info to log message')),
552 ('c', 'continue', None, _('continue last transplant session '
553 ('c', 'continue', None, _('continue last transplant session '
553 'after fixing conflicts')),
554 'after fixing conflicts')),
554 ('', 'filter', '',
555 ('', 'filter', '',
555 _('filter changesets through command'), _('CMD'))],
556 _('filter changesets through command'), _('CMD'))],
556 _('hg transplant [-s REPO] [-b BRANCH [-a]] [-p REV] '
557 _('hg transplant [-s REPO] [-b BRANCH [-a]] [-p REV] '
557 '[-m REV] [REV]...'))
558 '[-m REV] [REV]...'))
558 def transplant(ui, repo, *revs, **opts):
559 def transplant(ui, repo, *revs, **opts):
559 '''transplant changesets from another branch
560 '''transplant changesets from another branch
560
561
561 Selected changesets will be applied on top of the current working
562 Selected changesets will be applied on top of the current working
562 directory with the log of the original changeset. The changesets
563 directory with the log of the original changeset. The changesets
563 are copied and will thus appear twice in the history with different
564 are copied and will thus appear twice in the history with different
564 identities.
565 identities.
565
566
566 Consider using the graft command if everything is inside the same
567 Consider using the graft command if everything is inside the same
567 repository - it will use merges and will usually give a better result.
568 repository - it will use merges and will usually give a better result.
568 Use the rebase extension if the changesets are unpublished and you want
569 Use the rebase extension if the changesets are unpublished and you want
569 to move them instead of copying them.
570 to move them instead of copying them.
570
571
571 If --log is specified, log messages will have a comment appended
572 If --log is specified, log messages will have a comment appended
572 of the form::
573 of the form::
573
574
574 (transplanted from CHANGESETHASH)
575 (transplanted from CHANGESETHASH)
575
576
576 You can rewrite the changelog message with the --filter option.
577 You can rewrite the changelog message with the --filter option.
577 Its argument will be invoked with the current changelog message as
578 Its argument will be invoked with the current changelog message as
578 $1 and the patch as $2.
579 $1 and the patch as $2.
579
580
580 --source/-s specifies another repository to use for selecting changesets,
581 --source/-s specifies another repository to use for selecting changesets,
581 just as if it temporarily had been pulled.
582 just as if it temporarily had been pulled.
582 If --branch/-b is specified, these revisions will be used as
583 If --branch/-b is specified, these revisions will be used as
583 heads when deciding which changesets to transplant, just as if only
584 heads when deciding which changesets to transplant, just as if only
584 these revisions had been pulled.
585 these revisions had been pulled.
585 If --all/-a is specified, all the revisions up to the heads specified
586 If --all/-a is specified, all the revisions up to the heads specified
586 with --branch will be transplanted.
587 with --branch will be transplanted.
587
588
588 Example:
589 Example:
589
590
590 - transplant all changes up to REV on top of your current revision::
591 - transplant all changes up to REV on top of your current revision::
591
592
592 hg transplant --branch REV --all
593 hg transplant --branch REV --all
593
594
594 You can optionally mark selected transplanted changesets as merge
595 You can optionally mark selected transplanted changesets as merge
595 changesets. You will not be prompted to transplant any ancestors
596 changesets. You will not be prompted to transplant any ancestors
596 of a merged transplant, and you can merge descendants of them
597 of a merged transplant, and you can merge descendants of them
597 normally instead of transplanting them.
598 normally instead of transplanting them.
598
599
599 Merge changesets may be transplanted directly by specifying the
600 Merge changesets may be transplanted directly by specifying the
600 proper parent changeset by calling :hg:`transplant --parent`.
601 proper parent changeset by calling :hg:`transplant --parent`.
601
602
602 If no merges or revisions are provided, :hg:`transplant` will
603 If no merges or revisions are provided, :hg:`transplant` will
603 start an interactive changeset browser.
604 start an interactive changeset browser.
604
605
605 If a changeset application fails, you can fix the merge by hand
606 If a changeset application fails, you can fix the merge by hand
606 and then resume where you left off by calling :hg:`transplant
607 and then resume where you left off by calling :hg:`transplant
607 --continue/-c`.
608 --continue/-c`.
608 '''
609 '''
609 with repo.wlock():
610 with repo.wlock():
610 return _dotransplant(ui, repo, *revs, **opts)
611 return _dotransplant(ui, repo, *revs, **opts)
611
612
612 def _dotransplant(ui, repo, *revs, **opts):
613 def _dotransplant(ui, repo, *revs, **opts):
613 def incwalk(repo, csets, match=util.always):
614 def incwalk(repo, csets, match=util.always):
614 for node in csets:
615 for node in csets:
615 if match(node):
616 if match(node):
616 yield node
617 yield node
617
618
618 def transplantwalk(repo, dest, heads, match=util.always):
619 def transplantwalk(repo, dest, heads, match=util.always):
619 '''Yield all nodes that are ancestors of a head but not ancestors
620 '''Yield all nodes that are ancestors of a head but not ancestors
620 of dest.
621 of dest.
621 If no heads are specified, the heads of repo will be used.'''
622 If no heads are specified, the heads of repo will be used.'''
622 if not heads:
623 if not heads:
623 heads = repo.heads()
624 heads = repo.heads()
624 ancestors = []
625 ancestors = []
625 ctx = repo[dest]
626 ctx = repo[dest]
626 for head in heads:
627 for head in heads:
627 ancestors.append(ctx.ancestor(repo[head]).node())
628 ancestors.append(ctx.ancestor(repo[head]).node())
628 for node in repo.changelog.nodesbetween(ancestors, heads)[0]:
629 for node in repo.changelog.nodesbetween(ancestors, heads)[0]:
629 if match(node):
630 if match(node):
630 yield node
631 yield node
631
632
632 def checkopts(opts, revs):
633 def checkopts(opts, revs):
633 if opts.get('continue'):
634 if opts.get('continue'):
634 if opts.get('branch') or opts.get('all') or opts.get('merge'):
635 if opts.get('branch') or opts.get('all') or opts.get('merge'):
635 raise error.Abort(_('--continue is incompatible with '
636 raise error.Abort(_('--continue is incompatible with '
636 '--branch, --all and --merge'))
637 '--branch, --all and --merge'))
637 return
638 return
638 if not (opts.get('source') or revs or
639 if not (opts.get('source') or revs or
639 opts.get('merge') or opts.get('branch')):
640 opts.get('merge') or opts.get('branch')):
640 raise error.Abort(_('no source URL, branch revision, or revision '
641 raise error.Abort(_('no source URL, branch revision, or revision '
641 'list provided'))
642 'list provided'))
642 if opts.get('all'):
643 if opts.get('all'):
643 if not opts.get('branch'):
644 if not opts.get('branch'):
644 raise error.Abort(_('--all requires a branch revision'))
645 raise error.Abort(_('--all requires a branch revision'))
645 if revs:
646 if revs:
646 raise error.Abort(_('--all is incompatible with a '
647 raise error.Abort(_('--all is incompatible with a '
647 'revision list'))
648 'revision list'))
648
649
649 checkopts(opts, revs)
650 checkopts(opts, revs)
650
651
651 if not opts.get('log'):
652 if not opts.get('log'):
652 # deprecated config: transplant.log
653 # deprecated config: transplant.log
653 opts['log'] = ui.config('transplant', 'log')
654 opts['log'] = ui.config('transplant', 'log')
654 if not opts.get('filter'):
655 if not opts.get('filter'):
655 # deprecated config: transplant.filter
656 # deprecated config: transplant.filter
656 opts['filter'] = ui.config('transplant', 'filter')
657 opts['filter'] = ui.config('transplant', 'filter')
657
658
658 tp = transplanter(ui, repo, opts)
659 tp = transplanter(ui, repo, opts)
659
660
660 p1, p2 = repo.dirstate.parents()
661 p1, p2 = repo.dirstate.parents()
661 if len(repo) > 0 and p1 == revlog.nullid:
662 if len(repo) > 0 and p1 == revlog.nullid:
662 raise error.Abort(_('no revision checked out'))
663 raise error.Abort(_('no revision checked out'))
663 if opts.get('continue'):
664 if opts.get('continue'):
664 if not tp.canresume():
665 if not tp.canresume():
665 raise error.Abort(_('no transplant to continue'))
666 raise error.Abort(_('no transplant to continue'))
666 else:
667 else:
667 cmdutil.checkunfinished(repo)
668 cmdutil.checkunfinished(repo)
668 if p2 != revlog.nullid:
669 if p2 != revlog.nullid:
669 raise error.Abort(_('outstanding uncommitted merges'))
670 raise error.Abort(_('outstanding uncommitted merges'))
670 m, a, r, d = repo.status()[:4]
671 m, a, r, d = repo.status()[:4]
671 if m or a or r or d:
672 if m or a or r or d:
672 raise error.Abort(_('outstanding local changes'))
673 raise error.Abort(_('outstanding local changes'))
673
674
674 sourcerepo = opts.get('source')
675 sourcerepo = opts.get('source')
675 if sourcerepo:
676 if sourcerepo:
676 peer = hg.peer(repo, opts, ui.expandpath(sourcerepo))
677 peer = hg.peer(repo, opts, ui.expandpath(sourcerepo))
677 heads = map(peer.lookup, opts.get('branch', ()))
678 heads = map(peer.lookup, opts.get('branch', ()))
678 target = set(heads)
679 target = set(heads)
679 for r in revs:
680 for r in revs:
680 try:
681 try:
681 target.add(peer.lookup(r))
682 target.add(peer.lookup(r))
682 except error.RepoError:
683 except error.RepoError:
683 pass
684 pass
684 source, csets, cleanupfn = bundlerepo.getremotechanges(ui, repo, peer,
685 source, csets, cleanupfn = bundlerepo.getremotechanges(ui, repo, peer,
685 onlyheads=sorted(target), force=True)
686 onlyheads=sorted(target), force=True)
686 else:
687 else:
687 source = repo
688 source = repo
688 heads = map(source.lookup, opts.get('branch', ()))
689 heads = map(source.lookup, opts.get('branch', ()))
689 cleanupfn = None
690 cleanupfn = None
690
691
691 try:
692 try:
692 if opts.get('continue'):
693 if opts.get('continue'):
693 tp.resume(repo, source, opts)
694 tp.resume(repo, source, opts)
694 return
695 return
695
696
696 tf = tp.transplantfilter(repo, source, p1)
697 tf = tp.transplantfilter(repo, source, p1)
697 if opts.get('prune'):
698 if opts.get('prune'):
698 prune = set(source.lookup(r)
699 prune = set(source.lookup(r)
699 for r in scmutil.revrange(source, opts.get('prune')))
700 for r in scmutil.revrange(source, opts.get('prune')))
700 matchfn = lambda x: tf(x) and x not in prune
701 matchfn = lambda x: tf(x) and x not in prune
701 else:
702 else:
702 matchfn = tf
703 matchfn = tf
703 merges = map(source.lookup, opts.get('merge', ()))
704 merges = map(source.lookup, opts.get('merge', ()))
704 revmap = {}
705 revmap = {}
705 if revs:
706 if revs:
706 for r in scmutil.revrange(source, revs):
707 for r in scmutil.revrange(source, revs):
707 revmap[int(r)] = source.lookup(r)
708 revmap[int(r)] = source.lookup(r)
708 elif opts.get('all') or not merges:
709 elif opts.get('all') or not merges:
709 if source != repo:
710 if source != repo:
710 alltransplants = incwalk(source, csets, match=matchfn)
711 alltransplants = incwalk(source, csets, match=matchfn)
711 else:
712 else:
712 alltransplants = transplantwalk(source, p1, heads,
713 alltransplants = transplantwalk(source, p1, heads,
713 match=matchfn)
714 match=matchfn)
714 if opts.get('all'):
715 if opts.get('all'):
715 revs = alltransplants
716 revs = alltransplants
716 else:
717 else:
717 revs, newmerges = browserevs(ui, source, alltransplants, opts)
718 revs, newmerges = browserevs(ui, source, alltransplants, opts)
718 merges.extend(newmerges)
719 merges.extend(newmerges)
719 for r in revs:
720 for r in revs:
720 revmap[source.changelog.rev(r)] = r
721 revmap[source.changelog.rev(r)] = r
721 for r in merges:
722 for r in merges:
722 revmap[source.changelog.rev(r)] = r
723 revmap[source.changelog.rev(r)] = r
723
724
724 tp.apply(repo, source, revmap, merges, opts)
725 tp.apply(repo, source, revmap, merges, opts)
725 finally:
726 finally:
726 if cleanupfn:
727 if cleanupfn:
727 cleanupfn()
728 cleanupfn()
728
729
729 revsetpredicate = registrar.revsetpredicate()
730 revsetpredicate = registrar.revsetpredicate()
730
731
731 @revsetpredicate('transplanted([set])')
732 @revsetpredicate('transplanted([set])')
732 def revsettransplanted(repo, subset, x):
733 def revsettransplanted(repo, subset, x):
733 """Transplanted changesets in set, or all transplanted changesets.
734 """Transplanted changesets in set, or all transplanted changesets.
734 """
735 """
735 if x:
736 if x:
736 s = revset.getset(repo, subset, x)
737 s = revset.getset(repo, subset, x)
737 else:
738 else:
738 s = subset
739 s = subset
739 return smartset.baseset([r for r in s if
740 return smartset.baseset([r for r in s if
740 repo[r].extra().get('transplant_source')])
741 repo[r].extra().get('transplant_source')])
741
742
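A hedged usage sketch for the predicate above: the same expression can be evaluated from Python through the repository's revset API (repo.revs() on a localrepository):

    # revision numbers of every transplanted changeset
    transplanted = repo.revs('transplanted()')
    # restrict the predicate to a subset, e.g. draft changesets only
    transplanted_drafts = repo.revs('transplanted(draft())')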
742 templatekeyword = registrar.templatekeyword()
743 templatekeyword = registrar.templatekeyword()
743
744
744 @templatekeyword('transplanted')
745 @templatekeyword('transplanted')
745 def kwtransplanted(repo, ctx, **args):
746 def kwtransplanted(repo, ctx, **args):
746 """String. The node identifier of the transplanted
747 """String. The node identifier of the transplanted
747 changeset if any."""
748 changeset if any."""
748 n = ctx.extra().get('transplant_source')
749 n = ctx.extra().get('transplant_source')
749 return n and nodemod.hex(n) or ''
750 return n and nodemod.hex(n) or ''
750
751
751 def extsetup(ui):
752 def extsetup(ui):
752 cmdutil.unfinishedstates.append(
753 cmdutil.unfinishedstates.append(
753 ['transplant/journal', True, False, _('transplant in progress'),
754 ['transplant/journal', True, False, _('transplant in progress'),
754 _("use 'hg transplant --continue' or 'hg update' to abort")])
755 _("use 'hg transplant --continue' or 'hg update' to abort")])
755
756
756 # tell hggettext to extract docstrings from these functions:
757 # tell hggettext to extract docstrings from these functions:
757 i18nfunctions = [revsettransplanted, kwtransplanted]
758 i18nfunctions = [revsettransplanted, kwtransplanted]
@@ -1,3155 +1,3139 b''
1 # cmdutil.py - help for command processing in mercurial
1 # cmdutil.py - help for command processing in mercurial
2 #
2 #
3 # Copyright 2005-2007 Matt Mackall <mpm@selenic.com>
3 # Copyright 2005-2007 Matt Mackall <mpm@selenic.com>
4 #
4 #
5 # This software may be used and distributed according to the terms of the
5 # This software may be used and distributed according to the terms of the
6 # GNU General Public License version 2 or any later version.
6 # GNU General Public License version 2 or any later version.
7
7
8 from __future__ import absolute_import
8 from __future__ import absolute_import
9
9
10 import errno
10 import errno
11 import os
11 import os
12 import re
12 import re
13 import tempfile
13 import tempfile
14
14
15 from .i18n import _
15 from .i18n import _
16 from .node import (
16 from .node import (
17 hex,
17 hex,
18 nullid,
18 nullid,
19 nullrev,
19 nullrev,
20 short,
20 short,
21 )
21 )
22
22
23 from . import (
23 from . import (
24 bookmarks,
24 bookmarks,
25 changelog,
25 changelog,
26 copies,
26 copies,
27 crecord as crecordmod,
27 crecord as crecordmod,
28 dirstateguard,
28 dirstateguard,
29 encoding,
29 encoding,
30 error,
30 error,
31 formatter,
31 formatter,
32 logcmdutil,
32 logcmdutil,
33 match as matchmod,
33 match as matchmod,
34 obsolete,
34 obsolete,
35 patch,
35 patch,
36 pathutil,
36 pathutil,
37 pycompat,
37 pycompat,
38 registrar,
38 registrar,
39 revlog,
39 revlog,
40 rewriteutil,
40 rewriteutil,
41 scmutil,
41 scmutil,
42 smartset,
42 smartset,
43 templater,
43 templater,
44 util,
44 util,
45 vfs as vfsmod,
45 vfs as vfsmod,
46 )
46 )
47 stringio = util.stringio
47 stringio = util.stringio
48
48
49 loglimit = logcmdutil.getlimit
50 diffordiffstat = logcmdutil.diffordiffstat
51 _changesetlabels = logcmdutil.changesetlabels
52 changeset_printer = logcmdutil.changesetprinter
53 jsonchangeset = logcmdutil.jsonchangeset
54 changeset_templater = logcmdutil.changesettemplater
55 logtemplatespec = logcmdutil.templatespec
56 makelogtemplater = logcmdutil.maketemplater
57 show_changeset = logcmdutil.changesetdisplayer
58 getlogrevs = logcmdutil.getrevs
59 getloglinerangerevs = logcmdutil.getlinerangerevs
60 displaygraph = logcmdutil.displaygraph
61 graphlog = logcmdutil.graphlog
62 checkunsupportedgraphflags = logcmdutil.checkunsupportedgraphflags
63 graphrevs = logcmdutil.graphrevs
64
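With these aliases removed, out-of-tree extensions must import the logcmdutil names directly; a minimal migration sketch, following the transplant.py hunk above:

    from mercurial import logcmdutil

    # before: displayer = cmdutil.show_changeset(ui, repo, opts)
    displayer = logcmdutil.changesetdisplayer(ui, repo, opts)
    displayer.show(repo['.'])
    displayer.close()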
65 # templates of common command options
49 # templates of common command options
66
50
67 dryrunopts = [
51 dryrunopts = [
68 ('n', 'dry-run', None,
52 ('n', 'dry-run', None,
69 _('do not perform actions, just print output')),
53 _('do not perform actions, just print output')),
70 ]
54 ]
71
55
72 remoteopts = [
56 remoteopts = [
73 ('e', 'ssh', '',
57 ('e', 'ssh', '',
74 _('specify ssh command to use'), _('CMD')),
58 _('specify ssh command to use'), _('CMD')),
75 ('', 'remotecmd', '',
59 ('', 'remotecmd', '',
76 _('specify hg command to run on the remote side'), _('CMD')),
60 _('specify hg command to run on the remote side'), _('CMD')),
77 ('', 'insecure', None,
61 ('', 'insecure', None,
78 _('do not verify server certificate (ignoring web.cacerts config)')),
62 _('do not verify server certificate (ignoring web.cacerts config)')),
79 ]
63 ]
80
64
81 walkopts = [
65 walkopts = [
82 ('I', 'include', [],
66 ('I', 'include', [],
83 _('include names matching the given patterns'), _('PATTERN')),
67 _('include names matching the given patterns'), _('PATTERN')),
84 ('X', 'exclude', [],
68 ('X', 'exclude', [],
85 _('exclude names matching the given patterns'), _('PATTERN')),
69 _('exclude names matching the given patterns'), _('PATTERN')),
86 ]
70 ]
87
71
88 commitopts = [
72 commitopts = [
89 ('m', 'message', '',
73 ('m', 'message', '',
90 _('use text as commit message'), _('TEXT')),
74 _('use text as commit message'), _('TEXT')),
91 ('l', 'logfile', '',
75 ('l', 'logfile', '',
92 _('read commit message from file'), _('FILE')),
76 _('read commit message from file'), _('FILE')),
93 ]
77 ]
94
78
95 commitopts2 = [
79 commitopts2 = [
96 ('d', 'date', '',
80 ('d', 'date', '',
97 _('record the specified date as commit date'), _('DATE')),
81 _('record the specified date as commit date'), _('DATE')),
98 ('u', 'user', '',
82 ('u', 'user', '',
99 _('record the specified user as committer'), _('USER')),
83 _('record the specified user as committer'), _('USER')),
100 ]
84 ]
101
85
102 # hidden for now
86 # hidden for now
103 formatteropts = [
87 formatteropts = [
104 ('T', 'template', '',
88 ('T', 'template', '',
105 _('display with template (EXPERIMENTAL)'), _('TEMPLATE')),
89 _('display with template (EXPERIMENTAL)'), _('TEMPLATE')),
106 ]
90 ]
107
91
108 templateopts = [
92 templateopts = [
109 ('', 'style', '',
93 ('', 'style', '',
110 _('display using template map file (DEPRECATED)'), _('STYLE')),
94 _('display using template map file (DEPRECATED)'), _('STYLE')),
111 ('T', 'template', '',
95 ('T', 'template', '',
112 _('display with template'), _('TEMPLATE')),
96 _('display with template'), _('TEMPLATE')),
113 ]
97 ]
114
98
115 logopts = [
99 logopts = [
116 ('p', 'patch', None, _('show patch')),
100 ('p', 'patch', None, _('show patch')),
117 ('g', 'git', None, _('use git extended diff format')),
101 ('g', 'git', None, _('use git extended diff format')),
118 ('l', 'limit', '',
102 ('l', 'limit', '',
119 _('limit number of changes displayed'), _('NUM')),
103 _('limit number of changes displayed'), _('NUM')),
120 ('M', 'no-merges', None, _('do not show merges')),
104 ('M', 'no-merges', None, _('do not show merges')),
121 ('', 'stat', None, _('output diffstat-style summary of changes')),
105 ('', 'stat', None, _('output diffstat-style summary of changes')),
122 ('G', 'graph', None, _("show the revision DAG")),
106 ('G', 'graph', None, _("show the revision DAG")),
123 ] + templateopts
107 ] + templateopts
124
108
125 diffopts = [
109 diffopts = [
126 ('a', 'text', None, _('treat all files as text')),
110 ('a', 'text', None, _('treat all files as text')),
127 ('g', 'git', None, _('use git extended diff format')),
111 ('g', 'git', None, _('use git extended diff format')),
128 ('', 'binary', None, _('generate binary diffs in git mode (default)')),
112 ('', 'binary', None, _('generate binary diffs in git mode (default)')),
129 ('', 'nodates', None, _('omit dates from diff headers'))
113 ('', 'nodates', None, _('omit dates from diff headers'))
130 ]
114 ]
131
115
132 diffwsopts = [
116 diffwsopts = [
133 ('w', 'ignore-all-space', None,
117 ('w', 'ignore-all-space', None,
134 _('ignore white space when comparing lines')),
118 _('ignore white space when comparing lines')),
135 ('b', 'ignore-space-change', None,
119 ('b', 'ignore-space-change', None,
136 _('ignore changes in the amount of white space')),
120 _('ignore changes in the amount of white space')),
137 ('B', 'ignore-blank-lines', None,
121 ('B', 'ignore-blank-lines', None,
138 _('ignore changes whose lines are all blank')),
122 _('ignore changes whose lines are all blank')),
139 ('Z', 'ignore-space-at-eol', None,
123 ('Z', 'ignore-space-at-eol', None,
140 _('ignore changes in whitespace at EOL')),
124 _('ignore changes in whitespace at EOL')),
141 ]
125 ]
142
126
143 diffopts2 = [
127 diffopts2 = [
144 ('', 'noprefix', None, _('omit a/ and b/ prefixes from filenames')),
128 ('', 'noprefix', None, _('omit a/ and b/ prefixes from filenames')),
145 ('p', 'show-function', None, _('show which function each change is in')),
129 ('p', 'show-function', None, _('show which function each change is in')),
146 ('', 'reverse', None, _('produce a diff that undoes the changes')),
130 ('', 'reverse', None, _('produce a diff that undoes the changes')),
147 ] + diffwsopts + [
131 ] + diffwsopts + [
148 ('U', 'unified', '',
132 ('U', 'unified', '',
149 _('number of lines of context to show'), _('NUM')),
133 _('number of lines of context to show'), _('NUM')),
150 ('', 'stat', None, _('output diffstat-style summary of changes')),
134 ('', 'stat', None, _('output diffstat-style summary of changes')),
151 ('', 'root', '', _('produce diffs relative to subdirectory'), _('DIR')),
135 ('', 'root', '', _('produce diffs relative to subdirectory'), _('DIR')),
152 ]
136 ]
153
137
154 mergetoolopts = [
138 mergetoolopts = [
155 ('t', 'tool', '', _('specify merge tool')),
139 ('t', 'tool', '', _('specify merge tool')),
156 ]
140 ]
157
141
158 similarityopts = [
142 similarityopts = [
159 ('s', 'similarity', '',
143 ('s', 'similarity', '',
160 _('guess renamed files by similarity (0<=s<=100)'), _('SIMILARITY'))
144 _('guess renamed files by similarity (0<=s<=100)'), _('SIMILARITY'))
161 ]
145 ]
162
146
163 subrepoopts = [
147 subrepoopts = [
164 ('S', 'subrepos', None,
148 ('S', 'subrepos', None,
165 _('recurse into subrepositories'))
149 _('recurse into subrepositories'))
166 ]
150 ]
167
151
168 debugrevlogopts = [
152 debugrevlogopts = [
169 ('c', 'changelog', False, _('open changelog')),
153 ('c', 'changelog', False, _('open changelog')),
170 ('m', 'manifest', False, _('open manifest')),
154 ('m', 'manifest', False, _('open manifest')),
171 ('', 'dir', '', _('open directory manifest')),
155 ('', 'dir', '', _('open directory manifest')),
172 ]
156 ]
173
157
174 # special string such that everything below this line will be ignored in the
158 # special string such that everything below this line will be ignored in the
175 # editor text
159 # editor text
176 _linebelow = "^HG: ------------------------ >8 ------------------------$"
160 _linebelow = "^HG: ------------------------ >8 ------------------------$"
177
161
178 def ishunk(x):
162 def ishunk(x):
179 hunkclasses = (crecordmod.uihunk, patch.recordhunk)
163 hunkclasses = (crecordmod.uihunk, patch.recordhunk)
180 return isinstance(x, hunkclasses)
164 return isinstance(x, hunkclasses)
181
165
182 def newandmodified(chunks, originalchunks):
166 def newandmodified(chunks, originalchunks):
183 newlyaddedandmodifiedfiles = set()
167 newlyaddedandmodifiedfiles = set()
184 for chunk in chunks:
168 for chunk in chunks:
185 if ishunk(chunk) and chunk.header.isnewfile() and chunk not in \
169 if ishunk(chunk) and chunk.header.isnewfile() and chunk not in \
186 originalchunks:
170 originalchunks:
187 newlyaddedandmodifiedfiles.add(chunk.header.filename())
171 newlyaddedandmodifiedfiles.add(chunk.header.filename())
188 return newlyaddedandmodifiedfiles
172 return newlyaddedandmodifiedfiles
189
173
190 def parsealiases(cmd):
174 def parsealiases(cmd):
191 return cmd.lstrip("^").split("|")
175 return cmd.lstrip("^").split("|")
192
176
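A quick sketch of what parsealiases() yields for the two key styles used in command tables (the leading '^' only marks a command for the short help listing):

    parsealiases("^log|history")   # -> ['log', 'history']
    parsealiases("forget")         # -> ['forget']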
193 def setupwrapcolorwrite(ui):
177 def setupwrapcolorwrite(ui):
194 # wrap ui.write so diff output can be labeled/colorized
178 # wrap ui.write so diff output can be labeled/colorized
195 def wrapwrite(orig, *args, **kw):
179 def wrapwrite(orig, *args, **kw):
196 label = kw.pop(r'label', '')
180 label = kw.pop(r'label', '')
197 for chunk, l in patch.difflabel(lambda: args):
181 for chunk, l in patch.difflabel(lambda: args):
198 orig(chunk, label=label + l)
182 orig(chunk, label=label + l)
199
183
200 oldwrite = ui.write
184 oldwrite = ui.write
201 def wrap(*args, **kwargs):
185 def wrap(*args, **kwargs):
202 return wrapwrite(oldwrite, *args, **kwargs)
186 return wrapwrite(oldwrite, *args, **kwargs)
203 setattr(ui, 'write', wrap)
187 setattr(ui, 'write', wrap)
204 return oldwrite
188 return oldwrite
205
189
206 def filterchunks(ui, originalhunks, usecurses, testfile, operation=None):
190 def filterchunks(ui, originalhunks, usecurses, testfile, operation=None):
207 if usecurses:
191 if usecurses:
208 if testfile:
192 if testfile:
209 recordfn = crecordmod.testdecorator(testfile,
193 recordfn = crecordmod.testdecorator(testfile,
210 crecordmod.testchunkselector)
194 crecordmod.testchunkselector)
211 else:
195 else:
212 recordfn = crecordmod.chunkselector
196 recordfn = crecordmod.chunkselector
213
197
214 return crecordmod.filterpatch(ui, originalhunks, recordfn, operation)
198 return crecordmod.filterpatch(ui, originalhunks, recordfn, operation)
215
199
216 else:
200 else:
217 return patch.filterpatch(ui, originalhunks, operation)
201 return patch.filterpatch(ui, originalhunks, operation)
218
202
219 def recordfilter(ui, originalhunks, operation=None):
203 def recordfilter(ui, originalhunks, operation=None):
220 """ Prompts the user to filter the originalhunks and return a list of
204 """ Prompts the user to filter the originalhunks and return a list of
221 selected hunks.
205 selected hunks.
222 *operation* is used to build ui messages to indicate to the user what
206 *operation* is used to build ui messages to indicate to the user what
223 kind of filtering they are doing: reverting, committing, shelving, etc.
207 kind of filtering they are doing: reverting, committing, shelving, etc.
224 (see patch.filterpatch).
208 (see patch.filterpatch).
225 """
209 """
226 usecurses = crecordmod.checkcurses(ui)
210 usecurses = crecordmod.checkcurses(ui)
227 testfile = ui.config('experimental', 'crecordtest')
211 testfile = ui.config('experimental', 'crecordtest')
228 oldwrite = setupwrapcolorwrite(ui)
212 oldwrite = setupwrapcolorwrite(ui)
229 try:
213 try:
230 newchunks, newopts = filterchunks(ui, originalhunks, usecurses,
214 newchunks, newopts = filterchunks(ui, originalhunks, usecurses,
231 testfile, operation)
215 testfile, operation)
232 finally:
216 finally:
233 ui.write = oldwrite
217 ui.write = oldwrite
234 return newchunks, newopts
218 return newchunks, newopts
235
219
236 def dorecord(ui, repo, commitfunc, cmdsuggest, backupall,
220 def dorecord(ui, repo, commitfunc, cmdsuggest, backupall,
237 filterfn, *pats, **opts):
221 filterfn, *pats, **opts):
238 from . import merge as mergemod
222 from . import merge as mergemod
239 opts = pycompat.byteskwargs(opts)
223 opts = pycompat.byteskwargs(opts)
240 if not ui.interactive():
224 if not ui.interactive():
241 if cmdsuggest:
225 if cmdsuggest:
242 msg = _('running non-interactively, use %s instead') % cmdsuggest
226 msg = _('running non-interactively, use %s instead') % cmdsuggest
243 else:
227 else:
244 msg = _('running non-interactively')
228 msg = _('running non-interactively')
245 raise error.Abort(msg)
229 raise error.Abort(msg)
246
230
247 # make sure username is set before going interactive
231 # make sure username is set before going interactive
248 if not opts.get('user'):
232 if not opts.get('user'):
249 ui.username() # raise exception, username not provided
233 ui.username() # raise exception, username not provided
250
234
251 def recordfunc(ui, repo, message, match, opts):
235 def recordfunc(ui, repo, message, match, opts):
252 """This is generic record driver.
236 """This is generic record driver.
253
237
254 Its job is to interactively filter local changes, and
238 Its job is to interactively filter local changes, and
255 accordingly prepare the working directory into a state in which the
239 accordingly prepare the working directory into a state in which the
256 job can be delegated to a non-interactive commit command such as
240 job can be delegated to a non-interactive commit command such as
257 'commit' or 'qrefresh'.
241 'commit' or 'qrefresh'.
258
242
259 After the actual job is done by the non-interactive command, the
243 After the actual job is done by the non-interactive command, the
260 working directory is restored to its original state.
244 working directory is restored to its original state.
261
245
262 In the end we'll record interesting changes, and everything else
246 In the end we'll record interesting changes, and everything else
263 will be left in place, so the user can continue working.
247 will be left in place, so the user can continue working.
264 """
248 """
265
249
266 checkunfinished(repo, commit=True)
250 checkunfinished(repo, commit=True)
267 wctx = repo[None]
251 wctx = repo[None]
268 merge = len(wctx.parents()) > 1
252 merge = len(wctx.parents()) > 1
269 if merge:
253 if merge:
270 raise error.Abort(_('cannot partially commit a merge '
254 raise error.Abort(_('cannot partially commit a merge '
271 '(use "hg commit" instead)'))
255 '(use "hg commit" instead)'))
272
256
273 def fail(f, msg):
257 def fail(f, msg):
274 raise error.Abort('%s: %s' % (f, msg))
258 raise error.Abort('%s: %s' % (f, msg))
275
259
276 force = opts.get('force')
260 force = opts.get('force')
277 if not force:
261 if not force:
278 vdirs = []
262 vdirs = []
279 match.explicitdir = vdirs.append
263 match.explicitdir = vdirs.append
280 match.bad = fail
264 match.bad = fail
281
265
282 status = repo.status(match=match)
266 status = repo.status(match=match)
283 if not force:
267 if not force:
284 repo.checkcommitpatterns(wctx, vdirs, match, status, fail)
268 repo.checkcommitpatterns(wctx, vdirs, match, status, fail)
285 diffopts = patch.difffeatureopts(ui, opts=opts, whitespace=True)
269 diffopts = patch.difffeatureopts(ui, opts=opts, whitespace=True)
286 diffopts.nodates = True
270 diffopts.nodates = True
287 diffopts.git = True
271 diffopts.git = True
288 diffopts.showfunc = True
272 diffopts.showfunc = True
289 originaldiff = patch.diff(repo, changes=status, opts=diffopts)
273 originaldiff = patch.diff(repo, changes=status, opts=diffopts)
290 originalchunks = patch.parsepatch(originaldiff)
274 originalchunks = patch.parsepatch(originaldiff)
291
275
292 # 1. filter patch, since we are intending to apply a subset of it
276 # 1. filter patch, since we are intending to apply a subset of it
293 try:
277 try:
294 chunks, newopts = filterfn(ui, originalchunks)
278 chunks, newopts = filterfn(ui, originalchunks)
295 except error.PatchError as err:
279 except error.PatchError as err:
296 raise error.Abort(_('error parsing patch: %s') % err)
280 raise error.Abort(_('error parsing patch: %s') % err)
297 opts.update(newopts)
281 opts.update(newopts)
298
282
299 # We need to keep a backup of files that have been newly added and
283 # We need to keep a backup of files that have been newly added and
300 # modified during the recording process because there is a previous
284 # modified during the recording process because there is a previous
301 # version without the edit in the workdir
285 # version without the edit in the workdir
302 newlyaddedandmodifiedfiles = newandmodified(chunks, originalchunks)
286 newlyaddedandmodifiedfiles = newandmodified(chunks, originalchunks)
303 contenders = set()
287 contenders = set()
304 for h in chunks:
288 for h in chunks:
305 try:
289 try:
306 contenders.update(set(h.files()))
290 contenders.update(set(h.files()))
307 except AttributeError:
291 except AttributeError:
308 pass
292 pass
309
293
310 changed = status.modified + status.added + status.removed
294 changed = status.modified + status.added + status.removed
311 newfiles = [f for f in changed if f in contenders]
295 newfiles = [f for f in changed if f in contenders]
312 if not newfiles:
296 if not newfiles:
313 ui.status(_('no changes to record\n'))
297 ui.status(_('no changes to record\n'))
314 return 0
298 return 0
315
299
316 modified = set(status.modified)
300 modified = set(status.modified)
317
301
318 # 2. backup changed files, so we can restore them in the end
302 # 2. backup changed files, so we can restore them in the end
319
303
320 if backupall:
304 if backupall:
321 tobackup = changed
305 tobackup = changed
322 else:
306 else:
323 tobackup = [f for f in newfiles if f in modified or f in \
307 tobackup = [f for f in newfiles if f in modified or f in \
324 newlyaddedandmodifiedfiles]
308 newlyaddedandmodifiedfiles]
325 backups = {}
309 backups = {}
326 if tobackup:
310 if tobackup:
327 backupdir = repo.vfs.join('record-backups')
311 backupdir = repo.vfs.join('record-backups')
328 try:
312 try:
329 os.mkdir(backupdir)
313 os.mkdir(backupdir)
330 except OSError as err:
314 except OSError as err:
331 if err.errno != errno.EEXIST:
315 if err.errno != errno.EEXIST:
332 raise
316 raise
333 try:
317 try:
334 # backup continues
318 # backup continues
335 for f in tobackup:
319 for f in tobackup:
336 fd, tmpname = tempfile.mkstemp(prefix=f.replace('/', '_')+'.',
320 fd, tmpname = tempfile.mkstemp(prefix=f.replace('/', '_')+'.',
337 dir=backupdir)
321 dir=backupdir)
338 os.close(fd)
322 os.close(fd)
339 ui.debug('backup %r as %r\n' % (f, tmpname))
323 ui.debug('backup %r as %r\n' % (f, tmpname))
340 util.copyfile(repo.wjoin(f), tmpname, copystat=True)
324 util.copyfile(repo.wjoin(f), tmpname, copystat=True)
341 backups[f] = tmpname
325 backups[f] = tmpname
342
326
343 fp = stringio()
327 fp = stringio()
344 for c in chunks:
328 for c in chunks:
345 fname = c.filename()
329 fname = c.filename()
346 if fname in backups:
330 if fname in backups:
347 c.write(fp)
331 c.write(fp)
348 dopatch = fp.tell()
332 dopatch = fp.tell()
349 fp.seek(0)
333 fp.seek(0)
350
334
351 # 2.5 optionally review / modify patch in text editor
335 # 2.5 optionally review / modify patch in text editor
352 if opts.get('review', False):
336 if opts.get('review', False):
353 patchtext = (crecordmod.diffhelptext
337 patchtext = (crecordmod.diffhelptext
354 + crecordmod.patchhelptext
338 + crecordmod.patchhelptext
355 + fp.read())
339 + fp.read())
356 reviewedpatch = ui.edit(patchtext, "",
340 reviewedpatch = ui.edit(patchtext, "",
357 action="diff",
341 action="diff",
358 repopath=repo.path)
342 repopath=repo.path)
359 fp.truncate(0)
343 fp.truncate(0)
360 fp.write(reviewedpatch)
344 fp.write(reviewedpatch)
361 fp.seek(0)
345 fp.seek(0)
362
346
363 [os.unlink(repo.wjoin(c)) for c in newlyaddedandmodifiedfiles]
347 [os.unlink(repo.wjoin(c)) for c in newlyaddedandmodifiedfiles]
364 # 3a. apply filtered patch to clean repo (clean)
348 # 3a. apply filtered patch to clean repo (clean)
365 if backups:
349 if backups:
366 # Equivalent to hg.revert
350 # Equivalent to hg.revert
367 m = scmutil.matchfiles(repo, backups.keys())
351 m = scmutil.matchfiles(repo, backups.keys())
368 mergemod.update(repo, repo.dirstate.p1(),
352 mergemod.update(repo, repo.dirstate.p1(),
369 False, True, matcher=m)
353 False, True, matcher=m)
370
354
371 # 3b. (apply)
355 # 3b. (apply)
372 if dopatch:
356 if dopatch:
373 try:
357 try:
374 ui.debug('applying patch\n')
358 ui.debug('applying patch\n')
375 ui.debug(fp.getvalue())
359 ui.debug(fp.getvalue())
376 patch.internalpatch(ui, repo, fp, 1, eolmode=None)
360 patch.internalpatch(ui, repo, fp, 1, eolmode=None)
377 except error.PatchError as err:
361 except error.PatchError as err:
378 raise error.Abort(str(err))
362 raise error.Abort(str(err))
379 del fp
363 del fp
380
364
381 # 4. We prepared the working directory according to the filtered
365 # 4. We prepared the working directory according to the filtered
382 # patch. Now is the time to delegate the job to
366 # patch. Now is the time to delegate the job to
383 # commit/qrefresh or the like!
367 # commit/qrefresh or the like!
384
368
385 # Make all of the pathnames absolute.
369 # Make all of the pathnames absolute.
386 newfiles = [repo.wjoin(nf) for nf in newfiles]
370 newfiles = [repo.wjoin(nf) for nf in newfiles]
387 return commitfunc(ui, repo, *newfiles, **pycompat.strkwargs(opts))
371 return commitfunc(ui, repo, *newfiles, **pycompat.strkwargs(opts))
388 finally:
372 finally:
389 # 5. finally restore backed-up files
373 # 5. finally restore backed-up files
390 try:
374 try:
391 dirstate = repo.dirstate
375 dirstate = repo.dirstate
392 for realname, tmpname in backups.iteritems():
376 for realname, tmpname in backups.iteritems():
393 ui.debug('restoring %r to %r\n' % (tmpname, realname))
377 ui.debug('restoring %r to %r\n' % (tmpname, realname))
394
378
395 if dirstate[realname] == 'n':
379 if dirstate[realname] == 'n':
396 # without normallookup, restoring timestamp
380 # without normallookup, restoring timestamp
397 # may cause partially committed files
381 # may cause partially committed files
398 # to be treated as unmodified
382 # to be treated as unmodified
399 dirstate.normallookup(realname)
383 dirstate.normallookup(realname)
400
384
401 # copystat=True here and above are a hack to trick any
385 # copystat=True here and above are a hack to trick any
402 # editors that have f open into thinking we haven't modified them.
386 # editors that have f open into thinking we haven't modified them.
403 #
387 #
404 # Also note that this is racy, as an editor could notice the
388 # Also note that this is racy, as an editor could notice the
405 # file's mtime before we've finished writing it.
389 # file's mtime before we've finished writing it.
406 util.copyfile(tmpname, repo.wjoin(realname), copystat=True)
390 util.copyfile(tmpname, repo.wjoin(realname), copystat=True)
407 os.unlink(tmpname)
391 os.unlink(tmpname)
408 if tobackup:
392 if tobackup:
409 os.rmdir(backupdir)
393 os.rmdir(backupdir)
410 except OSError:
394 except OSError:
411 pass
395 pass
412
396
413 def recordinwlock(ui, repo, message, match, opts):
397 def recordinwlock(ui, repo, message, match, opts):
414 with repo.wlock():
398 with repo.wlock():
415 return recordfunc(ui, repo, message, match, opts)
399 return recordfunc(ui, repo, message, match, opts)
416
400
417 return commit(ui, repo, recordinwlock, pats, opts)
401 return commit(ui, repo, recordinwlock, pats, opts)
418
402
419 class dirnode(object):
403 class dirnode(object):
420 """
404 """
421 Represent a directory in user working copy with information required for
405 Represent a directory in user working copy with information required for
422 the purpose of tersing its status.
406 the purpose of tersing its status.
423
407
424 path is the path to the directory
408 path is the path to the directory
425
409
426 statuses is a set of statuses of all files in this directory (this includes
410 statuses is a set of statuses of all files in this directory (this includes
427 all the files in all the subdirectories too)
411 all the files in all the subdirectories too)
428
412
429 files is a list of files which are direct children of this directory
413 files is a list of files which are direct children of this directory
430
414
431 subdirs is a dictionary with the sub-directory name as the key and its own
415 subdirs is a dictionary with the sub-directory name as the key and its own
432 dirnode object as the value
416 dirnode object as the value
433 """
417 """
434
418
435 def __init__(self, dirpath):
419 def __init__(self, dirpath):
436 self.path = dirpath
420 self.path = dirpath
437 self.statuses = set([])
421 self.statuses = set([])
438 self.files = []
422 self.files = []
439 self.subdirs = {}
423 self.subdirs = {}
440
424
441 def _addfileindir(self, filename, status):
425 def _addfileindir(self, filename, status):
442 """Add a file in this directory as a direct child."""
426 """Add a file in this directory as a direct child."""
443 self.files.append((filename, status))
427 self.files.append((filename, status))
444
428
445 def addfile(self, filename, status):
429 def addfile(self, filename, status):
446 """
430 """
447 Add a file to this directory or to its direct parent directory.
431 Add a file to this directory or to its direct parent directory.
448
432
449 If the file is not a direct child of this directory, we traverse to the
433 If the file is not a direct child of this directory, we traverse to the
450 directory of which this file is a direct child and add the file
434 directory of which this file is a direct child and add the file
451 there.
435 there.
452 """
436 """
453
437
454 # if the filename contains a path separator, it means it's not the direct
438 # if the filename contains a path separator, it means it's not the direct
455 # child of this directory
439 # child of this directory
456 if '/' in filename:
440 if '/' in filename:
457 subdir, filep = filename.split('/', 1)
441 subdir, filep = filename.split('/', 1)
458
442
459 # does the dirnode object for subdir exist
443 # does the dirnode object for subdir exist
460 if subdir not in self.subdirs:
444 if subdir not in self.subdirs:
461 subdirpath = os.path.join(self.path, subdir)
445 subdirpath = os.path.join(self.path, subdir)
462 self.subdirs[subdir] = dirnode(subdirpath)
446 self.subdirs[subdir] = dirnode(subdirpath)
463
447
464 # try adding the file in subdir
448 # try adding the file in subdir
465 self.subdirs[subdir].addfile(filep, status)
449 self.subdirs[subdir].addfile(filep, status)
466
450
467 else:
451 else:
468 self._addfileindir(filename, status)
452 self._addfileindir(filename, status)
469
453
470 if status not in self.statuses:
454 if status not in self.statuses:
471 self.statuses.add(status)
455 self.statuses.add(status)
472
456
473 def iterfilepaths(self):
457 def iterfilepaths(self):
474 """Yield (status, path) for files directly under this directory."""
458 """Yield (status, path) for files directly under this directory."""
475 for f, st in self.files:
459 for f, st in self.files:
476 yield st, os.path.join(self.path, f)
460 yield st, os.path.join(self.path, f)
477
461
478 def tersewalk(self, terseargs):
462 def tersewalk(self, terseargs):
479 """
463 """
480 Yield (status, path) obtained by processing the status of this
464 Yield (status, path) obtained by processing the status of this
481 dirnode.
465 dirnode.
482
466
483 terseargs is the string of arguments passed by the user with `--terse`
467 terseargs is the string of arguments passed by the user with `--terse`
484 flag.
468 flag.
485
469
486 Following are the cases which can happen:
470 Following are the cases which can happen:
487
471
488 1) All the files in the directory (including all the files in its
472 1) All the files in the directory (including all the files in its
489 subdirectories) share the same status and the user has asked us to terse
473 subdirectories) share the same status and the user has asked us to terse
490 that status. -> yield (status, dirpath)
474 that status. -> yield (status, dirpath)
491
475
492 2) Otherwise, we do following:
476 2) Otherwise, we do following:
493
477
494 a) Yield (status, filepath) for all the files which are in this
478 a) Yield (status, filepath) for all the files which are in this
495 directory (only the ones in this directory, not the subdirs)
479 directory (only the ones in this directory, not the subdirs)
496
480
497 b) Recurse the function on all the subdirectories of this
481 b) Recurse the function on all the subdirectories of this
498 directory
482 directory
499 """
483 """
500
484
501 if len(self.statuses) == 1:
485 if len(self.statuses) == 1:
502 onlyst = self.statuses.pop()
486 onlyst = self.statuses.pop()
503
487
504 # Making sure we terse only when the status abbreviation is
488 # Making sure we terse only when the status abbreviation is
505 # passed as terse argument
489 # passed as terse argument
506 if onlyst in terseargs:
490 if onlyst in terseargs:
507 yield onlyst, self.path + pycompat.ossep
491 yield onlyst, self.path + pycompat.ossep
508 return
492 return
509
493
510 # add the files to status list
494 # add the files to status list
511 for st, fpath in self.iterfilepaths():
495 for st, fpath in self.iterfilepaths():
512 yield st, fpath
496 yield st, fpath
513
497
514 # recurse on the subdirs
498 # recurse on the subdirs
515 for dirobj in self.subdirs.values():
499 for dirobj in self.subdirs.values():
516 for st, fpath in dirobj.tersewalk(terseargs):
500 for st, fpath in dirobj.tersewalk(terseargs):
517 yield st, fpath
501 yield st, fpath
518
502
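A minimal sketch of the class above (POSIX path separator assumed): when every file under a directory has a status that was requested via terseargs, the whole directory collapses to a single entry, while files directly in the root stay individual.

    root = dirnode('')
    root.addfile('a/b/x.py', 'm')
    root.addfile('a/b/y.py', 'm')
    root.addfile('README', 'm')
    list(root.subdirs['a'].tersewalk('m'))   # -> [('m', 'a/')]
    list(root.iterfilepaths())               # -> [('m', 'README')]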
519 def tersedir(statuslist, terseargs):
503 def tersedir(statuslist, terseargs):
520 """
504 """
521 Terse the status if all the files in a directory share the same status.
505 Terse the status if all the files in a directory share the same status.
522
506
523 statuslist is a scmutil.status() object which contains a list of files for
507 statuslist is a scmutil.status() object which contains a list of files for
524 each status.
508 each status.
525 terseargs is the string which is passed by the user as the argument to `--terse`
509 terseargs is the string which is passed by the user as the argument to `--terse`
526 flag.
510 flag.
527
511
528 The function makes a tree of objects of dirnode class, and at each node it
512 The function makes a tree of objects of dirnode class, and at each node it
529 stores the information required to know whether we can terse a certain
513 stores the information required to know whether we can terse a certain
530 directory or not.
514 directory or not.
531 """
515 """
532 # the order matters here as that is used to produce final list
516 # the order matters here as that is used to produce final list
533 allst = ('m', 'a', 'r', 'd', 'u', 'i', 'c')
517 allst = ('m', 'a', 'r', 'd', 'u', 'i', 'c')
534
518
535 # checking the argument validity
519 # checking the argument validity
536 for s in pycompat.bytestr(terseargs):
520 for s in pycompat.bytestr(terseargs):
537 if s not in allst:
521 if s not in allst:
538 raise error.Abort(_("'%s' not recognized") % s)
522 raise error.Abort(_("'%s' not recognized") % s)
539
523
540 # creating a dirnode object for the root of the repo
524 # creating a dirnode object for the root of the repo
541 rootobj = dirnode('')
525 rootobj = dirnode('')
542 pstatus = ('modified', 'added', 'deleted', 'clean', 'unknown',
526 pstatus = ('modified', 'added', 'deleted', 'clean', 'unknown',
543 'ignored', 'removed')
527 'ignored', 'removed')
544
528
545 tersedict = {}
529 tersedict = {}
546 for attrname in pstatus:
530 for attrname in pstatus:
547 statuschar = attrname[0:1]
531 statuschar = attrname[0:1]
548 for f in getattr(statuslist, attrname):
532 for f in getattr(statuslist, attrname):
549 rootobj.addfile(f, statuschar)
533 rootobj.addfile(f, statuschar)
550 tersedict[statuschar] = []
534 tersedict[statuschar] = []
551
535
552 # we won't be tersing the root dir, so add files in it
536 # we won't be tersing the root dir, so add files in it
553 for st, fpath in rootobj.iterfilepaths():
537 for st, fpath in rootobj.iterfilepaths():
554 tersedict[st].append(fpath)
538 tersedict[st].append(fpath)
555
539
556 # process each sub-directory and build tersedict
540 # process each sub-directory and build tersedict
557 for subdir in rootobj.subdirs.values():
541 for subdir in rootobj.subdirs.values():
558 for st, f in subdir.tersewalk(terseargs):
542 for st, f in subdir.tersewalk(terseargs):
559 tersedict[st].append(f)
543 tersedict[st].append(f)
560
544
561 tersedlist = []
545 tersedlist = []
562 for st in allst:
546 for st in allst:
563 tersedict[st].sort()
547 tersedict[st].sort()
564 tersedlist.append(tersedict[st])
548 tersedlist.append(tersedict[st])
565
549
566 return tersedlist
550 return tersedlist
567
551
568 def _commentlines(raw):
552 def _commentlines(raw):
569 '''Surround lines with a comment char and a new line'''
553 '''Surround lines with a comment char and a new line'''
570 lines = raw.splitlines()
554 lines = raw.splitlines()
571 commentedlines = ['# %s' % line for line in lines]
555 commentedlines = ['# %s' % line for line in lines]
572 return '\n'.join(commentedlines) + '\n'
556 return '\n'.join(commentedlines) + '\n'
573
557
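For example, a quick sketch of the helper above:

    _commentlines('To continue: hg rebase --continue\nTo abort: hg rebase --abort')
    # -> '# To continue: hg rebase --continue\n# To abort: hg rebase --abort\n'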
574 def _conflictsmsg(repo):
558 def _conflictsmsg(repo):
575 # avoid merge cycle
559 # avoid merge cycle
576 from . import merge as mergemod
560 from . import merge as mergemod
577 mergestate = mergemod.mergestate.read(repo)
561 mergestate = mergemod.mergestate.read(repo)
578 if not mergestate.active():
562 if not mergestate.active():
579 return
563 return
580
564
581 m = scmutil.match(repo[None])
565 m = scmutil.match(repo[None])
582 unresolvedlist = [f for f in mergestate.unresolved() if m(f)]
566 unresolvedlist = [f for f in mergestate.unresolved() if m(f)]
583 if unresolvedlist:
567 if unresolvedlist:
584 mergeliststr = '\n'.join(
568 mergeliststr = '\n'.join(
585 [' %s' % util.pathto(repo.root, pycompat.getcwd(), path)
569 [' %s' % util.pathto(repo.root, pycompat.getcwd(), path)
586 for path in unresolvedlist])
570 for path in unresolvedlist])
587 msg = _('''Unresolved merge conflicts:
571 msg = _('''Unresolved merge conflicts:
588
572
589 %s
573 %s
590
574
591 To mark files as resolved: hg resolve --mark FILE''') % mergeliststr
575 To mark files as resolved: hg resolve --mark FILE''') % mergeliststr
592 else:
576 else:
593 msg = _('No unresolved merge conflicts.')
577 msg = _('No unresolved merge conflicts.')
594
578
595 return _commentlines(msg)
579 return _commentlines(msg)
596
580
597 def _helpmessage(continuecmd, abortcmd):
581 def _helpmessage(continuecmd, abortcmd):
598 msg = _('To continue: %s\n'
582 msg = _('To continue: %s\n'
599 'To abort: %s') % (continuecmd, abortcmd)
583 'To abort: %s') % (continuecmd, abortcmd)
600 return _commentlines(msg)
584 return _commentlines(msg)
601
585
602 def _rebasemsg():
586 def _rebasemsg():
603 return _helpmessage('hg rebase --continue', 'hg rebase --abort')
587 return _helpmessage('hg rebase --continue', 'hg rebase --abort')
604
588
605 def _histeditmsg():
589 def _histeditmsg():
606 return _helpmessage('hg histedit --continue', 'hg histedit --abort')
590 return _helpmessage('hg histedit --continue', 'hg histedit --abort')
607
591
608 def _unshelvemsg():
592 def _unshelvemsg():
609 return _helpmessage('hg unshelve --continue', 'hg unshelve --abort')

def _updatecleanmsg(dest=None):
    warning = _('warning: this will discard uncommitted changes')
    return 'hg update --clean %s (%s)' % (dest or '.', warning)

def _graftmsg():
    # tweakdefaults requires `update` to have a rev hence the `.`
    return _helpmessage('hg graft --continue', _updatecleanmsg())

def _mergemsg():
    # tweakdefaults requires `update` to have a rev hence the `.`
    return _helpmessage('hg commit', _updatecleanmsg())

def _bisectmsg():
    msg = _('To mark the changeset good: hg bisect --good\n'
            'To mark the changeset bad: hg bisect --bad\n'
            'To abort: hg bisect --reset\n')
    return _commentlines(msg)

def fileexistspredicate(filename):
    return lambda repo: repo.vfs.exists(filename)

def _mergepredicate(repo):
    return len(repo[None].parents()) > 1

STATES = (
    # (state, predicate to detect states, helpful message function)
    ('histedit', fileexistspredicate('histedit-state'), _histeditmsg),
    ('bisect', fileexistspredicate('bisect.state'), _bisectmsg),
    ('graft', fileexistspredicate('graftstate'), _graftmsg),
    ('unshelve', fileexistspredicate('unshelverebasestate'), _unshelvemsg),
    ('rebase', fileexistspredicate('rebasestate'), _rebasemsg),
    # The merge state is part of a list that will be iterated over.
    # They need to be last because some of the other unfinished states may also
    # be in a merge or update state (eg. rebase, histedit, graft, etc).
    # We want those to have priority.
    ('merge', _mergepredicate, _mergemsg),
)

def _getrepostate(repo):
    # experimental config: commands.status.skipstates
    skip = set(repo.ui.configlist('commands', 'status.skipstates'))
    for state, statedetectionpredicate, msgfn in STATES:
        if state in skip:
            continue
        if statedetectionpredicate(repo):
            return (state, statedetectionpredicate, msgfn)

def morestatus(repo, fm):
    statetuple = _getrepostate(repo)
    label = 'status.morestatus'
    if statetuple:
        fm.startitem()
        state, statedetectionpredicate, helpfulmsg = statetuple
        statemsg = _('The repository is in an unfinished *%s* state.') % state
        fm.write('statemsg', '%s\n', _commentlines(statemsg), label=label)
        conmsg = _conflictsmsg(repo)
        if conmsg:
            fm.write('conflictsmsg', '%s\n', conmsg, label=label)
        if helpfulmsg:
            helpmsg = helpfulmsg()
            fm.write('helpmsg', '%s\n', helpmsg, label=label)

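# ---------------------------------------------------------------------------
# Illustrative sketch (not part of cmdutil.py): how the STATES table above is
# consumed. _getrepostate() walks the tuples in order, skipping states listed
# in the experimental commands.status.skipstates config, and returns the first
# entry whose predicate fires. The stub below is hypothetical and only mimics
# the attributes the file-existence predicates need.
#
#     class fakevfs(object):
#         def exists(self, name):
#             return name == 'rebasestate'
#     class fakeui(object):
#         def configlist(self, section, name):
#             return []
#     class fakerepo(object):
#         vfs = fakevfs()
#         ui = fakeui()
#
#     _getrepostate(fakerepo())[0]   # -> 'rebase' (hit before 'merge' is tried)
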
def findpossible(cmd, table, strict=False):
    """
    Return cmd -> (aliases, command table entry)
    for each matching command.
    Return debug commands (or their aliases) only if no normal command matches.
    """
    choice = {}
    debugchoice = {}

    if cmd in table:
        # short-circuit exact matches, "log" alias beats "^log|history"
        keys = [cmd]
    else:
        keys = table.keys()

    allcmds = []
    for e in keys:
        aliases = parsealiases(e)
        allcmds.extend(aliases)
        found = None
        if cmd in aliases:
            found = cmd
        elif not strict:
            for a in aliases:
                if a.startswith(cmd):
                    found = a
                    break
        if found is not None:
            if aliases[0].startswith("debug") or found.startswith("debug"):
                debugchoice[found] = (aliases, table[e])
            else:
                choice[found] = (aliases, table[e])

    if not choice and debugchoice:
        choice = debugchoice

    return choice, allcmds

def findcmd(cmd, table, strict=True):
    """Return (aliases, command table entry) for command string."""
    choice, allcmds = findpossible(cmd, table, strict)

    if cmd in choice:
        return choice[cmd]

    if len(choice) > 1:
        clist = sorted(choice)
        raise error.AmbiguousCommand(cmd, clist)

    if choice:
        return list(choice.values())[0]

    raise error.UnknownCommand(cmd, allcmds)

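# ---------------------------------------------------------------------------
# Illustrative sketch (not part of cmdutil.py): the unambiguous-prefix lookup
# that findpossible()/findcmd() implement, reduced to a toy resolver so it can
# run without a real command table. Keys follow the "name|alias1|alias2"
# convention used by Mercurial command tables (a leading '^' is decoration).
def _toyresolve(cmd, table):
    matches = {}
    for entry in table:
        aliases = entry.lstrip('^').split('|')
        if cmd in aliases:
            return entry                 # exact name or alias wins outright
        for a in aliases:
            if a.startswith(cmd):
                matches[entry] = a       # remember prefix match, keep scanning
                break
    if len(matches) > 1:
        raise ValueError('ambiguous command: %s' % cmd)
    if matches:
        return next(iter(matches))
    raise KeyError(cmd)

# _toyresolve('stat', {'^status|st': None, 'strip': None})  -> '^status|st'
# _toyresolve('st',   {'^status|st': None, 'strip': None})  -> '^status|st' (alias)
# _toyresolve('s',    {'^status|st': None, 'strip': None})  -> ambiguous
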
def changebranch(ui, repo, revs, label):
    """ Change the branch name of given revs to label """

    with repo.wlock(), repo.lock(), repo.transaction('branches'):
        # abort in case of uncommitted merge or dirty wdir
        bailifchanged(repo)
        revs = scmutil.revrange(repo, revs)
        if not revs:
            raise error.Abort("empty revision set")
        roots = repo.revs('roots(%ld)', revs)
        if len(roots) > 1:
            raise error.Abort(_("cannot change branch of non-linear revisions"))
        rewriteutil.precheck(repo, revs, 'change branch of')

        root = repo[roots.first()]
        if not root.p1().branch() == label and label in repo.branchmap():
            raise error.Abort(_("a branch of the same name already exists"))

        if repo.revs('merge() and %ld', revs):
            raise error.Abort(_("cannot change branch of a merge commit"))
        if repo.revs('obsolete() and %ld', revs):
            raise error.Abort(_("cannot change branch of an obsolete changeset"))

        # make sure only topological heads
        if repo.revs('heads(%ld) - head()', revs):
            raise error.Abort(_("cannot change branch in middle of a stack"))

        replacements = {}
        # avoid import cycle mercurial.cmdutil -> mercurial.context ->
        # mercurial.subrepo -> mercurial.cmdutil
        from . import context
        for rev in revs:
            ctx = repo[rev]
            oldbranch = ctx.branch()
            # check if ctx has same branch
            if oldbranch == label:
                continue

            def filectxfn(repo, newctx, path):
                try:
                    return ctx[path]
                except error.ManifestLookupError:
                    return None

            ui.debug("changing branch of '%s' from '%s' to '%s'\n"
                     % (hex(ctx.node()), oldbranch, label))
            extra = ctx.extra()
            extra['branch_change'] = hex(ctx.node())
            # While changing the branch of a set of linear commits, make sure
            # that we base our commits on the new parent rather than the old
            # parent, which was obsoleted while changing the branch
            p1 = ctx.p1().node()
            p2 = ctx.p2().node()
            if p1 in replacements:
                p1 = replacements[p1][0]
            if p2 in replacements:
                p2 = replacements[p2][0]

            mc = context.memctx(repo, (p1, p2),
                                ctx.description(),
                                ctx.files(),
                                filectxfn,
                                user=ctx.user(),
                                date=ctx.date(),
                                extra=extra,
                                branch=label)

            commitphase = ctx.phase()
            overrides = {('phases', 'new-commit'): commitphase}
            with repo.ui.configoverride(overrides, 'branch-change'):
                newnode = repo.commitctx(mc)

            replacements[ctx.node()] = (newnode,)
            ui.debug('new node id is %s\n' % hex(newnode))

        # create obsmarkers and move bookmarks
        scmutil.cleanupnodes(repo, replacements, 'branch-change')

        # move the working copy too
        wctx = repo[None]
        # in-progress merge is a bit too complex for now.
        if len(wctx.parents()) == 1:
            newid = replacements.get(wctx.p1().node())
            if newid is not None:
                # avoid import cycle mercurial.cmdutil -> mercurial.hg ->
                # mercurial.cmdutil
                from . import hg
                hg.update(repo, newid[0], quietempty=True)

        ui.status(_("changed branch on %d changesets\n") % len(replacements))

def findrepo(p):
    while not os.path.isdir(os.path.join(p, ".hg")):
        oldp, p = p, os.path.dirname(p)
        if p == oldp:
            return None

    return p

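# ---------------------------------------------------------------------------
# Illustrative sketch (not part of cmdutil.py): findrepo()'s upward walk on a
# throwaway directory tree. The temp directories are created only for the
# example and are assumed not to live under an existing repository.
#
#     import os, tempfile
#     root = tempfile.mkdtemp()
#     os.makedirs(os.path.join(root, '.hg'))        # fake repository marker
#     nested = os.path.join(root, 'src', 'pkg')
#     os.makedirs(nested)
#
#     findrepo(nested)             # -> root (found by walking up to '.hg')
#     findrepo(tempfile.mkdtemp()) # -> None (no '.hg' anywhere up the chain)
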
def bailifchanged(repo, merge=True, hint=None):
    """ enforce the precondition that the working directory must be clean.

    'merge' can be set to false if a pending uncommitted merge should be
    ignored (such as when 'update --check' runs).

    'hint' is the usual hint given to the Abort exception.
    """

    if merge and repo.dirstate.p2() != nullid:
        raise error.Abort(_('outstanding uncommitted merge'), hint=hint)
    modified, added, removed, deleted = repo.status()[:4]
    if modified or added or removed or deleted:
        raise error.Abort(_('uncommitted changes'), hint=hint)
    ctx = repo[None]
    for s in sorted(ctx.substate):
        ctx.sub(s).bailifchanged(hint=hint)

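# ---------------------------------------------------------------------------
# Illustrative sketch (not part of cmdutil.py): the usual call pattern for
# bailifchanged() at the top of a history-rewriting command, so the command
# aborts before touching anything. 'repo' is whatever localrepository the
# dispatcher hands to the command; the hint text is made up.
#
#     def mycommand(ui, repo, **opts):
#         bailifchanged(repo, hint=_('commit or shelve your changes first'))
#         ...  # the working directory is known to be clean past this point
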
def logmessage(ui, opts):
    """ get the log message according to the -m and -l options """
    message = opts.get('message')
    logfile = opts.get('logfile')

    if message and logfile:
        raise error.Abort(_('options --message and --logfile are mutually '
                            'exclusive'))
    if not message and logfile:
        try:
            if isstdiofilename(logfile):
                message = ui.fin.read()
            else:
                message = '\n'.join(util.readfile(logfile).splitlines())
        except IOError as inst:
            raise error.Abort(_("can't read commit message '%s': %s") %
                              (logfile, encoding.strtolocal(inst.strerror)))
    return message

def mergeeditform(ctxorbool, baseformname):
    """return appropriate editform name (referencing a committemplate)

    'ctxorbool' is either a ctx to be committed, or a bool indicating whether
    a merge is being committed.

    This returns baseformname with '.merge' appended if it is a merge,
    otherwise '.normal' is appended.
    """
    if isinstance(ctxorbool, bool):
        if ctxorbool:
            return baseformname + ".merge"
    elif 1 < len(ctxorbool.parents()):
        return baseformname + ".merge"

    return baseformname + ".normal"

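# ---------------------------------------------------------------------------
# Illustrative sketch (not part of cmdutil.py): mergeeditform() only looks at
# whether the pending commit is a merge, so a plain bool shows both results.
#
#     mergeeditform(False, 'import.normal')  # -> 'import.normal.normal'
#     mergeeditform(True, 'import.normal')   # -> 'import.normal.merge'
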
def getcommiteditor(edit=False, finishdesc=None, extramsg=None,
                    editform='', **opts):
    """get appropriate commit message editor according to '--edit' option

    'finishdesc' is a function to be called with the edited commit message
    (= 'description' of the new changeset) just after editing, but
    before checking emptiness. It should return the actual text to be
    stored into history. This allows the description to be changed before
    it is stored.

    'extramsg' is an extra message to be shown in the editor instead of the
    'Leave message empty to abort commit' line. The 'HG: ' prefix and EOL
    are added automatically.

    'editform' is a dot-separated list of names, to distinguish
    the purpose of commit text editing.

    'getcommiteditor' returns 'commitforceeditor' regardless of
    'edit', if one of 'finishdesc' or 'extramsg' is specified, because
    they are specific to usage in MQ.
    """
    if edit or finishdesc or extramsg:
        return lambda r, c, s: commitforceeditor(r, c, s,
                                                 finishdesc=finishdesc,
                                                 extramsg=extramsg,
                                                 editform=editform)
    elif editform:
        return lambda r, c, s: commiteditor(r, c, s, editform=editform)
    else:
        return commiteditor

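# ---------------------------------------------------------------------------
# Illustrative sketch (not part of cmdutil.py): which callable
# getcommiteditor() hands back for a few option combinations, per the rules in
# its docstring.
#
#     getcommiteditor()                          # -> commiteditor (no editor)
#     getcommiteditor(edit=True)                 # -> wrapper around commitforceeditor
#     getcommiteditor(extramsg='HG: try again')  # -> wrapper around commitforceeditor
#     getcommiteditor(editform='import.normal')  # -> commiteditor bound to that editform
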
def makefilename(repo, pat, node, desc=None,
                 total=None, seqno=None, revwidth=None, pathname=None):
    node_expander = {
        'H': lambda: hex(node),
        'R': lambda: '%d' % repo.changelog.rev(node),
        'h': lambda: short(node),
        'm': lambda: re.sub('[^\w]', '_', desc or '')
    }
    expander = {
        '%': lambda: '%',
        'b': lambda: os.path.basename(repo.root),
    }

    try:
        if node:
            expander.update(node_expander)
        if node:
            expander['r'] = (lambda:
                ('%d' % repo.changelog.rev(node)).zfill(revwidth or 0))
        if total is not None:
            expander['N'] = lambda: '%d' % total
        if seqno is not None:
            expander['n'] = lambda: '%d' % seqno
        if total is not None and seqno is not None:
            expander['n'] = (lambda: ('%d' % seqno).zfill(len('%d' % total)))
        if pathname is not None:
            expander['s'] = lambda: os.path.basename(pathname)
            expander['d'] = lambda: os.path.dirname(pathname) or '.'
            expander['p'] = lambda: pathname

        newname = []
        patlen = len(pat)
        i = 0
        while i < patlen:
            c = pat[i:i + 1]
            if c == '%':
                i += 1
                c = pat[i:i + 1]
                c = expander[c]()
            newname.append(c)
            i += 1
        return ''.join(newname)
    except KeyError as inst:
        raise error.Abort(_("invalid format spec '%%%s' in output filename") %
                          inst.args[0])

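# ---------------------------------------------------------------------------
# Illustrative sketch (not part of cmdutil.py): the '%' expansion performed by
# makefilename(), reduced to a plain dict so it runs without a repository.
# The expander entries below are made up; the real ones are installed above.
def _toyexpandname(pat, expander):
    out, i = [], 0
    while i < len(pat):
        c = pat[i:i + 1]
        if c == '%':
            i += 1
            c = expander[pat[i:i + 1]]()   # unknown specs raise KeyError
        out.append(c)
        i += 1
    return ''.join(out)

# _toyexpandname('hg-%h.patch', {'h': lambda: 'c8e2d6ed0123'})
#   -> 'hg-c8e2d6ed0123.patch'
# _toyexpandname('%n-of-%N.patch', {'n': lambda: '03', 'N': lambda: '12'})
#   -> '03-of-12.patch'
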
def isstdiofilename(pat):
    """True if the given pat looks like a filename denoting stdin/stdout"""
    return not pat or pat == '-'

class _unclosablefile(object):
    def __init__(self, fp):
        self._fp = fp

    def close(self):
        pass

    def __iter__(self):
        return iter(self._fp)

    def __getattr__(self, attr):
        return getattr(self._fp, attr)

    def __enter__(self):
        return self

    def __exit__(self, exc_type, exc_value, exc_tb):
        pass

def makefileobj(repo, pat, node=None, desc=None, total=None,
                seqno=None, revwidth=None, mode='wb', modemap=None,
                pathname=None):

    writable = mode not in ('r', 'rb')

    if isstdiofilename(pat):
        if writable:
            fp = repo.ui.fout
        else:
            fp = repo.ui.fin
        return _unclosablefile(fp)
    fn = makefilename(repo, pat, node, desc, total, seqno, revwidth, pathname)
    if modemap is not None:
        mode = modemap.get(fn, mode)
        if mode == 'wb':
            modemap[fn] = 'ab'
    return open(fn, mode)

def openrevlog(repo, cmd, file_, opts):
    """opens the changelog, manifest, a filelog or a given revlog"""
    cl = opts['changelog']
    mf = opts['manifest']
    dir = opts['dir']
    msg = None
    if cl and mf:
        msg = _('cannot specify --changelog and --manifest at the same time')
    elif cl and dir:
        msg = _('cannot specify --changelog and --dir at the same time')
    elif cl or mf or dir:
        if file_:
            msg = _('cannot specify filename with --changelog or --manifest')
        elif not repo:
            msg = _('cannot specify --changelog or --manifest or --dir '
                    'without a repository')
    if msg:
        raise error.Abort(msg)

    r = None
    if repo:
        if cl:
            r = repo.unfiltered().changelog
        elif dir:
            if 'treemanifest' not in repo.requirements:
                raise error.Abort(_("--dir can only be used on repos with "
                                    "treemanifest enabled"))
            dirlog = repo.manifestlog._revlog.dirlog(dir)
            if len(dirlog):
                r = dirlog
        elif mf:
            r = repo.manifestlog._revlog
        elif file_:
            filelog = repo.file(file_)
            if len(filelog):
                r = filelog
    if not r:
        if not file_:
            raise error.CommandError(cmd, _('invalid arguments'))
        if not os.path.isfile(file_):
            raise error.Abort(_("revlog '%s' not found") % file_)
        r = revlog.revlog(vfsmod.vfs(pycompat.getcwd(), audit=False),
                          file_[:-2] + ".i")
    return r

def copy(ui, repo, pats, opts, rename=False):
    # called with the repo lock held
    #
    # hgsep => pathname that uses "/" to separate directories
    # ossep => pathname that uses os.sep to separate directories
    cwd = repo.getcwd()
    targets = {}
    after = opts.get("after")
    dryrun = opts.get("dry_run")
    wctx = repo[None]

    def walkpat(pat):
        srcs = []
        if after:
            badstates = '?'
        else:
            badstates = '?r'
        m = scmutil.match(wctx, [pat], opts, globbed=True)
        for abs in wctx.walk(m):
            state = repo.dirstate[abs]
            rel = m.rel(abs)
            exact = m.exact(abs)
            if state in badstates:
                if exact and state == '?':
                    ui.warn(_('%s: not copying - file is not managed\n') % rel)
                if exact and state == 'r':
                    ui.warn(_('%s: not copying - file has been marked for'
                              ' remove\n') % rel)
                continue
            # abs: hgsep
            # rel: ossep
            srcs.append((abs, rel, exact))
        return srcs

    # abssrc: hgsep
    # relsrc: ossep
    # otarget: ossep
    def copyfile(abssrc, relsrc, otarget, exact):
        abstarget = pathutil.canonpath(repo.root, cwd, otarget)
        if '/' in abstarget:
            # We cannot normalize abstarget itself, this would prevent
            # case only renames, like a => A.
            abspath, absname = abstarget.rsplit('/', 1)
            abstarget = repo.dirstate.normalize(abspath) + '/' + absname
        reltarget = repo.pathto(abstarget, cwd)
        target = repo.wjoin(abstarget)
        src = repo.wjoin(abssrc)
        state = repo.dirstate[abstarget]

        scmutil.checkportable(ui, abstarget)

        # check for collisions
        prevsrc = targets.get(abstarget)
        if prevsrc is not None:
            ui.warn(_('%s: not overwriting - %s collides with %s\n') %
                    (reltarget, repo.pathto(abssrc, cwd),
                     repo.pathto(prevsrc, cwd)))
            return

        # check for overwrites
        exists = os.path.lexists(target)
        samefile = False
        if exists and abssrc != abstarget:
            if (repo.dirstate.normalize(abssrc) ==
                repo.dirstate.normalize(abstarget)):
                if not rename:
                    ui.warn(_("%s: can't copy - same file\n") % reltarget)
                    return
                exists = False
                samefile = True

        if not after and exists or after and state in 'mn':
            if not opts['force']:
                if state in 'mn':
                    msg = _('%s: not overwriting - file already committed\n')
                    if after:
                        flags = '--after --force'
                    else:
                        flags = '--force'
                    if rename:
                        hint = _('(hg rename %s to replace the file by '
                                 'recording a rename)\n') % flags
                    else:
                        hint = _('(hg copy %s to replace the file by '
                                 'recording a copy)\n') % flags
                else:
                    msg = _('%s: not overwriting - file exists\n')
                    if rename:
                        hint = _('(hg rename --after to record the rename)\n')
                    else:
                        hint = _('(hg copy --after to record the copy)\n')
                ui.warn(msg % reltarget)
                ui.warn(hint)
                return

        if after:
            if not exists:
                if rename:
                    ui.warn(_('%s: not recording move - %s does not exist\n') %
                            (relsrc, reltarget))
                else:
                    ui.warn(_('%s: not recording copy - %s does not exist\n') %
                            (relsrc, reltarget))
                return
        elif not dryrun:
            try:
                if exists:
                    os.unlink(target)
                targetdir = os.path.dirname(target) or '.'
                if not os.path.isdir(targetdir):
                    os.makedirs(targetdir)
                if samefile:
                    tmp = target + "~hgrename"
                    os.rename(src, tmp)
                    os.rename(tmp, target)
                else:
                    util.copyfile(src, target)
                srcexists = True
            except IOError as inst:
                if inst.errno == errno.ENOENT:
                    ui.warn(_('%s: deleted in working directory\n') % relsrc)
                    srcexists = False
                else:
                    ui.warn(_('%s: cannot copy - %s\n') %
                            (relsrc, encoding.strtolocal(inst.strerror)))
                    return True # report a failure

        if ui.verbose or not exact:
            if rename:
                ui.status(_('moving %s to %s\n') % (relsrc, reltarget))
            else:
                ui.status(_('copying %s to %s\n') % (relsrc, reltarget))

        targets[abstarget] = abssrc

        # fix up dirstate
        scmutil.dirstatecopy(ui, repo, wctx, abssrc, abstarget,
                             dryrun=dryrun, cwd=cwd)
        if rename and not dryrun:
            if not after and srcexists and not samefile:
                repo.wvfs.unlinkpath(abssrc)
            wctx.forget([abssrc])

    # pat: ossep
    # dest ossep
    # srcs: list of (hgsep, hgsep, ossep, bool)
    # return: function that takes hgsep and returns ossep
    def targetpathfn(pat, dest, srcs):
        if os.path.isdir(pat):
            abspfx = pathutil.canonpath(repo.root, cwd, pat)
            abspfx = util.localpath(abspfx)
            if destdirexists:
                striplen = len(os.path.split(abspfx)[0])
            else:
                striplen = len(abspfx)
            if striplen:
                striplen += len(pycompat.ossep)
            res = lambda p: os.path.join(dest, util.localpath(p)[striplen:])
        elif destdirexists:
            res = lambda p: os.path.join(dest,
                                         os.path.basename(util.localpath(p)))
        else:
            res = lambda p: dest
        return res

    # pat: ossep
    # dest ossep
    # srcs: list of (hgsep, hgsep, ossep, bool)
    # return: function that takes hgsep and returns ossep
    def targetpathafterfn(pat, dest, srcs):
        if matchmod.patkind(pat):
            # a mercurial pattern
            res = lambda p: os.path.join(dest,
                                         os.path.basename(util.localpath(p)))
        else:
            abspfx = pathutil.canonpath(repo.root, cwd, pat)
            if len(abspfx) < len(srcs[0][0]):
                # A directory. Either the target path contains the last
                # component of the source path or it does not.
                def evalpath(striplen):
                    score = 0
                    for s in srcs:
                        t = os.path.join(dest, util.localpath(s[0])[striplen:])
                        if os.path.lexists(t):
                            score += 1
                    return score

                abspfx = util.localpath(abspfx)
                striplen = len(abspfx)
                if striplen:
                    striplen += len(pycompat.ossep)
                if os.path.isdir(os.path.join(dest, os.path.split(abspfx)[1])):
                    score = evalpath(striplen)
                    striplen1 = len(os.path.split(abspfx)[0])
                    if striplen1:
                        striplen1 += len(pycompat.ossep)
                    if evalpath(striplen1) > score:
                        striplen = striplen1
                res = lambda p: os.path.join(dest,
                                             util.localpath(p)[striplen:])
            else:
                # a file
                if destdirexists:
                    res = lambda p: os.path.join(dest,
                                                 os.path.basename(util.localpath(p)))
                else:
                    res = lambda p: dest
        return res

    pats = scmutil.expandpats(pats)
    if not pats:
        raise error.Abort(_('no source or destination specified'))
    if len(pats) == 1:
        raise error.Abort(_('no destination specified'))
    dest = pats.pop()
    destdirexists = os.path.isdir(dest) and not os.path.islink(dest)
    if not destdirexists:
        if len(pats) > 1 or matchmod.patkind(pats[0]):
            raise error.Abort(_('with multiple sources, destination must be an '
                                'existing directory'))
        if util.endswithsep(dest):
            raise error.Abort(_('destination %s is not a directory') % dest)

    tfn = targetpathfn
    if after:
        tfn = targetpathafterfn
    copylist = []
    for pat in pats:
        srcs = walkpat(pat)
        if not srcs:
            continue
        copylist.append((tfn(pat, dest, srcs), srcs))
    if not copylist:
        raise error.Abort(_('no files to copy'))

    errors = 0
    for targetpath, srcs in copylist:
        for abssrc, relsrc, exact in srcs:
            if copyfile(abssrc, relsrc, targetpath(abssrc), exact):
                errors += 1

    if errors:
        ui.warn(_('(consider using --after)\n'))

    return errors != 0

## facility to let extensions process additional data into an import patch
# list of identifiers to be executed in order
extrapreimport = [] # run before commit
extrapostimport = [] # run after commit
# mapping from identifier to actual import function
#
# 'preimport' functions are run before the commit is made and are provided the
# following arguments:
# - repo: the localrepository instance,
# - patchdata: data extracted from patch header (cf m.patch.patchheadermap),
# - extra: the future extra dictionary of the changeset, please mutate it,
# - opts: the import options.
# XXX ideally, we would just pass a ctx ready to be computed, which would allow
# mutation of in memory commit and more. Feel free to rework the code to get
# there.
extrapreimportmap = {}
# 'postimport' functions are run after the commit is made and are provided the
# following argument:
# - ctx: the changectx created by import.
extrapostimportmap = {}

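# ---------------------------------------------------------------------------
# Illustrative sketch (not part of cmdutil.py): how a third-party extension
# might hook into the pre-import facility declared above. The extension name
# 'myext' and the stored key are made up; the hook signature matches the one
# documented in the comments ((repo, patchdata, extra, opts)).
#
#     from mercurial import cmdutil
#
#     def _recordpatchorigin(repo, patchdata, extra, opts):
#         # stash the originating node, if the patch header carried one
#         if patchdata.get('nodeid'):
#             extra['imported_from'] = patchdata['nodeid']
#
#     cmdutil.extrapreimport.append('myext')
#     cmdutil.extrapreimportmap['myext'] = _recordpatchorigin
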
def tryimportone(ui, repo, hunk, parents, opts, msgs, updatefunc):
    """Utility function used by commands.import to import a single patch

    This function is explicitly defined here to help the evolve extension to
    wrap this part of the import logic.

    The API is currently a bit ugly because it is a simple code translation
    from the import command. Feel free to make it better.

    :hunk: a patch (as a binary string)
    :parents: nodes that will be the parents of the created commit
    :opts: the full dict of options passed to the import command
    :msgs: list to save commit message to.
           (used in case we need to save it when failing)
    :updatefunc: a function that updates a repo to a given node
                 updatefunc(<repo>, <node>)
    """
    # avoid cycle context -> subrepo -> cmdutil
    from . import context
    extractdata = patch.extract(ui, hunk)
    tmpname = extractdata.get('filename')
    message = extractdata.get('message')
    user = opts.get('user') or extractdata.get('user')
    date = opts.get('date') or extractdata.get('date')
    branch = extractdata.get('branch')
    nodeid = extractdata.get('nodeid')
    p1 = extractdata.get('p1')
    p2 = extractdata.get('p2')

    nocommit = opts.get('no_commit')
    importbranch = opts.get('import_branch')
    update = not opts.get('bypass')
    strip = opts["strip"]
    prefix = opts["prefix"]
    sim = float(opts.get('similarity') or 0)
    if not tmpname:
        return (None, None, False)

    rejects = False

    try:
        cmdline_message = logmessage(ui, opts)
        if cmdline_message:
            # pickup the cmdline msg
            message = cmdline_message
        elif message:
            # pickup the patch msg
            message = message.strip()
        else:
            # launch the editor
            message = None
        ui.debug('message:\n%s\n' % message)

        if len(parents) == 1:
            parents.append(repo[nullid])
        if opts.get('exact'):
            if not nodeid or not p1:
                raise error.Abort(_('not a Mercurial patch'))
            p1 = repo[p1]
            p2 = repo[p2 or nullid]
        elif p2:
            try:
                p1 = repo[p1]
                p2 = repo[p2]
                # Without any options, consider p2 only if the
                # patch is being applied on top of the recorded
                # first parent.
                if p1 != parents[0]:
                    p1 = parents[0]
                    p2 = repo[nullid]
            except error.RepoError:
                p1, p2 = parents
            if p2.node() == nullid:
                ui.warn(_("warning: import the patch as a normal revision\n"
                          "(use --exact to import the patch as a merge)\n"))
        else:
            p1, p2 = parents

        n = None
        if update:
            if p1 != parents[0]:
                updatefunc(repo, p1.node())
            if p2 != parents[1]:
                repo.setparents(p1.node(), p2.node())

            if opts.get('exact') or importbranch:
                repo.dirstate.setbranch(branch or 'default')

            partial = opts.get('partial', False)
            files = set()
            try:
                patch.patch(ui, repo, tmpname, strip=strip, prefix=prefix,
                            files=files, eolmode=None, similarity=sim / 100.0)
            except error.PatchError as e:
                if not partial:
                    raise error.Abort(str(e))
                if partial:
                    rejects = True

            files = list(files)
            if nocommit:
                if message:
                    msgs.append(message)
            else:
                if opts.get('exact') or p2:
                    # If you got here, you either use --force and know what
                    # you are doing or used --exact or a merge patch while
                    # being updated to its first parent.
                    m = None
                else:
                    m = scmutil.matchfiles(repo, files or [])
                editform = mergeeditform(repo[None], 'import.normal')
                if opts.get('exact'):
                    editor = None
                else:
                    editor = getcommiteditor(editform=editform,
                                             **pycompat.strkwargs(opts))
                extra = {}
                for idfunc in extrapreimport:
                    extrapreimportmap[idfunc](repo, extractdata, extra, opts)
                overrides = {}
                if partial:
                    overrides[('ui', 'allowemptycommit')] = True
                with repo.ui.configoverride(overrides, 'import'):
                    n = repo.commit(message, user,
                                    date, match=m,
                                    editor=editor, extra=extra)
                for idfunc in extrapostimport:
                    extrapostimportmap[idfunc](repo[n])
        else:
            if opts.get('exact') or importbranch:
                branch = branch or 'default'
            else:
                branch = p1.branch()
            store = patch.filestore()
            try:
                files = set()
                try:
                    patch.patchrepo(ui, repo, p1, store, tmpname, strip, prefix,
                                    files, eolmode=None)
                except error.PatchError as e:
                    raise error.Abort(str(e))
                if opts.get('exact'):
                    editor = None
                else:
                    editor = getcommiteditor(editform='import.bypass')
                memctx = context.memctx(repo, (p1.node(), p2.node()),
                                        message,
                                        files=files,
                                        filectxfn=store,
                                        user=user,
                                        date=date,
                                        branch=branch,
                                        editor=editor)
                n = memctx.commit()
            finally:
                store.close()
        if opts.get('exact') and nocommit:
            # --exact with --no-commit is still useful in that it does merge
            # and branch bits
            ui.warn(_("warning: can't check exact import with --no-commit\n"))
        elif opts.get('exact') and hex(n) != nodeid:
            raise error.Abort(_('patch is damaged or loses information'))
        msg = _('applied to working directory')
        if n:
            # i18n: refers to a short changeset id
            msg = _('created %s') % short(n)
        return (msg, n, rejects)
    finally:
        os.unlink(tmpname)

1482 # facility to let extensions include additional data in an exported patch
1466 # facility to let extensions include additional data in an exported patch
1483 # list of identifiers to be executed in order
1467 # list of identifiers to be executed in order
1484 extraexport = []
1468 extraexport = []
1485 # mapping from identifier to actual export function
1469 # mapping from identifier to actual export function
1486 # function as to return a string to be added to the header or None
1470 # function as to return a string to be added to the header or None
1487 # it is given two arguments (sequencenumber, changectx)
1471 # it is given two arguments (sequencenumber, changectx)
1488 extraexportmap = {}
1472 extraexportmap = {}
1489
1473
1490 def _exportsingle(repo, ctx, match, switch_parent, rev, seqno, write, diffopts):
1474 def _exportsingle(repo, ctx, match, switch_parent, rev, seqno, write, diffopts):
1491 node = scmutil.binnode(ctx)
1475 node = scmutil.binnode(ctx)
1492 parents = [p.node() for p in ctx.parents() if p]
1476 parents = [p.node() for p in ctx.parents() if p]
1493 branch = ctx.branch()
1477 branch = ctx.branch()
1494 if switch_parent:
1478 if switch_parent:
1495 parents.reverse()
1479 parents.reverse()
1496
1480
1497 if parents:
1481 if parents:
1498 prev = parents[0]
1482 prev = parents[0]
1499 else:
1483 else:
1500 prev = nullid
1484 prev = nullid
1501
1485
1502 write("# HG changeset patch\n")
1486 write("# HG changeset patch\n")
1503 write("# User %s\n" % ctx.user())
1487 write("# User %s\n" % ctx.user())
1504 write("# Date %d %d\n" % ctx.date())
1488 write("# Date %d %d\n" % ctx.date())
1505 write("# %s\n" % util.datestr(ctx.date()))
1489 write("# %s\n" % util.datestr(ctx.date()))
1506 if branch and branch != 'default':
1490 if branch and branch != 'default':
1507 write("# Branch %s\n" % branch)
1491 write("# Branch %s\n" % branch)
1508 write("# Node ID %s\n" % hex(node))
1492 write("# Node ID %s\n" % hex(node))
1509 write("# Parent %s\n" % hex(prev))
1493 write("# Parent %s\n" % hex(prev))
1510 if len(parents) > 1:
1494 if len(parents) > 1:
1511 write("# Parent %s\n" % hex(parents[1]))
1495 write("# Parent %s\n" % hex(parents[1]))
1512
1496
1513 for headerid in extraexport:
1497 for headerid in extraexport:
1514 header = extraexportmap[headerid](seqno, ctx)
1498 header = extraexportmap[headerid](seqno, ctx)
1515 if header is not None:
1499 if header is not None:
1516 write('# %s\n' % header)
1500 write('# %s\n' % header)
1517 write(ctx.description().rstrip())
1501 write(ctx.description().rstrip())
1518 write("\n\n")
1502 write("\n\n")
1519
1503
1520 for chunk, label in patch.diffui(repo, prev, node, match, opts=diffopts):
1504 for chunk, label in patch.diffui(repo, prev, node, match, opts=diffopts):
1521 write(chunk, label=label)
1505 write(chunk, label=label)
1522
1506
def export(repo, revs, fntemplate='hg-%h.patch', fp=None, switch_parent=False,
           opts=None, match=None):
    '''export changesets as hg patches

    Args:
      repo: The repository from which we're exporting revisions.
      revs: A list of revisions to export as revision numbers.
      fntemplate: An optional string to use for generating patch file names.
      fp: An optional file-like object to which patches should be written.
      switch_parent: If True, show diffs against second parent when not nullid.
                     Default is false, which always shows diff against p1.
      opts: diff options to use for generating the patch.
      match: If specified, only export changes to files matching this matcher.

    Returns:
      Nothing.

    Side Effect:
      "HG Changeset Patch" data is emitted to one of the following
      destinations:
        fp is specified: All revs are written to the specified
          file-like object.
        fntemplate specified: Each rev is written to a unique file named using
          the given template.
        Neither fp nor template specified: All revs written to repo.ui.write()
    '''

    total = len(revs)
    revwidth = max(len(str(rev)) for rev in revs)
    filemode = {}

    write = None
    dest = '<unnamed>'
    if fp:
        dest = getattr(fp, 'name', dest)
        def write(s, **kw):
            fp.write(s)
    elif not fntemplate:
        write = repo.ui.write

    for seqno, rev in enumerate(revs, 1):
        ctx = repo[rev]
        fo = None
        if not fp and fntemplate:
            desc_lines = ctx.description().rstrip().split('\n')
            desc = desc_lines[0] #Commit always has a first line.
            fo = makefileobj(repo, fntemplate, ctx.node(), desc=desc,
                             total=total, seqno=seqno, revwidth=revwidth,
                             mode='wb', modemap=filemode)
            dest = fo.name
            def write(s, **kw):
                fo.write(s)
        if not dest.startswith('<'):
            repo.ui.note("%s\n" % dest)
        _exportsingle(
            repo, ctx, match, switch_parent, rev, seqno, write, opts)
        if fo is not None:
            fo.close()

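# Illustrative sketch, not part of cmdutil.py: the docstring above names three
# possible destinations for the patch data (an explicit file object, a file
# per revision from fntemplate, or ui.write).  The hypothetical helper below
# restates that precedence on its own, without touching any Mercurial API.
def _demo_exportdest(fp=None, fntemplate=None):
    """Return a label describing where export() would send its output."""
    if fp is not None:
        return 'file-like object'       # every revision appended to fp
    if fntemplate:
        return 'one file per revision'  # names expanded from the template
    return 'repo.ui.write'              # default: the ui output stream
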
class _regrettablereprbytes(bytes):
    """Bytes subclass that makes the repr the same on Python 3 as Python 2.

    This is a huge hack.
    """
    def __repr__(self):
        return repr(pycompat.sysstr(self))

def _maybebytestr(v):
    if pycompat.ispy3 and isinstance(v, bytes):
        return _regrettablereprbytes(v)
    return v

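# Illustrative sketch, not part of cmdutil.py: the pair of helpers above keeps
# %r-formatted metadata identical on Python 2 and Python 3, where bytes would
# otherwise pick up a b'' prefix.  The toy subclass below shows the same trick
# with builtins only.
class _demo_plainreprbytes(bytes):
    """bytes whose repr omits the Python 3 b'' prefix, like the class above."""
    def __repr__(self):
        return repr(self.decode('ascii', 'replace'))
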
def showmarker(fm, marker, index=None):
    """utility function to display obsolescence marker in a readable way

    To be used by debug function."""
    if index is not None:
        fm.write('index', '%i ', index)
    fm.write('prednode', '%s ', hex(marker.prednode()))
    succs = marker.succnodes()
    fm.condwrite(succs, 'succnodes', '%s ',
                 fm.formatlist(map(hex, succs), name='node'))
    fm.write('flag', '%X ', marker.flags())
    parents = marker.parentnodes()
    if parents is not None:
        fm.write('parentnodes', '{%s} ',
                 fm.formatlist(map(hex, parents), name='node', sep=', '))
    fm.write('date', '(%s) ', fm.formatdate(marker.date()))
    meta = marker.metadata().copy()
    meta.pop('date', None)
    smeta = {_maybebytestr(k): _maybebytestr(v) for k, v in meta.iteritems()}
    fm.write('metadata', '{%s}', fm.formatdict(smeta, fmt='%r: %r', sep=', '))
    fm.plain('\n')

def finddate(ui, repo, date):
    """Find the tipmost changeset that matches the given date spec"""

    df = util.matchdate(date)
    m = scmutil.matchall(repo)
    results = {}

    def prep(ctx, fns):
        d = ctx.date()
        if df(d[0]):
            results[ctx.rev()] = d

    for ctx in walkchangerevs(repo, m, {'rev': None}, prep):
        rev = ctx.rev()
        if rev in results:
            ui.status(_("found revision %s from %s\n") %
                      (rev, util.datestr(results[rev])))
            return '%d' % rev

    raise error.Abort(_("revision matching date not found"))

def increasingwindows(windowsize=8, sizelimit=512):
    while True:
        yield windowsize
        if windowsize < sizelimit:
            windowsize *= 2

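# Illustrative sketch, not part of cmdutil.py: increasingwindows() yields
# window sizes that double from 8 until they reach the 512 cap and then stay
# there, so callers scan history in progressively larger batches.  This demo
# merely samples the generator and uses only the standard library.
def _demo_windowsizes(n=8):
    """Return the first n window sizes, e.g. [8, 16, 32, 64, 128, 256, 512, 512]."""
    import itertools
    return list(itertools.islice(increasingwindows(), n))
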
def _walkrevs(repo, opts):
    # Default --rev value depends on --follow but --follow behavior
    # depends on revisions resolved from --rev...
    follow = opts.get('follow') or opts.get('follow_first')
    if opts.get('rev'):
        revs = scmutil.revrange(repo, opts['rev'])
    elif follow and repo.dirstate.p1() == nullid:
        revs = smartset.baseset()
    elif follow:
        revs = repo.revs('reverse(:.)')
    else:
        revs = smartset.spanset(repo)
        revs.reverse()
    return revs

class FileWalkError(Exception):
    pass

def walkfilerevs(repo, match, follow, revs, fncache):
    '''Walks the file history for the matched files.

    Returns the changeset revs that are involved in the file history.

    Throws FileWalkError if the file history can't be walked using
    filelogs alone.
    '''
    wanted = set()
    copies = []
    minrev, maxrev = min(revs), max(revs)
    def filerevgen(filelog, last):
        """
        Only files, no patterns. Check the history of each file.

        Examines filelog entries within minrev, maxrev linkrev range
        Returns an iterator yielding (linkrev, parentlinkrevs, copied)
        tuples in backwards order
        """
        cl_count = len(repo)
        revs = []
        for j in xrange(0, last + 1):
            linkrev = filelog.linkrev(j)
            if linkrev < minrev:
                continue
            # only yield rev for which we have the changelog, it can
            # happen while doing "hg log" during a pull or commit
            if linkrev >= cl_count:
                break

            parentlinkrevs = []
            for p in filelog.parentrevs(j):
                if p != nullrev:
                    parentlinkrevs.append(filelog.linkrev(p))
            n = filelog.node(j)
            revs.append((linkrev, parentlinkrevs,
                         follow and filelog.renamed(n)))

        return reversed(revs)
    def iterfiles():
        pctx = repo['.']
        for filename in match.files():
            if follow:
                if filename not in pctx:
                    raise error.Abort(_('cannot follow file not in parent '
                                        'revision: "%s"') % filename)
                yield filename, pctx[filename].filenode()
            else:
                yield filename, None
        for filename_node in copies:
            yield filename_node

    for file_, node in iterfiles():
        filelog = repo.file(file_)
        if not len(filelog):
            if node is None:
                # A zero count may be a directory or deleted file, so
                # try to find matching entries on the slow path.
                if follow:
                    raise error.Abort(
                        _('cannot follow nonexistent file: "%s"') % file_)
                raise FileWalkError("Cannot walk via filelog")
            else:
                continue

        if node is None:
            last = len(filelog) - 1
        else:
            last = filelog.rev(node)

        # keep track of all ancestors of the file
        ancestors = {filelog.linkrev(last)}

        # iterate from latest to oldest revision
        for rev, flparentlinkrevs, copied in filerevgen(filelog, last):
            if not follow:
                if rev > maxrev:
                    continue
            else:
                # Note that last might not be the first interesting
                # rev to us:
                # if the file has been changed after maxrev, we'll
                # have linkrev(last) > maxrev, and we still need
                # to explore the file graph
                if rev not in ancestors:
                    continue
                # XXX insert 1327 fix here
                if flparentlinkrevs:
                    ancestors.update(flparentlinkrevs)

            fncache.setdefault(rev, []).append(file_)
            wanted.add(rev)
            if copied:
                copies.append(copied)

    return wanted

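# Illustrative sketch, not part of cmdutil.py: when following a file,
# walkfilerevs() scans filelog entries from newest to oldest and keeps only
# revisions reachable through the 'ancestors' set, seeding it with the
# starting linkrev and growing it with each kept entry's parent linkrevs.
# The standalone helper below replays that pruning over toy
# (linkrev, parentlinkrevs) tuples.
def _demo_followancestors(entries, start):
    """entries: newest-to-oldest (linkrev, parentlinkrevs) pairs; start: seed linkrev."""
    ancestors = {start}
    wanted = set()
    for linkrev, parentlinkrevs in entries:
        if linkrev not in ancestors:
            continue                     # not on the followed line of history
        ancestors.update(parentlinkrevs)
        wanted.add(linkrev)
    return wanted
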
class _followfilter(object):
    def __init__(self, repo, onlyfirst=False):
        self.repo = repo
        self.startrev = nullrev
        self.roots = set()
        self.onlyfirst = onlyfirst

    def match(self, rev):
        def realparents(rev):
            if self.onlyfirst:
                return self.repo.changelog.parentrevs(rev)[0:1]
            else:
                return filter(lambda x: x != nullrev,
                              self.repo.changelog.parentrevs(rev))

        if self.startrev == nullrev:
            self.startrev = rev
            return True

        if rev > self.startrev:
            # forward: all descendants
            if not self.roots:
                self.roots.add(self.startrev)
            for parent in realparents(rev):
                if parent in self.roots:
                    self.roots.add(rev)
                    return True
        else:
            # backwards: all parents
            if not self.roots:
                self.roots.update(realparents(self.startrev))
            if rev in self.roots:
                self.roots.remove(rev)
                self.roots.update(realparents(rev))
                return True

        return False

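# Illustrative sketch, not part of cmdutil.py: _followfilter.match() answers
# "is this revision related to the first revision it saw?", growing
# self.roots as it walks.  The stub below fakes just enough of a repo
# (changelog.parentrevs over a linear history 0 <- 1 <- 2) to drive it; the
# stub names are hypothetical.
def _demo_followfilter():
    class _stubchangelog(object):
        def parentrevs(self, rev):
            return (rev - 1, nullrev)    # linear history, single parent
    class _stubrepo(object):
        changelog = _stubchangelog()
    ff = _followfilter(_stubrepo())
    # walking backwards from rev 2, every revision is an ancestor
    return [ff.match(rev) for rev in (2, 1, 0)]   # -> [True, True, True]
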
def walkchangerevs(repo, match, opts, prepare):
    '''Iterate over files and the revs in which they changed.

    Callers most commonly need to iterate backwards over the history
    in which they are interested. Doing so has awful (quadratic-looking)
    performance, so we use iterators in a "windowed" way.

    We walk a window of revisions in the desired order. Within the
    window, we first walk forwards to gather data, then in the desired
    order (usually backwards) to display it.

    This function returns an iterator yielding contexts. Before
    yielding each context, the iterator will first call the prepare
    function on each context in the window in forward order.'''

    follow = opts.get('follow') or opts.get('follow_first')
    revs = _walkrevs(repo, opts)
    if not revs:
        return []
    wanted = set()
    slowpath = match.anypats() or (not match.always() and opts.get('removed'))
    fncache = {}
    change = repo.changectx

    # First step is to fill wanted, the set of revisions that we want to yield.
    # When it does not induce extra cost, we also fill fncache for revisions in
    # wanted: a cache of filenames that were changed (ctx.files()) and that
    # match the file filtering conditions.

    if match.always():
        # No files, no patterns. Display all revs.
        wanted = revs
    elif not slowpath:
        # We only have to read through the filelog to find wanted revisions

        try:
            wanted = walkfilerevs(repo, match, follow, revs, fncache)
        except FileWalkError:
            slowpath = True

            # We decided to fall back to the slowpath because at least one
            # of the paths was not a file. Check to see if at least one of them
            # existed in history, otherwise simply return
            for path in match.files():
                if path == '.' or path in repo.store:
                    break
            else:
                return []

    if slowpath:
        # We have to read the changelog to match filenames against
        # changed files

        if follow:
            raise error.Abort(_('can only follow copies/renames for explicit '
                                'filenames'))

        # The slow path checks files modified in every changeset.
        # This is really slow on large repos, so compute the set lazily.
        class lazywantedset(object):
            def __init__(self):
                self.set = set()
                self.revs = set(revs)

            # No need to worry about locality here because it will be accessed
            # in the same order as the increasing window below.
            def __contains__(self, value):
                if value in self.set:
                    return True
                elif not value in self.revs:
                    return False
                else:
                    self.revs.discard(value)
                    ctx = change(value)
                    matches = filter(match, ctx.files())
                    if matches:
                        fncache[value] = matches
                        self.set.add(value)
                        return True
                    return False

            def discard(self, value):
                self.revs.discard(value)
                self.set.discard(value)

        wanted = lazywantedset()

    # it might be worthwhile to do this in the iterator if the rev range
    # is descending and the prune args are all within that range
    for rev in opts.get('prune', ()):
        rev = repo[rev].rev()
        ff = _followfilter(repo)
        stop = min(revs[0], revs[-1])
        for x in xrange(rev, stop - 1, -1):
            if ff.match(x):
                wanted = wanted - [x]

    # Now that wanted is correctly initialized, we can iterate over the
    # revision range, yielding only revisions in wanted.
    def iterate():
        if follow and match.always():
            ff = _followfilter(repo, onlyfirst=opts.get('follow_first'))
            def want(rev):
                return ff.match(rev) and rev in wanted
        else:
            def want(rev):
                return rev in wanted

        it = iter(revs)
        stopiteration = False
        for windowsize in increasingwindows():
            nrevs = []
            for i in xrange(windowsize):
                rev = next(it, None)
                if rev is None:
                    stopiteration = True
                    break
                elif want(rev):
                    nrevs.append(rev)
            for rev in sorted(nrevs):
                fns = fncache.get(rev)
                ctx = change(rev)
                if not fns:
                    def fns_generator():
                        for f in ctx.files():
                            if match(f):
                                yield f
                    fns = fns_generator()
                prepare(ctx, fns)
            for rev in nrevs:
                yield change(rev)

            if stopiteration:
                break

    return iterate()

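# Illustrative sketch, not part of cmdutil.py: the iterate() closure above
# pulls revisions off an iterator in windows, runs prepare() over the kept
# revisions of each window in ascending order, then yields them back in the
# original (usually descending) order.  The toy generator below reproduces
# that shape for plain integers; like the module around it, it assumes a
# py2-style xrange is available.
def _demo_windowedwalk(revs, want, prepare, windowsize=4):
    """Yield wanted revs window by window, preparing each window in sorted order."""
    it = iter(revs)
    while True:
        window = []
        exhausted = False
        for _unused in xrange(windowsize):
            rev = next(it, None)
            if rev is None:
                exhausted = True
                break
            if want(rev):
                window.append(rev)
        for rev in sorted(window):
            prepare(rev)                 # forward pass: gather data
        for rev in window:
            yield rev                    # original order: display pass
        if exhausted:
            break
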
def add(ui, repo, match, prefix, explicitonly, **opts):
    join = lambda f: os.path.join(prefix, f)
    bad = []

    badfn = lambda x, y: bad.append(x) or match.bad(x, y)
    names = []
    wctx = repo[None]
    cca = None
    abort, warn = scmutil.checkportabilityalert(ui)
    if abort or warn:
        cca = scmutil.casecollisionauditor(ui, abort, repo.dirstate)

    badmatch = matchmod.badmatch(match, badfn)
    dirstate = repo.dirstate
    # We don't want to just call wctx.walk here, since it would return a lot of
    # clean files, which we aren't interested in and takes time.
    for f in sorted(dirstate.walk(badmatch, subrepos=sorted(wctx.substate),
                                  unknown=True, ignored=False, full=False)):
        exact = match.exact(f)
        if exact or not explicitonly and f not in wctx and repo.wvfs.lexists(f):
            if cca:
                cca(f)
            names.append(f)
            if ui.verbose or not exact:
                ui.status(_('adding %s\n') % match.rel(f))

    for subpath in sorted(wctx.substate):
        sub = wctx.sub(subpath)
        try:
            submatch = matchmod.subdirmatcher(subpath, match)
            if opts.get(r'subrepos'):
                bad.extend(sub.add(ui, submatch, prefix, False, **opts))
            else:
                bad.extend(sub.add(ui, submatch, prefix, True, **opts))
        except error.LookupError:
            ui.status(_("skipping missing subrepository: %s\n")
                      % join(subpath))

    if not opts.get(r'dry_run'):
        rejected = wctx.add(names, prefix)
        bad.extend(f for f in rejected if f in match.files())
    return bad

def addwebdirpath(repo, serverpath, webconf):
    webconf[serverpath] = repo.root
    repo.ui.debug('adding %s = %s\n' % (serverpath, repo.root))

    for r in repo.revs('filelog("path:.hgsub")'):
        ctx = repo[r]
        for subpath in ctx.substate:
            ctx.sub(subpath).addwebdirpath(serverpath, webconf)

def forget(ui, repo, match, prefix, explicitonly):
    join = lambda f: os.path.join(prefix, f)
    bad = []
    badfn = lambda x, y: bad.append(x) or match.bad(x, y)
    wctx = repo[None]
    forgot = []

    s = repo.status(match=matchmod.badmatch(match, badfn), clean=True)
    forget = sorted(s.modified + s.added + s.deleted + s.clean)
    if explicitonly:
        forget = [f for f in forget if match.exact(f)]

    for subpath in sorted(wctx.substate):
        sub = wctx.sub(subpath)
        try:
            submatch = matchmod.subdirmatcher(subpath, match)
            subbad, subforgot = sub.forget(submatch, prefix)
            bad.extend([subpath + '/' + f for f in subbad])
            forgot.extend([subpath + '/' + f for f in subforgot])
        except error.LookupError:
            ui.status(_("skipping missing subrepository: %s\n")
                      % join(subpath))

    if not explicitonly:
        for f in match.files():
            if f not in repo.dirstate and not repo.wvfs.isdir(f):
                if f not in forgot:
                    if repo.wvfs.exists(f):
                        # Don't complain if the exact case match wasn't given.
                        # But don't do this until after checking 'forgot', so
                        # that subrepo files aren't normalized, and this op is
                        # purely from data cached by the status walk above.
                        if repo.dirstate.normalize(f) in repo.dirstate:
                            continue
                        ui.warn(_('not removing %s: '
                                  'file is already untracked\n')
                                % match.rel(f))
                    bad.append(f)

    for f in forget:
        if ui.verbose or not match.exact(f):
            ui.status(_('removing %s\n') % match.rel(f))

    rejected = wctx.forget(forget, prefix)
    bad.extend(f for f in rejected if f in match.files())
    forgot.extend(f for f in forget if f not in rejected)
    return bad, forgot

def files(ui, ctx, m, fm, fmt, subrepos):
    rev = ctx.rev()
    ret = 1
    ds = ctx.repo().dirstate

    for f in ctx.matches(m):
        if rev is None and ds[f] == 'r':
            continue
        fm.startitem()
        if ui.verbose:
            fc = ctx[f]
            fm.write('size flags', '% 10d % 1s ', fc.size(), fc.flags())
        fm.data(abspath=f)
        fm.write('path', fmt, m.rel(f))
        ret = 0

    for subpath in sorted(ctx.substate):
        submatch = matchmod.subdirmatcher(subpath, m)
        if (subrepos or m.exact(subpath) or any(submatch.files())):
            sub = ctx.sub(subpath)
            try:
                recurse = m.exact(subpath) or subrepos
                if sub.printfiles(ui, submatch, fm, fmt, recurse) == 0:
                    ret = 0
            except error.LookupError:
                ui.status(_("skipping missing subrepository: %s\n")
                          % m.abs(subpath))

    return ret

def remove(ui, repo, m, prefix, after, force, subrepos, warnings=None):
    join = lambda f: os.path.join(prefix, f)
    ret = 0
    s = repo.status(match=m, clean=True)
    modified, added, deleted, clean = s[0], s[1], s[3], s[6]

    wctx = repo[None]

    if warnings is None:
        warnings = []
        warn = True
    else:
        warn = False

    subs = sorted(wctx.substate)
    total = len(subs)
    count = 0
    for subpath in subs:
        count += 1
        submatch = matchmod.subdirmatcher(subpath, m)
        if subrepos or m.exact(subpath) or any(submatch.files()):
            ui.progress(_('searching'), count, total=total, unit=_('subrepos'))
            sub = wctx.sub(subpath)
            try:
                if sub.removefiles(submatch, prefix, after, force, subrepos,
                                   warnings):
                    ret = 1
            except error.LookupError:
                warnings.append(_("skipping missing subrepository: %s\n")
                                % join(subpath))
    ui.progress(_('searching'), None)

    # warn about failure to delete explicit files/dirs
    deleteddirs = util.dirs(deleted)
    files = m.files()
    total = len(files)
    count = 0
    for f in files:
        def insubrepo():
            for subpath in wctx.substate:
                if f.startswith(subpath + '/'):
                    return True
            return False

        count += 1
        ui.progress(_('deleting'), count, total=total, unit=_('files'))
        isdir = f in deleteddirs or wctx.hasdir(f)
        if (f in repo.dirstate or isdir or f == '.'
            or insubrepo() or f in subs):
            continue

        if repo.wvfs.exists(f):
            if repo.wvfs.isdir(f):
                warnings.append(_('not removing %s: no tracked files\n')
                                % m.rel(f))
            else:
                warnings.append(_('not removing %s: file is untracked\n')
                                % m.rel(f))
        # missing files will generate a warning elsewhere
        ret = 1
    ui.progress(_('deleting'), None)

    if force:
        list = modified + deleted + clean + added
    elif after:
        list = deleted
        remaining = modified + added + clean
        total = len(remaining)
        count = 0
        for f in remaining:
            count += 1
            ui.progress(_('skipping'), count, total=total, unit=_('files'))
            if ui.verbose or (f in files):
                warnings.append(_('not removing %s: file still exists\n')
                                % m.rel(f))
                ret = 1
        ui.progress(_('skipping'), None)
    else:
        list = deleted + clean
        total = len(modified) + len(added)
        count = 0
        for f in modified:
            count += 1
            ui.progress(_('skipping'), count, total=total, unit=_('files'))
            warnings.append(_('not removing %s: file is modified (use -f'
                              ' to force removal)\n') % m.rel(f))
            ret = 1
        for f in added:
            count += 1
            ui.progress(_('skipping'), count, total=total, unit=_('files'))
            warnings.append(_("not removing %s: file has been marked for add"
                              " (use 'hg forget' to undo add)\n") % m.rel(f))
            ret = 1
        ui.progress(_('skipping'), None)

    list = sorted(list)
    total = len(list)
    count = 0
    for f in list:
        count += 1
        if ui.verbose or not m.exact(f):
            ui.progress(_('deleting'), count, total=total, unit=_('files'))
            ui.status(_('removing %s\n') % m.rel(f))
    ui.progress(_('deleting'), None)

    with repo.wlock():
        if not after:
            for f in list:
                if f in added:
                    continue # we never unlink added files on remove
                repo.wvfs.unlinkpath(f, ignoremissing=True)
        repo[None].forget(list)

    if warn:
        for warning in warnings:
            ui.warn(warning)

    return ret

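# Illustrative sketch, not part of cmdutil.py: the force/after/default
# branches above decide which status buckets actually get forgotten.  The
# hypothetical helper below restates that selection over plain lists.
def _demo_removelist(modified, added, deleted, clean, force=False, after=False):
    """Return the files remove() would forget for the given status buckets."""
    if force:
        return modified + deleted + clean + added
    if after:
        return deleted                   # only files already gone from disk
    return deleted + clean               # default: never touch modified/added
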
def _updatecatformatter(fm, ctx, matcher, path, decode):
    """Hook for adding data to the formatter used by ``hg cat``.

    Extensions (e.g., lfs) can wrap this to inject keywords/data, but must call
    this method first."""
    data = ctx[path].data()
    if decode:
        data = ctx.repo().wwritedata(path, data)
    fm.startitem()
    fm.write('data', '%s', data)
    fm.data(abspath=path, path=matcher.rel(path))

def cat(ui, repo, ctx, matcher, basefm, fntemplate, prefix, **opts):
    err = 1
    opts = pycompat.byteskwargs(opts)

    def write(path):
        filename = None
        if fntemplate:
            filename = makefilename(repo, fntemplate, ctx.node(),
                                    pathname=os.path.join(prefix, path))
            # attempt to create the directory if it does not already exist
            try:
                os.makedirs(os.path.dirname(filename))
            except OSError:
                pass
        with formatter.maybereopen(basefm, filename, opts) as fm:
            _updatecatformatter(fm, ctx, matcher, path, opts.get('decode'))

    # Automation often uses hg cat on single files, so special case it
    # for performance to avoid the cost of parsing the manifest.
    if len(matcher.files()) == 1 and not matcher.anypats():
        file = matcher.files()[0]
        mfl = repo.manifestlog
        mfnode = ctx.manifestnode()
        try:
            if mfnode and mfl[mfnode].find(file)[0]:
                write(file)
                return 0
        except KeyError:
            pass

    for abs in ctx.walk(matcher):
        write(abs)
        err = 0

    for subpath in sorted(ctx.substate):
        sub = ctx.sub(subpath)
        try:
            submatch = matchmod.subdirmatcher(subpath, matcher)

            if not sub.cat(submatch, basefm, fntemplate,
                           os.path.join(prefix, sub._path),
                           **pycompat.strkwargs(opts)):
                err = 0
        except error.RepoLookupError:
            ui.status(_("skipping missing subrepository: %s\n")
                      % os.path.join(prefix, subpath))

    return err

def commit(ui, repo, commitfunc, pats, opts):
    '''commit the specified files or all outstanding changes'''
    date = opts.get('date')
    if date:
        opts['date'] = util.parsedate(date)
    message = logmessage(ui, opts)
    matcher = scmutil.match(repo[None], pats, opts)

    dsguard = None
    # extract addremove carefully -- this function can be called from a command
    # that doesn't support addremove
    if opts.get('addremove'):
        dsguard = dirstateguard.dirstateguard(repo, 'commit')
    with dsguard or util.nullcontextmanager():
        if dsguard:
            if scmutil.addremove(repo, matcher, "", opts) != 0:
                raise error.Abort(
                    _("failed to mark all new/missing files as added/removed"))

        return commitfunc(ui, repo, message, matcher, opts)

def samefile(f, ctx1, ctx2):
    if f in ctx1.manifest():
        a = ctx1.filectx(f)
        if f in ctx2.manifest():
            b = ctx2.filectx(f)
            return (not a.cmp(b)
                    and a.flags() == b.flags())
        else:
            return False
    else:
        return f not in ctx2.manifest()

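# Illustrative sketch, not part of cmdutil.py: samefile() treats a file as
# unchanged when it has identical content and flags in both contexts, or is
# absent from both; amend() uses this below to prune files that no longer
# differ from the amend base.  The helper restates the rule with plain dicts
# mapping filename -> (data, flags).
def _demo_samefile(f, manifest1, manifest2):
    """Return True if f is identical in, or absent from, both toy manifests."""
    if f in manifest1:
        return f in manifest2 and manifest1[f] == manifest2[f]
    return f not in manifest2
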
2277 def amend(ui, repo, old, extra, pats, opts):
2261 def amend(ui, repo, old, extra, pats, opts):
2278 # avoid cycle context -> subrepo -> cmdutil
2262 # avoid cycle context -> subrepo -> cmdutil
2279 from . import context
2263 from . import context
2280
2264
2281 # amend will reuse the existing user if not specified, but the obsolete
2265 # amend will reuse the existing user if not specified, but the obsolete
2282 # marker creation requires that the current user's name is specified.
2266 # marker creation requires that the current user's name is specified.
2283 if obsolete.isenabled(repo, obsolete.createmarkersopt):
2267 if obsolete.isenabled(repo, obsolete.createmarkersopt):
2284 ui.username() # raise exception if username not set
2268 ui.username() # raise exception if username not set
2285
2269
2286 ui.note(_('amending changeset %s\n') % old)
2270 ui.note(_('amending changeset %s\n') % old)
2287 base = old.p1()
2271 base = old.p1()
2288
2272
2289 with repo.wlock(), repo.lock(), repo.transaction('amend'):
2273 with repo.wlock(), repo.lock(), repo.transaction('amend'):
2290 # Participating changesets:
2274 # Participating changesets:
2291 #
2275 #
2292 # wctx o - workingctx that contains changes from working copy
2276 # wctx o - workingctx that contains changes from working copy
2293 # | to go into amending commit
2277 # | to go into amending commit
2294 # |
2278 # |
2295 # old o - changeset to amend
2279 # old o - changeset to amend
2296 # |
2280 # |
2297 # base o - first parent of the changeset to amend
2281 # base o - first parent of the changeset to amend
2298 wctx = repo[None]
2282 wctx = repo[None]
2299
2283
2300 # Copy to avoid mutating input
2284 # Copy to avoid mutating input
2301 extra = extra.copy()
2285 extra = extra.copy()
2302 # Update extra dict from amended commit (e.g. to preserve graft
2286 # Update extra dict from amended commit (e.g. to preserve graft
2303 # source)
2287 # source)
2304 extra.update(old.extra())
2288 extra.update(old.extra())
2305
2289
2306 # Also update it from the from the wctx
2290 # Also update it from the from the wctx
2307 extra.update(wctx.extra())
2291 extra.update(wctx.extra())
2308
2292
2309 user = opts.get('user') or old.user()
2293 user = opts.get('user') or old.user()
2310 date = opts.get('date') or old.date()
2294 date = opts.get('date') or old.date()
2311
2295
2312 # Parse the date to allow comparison between date and old.date()
2296 # Parse the date to allow comparison between date and old.date()
2313 date = util.parsedate(date)
2297 date = util.parsedate(date)
2314
2298
2315 if len(old.parents()) > 1:
2299 if len(old.parents()) > 1:
2316 # ctx.files() isn't reliable for merges, so fall back to the
2300 # ctx.files() isn't reliable for merges, so fall back to the
2317 # slower repo.status() method
2301 # slower repo.status() method
2318 files = set([fn for st in repo.status(base, old)[:3]
2302 files = set([fn for st in repo.status(base, old)[:3]
2319 for fn in st])
2303 for fn in st])
2320 else:
2304 else:
2321 files = set(old.files())
2305 files = set(old.files())
2322
2306
2323 # add/remove the files to the working copy if the "addremove" option
2307 # add/remove the files to the working copy if the "addremove" option
2324 # was specified.
2308 # was specified.
2325 matcher = scmutil.match(wctx, pats, opts)
2309 matcher = scmutil.match(wctx, pats, opts)
2326 if (opts.get('addremove')
2310 if (opts.get('addremove')
2327 and scmutil.addremove(repo, matcher, "", opts)):
2311 and scmutil.addremove(repo, matcher, "", opts)):
2328 raise error.Abort(
2312 raise error.Abort(
2329 _("failed to mark all new/missing files as added/removed"))
2313 _("failed to mark all new/missing files as added/removed"))
2330
2314
2331 # Check subrepos. This depends on in-place wctx._status update in
2315 # Check subrepos. This depends on in-place wctx._status update in
2332 # subrepo.precommit(). To minimize the risk of this hack, we do
2316 # subrepo.precommit(). To minimize the risk of this hack, we do
2333 # nothing if .hgsub does not exist.
2317 # nothing if .hgsub does not exist.
2334 if '.hgsub' in wctx or '.hgsub' in old:
2318 if '.hgsub' in wctx or '.hgsub' in old:
2335 from . import subrepo # avoid cycle: cmdutil -> subrepo -> cmdutil
2319 from . import subrepo # avoid cycle: cmdutil -> subrepo -> cmdutil
2336 subs, commitsubs, newsubstate = subrepo.precommit(
2320 subs, commitsubs, newsubstate = subrepo.precommit(
2337 ui, wctx, wctx._status, matcher)
2321 ui, wctx, wctx._status, matcher)
2338 # amend should abort if commitsubrepos is enabled
2322 # amend should abort if commitsubrepos is enabled
2339 assert not commitsubs
2323 assert not commitsubs
2340 if subs:
2324 if subs:
2341 subrepo.writestate(repo, newsubstate)
2325 subrepo.writestate(repo, newsubstate)
2342
2326
2343 filestoamend = set(f for f in wctx.files() if matcher(f))
2327 filestoamend = set(f for f in wctx.files() if matcher(f))
2344
2328
2345 changes = (len(filestoamend) > 0)
2329 changes = (len(filestoamend) > 0)
2346 if changes:
2330 if changes:
2347 # Recompute copies (avoid recording a -> b -> a)
2331 # Recompute copies (avoid recording a -> b -> a)
2348 copied = copies.pathcopies(base, wctx, matcher)
2332 copied = copies.pathcopies(base, wctx, matcher)
2349 if old.p2:
2333 if old.p2:
2350 copied.update(copies.pathcopies(old.p2(), wctx, matcher))
2334 copied.update(copies.pathcopies(old.p2(), wctx, matcher))
2351
2335
2352 # Prune files which were reverted by the updates: if old
2336 # Prune files which were reverted by the updates: if old
2353 # introduced file X and the file was renamed in the working
2337 # introduced file X and the file was renamed in the working
2354 # copy, then those two files are the same and
2338 # copy, then those two files are the same and
2355 # we can discard X from our list of files. Likewise if X
2339 # we can discard X from our list of files. Likewise if X
2356 # was removed, it's no longer relevant. If X is missing (aka
2340 # was removed, it's no longer relevant. If X is missing (aka
2357 # deleted), old X must be preserved.
2341 # deleted), old X must be preserved.
2358 files.update(filestoamend)
2342 files.update(filestoamend)
2359 files = [f for f in files if (not samefile(f, wctx, base)
2343 files = [f for f in files if (not samefile(f, wctx, base)
2360 or f in wctx.deleted())]
2344 or f in wctx.deleted())]
2361
2345
2362 def filectxfn(repo, ctx_, path):
2346 def filectxfn(repo, ctx_, path):
2363 try:
2347 try:
2364 # If the file being considered is not amongst the files
2348 # If the file being considered is not amongst the files
2365 # to be amended, we should return the file context from the
2349 # to be amended, we should return the file context from the
2366 # old changeset. This avoids issues when only some files in
2350 # old changeset. This avoids issues when only some files in
2367 # the working copy are being amended but there are also
2351 # the working copy are being amended but there are also
2368 # changes to other files from the old changeset.
2352 # changes to other files from the old changeset.
2369 if path not in filestoamend:
2353 if path not in filestoamend:
2370 return old.filectx(path)
2354 return old.filectx(path)
2371
2355
2372 # Return None for removed files.
2356 # Return None for removed files.
2373 if path in wctx.removed():
2357 if path in wctx.removed():
2374 return None
2358 return None
2375
2359
2376 fctx = wctx[path]
2360 fctx = wctx[path]
2377 flags = fctx.flags()
2361 flags = fctx.flags()
2378 mctx = context.memfilectx(repo, ctx_,
2362 mctx = context.memfilectx(repo, ctx_,
2379 fctx.path(), fctx.data(),
2363 fctx.path(), fctx.data(),
2380 islink='l' in flags,
2364 islink='l' in flags,
2381 isexec='x' in flags,
2365 isexec='x' in flags,
2382 copied=copied.get(path))
2366 copied=copied.get(path))
2383 return mctx
2367 return mctx
2384 except KeyError:
2368 except KeyError:
2385 return None
2369 return None
2386 else:
2370 else:
2387 ui.note(_('copying changeset %s to %s\n') % (old, base))
2371 ui.note(_('copying changeset %s to %s\n') % (old, base))
2388
2372
2389 # Use version of files as in the old cset
2373 # Use version of files as in the old cset
2390 def filectxfn(repo, ctx_, path):
2374 def filectxfn(repo, ctx_, path):
2391 try:
2375 try:
2392 return old.filectx(path)
2376 return old.filectx(path)
2393 except KeyError:
2377 except KeyError:
2394 return None
2378 return None
2395
2379
2396 # See if we got a message from -m or -l, if not, open the editor with
2380 # See if we got a message from -m or -l, if not, open the editor with
2397 # the message of the changeset to amend.
2381 # the message of the changeset to amend.
2398 message = logmessage(ui, opts)
2382 message = logmessage(ui, opts)
2399
2383
2400 editform = mergeeditform(old, 'commit.amend')
2384 editform = mergeeditform(old, 'commit.amend')
2401 editor = getcommiteditor(editform=editform,
2385 editor = getcommiteditor(editform=editform,
2402 **pycompat.strkwargs(opts))
2386 **pycompat.strkwargs(opts))
2403
2387
2404 if not message:
2388 if not message:
2405 editor = getcommiteditor(edit=True, editform=editform)
2389 editor = getcommiteditor(edit=True, editform=editform)
2406 message = old.description()
2390 message = old.description()
2407
2391
2408 pureextra = extra.copy()
2392 pureextra = extra.copy()
2409 extra['amend_source'] = old.hex()
2393 extra['amend_source'] = old.hex()
2410
2394
2411 new = context.memctx(repo,
2395 new = context.memctx(repo,
2412 parents=[base.node(), old.p2().node()],
2396 parents=[base.node(), old.p2().node()],
2413 text=message,
2397 text=message,
2414 files=files,
2398 files=files,
2415 filectxfn=filectxfn,
2399 filectxfn=filectxfn,
2416 user=user,
2400 user=user,
2417 date=date,
2401 date=date,
2418 extra=extra,
2402 extra=extra,
2419 editor=editor)
2403 editor=editor)
2420
2404
2421 newdesc = changelog.stripdesc(new.description())
2405 newdesc = changelog.stripdesc(new.description())
2422 if ((not changes)
2406 if ((not changes)
2423 and newdesc == old.description()
2407 and newdesc == old.description()
2424 and user == old.user()
2408 and user == old.user()
2425 and date == old.date()
2409 and date == old.date()
2426 and pureextra == old.extra()):
2410 and pureextra == old.extra()):
2427 # nothing changed. continuing here would create a new node
2411 # nothing changed. continuing here would create a new node
2428 # anyway because of the amend_source noise.
2412 # anyway because of the amend_source noise.
2429 #
2413 #
2430 # This not what we expect from amend.
2414 # This not what we expect from amend.
2431 return old.node()
2415 return old.node()
2432
2416
2433 if opts.get('secret'):
2417 if opts.get('secret'):
2434 commitphase = 'secret'
2418 commitphase = 'secret'
2435 else:
2419 else:
2436 commitphase = old.phase()
2420 commitphase = old.phase()
2437 overrides = {('phases', 'new-commit'): commitphase}
2421 overrides = {('phases', 'new-commit'): commitphase}
2438 with ui.configoverride(overrides, 'amend'):
2422 with ui.configoverride(overrides, 'amend'):
2439 newid = repo.commitctx(new)
2423 newid = repo.commitctx(new)
2440
2424
2441 # Reroute the working copy parent to the new changeset
2425 # Reroute the working copy parent to the new changeset
2442 repo.setparents(newid, nullid)
2426 repo.setparents(newid, nullid)
2443 mapping = {old.node(): (newid,)}
2427 mapping = {old.node(): (newid,)}
2444 obsmetadata = None
2428 obsmetadata = None
2445 if opts.get('note'):
2429 if opts.get('note'):
2446 obsmetadata = {'note': opts['note']}
2430 obsmetadata = {'note': opts['note']}
2447 scmutil.cleanupnodes(repo, mapping, 'amend', metadata=obsmetadata)
2431 scmutil.cleanupnodes(repo, mapping, 'amend', metadata=obsmetadata)
2448
2432
2449 # Fixing the dirstate because localrepo.commitctx does not update
2433 # Fixing the dirstate because localrepo.commitctx does not update
2450 # it. This is rather convenient because we did not need to update
2434 # it. This is rather convenient because we did not need to update
2451 # the dirstate for all the files in the new commit which commitctx
2435 # the dirstate for all the files in the new commit which commitctx
2452 # could have done if it updated the dirstate. Now, we can
2436 # could have done if it updated the dirstate. Now, we can
2453 # selectively update the dirstate only for the amended files.
2437 # selectively update the dirstate only for the amended files.
2454 dirstate = repo.dirstate
2438 dirstate = repo.dirstate
2455
2439
2456 # Update the state of the files which were added
2440 # Update the state of the files which were added
2457 # and modified in the amend to "normal" in the dirstate.
2441 # and modified in the amend to "normal" in the dirstate.
2458 normalfiles = set(wctx.modified() + wctx.added()) & filestoamend
2442 normalfiles = set(wctx.modified() + wctx.added()) & filestoamend
2459 for f in normalfiles:
2443 for f in normalfiles:
2460 dirstate.normal(f)
2444 dirstate.normal(f)
2461
2445
2462 # Update the state of files which were removed in the amend
2446 # Update the state of files which were removed in the amend
2463 # to "removed" in the dirstate.
2447 # to "removed" in the dirstate.
2464 removedfiles = set(wctx.removed()) & filestoamend
2448 removedfiles = set(wctx.removed()) & filestoamend
2465 for f in removedfiles:
2449 for f in removedfiles:
2466 dirstate.drop(f)
2450 dirstate.drop(f)
2467
2451
2468 return newid
2452 return newid
2469
2453
2470 def commiteditor(repo, ctx, subs, editform=''):
2454 def commiteditor(repo, ctx, subs, editform=''):
2471 if ctx.description():
2455 if ctx.description():
2472 return ctx.description()
2456 return ctx.description()
2473 return commitforceeditor(repo, ctx, subs, editform=editform,
2457 return commitforceeditor(repo, ctx, subs, editform=editform,
2474 unchangedmessagedetection=True)
2458 unchangedmessagedetection=True)
2475
2459
2476 def commitforceeditor(repo, ctx, subs, finishdesc=None, extramsg=None,
2460 def commitforceeditor(repo, ctx, subs, finishdesc=None, extramsg=None,
2477 editform='', unchangedmessagedetection=False):
2461 editform='', unchangedmessagedetection=False):
2478 if not extramsg:
2462 if not extramsg:
2479 extramsg = _("Leave message empty to abort commit.")
2463 extramsg = _("Leave message empty to abort commit.")
2480
2464
2481 forms = [e for e in editform.split('.') if e]
2465 forms = [e for e in editform.split('.') if e]
2482 forms.insert(0, 'changeset')
2466 forms.insert(0, 'changeset')
2483 templatetext = None
2467 templatetext = None
2484 while forms:
2468 while forms:
2485 ref = '.'.join(forms)
2469 ref = '.'.join(forms)
2486 if repo.ui.config('committemplate', ref):
2470 if repo.ui.config('committemplate', ref):
2487 templatetext = committext = buildcommittemplate(
2471 templatetext = committext = buildcommittemplate(
2488 repo, ctx, subs, extramsg, ref)
2472 repo, ctx, subs, extramsg, ref)
2489 break
2473 break
2490 forms.pop()
2474 forms.pop()
2491 else:
2475 else:
2492 committext = buildcommittext(repo, ctx, subs, extramsg)
2476 committext = buildcommittext(repo, ctx, subs, extramsg)
2493
2477
2494 # run editor in the repository root
2478 # run editor in the repository root
2495 olddir = pycompat.getcwd()
2479 olddir = pycompat.getcwd()
2496 os.chdir(repo.root)
2480 os.chdir(repo.root)
2497
2481
2498 # make in-memory changes visible to external process
2482 # make in-memory changes visible to external process
2499 tr = repo.currenttransaction()
2483 tr = repo.currenttransaction()
2500 repo.dirstate.write(tr)
2484 repo.dirstate.write(tr)
2501 pending = tr and tr.writepending() and repo.root
2485 pending = tr and tr.writepending() and repo.root
2502
2486
2503 editortext = repo.ui.edit(committext, ctx.user(), ctx.extra(),
2487 editortext = repo.ui.edit(committext, ctx.user(), ctx.extra(),
2504 editform=editform, pending=pending,
2488 editform=editform, pending=pending,
2505 repopath=repo.path, action='commit')
2489 repopath=repo.path, action='commit')
2506 text = editortext
2490 text = editortext
2507
2491
2508 # strip away anything below this special string (used for editors that want
2492 # strip away anything below this special string (used for editors that want
2509 # to display the diff)
2493 # to display the diff)
2510 stripbelow = re.search(_linebelow, text, flags=re.MULTILINE)
2494 stripbelow = re.search(_linebelow, text, flags=re.MULTILINE)
2511 if stripbelow:
2495 if stripbelow:
2512 text = text[:stripbelow.start()]
2496 text = text[:stripbelow.start()]
2513
2497
2514 text = re.sub("(?m)^HG:.*(\n|$)", "", text)
2498 text = re.sub("(?m)^HG:.*(\n|$)", "", text)
2515 os.chdir(olddir)
2499 os.chdir(olddir)
2516
2500
2517 if finishdesc:
2501 if finishdesc:
2518 text = finishdesc(text)
2502 text = finishdesc(text)
2519 if not text.strip():
2503 if not text.strip():
2520 raise error.Abort(_("empty commit message"))
2504 raise error.Abort(_("empty commit message"))
2521 if unchangedmessagedetection and editortext == templatetext:
2505 if unchangedmessagedetection and editortext == templatetext:
2522 raise error.Abort(_("commit message unchanged"))
2506 raise error.Abort(_("commit message unchanged"))
2523
2507
2524 return text
2508 return text
2525
2509
2526 def buildcommittemplate(repo, ctx, subs, extramsg, ref):
2510 def buildcommittemplate(repo, ctx, subs, extramsg, ref):
2527 ui = repo.ui
2511 ui = repo.ui
2528 spec = formatter.templatespec(ref, None, None)
2512 spec = formatter.templatespec(ref, None, None)
2529 t = changeset_templater(ui, repo, spec, None, {}, False)
2513 t = logcmdutil.changesettemplater(ui, repo, spec, None, {}, False)
2530 t.t.cache.update((k, templater.unquotestring(v))
2514 t.t.cache.update((k, templater.unquotestring(v))
2531 for k, v in repo.ui.configitems('committemplate'))
2515 for k, v in repo.ui.configitems('committemplate'))
2532
2516
2533 if not extramsg:
2517 if not extramsg:
2534 extramsg = '' # ensure that extramsg is string
2518 extramsg = '' # ensure that extramsg is string
2535
2519
2536 ui.pushbuffer()
2520 ui.pushbuffer()
2537 t.show(ctx, extramsg=extramsg)
2521 t.show(ctx, extramsg=extramsg)
2538 return ui.popbuffer()
2522 return ui.popbuffer()
2539
2523
2540 def hgprefix(msg):
2524 def hgprefix(msg):
2541 return "\n".join(["HG: %s" % a for a in msg.split("\n") if a])
2525 return "\n".join(["HG: %s" % a for a in msg.split("\n") if a])
2542
2526
2543 def buildcommittext(repo, ctx, subs, extramsg):
2527 def buildcommittext(repo, ctx, subs, extramsg):
2544 edittext = []
2528 edittext = []
2545 modified, added, removed = ctx.modified(), ctx.added(), ctx.removed()
2529 modified, added, removed = ctx.modified(), ctx.added(), ctx.removed()
2546 if ctx.description():
2530 if ctx.description():
2547 edittext.append(ctx.description())
2531 edittext.append(ctx.description())
2548 edittext.append("")
2532 edittext.append("")
2549 edittext.append("") # Empty line between message and comments.
2533 edittext.append("") # Empty line between message and comments.
2550 edittext.append(hgprefix(_("Enter commit message."
2534 edittext.append(hgprefix(_("Enter commit message."
2551 " Lines beginning with 'HG:' are removed.")))
2535 " Lines beginning with 'HG:' are removed.")))
2552 edittext.append(hgprefix(extramsg))
2536 edittext.append(hgprefix(extramsg))
2553 edittext.append("HG: --")
2537 edittext.append("HG: --")
2554 edittext.append(hgprefix(_("user: %s") % ctx.user()))
2538 edittext.append(hgprefix(_("user: %s") % ctx.user()))
2555 if ctx.p2():
2539 if ctx.p2():
2556 edittext.append(hgprefix(_("branch merge")))
2540 edittext.append(hgprefix(_("branch merge")))
2557 if ctx.branch():
2541 if ctx.branch():
2558 edittext.append(hgprefix(_("branch '%s'") % ctx.branch()))
2542 edittext.append(hgprefix(_("branch '%s'") % ctx.branch()))
2559 if bookmarks.isactivewdirparent(repo):
2543 if bookmarks.isactivewdirparent(repo):
2560 edittext.append(hgprefix(_("bookmark '%s'") % repo._activebookmark))
2544 edittext.append(hgprefix(_("bookmark '%s'") % repo._activebookmark))
2561 edittext.extend([hgprefix(_("subrepo %s") % s) for s in subs])
2545 edittext.extend([hgprefix(_("subrepo %s") % s) for s in subs])
2562 edittext.extend([hgprefix(_("added %s") % f) for f in added])
2546 edittext.extend([hgprefix(_("added %s") % f) for f in added])
2563 edittext.extend([hgprefix(_("changed %s") % f) for f in modified])
2547 edittext.extend([hgprefix(_("changed %s") % f) for f in modified])
2564 edittext.extend([hgprefix(_("removed %s") % f) for f in removed])
2548 edittext.extend([hgprefix(_("removed %s") % f) for f in removed])
2565 if not added and not modified and not removed:
2549 if not added and not modified and not removed:
2566 edittext.append(hgprefix(_("no files changed")))
2550 edittext.append(hgprefix(_("no files changed")))
2567 edittext.append("")
2551 edittext.append("")
2568
2552
2569 return "\n".join(edittext)
2553 return "\n".join(edittext)
2570
2554
2571 def commitstatus(repo, node, branch, bheads=None, opts=None):
2555 def commitstatus(repo, node, branch, bheads=None, opts=None):
2572 if opts is None:
2556 if opts is None:
2573 opts = {}
2557 opts = {}
2574 ctx = repo[node]
2558 ctx = repo[node]
2575 parents = ctx.parents()
2559 parents = ctx.parents()
2576
2560
2577 if (not opts.get('amend') and bheads and node not in bheads and not
2561 if (not opts.get('amend') and bheads and node not in bheads and not
2578 [x for x in parents if x.node() in bheads and x.branch() == branch]):
2562 [x for x in parents if x.node() in bheads and x.branch() == branch]):
2579 repo.ui.status(_('created new head\n'))
2563 repo.ui.status(_('created new head\n'))
2580 # The message is not printed for initial roots. For the other
2564 # The message is not printed for initial roots. For the other
2581 # changesets, it is printed in the following situations:
2565 # changesets, it is printed in the following situations:
2582 #
2566 #
2583 # Par column: for the 2 parents with ...
2567 # Par column: for the 2 parents with ...
2584 # N: null or no parent
2568 # N: null or no parent
2585 # B: parent is on another named branch
2569 # B: parent is on another named branch
2586 # C: parent is a regular non head changeset
2570 # C: parent is a regular non head changeset
2587 # H: parent was a branch head of the current branch
2571 # H: parent was a branch head of the current branch
2588 # Msg column: whether we print "created new head" message
2572 # Msg column: whether we print "created new head" message
2589 # In the following, it is assumed that there already exists some
2573 # In the following, it is assumed that there already exists some
2590 # initial branch heads of the current branch, otherwise nothing is
2574 # initial branch heads of the current branch, otherwise nothing is
2591 # printed anyway.
2575 # printed anyway.
2592 #
2576 #
2593 # Par Msg Comment
2577 # Par Msg Comment
2594 # N N y additional topo root
2578 # N N y additional topo root
2595 #
2579 #
2596 # B N y additional branch root
2580 # B N y additional branch root
2597 # C N y additional topo head
2581 # C N y additional topo head
2598 # H N n usual case
2582 # H N n usual case
2599 #
2583 #
2600 # B B y weird additional branch root
2584 # B B y weird additional branch root
2601 # C B y branch merge
2585 # C B y branch merge
2602 # H B n merge with named branch
2586 # H B n merge with named branch
2603 #
2587 #
2604 # C C y additional head from merge
2588 # C C y additional head from merge
2605 # C H n merge with a head
2589 # C H n merge with a head
2606 #
2590 #
2607 # H H n head merge: head count decreases
2591 # H H n head merge: head count decreases
2608
2592
2609 if not opts.get('close_branch'):
2593 if not opts.get('close_branch'):
2610 for r in parents:
2594 for r in parents:
2611 if r.closesbranch() and r.branch() == branch:
2595 if r.closesbranch() and r.branch() == branch:
2612 repo.ui.status(_('reopening closed branch head %d\n') % r)
2596 repo.ui.status(_('reopening closed branch head %d\n') % r)
2613
2597
2614 if repo.ui.debugflag:
2598 if repo.ui.debugflag:
2615 repo.ui.write(_('committed changeset %d:%s\n') % (int(ctx), ctx.hex()))
2599 repo.ui.write(_('committed changeset %d:%s\n') % (int(ctx), ctx.hex()))
2616 elif repo.ui.verbose:
2600 elif repo.ui.verbose:
2617 repo.ui.write(_('committed changeset %d:%s\n') % (int(ctx), ctx))
2601 repo.ui.write(_('committed changeset %d:%s\n') % (int(ctx), ctx))
2618
2602
2619 def postcommitstatus(repo, pats, opts):
2603 def postcommitstatus(repo, pats, opts):
2620 return repo.status(match=scmutil.match(repo[None], pats, opts))
2604 return repo.status(match=scmutil.match(repo[None], pats, opts))
2621
2605
2622 def revert(ui, repo, ctx, parents, *pats, **opts):
2606 def revert(ui, repo, ctx, parents, *pats, **opts):
2623 opts = pycompat.byteskwargs(opts)
2607 opts = pycompat.byteskwargs(opts)
2624 parent, p2 = parents
2608 parent, p2 = parents
2625 node = ctx.node()
2609 node = ctx.node()
2626
2610
2627 mf = ctx.manifest()
2611 mf = ctx.manifest()
2628 if node == p2:
2612 if node == p2:
2629 parent = p2
2613 parent = p2
2630
2614
2631 # need all matching names in dirstate and manifest of target rev,
2615 # need all matching names in dirstate and manifest of target rev,
2632 # so have to walk both. do not print errors if files exist in one
2616 # so have to walk both. do not print errors if files exist in one
2633 # but not the other. in both cases, filesets should be evaluated against
2617 # but not the other. in both cases, filesets should be evaluated against
2634 # workingctx to get consistent result (issue4497). this means 'set:**'
2618 # workingctx to get consistent result (issue4497). this means 'set:**'
2635 # cannot be used to select missing files from target rev.
2619 # cannot be used to select missing files from target rev.
2636
2620
2637 # `names` is a mapping for all elements in working copy and target revision
2621 # `names` is a mapping for all elements in working copy and target revision
2638 # The mapping is in the form:
2622 # The mapping is in the form:
2639 # <abs path in repo> -> (<path from CWD>, <exactly specified by matcher?>)
2623 # <abs path in repo> -> (<path from CWD>, <exactly specified by matcher?>)
2640 names = {}
2624 names = {}
2641
2625
2642 with repo.wlock():
2626 with repo.wlock():
2643 ## filling of the `names` mapping
2627 ## filling of the `names` mapping
2644 # walk dirstate to fill `names`
2628 # walk dirstate to fill `names`
2645
2629
2646 interactive = opts.get('interactive', False)
2630 interactive = opts.get('interactive', False)
2647 wctx = repo[None]
2631 wctx = repo[None]
2648 m = scmutil.match(wctx, pats, opts)
2632 m = scmutil.match(wctx, pats, opts)
2649
2633
2650 # we'll need this later
2634 # we'll need this later
2651 targetsubs = sorted(s for s in wctx.substate if m(s))
2635 targetsubs = sorted(s for s in wctx.substate if m(s))
2652
2636
2653 if not m.always():
2637 if not m.always():
2654 matcher = matchmod.badmatch(m, lambda x, y: False)
2638 matcher = matchmod.badmatch(m, lambda x, y: False)
2655 for abs in wctx.walk(matcher):
2639 for abs in wctx.walk(matcher):
2656 names[abs] = m.rel(abs), m.exact(abs)
2640 names[abs] = m.rel(abs), m.exact(abs)
2657
2641
2658 # walk target manifest to fill `names`
2642 # walk target manifest to fill `names`
2659
2643
2660 def badfn(path, msg):
2644 def badfn(path, msg):
2661 if path in names:
2645 if path in names:
2662 return
2646 return
2663 if path in ctx.substate:
2647 if path in ctx.substate:
2664 return
2648 return
2665 path_ = path + '/'
2649 path_ = path + '/'
2666 for f in names:
2650 for f in names:
2667 if f.startswith(path_):
2651 if f.startswith(path_):
2668 return
2652 return
2669 ui.warn("%s: %s\n" % (m.rel(path), msg))
2653 ui.warn("%s: %s\n" % (m.rel(path), msg))
2670
2654
2671 for abs in ctx.walk(matchmod.badmatch(m, badfn)):
2655 for abs in ctx.walk(matchmod.badmatch(m, badfn)):
2672 if abs not in names:
2656 if abs not in names:
2673 names[abs] = m.rel(abs), m.exact(abs)
2657 names[abs] = m.rel(abs), m.exact(abs)
2674
2658
2675 # Find status of all files in `names`.
2659 # Find status of all files in `names`.
2676 m = scmutil.matchfiles(repo, names)
2660 m = scmutil.matchfiles(repo, names)
2677
2661
2678 changes = repo.status(node1=node, match=m,
2662 changes = repo.status(node1=node, match=m,
2679 unknown=True, ignored=True, clean=True)
2663 unknown=True, ignored=True, clean=True)
2680 else:
2664 else:
2681 changes = repo.status(node1=node, match=m)
2665 changes = repo.status(node1=node, match=m)
2682 for kind in changes:
2666 for kind in changes:
2683 for abs in kind:
2667 for abs in kind:
2684 names[abs] = m.rel(abs), m.exact(abs)
2668 names[abs] = m.rel(abs), m.exact(abs)
2685
2669
2686 m = scmutil.matchfiles(repo, names)
2670 m = scmutil.matchfiles(repo, names)
2687
2671
2688 modified = set(changes.modified)
2672 modified = set(changes.modified)
2689 added = set(changes.added)
2673 added = set(changes.added)
2690 removed = set(changes.removed)
2674 removed = set(changes.removed)
2691 _deleted = set(changes.deleted)
2675 _deleted = set(changes.deleted)
2692 unknown = set(changes.unknown)
2676 unknown = set(changes.unknown)
2693 unknown.update(changes.ignored)
2677 unknown.update(changes.ignored)
2694 clean = set(changes.clean)
2678 clean = set(changes.clean)
2695 modadded = set()
2679 modadded = set()
2696
2680
2697 # We need to account for the state of the file in the dirstate,
2681 # We need to account for the state of the file in the dirstate,
2698 # even when we revert against something other than the parent. This will
2682 # even when we revert against something other than the parent. This will
2699 # slightly alter the behavior of revert (doing back up or not, delete
2683 # slightly alter the behavior of revert (doing back up or not, delete
2700 # or just forget etc).
2684 # or just forget etc).
2701 if parent == node:
2685 if parent == node:
2702 dsmodified = modified
2686 dsmodified = modified
2703 dsadded = added
2687 dsadded = added
2704 dsremoved = removed
2688 dsremoved = removed
2705 # store all local modifications, useful later for rename detection
2689 # store all local modifications, useful later for rename detection
2706 localchanges = dsmodified | dsadded
2690 localchanges = dsmodified | dsadded
2707 modified, added, removed = set(), set(), set()
2691 modified, added, removed = set(), set(), set()
2708 else:
2692 else:
2709 changes = repo.status(node1=parent, match=m)
2693 changes = repo.status(node1=parent, match=m)
2710 dsmodified = set(changes.modified)
2694 dsmodified = set(changes.modified)
2711 dsadded = set(changes.added)
2695 dsadded = set(changes.added)
2712 dsremoved = set(changes.removed)
2696 dsremoved = set(changes.removed)
2713 # store all local modifications, useful later for rename detection
2697 # store all local modifications, useful later for rename detection
2714 localchanges = dsmodified | dsadded
2698 localchanges = dsmodified | dsadded
2715
2699
2716 # only take into account for removes between wc and target
2700 # only take into account for removes between wc and target
2717 clean |= dsremoved - removed
2701 clean |= dsremoved - removed
2718 dsremoved &= removed
2702 dsremoved &= removed
2719 # distinguish between dirstate remove and other
2703 # distinguish between dirstate remove and other
2720 removed -= dsremoved
2704 removed -= dsremoved
2721
2705
2722 modadded = added & dsmodified
2706 modadded = added & dsmodified
2723 added -= modadded
2707 added -= modadded
2724
2708
2725 # tell newly modified apart.
2709 # tell newly modified apart.
2726 dsmodified &= modified
2710 dsmodified &= modified
2727 dsmodified |= modified & dsadded # dirstate added may need backup
2711 dsmodified |= modified & dsadded # dirstate added may need backup
2728 modified -= dsmodified
2712 modified -= dsmodified
2729
2713
2730 # We need to wait for some post-processing to update this set
2714 # We need to wait for some post-processing to update this set
2731 # before making the distinction. The dirstate will be used for
2715 # before making the distinction. The dirstate will be used for
2732 # that purpose.
2716 # that purpose.
2733 dsadded = added
2717 dsadded = added
2734
2718
2735 # in case of merge, files that are actually added can be reported as
2719 # in case of merge, files that are actually added can be reported as
2736 # modified, we need to post process the result
2720 # modified, we need to post process the result
2737 if p2 != nullid:
2721 if p2 != nullid:
2738 mergeadd = set(dsmodified)
2722 mergeadd = set(dsmodified)
2739 for path in dsmodified:
2723 for path in dsmodified:
2740 if path in mf:
2724 if path in mf:
2741 mergeadd.remove(path)
2725 mergeadd.remove(path)
2742 dsadded |= mergeadd
2726 dsadded |= mergeadd
2743 dsmodified -= mergeadd
2727 dsmodified -= mergeadd
2744
2728
2745 # if f is a rename, update `names` to also revert the source
2729 # if f is a rename, update `names` to also revert the source
2746 cwd = repo.getcwd()
2730 cwd = repo.getcwd()
2747 for f in localchanges:
2731 for f in localchanges:
2748 src = repo.dirstate.copied(f)
2732 src = repo.dirstate.copied(f)
2749 # XXX should we check for rename down to target node?
2733 # XXX should we check for rename down to target node?
2750 if src and src not in names and repo.dirstate[src] == 'r':
2734 if src and src not in names and repo.dirstate[src] == 'r':
2751 dsremoved.add(src)
2735 dsremoved.add(src)
2752 names[src] = (repo.pathto(src, cwd), True)
2736 names[src] = (repo.pathto(src, cwd), True)
2753
2737
2754 # determine the exact nature of the deleted changesets
2738 # determine the exact nature of the deleted changesets
2755 deladded = set(_deleted)
2739 deladded = set(_deleted)
2756 for path in _deleted:
2740 for path in _deleted:
2757 if path in mf:
2741 if path in mf:
2758 deladded.remove(path)
2742 deladded.remove(path)
2759 deleted = _deleted - deladded
2743 deleted = _deleted - deladded
2760
2744
2761 # distinguish between file to forget and the other
2745 # distinguish between file to forget and the other
2762 added = set()
2746 added = set()
2763 for abs in dsadded:
2747 for abs in dsadded:
2764 if repo.dirstate[abs] != 'a':
2748 if repo.dirstate[abs] != 'a':
2765 added.add(abs)
2749 added.add(abs)
2766 dsadded -= added
2750 dsadded -= added
2767
2751
2768 for abs in deladded:
2752 for abs in deladded:
2769 if repo.dirstate[abs] == 'a':
2753 if repo.dirstate[abs] == 'a':
2770 dsadded.add(abs)
2754 dsadded.add(abs)
2771 deladded -= dsadded
2755 deladded -= dsadded
2772
2756
2773 # For files marked as removed, we check if an unknown file is present at
2757 # For files marked as removed, we check if an unknown file is present at
2774 # the same path. If such a file exists it may need to be backed up.
2758 # the same path. If such a file exists it may need to be backed up.
2775 # Making the distinction at this stage helps have simpler backup
2759 # Making the distinction at this stage helps have simpler backup
2776 # logic.
2760 # logic.
2777 removunk = set()
2761 removunk = set()
2778 for abs in removed:
2762 for abs in removed:
2779 target = repo.wjoin(abs)
2763 target = repo.wjoin(abs)
2780 if os.path.lexists(target):
2764 if os.path.lexists(target):
2781 removunk.add(abs)
2765 removunk.add(abs)
2782 removed -= removunk
2766 removed -= removunk
2783
2767
2784 dsremovunk = set()
2768 dsremovunk = set()
2785 for abs in dsremoved:
2769 for abs in dsremoved:
2786 target = repo.wjoin(abs)
2770 target = repo.wjoin(abs)
2787 if os.path.lexists(target):
2771 if os.path.lexists(target):
2788 dsremovunk.add(abs)
2772 dsremovunk.add(abs)
2789 dsremoved -= dsremovunk
2773 dsremoved -= dsremovunk
2790
2774
2791 # action to be actually performed by revert
2775 # action to be actually performed by revert
2792 # (<list of files>, <message>) tuple
2776 # (<list of files>, <message>) tuple
2793 actions = {'revert': ([], _('reverting %s\n')),
2777 actions = {'revert': ([], _('reverting %s\n')),
2794 'add': ([], _('adding %s\n')),
2778 'add': ([], _('adding %s\n')),
2795 'remove': ([], _('removing %s\n')),
2779 'remove': ([], _('removing %s\n')),
2796 'drop': ([], _('removing %s\n')),
2780 'drop': ([], _('removing %s\n')),
2797 'forget': ([], _('forgetting %s\n')),
2781 'forget': ([], _('forgetting %s\n')),
2798 'undelete': ([], _('undeleting %s\n')),
2782 'undelete': ([], _('undeleting %s\n')),
2799 'noop': (None, _('no changes needed to %s\n')),
2783 'noop': (None, _('no changes needed to %s\n')),
2800 'unknown': (None, _('file not managed: %s\n')),
2784 'unknown': (None, _('file not managed: %s\n')),
2801 }
2785 }
2802
2786
2803 # "constant" that convey the backup strategy.
2787 # "constant" that convey the backup strategy.
2804 # All set to `discard` if `no-backup` is set do avoid checking
2788 # All set to `discard` if `no-backup` is set do avoid checking
2805 # no_backup lower in the code.
2789 # no_backup lower in the code.
2806 # These values are ordered for comparison purposes
2790 # These values are ordered for comparison purposes
2807 backupinteractive = 3 # do backup if interactively modified
2791 backupinteractive = 3 # do backup if interactively modified
2808 backup = 2 # unconditionally do backup
2792 backup = 2 # unconditionally do backup
2809 check = 1 # check if the existing file differs from target
2793 check = 1 # check if the existing file differs from target
2810 discard = 0 # never do backup
2794 discard = 0 # never do backup
2811 if opts.get('no_backup'):
2795 if opts.get('no_backup'):
2812 backupinteractive = backup = check = discard
2796 backupinteractive = backup = check = discard
2813 if interactive:
2797 if interactive:
2814 dsmodifiedbackup = backupinteractive
2798 dsmodifiedbackup = backupinteractive
2815 else:
2799 else:
2816 dsmodifiedbackup = backup
2800 dsmodifiedbackup = backup
2817 tobackup = set()
2801 tobackup = set()
2818
2802
2819 backupanddel = actions['remove']
2803 backupanddel = actions['remove']
2820 if not opts.get('no_backup'):
2804 if not opts.get('no_backup'):
2821 backupanddel = actions['drop']
2805 backupanddel = actions['drop']
2822
2806
2823 disptable = (
2807 disptable = (
2824 # dispatch table:
2808 # dispatch table:
2825 # file state
2809 # file state
2826 # action
2810 # action
2827 # make backup
2811 # make backup
2828
2812
2829 ## Sets whose results will change files on disk
2813 ## Sets whose results will change files on disk
2830 # Modified compared to target, no local change
2814 # Modified compared to target, no local change
2831 (modified, actions['revert'], discard),
2815 (modified, actions['revert'], discard),
2832 # Modified compared to target, but local file is deleted
2816 # Modified compared to target, but local file is deleted
2833 (deleted, actions['revert'], discard),
2817 (deleted, actions['revert'], discard),
2834 # Modified compared to target, local change
2818 # Modified compared to target, local change
2835 (dsmodified, actions['revert'], dsmodifiedbackup),
2819 (dsmodified, actions['revert'], dsmodifiedbackup),
2836 # Added since target
2820 # Added since target
2837 (added, actions['remove'], discard),
2821 (added, actions['remove'], discard),
2838 # Added in working directory
2822 # Added in working directory
2839 (dsadded, actions['forget'], discard),
2823 (dsadded, actions['forget'], discard),
2840 # Added since target, have local modification
2824 # Added since target, have local modification
2841 (modadded, backupanddel, backup),
2825 (modadded, backupanddel, backup),
2842 # Added since target but file is missing in working directory
2826 # Added since target but file is missing in working directory
2843 (deladded, actions['drop'], discard),
2827 (deladded, actions['drop'], discard),
2844 # Removed since target, before working copy parent
2828 # Removed since target, before working copy parent
2845 (removed, actions['add'], discard),
2829 (removed, actions['add'], discard),
2846 # Same as `removed` but an unknown file exists at the same path
2830 # Same as `removed` but an unknown file exists at the same path
2847 (removunk, actions['add'], check),
2831 (removunk, actions['add'], check),
2848 # Removed since target, marked as such in working copy parent
2832 # Removed since target, marked as such in working copy parent
2849 (dsremoved, actions['undelete'], discard),
2833 (dsremoved, actions['undelete'], discard),
2850 # Same as `dsremoved` but an unknown file exists at the same path
2834 # Same as `dsremoved` but an unknown file exists at the same path
2851 (dsremovunk, actions['undelete'], check),
2835 (dsremovunk, actions['undelete'], check),
2852 ## the following sets do not result in any file changes
2836 ## the following sets do not result in any file changes
2853 # File with no modification
2837 # File with no modification
2854 (clean, actions['noop'], discard),
2838 (clean, actions['noop'], discard),
2855 # Existing file, not tracked anywhere
2839 # Existing file, not tracked anywhere
2856 (unknown, actions['unknown'], discard),
2840 (unknown, actions['unknown'], discard),
2857 )
2841 )
2858
2842
2859 for abs, (rel, exact) in sorted(names.items()):
2843 for abs, (rel, exact) in sorted(names.items()):
2860 # target file to be touched on disk (relative to cwd)
2844 # target file to be touched on disk (relative to cwd)
2861 target = repo.wjoin(abs)
2845 target = repo.wjoin(abs)
2862 # search the entry in the dispatch table.
2846 # search the entry in the dispatch table.
2863 # if the file is in any of these sets, it was touched in the working
2847 # if the file is in any of these sets, it was touched in the working
2864 # directory parent and we are sure it needs to be reverted.
2848 # directory parent and we are sure it needs to be reverted.
2865 for table, (xlist, msg), dobackup in disptable:
2849 for table, (xlist, msg), dobackup in disptable:
2866 if abs not in table:
2850 if abs not in table:
2867 continue
2851 continue
2868 if xlist is not None:
2852 if xlist is not None:
2869 xlist.append(abs)
2853 xlist.append(abs)
2870 if dobackup:
2854 if dobackup:
2871 # If in interactive mode, don't automatically create
2855 # If in interactive mode, don't automatically create
2872 # .orig files (issue4793)
2856 # .orig files (issue4793)
2873 if dobackup == backupinteractive:
2857 if dobackup == backupinteractive:
2874 tobackup.add(abs)
2858 tobackup.add(abs)
2875 elif (backup <= dobackup or wctx[abs].cmp(ctx[abs])):
2859 elif (backup <= dobackup or wctx[abs].cmp(ctx[abs])):
2876 bakname = scmutil.origpath(ui, repo, rel)
2860 bakname = scmutil.origpath(ui, repo, rel)
2877 ui.note(_('saving current version of %s as %s\n') %
2861 ui.note(_('saving current version of %s as %s\n') %
2878 (rel, bakname))
2862 (rel, bakname))
2879 if not opts.get('dry_run'):
2863 if not opts.get('dry_run'):
2880 if interactive:
2864 if interactive:
2881 util.copyfile(target, bakname)
2865 util.copyfile(target, bakname)
2882 else:
2866 else:
2883 util.rename(target, bakname)
2867 util.rename(target, bakname)
2884 if ui.verbose or not exact:
2868 if ui.verbose or not exact:
2885 if not isinstance(msg, bytes):
2869 if not isinstance(msg, bytes):
2886 msg = msg(abs)
2870 msg = msg(abs)
2887 ui.status(msg % rel)
2871 ui.status(msg % rel)
2888 elif exact:
2872 elif exact:
2889 ui.warn(msg % rel)
2873 ui.warn(msg % rel)
2890 break
2874 break
2891
2875
2892 if not opts.get('dry_run'):
2876 if not opts.get('dry_run'):
2893 needdata = ('revert', 'add', 'undelete')
2877 needdata = ('revert', 'add', 'undelete')
2894 _revertprefetch(repo, ctx, *[actions[name][0] for name in needdata])
2878 _revertprefetch(repo, ctx, *[actions[name][0] for name in needdata])
2895 _performrevert(repo, parents, ctx, actions, interactive, tobackup)
2879 _performrevert(repo, parents, ctx, actions, interactive, tobackup)
2896
2880
2897 if targetsubs:
2881 if targetsubs:
2898 # Revert the subrepos on the revert list
2882 # Revert the subrepos on the revert list
2899 for sub in targetsubs:
2883 for sub in targetsubs:
2900 try:
2884 try:
2901 wctx.sub(sub).revert(ctx.substate[sub], *pats,
2885 wctx.sub(sub).revert(ctx.substate[sub], *pats,
2902 **pycompat.strkwargs(opts))
2886 **pycompat.strkwargs(opts))
2903 except KeyError:
2887 except KeyError:
2904 raise error.Abort("subrepository '%s' does not exist in %s!"
2888 raise error.Abort("subrepository '%s' does not exist in %s!"
2905 % (sub, short(ctx.node())))
2889 % (sub, short(ctx.node())))
2906
2890
2907 def _revertprefetch(repo, ctx, *files):
2891 def _revertprefetch(repo, ctx, *files):
2908 """Let extension changing the storage layer prefetch content"""
2892 """Let extension changing the storage layer prefetch content"""
2909
2893
2910 def _performrevert(repo, parents, ctx, actions, interactive=False,
2894 def _performrevert(repo, parents, ctx, actions, interactive=False,
2911 tobackup=None):
2895 tobackup=None):
2912 """function that actually perform all the actions computed for revert
2896 """function that actually perform all the actions computed for revert
2913
2897
2914 This is an independent function to let extension to plug in and react to
2898 This is an independent function to let extension to plug in and react to
2915 the imminent revert.
2899 the imminent revert.
2916
2900
2917 Make sure you have the working directory locked when calling this function.
2901 Make sure you have the working directory locked when calling this function.
2918 """
2902 """
2919 parent, p2 = parents
2903 parent, p2 = parents
2920 node = ctx.node()
2904 node = ctx.node()
2921 excluded_files = []
2905 excluded_files = []
2922 matcher_opts = {"exclude": excluded_files}
2906 matcher_opts = {"exclude": excluded_files}
2923
2907
2924 def checkout(f):
2908 def checkout(f):
2925 fc = ctx[f]
2909 fc = ctx[f]
2926 repo.wwrite(f, fc.data(), fc.flags())
2910 repo.wwrite(f, fc.data(), fc.flags())
2927
2911
2928 def doremove(f):
2912 def doremove(f):
2929 try:
2913 try:
2930 repo.wvfs.unlinkpath(f)
2914 repo.wvfs.unlinkpath(f)
2931 except OSError:
2915 except OSError:
2932 pass
2916 pass
2933 repo.dirstate.remove(f)
2917 repo.dirstate.remove(f)
2934
2918
2935 audit_path = pathutil.pathauditor(repo.root, cached=True)
2919 audit_path = pathutil.pathauditor(repo.root, cached=True)
2936 for f in actions['forget'][0]:
2920 for f in actions['forget'][0]:
2937 if interactive:
2921 if interactive:
2938 choice = repo.ui.promptchoice(
2922 choice = repo.ui.promptchoice(
2939 _("forget added file %s (Yn)?$$ &Yes $$ &No") % f)
2923 _("forget added file %s (Yn)?$$ &Yes $$ &No") % f)
2940 if choice == 0:
2924 if choice == 0:
2941 repo.dirstate.drop(f)
2925 repo.dirstate.drop(f)
2942 else:
2926 else:
2943 excluded_files.append(repo.wjoin(f))
2927 excluded_files.append(repo.wjoin(f))
2944 else:
2928 else:
2945 repo.dirstate.drop(f)
2929 repo.dirstate.drop(f)
2946 for f in actions['remove'][0]:
2930 for f in actions['remove'][0]:
2947 audit_path(f)
2931 audit_path(f)
2948 if interactive:
2932 if interactive:
2949 choice = repo.ui.promptchoice(
2933 choice = repo.ui.promptchoice(
2950 _("remove added file %s (Yn)?$$ &Yes $$ &No") % f)
2934 _("remove added file %s (Yn)?$$ &Yes $$ &No") % f)
2951 if choice == 0:
2935 if choice == 0:
2952 doremove(f)
2936 doremove(f)
2953 else:
2937 else:
2954 excluded_files.append(repo.wjoin(f))
2938 excluded_files.append(repo.wjoin(f))
2955 else:
2939 else:
2956 doremove(f)
2940 doremove(f)
2957 for f in actions['drop'][0]:
2941 for f in actions['drop'][0]:
2958 audit_path(f)
2942 audit_path(f)
2959 repo.dirstate.remove(f)
2943 repo.dirstate.remove(f)
2960
2944
2961 normal = None
2945 normal = None
2962 if node == parent:
2946 if node == parent:
2963 # We're reverting to our parent. If possible, we'd like status
2947 # We're reverting to our parent. If possible, we'd like status
2964 # to report the file as clean. We have to use normallookup for
2948 # to report the file as clean. We have to use normallookup for
2965 # merges to avoid losing information about merged/dirty files.
2949 # merges to avoid losing information about merged/dirty files.
2966 if p2 != nullid:
2950 if p2 != nullid:
2967 normal = repo.dirstate.normallookup
2951 normal = repo.dirstate.normallookup
2968 else:
2952 else:
2969 normal = repo.dirstate.normal
2953 normal = repo.dirstate.normal
2970
2954
2971 newlyaddedandmodifiedfiles = set()
2955 newlyaddedandmodifiedfiles = set()
2972 if interactive:
2956 if interactive:
2973 # Prompt the user for changes to revert
2957 # Prompt the user for changes to revert
2974 torevert = [repo.wjoin(f) for f in actions['revert'][0]]
2958 torevert = [repo.wjoin(f) for f in actions['revert'][0]]
2975 m = scmutil.match(ctx, torevert, matcher_opts)
2959 m = scmutil.match(ctx, torevert, matcher_opts)
2976 diffopts = patch.difffeatureopts(repo.ui, whitespace=True)
2960 diffopts = patch.difffeatureopts(repo.ui, whitespace=True)
2977 diffopts.nodates = True
2961 diffopts.nodates = True
2978 diffopts.git = True
2962 diffopts.git = True
2979 operation = 'discard'
2963 operation = 'discard'
2980 reversehunks = True
2964 reversehunks = True
2981 if node != parent:
2965 if node != parent:
2982 operation = 'apply'
2966 operation = 'apply'
2983 reversehunks = False
2967 reversehunks = False
2984 if reversehunks:
2968 if reversehunks:
2985 diff = patch.diff(repo, ctx.node(), None, m, opts=diffopts)
2969 diff = patch.diff(repo, ctx.node(), None, m, opts=diffopts)
2986 else:
2970 else:
2987 diff = patch.diff(repo, None, ctx.node(), m, opts=diffopts)
2971 diff = patch.diff(repo, None, ctx.node(), m, opts=diffopts)
2988 originalchunks = patch.parsepatch(diff)
2972 originalchunks = patch.parsepatch(diff)
2989
2973
2990 try:
2974 try:
2991
2975
2992 chunks, opts = recordfilter(repo.ui, originalchunks,
2976 chunks, opts = recordfilter(repo.ui, originalchunks,
2993 operation=operation)
2977 operation=operation)
2994 if reversehunks:
2978 if reversehunks:
2995 chunks = patch.reversehunks(chunks)
2979 chunks = patch.reversehunks(chunks)
2996
2980
2997 except error.PatchError as err:
2981 except error.PatchError as err:
2998 raise error.Abort(_('error parsing patch: %s') % err)
2982 raise error.Abort(_('error parsing patch: %s') % err)
2999
2983
3000 newlyaddedandmodifiedfiles = newandmodified(chunks, originalchunks)
2984 newlyaddedandmodifiedfiles = newandmodified(chunks, originalchunks)
3001 if tobackup is None:
2985 if tobackup is None:
3002 tobackup = set()
2986 tobackup = set()
3003 # Apply changes
2987 # Apply changes
3004 fp = stringio()
2988 fp = stringio()
3005 for c in chunks:
2989 for c in chunks:
3006 # Create a backup file only if this hunk should be backed up
2990 # Create a backup file only if this hunk should be backed up
3007 if ishunk(c) and c.header.filename() in tobackup:
2991 if ishunk(c) and c.header.filename() in tobackup:
3008 abs = c.header.filename()
2992 abs = c.header.filename()
3009 target = repo.wjoin(abs)
2993 target = repo.wjoin(abs)
3010 bakname = scmutil.origpath(repo.ui, repo, m.rel(abs))
2994 bakname = scmutil.origpath(repo.ui, repo, m.rel(abs))
3011 util.copyfile(target, bakname)
2995 util.copyfile(target, bakname)
3012 tobackup.remove(abs)
2996 tobackup.remove(abs)
3013 c.write(fp)
2997 c.write(fp)
3014 dopatch = fp.tell()
2998 dopatch = fp.tell()
3015 fp.seek(0)
2999 fp.seek(0)
3016 if dopatch:
3000 if dopatch:
3017 try:
3001 try:
3018 patch.internalpatch(repo.ui, repo, fp, 1, eolmode=None)
3002 patch.internalpatch(repo.ui, repo, fp, 1, eolmode=None)
3019 except error.PatchError as err:
3003 except error.PatchError as err:
3020 raise error.Abort(str(err))
3004 raise error.Abort(str(err))
3021 del fp
3005 del fp
3022 else:
3006 else:
3023 for f in actions['revert'][0]:
3007 for f in actions['revert'][0]:
3024 checkout(f)
3008 checkout(f)
3025 if normal:
3009 if normal:
3026 normal(f)
3010 normal(f)
3027
3011
3028 for f in actions['add'][0]:
3012 for f in actions['add'][0]:
3029 # Don't checkout modified files, they are already created by the diff
3013 # Don't checkout modified files, they are already created by the diff
3030 if f not in newlyaddedandmodifiedfiles:
3014 if f not in newlyaddedandmodifiedfiles:
3031 checkout(f)
3015 checkout(f)
3032 repo.dirstate.add(f)
3016 repo.dirstate.add(f)
3033
3017
3034 normal = repo.dirstate.normallookup
3018 normal = repo.dirstate.normallookup
3035 if node == parent and p2 == nullid:
3019 if node == parent and p2 == nullid:
3036 normal = repo.dirstate.normal
3020 normal = repo.dirstate.normal
3037 for f in actions['undelete'][0]:
3021 for f in actions['undelete'][0]:
3038 checkout(f)
3022 checkout(f)
3039 normal(f)
3023 normal(f)
3040
3024
3041 copied = copies.pathcopies(repo[parent], ctx)
3025 copied = copies.pathcopies(repo[parent], ctx)
3042
3026
3043 for f in actions['add'][0] + actions['undelete'][0] + actions['revert'][0]:
3027 for f in actions['add'][0] + actions['undelete'][0] + actions['revert'][0]:
3044 if f in copied:
3028 if f in copied:
3045 repo.dirstate.copy(copied[f], f)
3029 repo.dirstate.copy(copied[f], f)
3046
3030
3047 class command(registrar.command):
3031 class command(registrar.command):
3048 """deprecated: used registrar.command instead"""
3032 """deprecated: used registrar.command instead"""
3049 def _doregister(self, func, name, *args, **kwargs):
3033 def _doregister(self, func, name, *args, **kwargs):
3050 func._deprecatedregistrar = True # flag for deprecwarn in extensions.py
3034 func._deprecatedregistrar = True # flag for deprecwarn in extensions.py
3051 return super(command, self)._doregister(func, name, *args, **kwargs)
3035 return super(command, self)._doregister(func, name, *args, **kwargs)
3052
3036
3053 # a list of (ui, repo, otherpeer, opts, missing) functions called by
3037 # a list of (ui, repo, otherpeer, opts, missing) functions called by
3054 # commands.outgoing. "missing" is "missing" of the result of
3038 # commands.outgoing. "missing" is "missing" of the result of
3055 # "findcommonoutgoing()"
3039 # "findcommonoutgoing()"
3056 outgoinghooks = util.hooks()
3040 outgoinghooks = util.hooks()
3057
3041
3058 # a list of (ui, repo) functions called by commands.summary
3042 # a list of (ui, repo) functions called by commands.summary
3059 summaryhooks = util.hooks()
3043 summaryhooks = util.hooks()
3060
3044
3061 # a list of (ui, repo, opts, changes) functions called by commands.summary.
3045 # a list of (ui, repo, opts, changes) functions called by commands.summary.
3062 #
3046 #
3063 # functions should return tuple of booleans below, if 'changes' is None:
3047 # functions should return tuple of booleans below, if 'changes' is None:
3064 # (whether-incomings-are-needed, whether-outgoings-are-needed)
3048 # (whether-incomings-are-needed, whether-outgoings-are-needed)
3065 #
3049 #
3066 # otherwise, 'changes' is a tuple of tuples below:
3050 # otherwise, 'changes' is a tuple of tuples below:
3067 # - (sourceurl, sourcebranch, sourcepeer, incoming)
3051 # - (sourceurl, sourcebranch, sourcepeer, incoming)
3068 # - (desturl, destbranch, destpeer, outgoing)
3052 # - (desturl, destbranch, destpeer, outgoing)
3069 summaryremotehooks = util.hooks()
3053 summaryremotehooks = util.hooks()
3070
3054
3071 # A list of state files kept by multistep operations like graft.
3055 # A list of state files kept by multistep operations like graft.
3072 # Since graft cannot be aborted, it is considered 'clearable' by update.
3056 # Since graft cannot be aborted, it is considered 'clearable' by update.
3073 # note: bisect is intentionally excluded
3057 # note: bisect is intentionally excluded
3074 # (state file, clearable, allowcommit, error, hint)
3058 # (state file, clearable, allowcommit, error, hint)
3075 unfinishedstates = [
3059 unfinishedstates = [
3076 ('graftstate', True, False, _('graft in progress'),
3060 ('graftstate', True, False, _('graft in progress'),
3077 _("use 'hg graft --continue' or 'hg update' to abort")),
3061 _("use 'hg graft --continue' or 'hg update' to abort")),
3078 ('updatestate', True, False, _('last update was interrupted'),
3062 ('updatestate', True, False, _('last update was interrupted'),
3079 _("use 'hg update' to get a consistent checkout"))
3063 _("use 'hg update' to get a consistent checkout"))
3080 ]
3064 ]
3081
3065
3082 def checkunfinished(repo, commit=False):
3066 def checkunfinished(repo, commit=False):
3083 '''Look for an unfinished multistep operation, like graft, and abort
3067 '''Look for an unfinished multistep operation, like graft, and abort
3084 if found. It's probably good to check this right before
3068 if found. It's probably good to check this right before
3085 bailifchanged().
3069 bailifchanged().
3086 '''
3070 '''
3087 for f, clearable, allowcommit, msg, hint in unfinishedstates:
3071 for f, clearable, allowcommit, msg, hint in unfinishedstates:
3088 if commit and allowcommit:
3072 if commit and allowcommit:
3089 continue
3073 continue
3090 if repo.vfs.exists(f):
3074 if repo.vfs.exists(f):
3091 raise error.Abort(msg, hint=hint)
3075 raise error.Abort(msg, hint=hint)
3092
3076
3093 def clearunfinished(repo):
3077 def clearunfinished(repo):
3094 '''Check for unfinished operations (as above), and clear the ones
3078 '''Check for unfinished operations (as above), and clear the ones
3095 that are clearable.
3079 that are clearable.
3096 '''
3080 '''
3097 for f, clearable, allowcommit, msg, hint in unfinishedstates:
3081 for f, clearable, allowcommit, msg, hint in unfinishedstates:
3098 if not clearable and repo.vfs.exists(f):
3082 if not clearable and repo.vfs.exists(f):
3099 raise error.Abort(msg, hint=hint)
3083 raise error.Abort(msg, hint=hint)
3100 for f, clearable, allowcommit, msg, hint in unfinishedstates:
3084 for f, clearable, allowcommit, msg, hint in unfinishedstates:
3101 if clearable and repo.vfs.exists(f):
3085 if clearable and repo.vfs.exists(f):
3102 util.unlink(repo.vfs.join(f))
3086 util.unlink(repo.vfs.join(f))
3103
3087
3104 afterresolvedstates = [
3088 afterresolvedstates = [
3105 ('graftstate',
3089 ('graftstate',
3106 _('hg graft --continue')),
3090 _('hg graft --continue')),
3107 ]
3091 ]
3108
3092
3109 def howtocontinue(repo):
3093 def howtocontinue(repo):
3110 '''Check for an unfinished operation and return the command to finish
3094 '''Check for an unfinished operation and return the command to finish
3111 it.
3095 it.
3112
3096
3113 afterresolvedstates tuples define a .hg/{file} and the corresponding
3097 afterresolvedstates tuples define a .hg/{file} and the corresponding
3114 command needed to finish it.
3098 command needed to finish it.
3115
3099
3116 Returns a (msg, warning) tuple. 'msg' is a string and 'warning' is
3100 Returns a (msg, warning) tuple. 'msg' is a string and 'warning' is
3117 a boolean.
3101 a boolean.
3118 '''
3102 '''
3119 contmsg = _("continue: %s")
3103 contmsg = _("continue: %s")
3120 for f, msg in afterresolvedstates:
3104 for f, msg in afterresolvedstates:
3121 if repo.vfs.exists(f):
3105 if repo.vfs.exists(f):
3122 return contmsg % msg, True
3106 return contmsg % msg, True
3123 if repo[None].dirty(missing=True, merge=False, branch=False):
3107 if repo[None].dirty(missing=True, merge=False, branch=False):
3124 return contmsg % _("hg commit"), False
3108 return contmsg % _("hg commit"), False
3125 return None, None
3109 return None, None
3126
3110
3127 def checkafterresolved(repo):
3111 def checkafterresolved(repo):
3128 '''Inform the user about the next action after completing hg resolve
3112 '''Inform the user about the next action after completing hg resolve
3129
3113
3130 If there's a matching afterresolvedstates, howtocontinue will yield
3114 If there's a matching afterresolvedstates, howtocontinue will yield
3131 repo.ui.warn as the reporter.
3115 repo.ui.warn as the reporter.
3132
3116
3133 Otherwise, it will yield repo.ui.note.
3117 Otherwise, it will yield repo.ui.note.
3134 '''
3118 '''
3135 msg, warning = howtocontinue(repo)
3119 msg, warning = howtocontinue(repo)
3136 if msg is not None:
3120 if msg is not None:
3137 if warning:
3121 if warning:
3138 repo.ui.warn("%s\n" % msg)
3122 repo.ui.warn("%s\n" % msg)
3139 else:
3123 else:
3140 repo.ui.note("%s\n" % msg)
3124 repo.ui.note("%s\n" % msg)
3141
3125
3142 def wrongtooltocontinue(repo, task):
3126 def wrongtooltocontinue(repo, task):
3143 '''Raise an abort suggesting how to properly continue if there is an
3127 '''Raise an abort suggesting how to properly continue if there is an
3144 active task.
3128 active task.
3145
3129
3146 Uses howtocontinue() to find the active task.
3130 Uses howtocontinue() to find the active task.
3147
3131
3148 If there's no task (repo.ui.note for 'hg commit'), it does not offer
3132 If there's no task (repo.ui.note for 'hg commit'), it does not offer
3149 a hint.
3133 a hint.
3150 '''
3134 '''
3151 after = howtocontinue(repo)
3135 after = howtocontinue(repo)
3152 hint = None
3136 hint = None
3153 if after[1]:
3137 if after[1]:
3154 hint = after[0]
3138 hint = after[0]
3155 raise error.Abort(_('no %s in progress') % task, hint=hint)
3139 raise error.Abort(_('no %s in progress') % task, hint=hint)
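
The only behavioral change in the portion of the diff shown above is the call in buildcommittemplate(): the old cmdutil-level changeset_templater call is replaced by logcmdutil.changesettemplater with the same arguments. As a rough illustration of what that means for third-party code, the hedged sketch below shows how an extension might now call the logcmdutil function directly. The templatespec() and changesettemplater() signatures are taken from the diff above; the helper name showwithtemplate and its surrounding structure are purely illustrative.

# Hypothetical extension helper (illustrative only): render a changeset with a
# named template, using the call pattern shown in buildcommittemplate() above.
from mercurial import formatter, logcmdutil

def showwithtemplate(ui, repo, ctx, ref):
    # Build a template spec from a configured template reference.
    spec = formatter.templatespec(ref, None, None)
    # Formerly reached through a cmdutil alias; same arguments, new module.
    t = logcmdutil.changesettemplater(ui, repo, spec, None, {}, False)
    ui.pushbuffer()
    t.show(ctx)
    return ui.popbuffer()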