stringutil: bulk-replace call sites to point to new module...
Yuya Nishihara - r37102:f0b6fbea default

The requested changes are too big and content was truncated.

@@ -1,1124 +1,1128 @@
# bugzilla.py - bugzilla integration for mercurial
#
# Copyright 2006 Vadim Gelfer <vadim.gelfer@gmail.com>
# Copyright 2011-4 Jim Hague <jim.hague@acm.org>
#
# This software may be used and distributed according to the terms of the
# GNU General Public License version 2 or any later version.

'''hooks for integrating with the Bugzilla bug tracker

This hook extension adds comments on bugs in Bugzilla when changesets
that refer to bugs by Bugzilla ID are seen. The comment is formatted using
the Mercurial template mechanism.

The bug references can optionally include an update for Bugzilla of the
hours spent working on the bug. Bugs can also be marked fixed.

Four basic modes of access to Bugzilla are provided:

1. Access via the Bugzilla REST-API. Requires Bugzilla 5.0 or later.

2. Access via the Bugzilla XMLRPC interface. Requires Bugzilla 3.4 or later.

3. Check data via the Bugzilla XMLRPC interface and submit bug change
   via email to Bugzilla email interface. Requires Bugzilla 3.4 or later.

4. Writing directly to the Bugzilla database. Only Bugzilla installations
   using MySQL are supported. Requires Python MySQLdb.

Writing directly to the database is susceptible to schema changes, and
relies on a Bugzilla contrib script to send out bug change
notification emails. This script runs as the user running Mercurial,
must be run on the host with the Bugzilla install, and requires
permission to read Bugzilla configuration details and the necessary
MySQL user and password to have full access rights to the Bugzilla
database. For these reasons this access mode is now considered
deprecated, and will not be updated for new Bugzilla versions going
forward. Only adding comments is supported in this access mode.

Access via XMLRPC needs a Bugzilla username and password to be specified
in the configuration. Comments are added under that username. Since the
configuration must be readable by all Mercurial users, it is recommended
that the rights of that user are restricted in Bugzilla to the minimum
necessary to add comments. Marking bugs fixed requires Bugzilla 4.0 and later.

Access via XMLRPC/email uses XMLRPC to query Bugzilla, but sends
email to the Bugzilla email interface to submit comments to bugs.
The From: address in the email is set to the email address of the Mercurial
user, so the comment appears to come from the Mercurial user. In the event
that the Mercurial user email is not recognized by Bugzilla as a Bugzilla
user, the email associated with the Bugzilla username used to log into
Bugzilla is used instead as the source of the comment. Marking bugs fixed
works on all supported Bugzilla versions.

Access via the REST-API needs either a Bugzilla username and password
or an apikey specified in the configuration. Comments are made under
the given username or the user associated with the apikey in Bugzilla.

Configuration items common to all access modes:

bugzilla.version
  The access type to use. Values recognized are:

  :``restapi``:      Bugzilla REST-API, Bugzilla 5.0 and later.
  :``xmlrpc``:       Bugzilla XMLRPC interface.
  :``xmlrpc+email``: Bugzilla XMLRPC and email interfaces.
  :``3.0``:          MySQL access, Bugzilla 3.0 and later.
  :``2.18``:         MySQL access, Bugzilla 2.18 and up to but not
                     including 3.0.
  :``2.16``:         MySQL access, Bugzilla 2.16 and up to but not
                     including 2.18.

bugzilla.regexp
  Regular expression to match bug IDs for update in changeset commit message.
  It must contain one "()" named group ``<ids>`` containing the bug
  IDs separated by non-digit characters. It may also contain
  a named group ``<hours>`` with a floating-point number giving the
  hours worked on the bug. If no named groups are present, the first
  "()" group is assumed to contain the bug IDs, and work time is not
  updated. The default expression matches ``Bug 1234``, ``Bug no. 1234``,
  ``Bug number 1234``, ``Bugs 1234,5678``, ``Bug 1234 and 5678`` and
  variations thereof, followed by an hours number prefixed by ``h`` or
  ``hours``, e.g. ``hours 1.5``. Matching is case insensitive.

bugzilla.fixregexp
  Regular expression to match bug IDs for marking fixed in changeset
  commit message. This must contain a "()" named group ``<ids>`` containing
  the bug IDs separated by non-digit characters. It may also contain
  a named group ``<hours>`` with a floating-point number giving the
  hours worked on the bug. If no named groups are present, the first
  "()" group is assumed to contain the bug IDs, and work time is not
  updated. The default expression matches ``Fixes 1234``, ``Fixes bug 1234``,
  ``Fixes bugs 1234,5678``, ``Fixes 1234 and 5678`` and
  variations thereof, followed by an hours number prefixed by ``h`` or
  ``hours``, e.g. ``hours 1.5``. Matching is case insensitive.

bugzilla.fixstatus
  The status to set a bug to when marking fixed. Default ``RESOLVED``.

bugzilla.fixresolution
  The resolution to set a bug to when marking fixed. Default ``FIXED``.

bugzilla.style
  The style file to use when formatting comments.

bugzilla.template
  Template to use when formatting comments. Overrides style if
  specified. In addition to the usual Mercurial keywords, the
  extension specifies:

  :``{bug}``:     The Bugzilla bug ID.
  :``{root}``:    The full pathname of the Mercurial repository.
  :``{webroot}``: Stripped pathname of the Mercurial repository.
  :``{hgweb}``:   Base URL for browsing Mercurial repositories.

  Default ``changeset {node|short} in repo {root} refers to bug
  {bug}.\\ndetails:\\n\\t{desc|tabindent}``

bugzilla.strip
  The number of path separator characters to strip from the front of
  the Mercurial repository path (``{root}`` in templates) to produce
  ``{webroot}``. For example, a repository with ``{root}``
  ``/var/local/my-project`` with a strip of 2 gives a value for
  ``{webroot}`` of ``my-project``. Default 0.

web.baseurl
  Base URL for browsing Mercurial repositories. Referenced from
  templates as ``{hgweb}``.

Configuration items common to XMLRPC+email and MySQL access modes:

bugzilla.usermap
  Path of file containing Mercurial committer email to Bugzilla user email
  mappings. If specified, the file should contain one mapping per
  line::

    committer = Bugzilla user

  See also the ``[usermap]`` section.

The ``[usermap]`` section is used to specify mappings of Mercurial
committer email to Bugzilla user email. See also ``bugzilla.usermap``.
Contains entries of the form ``committer = Bugzilla user``.

XMLRPC and REST-API access mode configuration:

bugzilla.bzurl
  The base URL for the Bugzilla installation.
  Default ``http://localhost/bugzilla``.

bugzilla.user
  The username to use to log into Bugzilla via XMLRPC. Default
  ``bugs``.

bugzilla.password
  The password for Bugzilla login.

REST-API access mode uses the options listed above as well as:

bugzilla.apikey
  An apikey generated on the Bugzilla instance for api access.
  Using an apikey removes the need to store the user and password
  options.

XMLRPC+email access mode uses the XMLRPC access mode configuration items,
and also:

bugzilla.bzemail
  The Bugzilla email address.

In addition, the Mercurial email settings must be configured. See the
documentation in hgrc(5), sections ``[email]`` and ``[smtp]``.

MySQL access mode configuration:

bugzilla.host
  Hostname of the MySQL server holding the Bugzilla database.
  Default ``localhost``.

bugzilla.db
  Name of the Bugzilla database in MySQL. Default ``bugs``.

bugzilla.user
  Username to use to access MySQL server. Default ``bugs``.

bugzilla.password
  Password to use to access MySQL server.

bugzilla.timeout
  Database connection timeout (seconds). Default 5.

bugzilla.bzuser
  Fallback Bugzilla user name to record comments with, if changeset
  committer cannot be found as a Bugzilla user.

bugzilla.bzdir
  Bugzilla install directory. Used by default notify. Default
  ``/var/www/html/bugzilla``.

bugzilla.notify
  The command to run to get Bugzilla to send bug change notification
  emails. Substitutes from a map with 3 keys, ``bzdir``, ``id`` (bug
  id) and ``user`` (committer bugzilla email). Default depends on
  version; from 2.18 it is "cd %(bzdir)s && perl -T
  contrib/sendbugmail.pl %(id)s %(user)s".

Activating the extension::

    [extensions]
    bugzilla =

    [hooks]
    # run bugzilla hook on every change pulled or pushed in here
    incoming.bugzilla = python:hgext.bugzilla.hook

Example configurations:

XMLRPC example configuration. This uses the Bugzilla at
``http://my-project.org/bugzilla``, logging in as user
``bugmail@my-project.org`` with password ``plugh``. It is used with a
collection of Mercurial repositories in ``/var/local/hg/repos/``,
with a web interface at ``http://my-project.org/hg``. ::

    [bugzilla]
    bzurl=http://my-project.org/bugzilla
    user=bugmail@my-project.org
    password=plugh
    version=xmlrpc
    template=Changeset {node|short} in {root|basename}.
             {hgweb}/{webroot}/rev/{node|short}\\n
             {desc}\\n
    strip=5

    [web]
    baseurl=http://my-project.org/hg

XMLRPC+email example configuration. This uses the Bugzilla at
``http://my-project.org/bugzilla``, logging in as user
``bugmail@my-project.org`` with password ``plugh``. It is used with a
collection of Mercurial repositories in ``/var/local/hg/repos/``,
with a web interface at ``http://my-project.org/hg``. Bug comments
are sent to the Bugzilla email address
``bugzilla@my-project.org``. ::

    [bugzilla]
    bzurl=http://my-project.org/bugzilla
    user=bugmail@my-project.org
    password=plugh
    version=xmlrpc+email
    bzemail=bugzilla@my-project.org
    template=Changeset {node|short} in {root|basename}.
             {hgweb}/{webroot}/rev/{node|short}\\n
             {desc}\\n
    strip=5

    [web]
    baseurl=http://my-project.org/hg

    [usermap]
    user@emaildomain.com=user.name@bugzilladomain.com

MySQL example configuration. This has a local Bugzilla 3.2 installation
in ``/opt/bugzilla-3.2``. The MySQL database is on ``localhost``,
the Bugzilla database name is ``bugs`` and MySQL is
accessed with MySQL username ``bugs`` password ``XYZZY``. It is used
with a collection of Mercurial repositories in ``/var/local/hg/repos/``,
with a web interface at ``http://my-project.org/hg``. ::

    [bugzilla]
    host=localhost
    password=XYZZY
    version=3.0
    bzuser=unknown@domain.com
    bzdir=/opt/bugzilla-3.2
    template=Changeset {node|short} in {root|basename}.
             {hgweb}/{webroot}/rev/{node|short}\\n
             {desc}\\n
    strip=5

    [web]
    baseurl=http://my-project.org/hg

    [usermap]
    user@emaildomain.com=user.name@bugzilladomain.com

All the above add a comment to the Bugzilla bug record of the form::

    Changeset 3b16791d6642 in repository-name.
    http://my-project.org/hg/repository-name/rev/3b16791d6642

    Changeset commit comment. Bug 1234.
'''

from __future__ import absolute_import

import json
import re
import time

from mercurial.i18n import _
from mercurial.node import short
from mercurial import (
    error,
    logcmdutil,
    mail,
    registrar,
    url,
    util,
)
+from mercurial.utils import (
+    stringutil,
+)

xmlrpclib = util.xmlrpclib

# Note for extension authors: ONLY specify testedwith = 'ships-with-hg-core' for
# extensions which SHIP WITH MERCURIAL. Non-mainline extensions should
# be specifying the version(s) of Mercurial they are tested with, or
# leave the attribute unspecified.
testedwith = 'ships-with-hg-core'

configtable = {}
configitem = registrar.configitem(configtable)

configitem('bugzilla', 'apikey',
    default='',
)
configitem('bugzilla', 'bzdir',
    default='/var/www/html/bugzilla',
)
configitem('bugzilla', 'bzemail',
    default=None,
)
configitem('bugzilla', 'bzurl',
    default='http://localhost/bugzilla/',
)
configitem('bugzilla', 'bzuser',
    default=None,
)
configitem('bugzilla', 'db',
    default='bugs',
)
configitem('bugzilla', 'fixregexp',
    default=(r'fix(?:es)?\s*(?:bugs?\s*)?,?\s*'
             r'(?:nos?\.?|num(?:ber)?s?)?\s*'
             r'(?P<ids>(?:#?\d+\s*(?:,?\s*(?:and)?)?\s*)+)'
             r'\.?\s*(?:h(?:ours?)?\s*(?P<hours>\d*(?:\.\d+)?))?')
)
configitem('bugzilla', 'fixresolution',
    default='FIXED',
)
configitem('bugzilla', 'fixstatus',
    default='RESOLVED',
)
configitem('bugzilla', 'host',
    default='localhost',
)
configitem('bugzilla', 'notify',
    default=configitem.dynamicdefault,
)
configitem('bugzilla', 'password',
    default=None,
)
configitem('bugzilla', 'regexp',
    default=(r'bugs?\s*,?\s*(?:#|nos?\.?|num(?:ber)?s?)?\s*'
             r'(?P<ids>(?:\d+\s*(?:,?\s*(?:and)?)?\s*)+)'
             r'\.?\s*(?:h(?:ours?)?\s*(?P<hours>\d*(?:\.\d+)?))?')
)
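
# Illustrative sketch only (not part of the committed change): assuming the
# default 'regexp' pattern above is applied case-insensitively, as the module
# docstring describes, a commit message such as 'Bugs 1234, 5678 hours 1.5'
# yields roughly
#
#     m.group('ids')   -> '1234, 5678 '  (IDs are split on non-digits later)
#     m.group('hours') -> '1.5'
#
# i.e. the <ids> group captures the whole run of IDs and separators, and the
# optional <hours> group captures the work-time figure.
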
configitem('bugzilla', 'strip',
    default=0,
)
configitem('bugzilla', 'style',
    default=None,
)
configitem('bugzilla', 'template',
    default=None,
)
configitem('bugzilla', 'timeout',
    default=5,
)
configitem('bugzilla', 'user',
    default='bugs',
)
configitem('bugzilla', 'usermap',
    default=None,
)
configitem('bugzilla', 'version',
    default=None,
)

class bzaccess(object):
    '''Base class for access to Bugzilla.'''

    def __init__(self, ui):
        self.ui = ui
        usermap = self.ui.config('bugzilla', 'usermap')
        if usermap:
            self.ui.readconfig(usermap, sections=['usermap'])

    def map_committer(self, user):
        '''map name of committer to Bugzilla user name.'''
        for committer, bzuser in self.ui.configitems('usermap'):
            if committer.lower() == user.lower():
                return bzuser
        return user
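
    # For illustration (hypothetical mapping, mirroring the docstring
    # example): with ``user@emaildomain.com=user.name@bugzilladomain.com``
    # in [usermap], map_committer('user@emaildomain.com') returns
    # 'user.name@bugzilladomain.com'; committers with no case-insensitive
    # match pass through unchanged.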

    # Methods to be implemented by access classes.
    #
    # 'bugs' is a dict keyed on bug id, where values are a dict holding
    # updates to bug state. Recognized dict keys are:
    #
    # 'hours': Value, float containing work hours to be updated.
    # 'fix': If key present, bug is to be marked fixed. Value ignored.

    def filter_real_bug_ids(self, bugs):
        '''remove bug IDs that do not exist in Bugzilla from bugs.'''

    def filter_cset_known_bug_ids(self, node, bugs):
        '''remove bug IDs where node occurs in comment text from bugs.'''

    def updatebug(self, bugid, newstate, text, committer):
        '''update the specified bug. Add comment text and set new states.

        If possible add the comment as being from the committer of
        the changeset. Otherwise use the default Bugzilla user.
        '''

    def notify(self, bugs, committer):
        '''Force sending of Bugzilla notification emails.

        Only required if the access method does not trigger notification
        emails automatically.
        '''

# Bugzilla via direct access to MySQL database.
class bzmysql(bzaccess):
    '''Support for direct MySQL access to Bugzilla.

    The earliest Bugzilla version this is tested with is version 2.16.

    If your Bugzilla is version 3.4 or above, you are strongly
    recommended to use the XMLRPC access method instead.
    '''

    @staticmethod
    def sql_buglist(ids):
        '''return SQL-friendly list of bug ids'''
        return '(' + ','.join(map(str, ids)) + ')'
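
    # For example, sql_buglist([1234, 5678]) returns '(1234,5678)', ready to
    # be interpolated into the 'bug_id in ...' queries below.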

    _MySQLdb = None

    def __init__(self, ui):
        try:
            import MySQLdb as mysql
            bzmysql._MySQLdb = mysql
        except ImportError as err:
            raise error.Abort(_('python mysql support not available: %s') % err)

        bzaccess.__init__(self, ui)

        host = self.ui.config('bugzilla', 'host')
        user = self.ui.config('bugzilla', 'user')
        passwd = self.ui.config('bugzilla', 'password')
        db = self.ui.config('bugzilla', 'db')
        timeout = int(self.ui.config('bugzilla', 'timeout'))
        self.ui.note(_('connecting to %s:%s as %s, password %s\n') %
                     (host, db, user, '*' * len(passwd)))
        self.conn = bzmysql._MySQLdb.connect(host=host,
                                             user=user, passwd=passwd,
                                             db=db,
                                             connect_timeout=timeout)
        self.cursor = self.conn.cursor()
        self.longdesc_id = self.get_longdesc_id()
        self.user_ids = {}
        self.default_notify = "cd %(bzdir)s && ./processmail %(id)s %(user)s"

    def run(self, *args, **kwargs):
        '''run a query.'''
        self.ui.note(_('query: %s %s\n') % (args, kwargs))
        try:
            self.cursor.execute(*args, **kwargs)
        except bzmysql._MySQLdb.MySQLError:
            self.ui.note(_('failed query: %s %s\n') % (args, kwargs))
            raise

    def get_longdesc_id(self):
        '''get identity of longdesc field'''
        self.run('select fieldid from fielddefs where name = "longdesc"')
        ids = self.cursor.fetchall()
        if len(ids) != 1:
            raise error.Abort(_('unknown database schema'))
        return ids[0][0]

    def filter_real_bug_ids(self, bugs):
        '''filter not-existing bugs from set.'''
        self.run('select bug_id from bugs where bug_id in %s' %
                 bzmysql.sql_buglist(bugs.keys()))
        existing = [id for (id,) in self.cursor.fetchall()]
        for id in bugs.keys():
            if id not in existing:
                self.ui.status(_('bug %d does not exist\n') % id)
                del bugs[id]

    def filter_cset_known_bug_ids(self, node, bugs):
        '''filter bug ids that already refer to this changeset from set.'''
        self.run('''select bug_id from longdescs where
                    bug_id in %s and thetext like "%%%s%%"''' %
                 (bzmysql.sql_buglist(bugs.keys()), short(node)))
        for (id,) in self.cursor.fetchall():
            self.ui.status(_('bug %d already knows about changeset %s\n') %
                           (id, short(node)))
            del bugs[id]

    def notify(self, bugs, committer):
        '''tell bugzilla to send mail.'''
        self.ui.status(_('telling bugzilla to send mail:\n'))
        (user, userid) = self.get_bugzilla_user(committer)
        for id in bugs.keys():
            self.ui.status(_('  bug %s\n') % id)
            cmdfmt = self.ui.config('bugzilla', 'notify', self.default_notify)
            bzdir = self.ui.config('bugzilla', 'bzdir')
            try:
                # Backwards-compatible with old notify string, which
                # took one string. This will throw with a new format
                # string.
                cmd = cmdfmt % id
            except TypeError:
                cmd = cmdfmt % {'bzdir': bzdir, 'id': id, 'user': user}
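            # For example, with the 2.18+ default notify string this expands
            # to something like (bug id and address are hypothetical):
            #   cd /var/www/html/bugzilla && \
            #       perl -T contrib/sendbugmail.pl 1234 user@example.com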
            self.ui.note(_('running notify command %s\n') % cmd)
            fp = util.popen('(%s) 2>&1' % cmd)
            out = fp.read()
            ret = fp.close()
            if ret:
                self.ui.warn(out)
                raise error.Abort(_('bugzilla notify command %s') %
                                  util.explainexit(ret)[0])
        self.ui.status(_('done\n'))

    def get_user_id(self, user):
        '''look up numeric bugzilla user id.'''
        try:
            return self.user_ids[user]
        except KeyError:
            try:
                userid = int(user)
            except ValueError:
                self.ui.note(_('looking up user %s\n') % user)
                self.run('''select userid from profiles
                            where login_name like %s''', user)
                all = self.cursor.fetchall()
                if len(all) != 1:
                    raise KeyError(user)
                userid = int(all[0][0])
            self.user_ids[user] = userid
            return userid

    def get_bugzilla_user(self, committer):
        '''See if committer is a registered bugzilla user. Return
        bugzilla username and userid if so. If not, return default
        bugzilla username and userid.'''
        user = self.map_committer(committer)
        try:
            userid = self.get_user_id(user)
        except KeyError:
            try:
                defaultuser = self.ui.config('bugzilla', 'bzuser')
                if not defaultuser:
                    raise error.Abort(_('cannot find bugzilla user id for %s') %
                                      user)
                userid = self.get_user_id(defaultuser)
                user = defaultuser
            except KeyError:
                raise error.Abort(_('cannot find bugzilla user id for %s or %s')
                                  % (user, defaultuser))
        return (user, userid)

    def updatebug(self, bugid, newstate, text, committer):
        '''update bug state with comment text.

        Try adding comment as committer of changeset, otherwise as
        default bugzilla user.'''
        if len(newstate) > 0:
            self.ui.warn(_("Bugzilla/MySQL cannot update bug state\n"))

        (user, userid) = self.get_bugzilla_user(committer)
        now = time.strftime(r'%Y-%m-%d %H:%M:%S')
        self.run('''insert into longdescs
                    (bug_id, who, bug_when, thetext)
                    values (%s, %s, %s, %s)''',
                 (bugid, userid, now, text))
        self.run('''insert into bugs_activity (bug_id, who, bug_when, fieldid)
                    values (%s, %s, %s, %s)''',
                 (bugid, userid, now, self.longdesc_id))
        self.conn.commit()

class bzmysql_2_18(bzmysql):
    '''support for bugzilla 2.18 series.'''

    def __init__(self, ui):
        bzmysql.__init__(self, ui)
        self.default_notify = \
            "cd %(bzdir)s && perl -T contrib/sendbugmail.pl %(id)s %(user)s"

class bzmysql_3_0(bzmysql_2_18):
    '''support for bugzilla 3.0 series.'''

    def __init__(self, ui):
        bzmysql_2_18.__init__(self, ui)

    def get_longdesc_id(self):
        '''get identity of longdesc field'''
        self.run('select id from fielddefs where name = "longdesc"')
        ids = self.cursor.fetchall()
        if len(ids) != 1:
            raise error.Abort(_('unknown database schema'))
        return ids[0][0]

# Bugzilla via XMLRPC interface.

class cookietransportrequest(object):
    """A Transport request method that retains cookies over its lifetime.

    The regular xmlrpclib transports ignore cookies. Which causes
    a bit of a problem when you need a cookie-based login, as with
    the Bugzilla XMLRPC interface prior to 4.4.3.

    So this is a helper for defining a Transport which looks for
    cookies being set in responses and saves them to add to all future
    requests.
    """

    # Inspiration drawn from
    # http://blog.godson.in/2010/09/how-to-make-python-xmlrpclib-client.html
    # http://www.itkovian.net/base/transport-class-for-pythons-xml-rpc-lib/

    cookies = []
    def send_cookies(self, connection):
        if self.cookies:
            for cookie in self.cookies:
                connection.putheader("Cookie", cookie)

    def request(self, host, handler, request_body, verbose=0):
        self.verbose = verbose
        self.accept_gzip_encoding = False

        # issue XML-RPC request
        h = self.make_connection(host)
        if verbose:
            h.set_debuglevel(1)

        self.send_request(h, handler, request_body)
        self.send_host(h, host)
        self.send_cookies(h)
        self.send_user_agent(h)
        self.send_content(h, request_body)

        # Deal with differences between Python 2.6 and 2.7.
        # In the former h is a HTTP(S). In the latter it's a
        # HTTP(S)Connection. Luckily, the 2.6 implementation of
        # HTTP(S) has an underlying HTTP(S)Connection, so extract
        # that and use it.
        try:
            response = h.getresponse()
        except AttributeError:
            response = h._conn.getresponse()

        # Add any cookie definitions to our list.
        for header in response.msg.getallmatchingheaders("Set-Cookie"):
            val = header.split(": ", 1)[1]
            cookie = val.split(";", 1)[0]
            self.cookies.append(cookie)

        if response.status != 200:
            raise xmlrpclib.ProtocolError(host + handler, response.status,
                                          response.reason, response.msg.headers)

        payload = response.read()
        parser, unmarshaller = self.getparser()
        parser.feed(payload)
        parser.close()

        return unmarshaller.close()

# The explicit calls to the underlying xmlrpclib __init__() methods are
# necessary. The xmlrpclib.Transport classes are old-style classes, and
# it turns out their __init__() doesn't get called when doing multiple
# inheritance with a new-style class.
class cookietransport(cookietransportrequest, xmlrpclib.Transport):
    def __init__(self, use_datetime=0):
        if util.safehasattr(xmlrpclib.Transport, "__init__"):
            xmlrpclib.Transport.__init__(self, use_datetime)

class cookiesafetransport(cookietransportrequest, xmlrpclib.SafeTransport):
    def __init__(self, use_datetime=0):
        if util.safehasattr(xmlrpclib.Transport, "__init__"):
            xmlrpclib.SafeTransport.__init__(self, use_datetime)

class bzxmlrpc(bzaccess):
    """Support for access to Bugzilla via the Bugzilla XMLRPC API.

    Requires a minimum Bugzilla version 3.4.
    """

    def __init__(self, ui):
        bzaccess.__init__(self, ui)

        bzweb = self.ui.config('bugzilla', 'bzurl')
        bzweb = bzweb.rstrip("/") + "/xmlrpc.cgi"

        user = self.ui.config('bugzilla', 'user')
        passwd = self.ui.config('bugzilla', 'password')

        self.fixstatus = self.ui.config('bugzilla', 'fixstatus')
        self.fixresolution = self.ui.config('bugzilla', 'fixresolution')

        self.bzproxy = xmlrpclib.ServerProxy(bzweb, self.transport(bzweb))
        ver = self.bzproxy.Bugzilla.version()['version'].split('.')
        self.bzvermajor = int(ver[0])
        self.bzverminor = int(ver[1])
        login = self.bzproxy.User.login({'login': user, 'password': passwd,
                                         'restrict_login': True})
        self.bztoken = login.get('token', '')

    def transport(self, uri):
        if util.urlreq.urlparse(uri, "http")[0] == "https":
            return cookiesafetransport()
        else:
            return cookietransport()

    def get_bug_comments(self, id):
        """Return a string with all comment text for a bug."""
        c = self.bzproxy.Bug.comments({'ids': [id],
                                       'include_fields': ['text'],
                                       'token': self.bztoken})
        return ''.join([t['text'] for t in c['bugs'][str(id)]['comments']])

    def filter_real_bug_ids(self, bugs):
        probe = self.bzproxy.Bug.get({'ids': sorted(bugs.keys()),
                                      'include_fields': [],
                                      'permissive': True,
                                      'token': self.bztoken,
                                      })
        for badbug in probe['faults']:
            id = badbug['id']
            self.ui.status(_('bug %d does not exist\n') % id)
            del bugs[id]

    def filter_cset_known_bug_ids(self, node, bugs):
        for id in sorted(bugs.keys()):
            if self.get_bug_comments(id).find(short(node)) != -1:
                self.ui.status(_('bug %d already knows about changeset %s\n') %
                               (id, short(node)))
                del bugs[id]

    def updatebug(self, bugid, newstate, text, committer):
        args = {}
        if 'hours' in newstate:
            args['work_time'] = newstate['hours']

        if self.bzvermajor >= 4:
            args['ids'] = [bugid]
            args['comment'] = {'body' : text}
            if 'fix' in newstate:
                args['status'] = self.fixstatus
                args['resolution'] = self.fixresolution
            args['token'] = self.bztoken
            self.bzproxy.Bug.update(args)
        else:
            if 'fix' in newstate:
                self.ui.warn(_("Bugzilla/XMLRPC needs Bugzilla 4.0 or later "
                               "to mark bugs fixed\n"))
            args['id'] = bugid
            args['comment'] = text
            self.bzproxy.Bug.add_comment(args)

class bzxmlrpcemail(bzxmlrpc):
    """Read data from Bugzilla via XMLRPC, send updates via email.

    Advantages of sending updates via email:
      1. Comments can be added as any user, not just logged in user.
      2. Bug statuses or other fields not accessible via XMLRPC can
         potentially be updated.

    There is no XMLRPC function to change bug status before Bugzilla
    4.0, so bugs cannot be marked fixed via XMLRPC before Bugzilla 4.0.
    But bugs can be marked fixed via email from 3.4 onwards.
    """

    # The email interface changes subtly between 3.4 and 3.6. In 3.4,
    # in-email fields are specified as '@<fieldname> = <value>'. In
    # 3.6 this becomes '@<fieldname> <value>'. And fieldname @bug_id
    # in 3.4 becomes @id in 3.6. 3.6 and 4.0 both maintain backwards
    # compatibility, but rather than rely on this use the new format for
    # 4.0 onwards.

    def __init__(self, ui):
        bzxmlrpc.__init__(self, ui)

        self.bzemail = self.ui.config('bugzilla', 'bzemail')
        if not self.bzemail:
            raise error.Abort(_("configuration 'bzemail' missing"))
        mail.validateconfig(self.ui)

    def makecommandline(self, fieldname, value):
        if self.bzvermajor >= 4:
            return "@%s %s" % (fieldname, str(value))
        else:
            if fieldname == "id":
                fieldname = "bug_id"
            return "@%s = %s" % (fieldname, str(value))
808
811
809 def send_bug_modify_email(self, bugid, commands, comment, committer):
812 def send_bug_modify_email(self, bugid, commands, comment, committer):
810 '''send modification message to Bugzilla bug via email.
813 '''send modification message to Bugzilla bug via email.
811
814
812 The message format is documented in the Bugzilla email_in.pl
815 The message format is documented in the Bugzilla email_in.pl
813 specification. commands is a list of command lines, comment is the
816 specification. commands is a list of command lines, comment is the
814 comment text.
817 comment text.
815
818
816 To stop users from crafting commit comments with
819 To stop users from crafting commit comments with
817 Bugzilla commands, specify the bug ID via the message body, rather
820 Bugzilla commands, specify the bug ID via the message body, rather
818 than the subject line, and leave a blank line after it.
821 than the subject line, and leave a blank line after it.
819 '''
822 '''
        user = self.map_committer(committer)
        matches = self.bzproxy.User.get({'match': [user],
                                         'token': self.bztoken})
        if not matches['users']:
            user = self.ui.config('bugzilla', 'user')
            matches = self.bzproxy.User.get({'match': [user],
                                             'token': self.bztoken})
            if not matches['users']:
                raise error.Abort(_("default bugzilla user %s email not found")
                                  % user)
        user = matches['users'][0]['email']
        commands.append(self.makecommandline("id", bugid))

        text = "\n".join(commands) + "\n\n" + comment

        _charsets = mail._charsets(self.ui)
        user = mail.addressencode(self.ui, user, _charsets)
        bzemail = mail.addressencode(self.ui, self.bzemail, _charsets)
        msg = mail.mimeencode(self.ui, text, _charsets)
        msg['From'] = user
        msg['To'] = bzemail
        msg['Subject'] = mail.headencode(self.ui, "Bug modification", _charsets)
        sendmail = mail.connect(self.ui)
        sendmail(user, bzemail, msg.as_string())
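        # A sketch of the body this assembles for a 'fix' update on
        # Bugzilla >= 4.0 (all values hypothetical): the command lines,
        # ending with the bug id, then a blank line, then the comment:
        #
        #   @work_time 2.0
        #   @bug_status RESOLVED
        #   @resolution FIXED
        #   @id 1234
        #
        #   changeset abcdef012345 in repo /repos/proj refers to bug 1234.
        #   ...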

    def updatebug(self, bugid, newstate, text, committer):
        cmds = []
        if 'hours' in newstate:
            cmds.append(self.makecommandline("work_time", newstate['hours']))
        if 'fix' in newstate:
            cmds.append(self.makecommandline("bug_status", self.fixstatus))
            cmds.append(self.makecommandline("resolution", self.fixresolution))
        self.send_bug_modify_email(bugid, cmds, text, committer)

class NotFound(LookupError):
    pass

class bzrestapi(bzaccess):
    """Read and write bugzilla data using the REST API available since
    Bugzilla 5.0.
    """
    def __init__(self, ui):
        bzaccess.__init__(self, ui)
        bz = self.ui.config('bugzilla', 'bzurl')
        self.bzroot = '/'.join([bz, 'rest'])
        self.apikey = self.ui.config('bugzilla', 'apikey')
        self.user = self.ui.config('bugzilla', 'user')
        self.passwd = self.ui.config('bugzilla', 'password')
        self.fixstatus = self.ui.config('bugzilla', 'fixstatus')
        self.fixresolution = self.ui.config('bugzilla', 'fixresolution')

    def apiurl(self, targets, include_fields=None):
        url = '/'.join([self.bzroot] + [str(t) for t in targets])
        qv = {}
        if self.apikey:
            qv['api_key'] = self.apikey
        elif self.user and self.passwd:
            qv['login'] = self.user
            qv['password'] = self.passwd
        if include_fields:
            qv['include_fields'] = include_fields
        if qv:
            url = '%s?%s' % (url, util.urlreq.urlencode(qv))
        return url
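        # Illustrative result (hypothetical server and key; query parameter
        # order follows dict iteration, so it may vary):
        #   apiurl(('bug', 1234), include_fields='status')
        #   -> 'https://bugzilla.example.org/rest/bug/1234?api_key=XYZ&include_fields=status'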

    def _fetch(self, burl):
        try:
            resp = url.open(self.ui, burl)
            return json.loads(resp.read())
        except util.urlerr.httperror as inst:
            if inst.code == 401:
                raise error.Abort(_('authorization failed'))
            if inst.code == 404:
                raise NotFound()
            else:
                raise

    def _submit(self, burl, data, method='POST'):
        data = json.dumps(data)
        if method == 'PUT':
            class putrequest(util.urlreq.request):
                def get_method(self):
                    return 'PUT'
            request_type = putrequest
        else:
            request_type = util.urlreq.request
        req = request_type(burl, data,
                           {'Content-Type': 'application/json'})
        try:
            resp = url.opener(self.ui).open(req)
            return json.loads(resp.read())
        except util.urlerr.httperror as inst:
            if inst.code == 401:
                raise error.Abort(_('authorization failed'))
            if inst.code == 404:
                raise NotFound()
            else:
                raise

    def filter_real_bug_ids(self, bugs):
        '''remove bug IDs that do not exist in Bugzilla from bugs.'''
        badbugs = set()
        for bugid in bugs:
            burl = self.apiurl(('bug', bugid), include_fields='status')
            try:
                self._fetch(burl)
            except NotFound:
                badbugs.add(bugid)
        for bugid in badbugs:
            del bugs[bugid]

    def filter_cset_known_bug_ids(self, node, bugs):
        '''remove bug IDs where node occurs in comment text from bugs.'''
        sn = short(node)
        for bugid in bugs.keys():
            burl = self.apiurl(('bug', bugid, 'comment'), include_fields='text')
            result = self._fetch(burl)
            comments = result['bugs'][str(bugid)]['comments']
            if any(sn in c['text'] for c in comments):
                self.ui.status(_('bug %d already knows about changeset %s\n') %
                               (bugid, sn))
                del bugs[bugid]

    def updatebug(self, bugid, newstate, text, committer):
        '''update the specified bug. Add comment text and set new states.

        If possible add the comment as being from the committer of
        the changeset. Otherwise use the default Bugzilla user.
        '''
        bugmod = {}
        if 'hours' in newstate:
            bugmod['work_time'] = newstate['hours']
        if 'fix' in newstate:
            bugmod['status'] = self.fixstatus
            bugmod['resolution'] = self.fixresolution
        if bugmod:
            # if we have to change the bug's state, do it here
            bugmod['comment'] = {
                'comment': text,
                'is_private': False,
                'is_markdown': False,
            }
            burl = self.apiurl(('bug', bugid))
            self._submit(burl, bugmod, method='PUT')
            self.ui.debug('updated bug %s\n' % bugid)
        else:
            burl = self.apiurl(('bug', bugid, 'comment'))
            self._submit(burl, {
                'comment': text,
                'is_private': False,
                'is_markdown': False,
            })
            self.ui.debug('added comment to bug %s\n' % bugid)

    def notify(self, bugs, committer):
        '''Force sending of Bugzilla notification emails.

        Only required if the access method does not trigger notification
        emails automatically.
        '''
        pass

class bugzilla(object):
    # supported versions of bugzilla. different versions have
    # different schemas.
    _versions = {
        '2.16': bzmysql,
        '2.18': bzmysql_2_18,
        '3.0': bzmysql_3_0,
        'xmlrpc': bzxmlrpc,
        'xmlrpc+email': bzxmlrpcemail,
        'restapi': bzrestapi,
    }
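    # A minimal hgrc sketch selecting one of these drivers (all values
    # hypothetical; the keys mirror the ui.config('bugzilla', ...) reads
    # in the classes above):
    #
    #   [bugzilla]
    #   version = restapi
    #   bzurl = https://bugzilla.example.org
    #   apikey = XYZ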

    def __init__(self, ui, repo):
        self.ui = ui
        self.repo = repo

        bzversion = self.ui.config('bugzilla', 'version')
        try:
            bzclass = bugzilla._versions[bzversion]
        except KeyError:
            raise error.Abort(_('bugzilla version %s not supported') %
                              bzversion)
        self.bzdriver = bzclass(self.ui)

        self.bug_re = re.compile(
            self.ui.config('bugzilla', 'regexp'), re.IGNORECASE)
        self.fix_re = re.compile(
            self.ui.config('bugzilla', 'fixregexp'), re.IGNORECASE)
        self.split_re = re.compile(r'\D+')

    def find_bugs(self, ctx):
        '''return bugs dictionary created from commit comment.

        Extract bug info from changeset comments. Filter out any that are
        not known to Bugzilla, and any that already have a reference to
        the given changeset in their comments.
        '''
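        # Sketch of the result for a description like 'Fixes bug 1234'
        # (assuming the stock regexp/fixregexp patterns, which are
        # configured elsewhere): {1234: {'fix': None}}; an hours figure,
        # when a pattern captures one, is added as {'hours': <float>}.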
        start = 0
        hours = 0.0
        bugs = {}
        bugmatch = self.bug_re.search(ctx.description(), start)
        fixmatch = self.fix_re.search(ctx.description(), start)
        while True:
            bugattribs = {}
            if not bugmatch and not fixmatch:
                break
            if not bugmatch:
                m = fixmatch
            elif not fixmatch:
                m = bugmatch
            else:
                if bugmatch.start() < fixmatch.start():
                    m = bugmatch
                else:
                    m = fixmatch
            start = m.end()
            if m is bugmatch:
                bugmatch = self.bug_re.search(ctx.description(), start)
                if 'fix' in bugattribs:
                    del bugattribs['fix']
            else:
                fixmatch = self.fix_re.search(ctx.description(), start)
                bugattribs['fix'] = None

            try:
                ids = m.group('ids')
            except IndexError:
                ids = m.group(1)
            try:
                hours = float(m.group('hours'))
                bugattribs['hours'] = hours
            except IndexError:
                pass
            except TypeError:
                pass
            except ValueError:
                self.ui.status(_("%s: invalid hours\n") % m.group('hours'))

            for id in self.split_re.split(ids):
                if not id:
                    continue
                bugs[int(id)] = bugattribs
        if bugs:
            self.bzdriver.filter_real_bug_ids(bugs)
        if bugs:
            self.bzdriver.filter_cset_known_bug_ids(ctx.node(), bugs)
        return bugs

    def update(self, bugid, newstate, ctx):
        '''update bugzilla bug with reference to changeset.'''

        def webroot(root):
            '''strip leading prefix of repo root and turn into
            url-safe path.'''
            count = int(self.ui.config('bugzilla', 'strip'))
            root = util.pconvert(root)
            while count > 0:
                c = root.find('/')
                if c == -1:
                    break
                root = root[c + 1:]
                count -= 1
            return root
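        # e.g. with [bugzilla] strip=3, '/data/repos/project' reduces to
        # 'project' (each pass removes text up to and including the next '/').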

        mapfile = None
        tmpl = self.ui.config('bugzilla', 'template')
        if not tmpl:
            mapfile = self.ui.config('bugzilla', 'style')
        if not mapfile and not tmpl:
            tmpl = _('changeset {node|short} in repo {root} refers '
                     'to bug {bug}.\ndetails:\n\t{desc|tabindent}')
        spec = logcmdutil.templatespec(tmpl, mapfile)
        t = logcmdutil.changesettemplater(self.ui, self.repo, spec)
        self.ui.pushbuffer()
        t.show(ctx, changes=ctx.changeset(),
               bug=str(bugid),
               hgweb=self.ui.config('web', 'baseurl'),
               root=self.repo.root,
               webroot=webroot(self.repo.root))
        data = self.ui.popbuffer()
-        self.bzdriver.updatebug(bugid, newstate, data, util.email(ctx.user()))
+        self.bzdriver.updatebug(bugid, newstate, data,
+                                stringutil.email(ctx.user()))

    def notify(self, bugs, committer):
        '''ensure Bugzilla users are notified of bug change.'''
        self.bzdriver.notify(bugs, committer)

def hook(ui, repo, hooktype, node=None, **kwargs):
    '''add comment to bugzilla for each changeset that refers to a
    bugzilla bug id. only add a comment once per bug, so the same change
    seen multiple times does not fill the bug with duplicate data.'''
    if node is None:
        raise error.Abort(_('hook type %s does not pass a changeset id') %
                          hooktype)
    try:
        bz = bugzilla(ui, repo)
        ctx = repo[node]
        bugs = bz.find_bugs(ctx)
        if bugs:
            for bug in bugs:
                bz.update(bug, bugs[bug], ctx)
-            bz.notify(bugs, util.email(ctx.user()))
+            bz.notify(bugs, stringutil.email(ctx.user()))
    except Exception as e:
        raise error.Abort(_('Bugzilla error: %s') % e)
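# Hook wiring sketch (hypothetical repository; the extension docstring has
# the authoritative setup instructions):
#
#   [hooks]
#   incoming.bugzilla = python:hgext.bugzilla.hook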
@@ -1,953 +1,957
# Mercurial built-in replacement for cvsps.
#
# Copyright 2008, Frank Kingswood <frank@kingswood-consulting.co.uk>
#
# This software may be used and distributed according to the terms of the
# GNU General Public License version 2 or any later version.
from __future__ import absolute_import

import os
import re

from mercurial.i18n import _
from mercurial import (
    encoding,
    error,
    hook,
    pycompat,
    util,
)
-from mercurial.utils import dateutil
+from mercurial.utils import (
+    dateutil,
+    stringutil,
+)

pickle = util.pickle

class logentry(object):
    '''Class logentry has the following attributes:
        .author - author name as CVS knows it
        .branch - name of branch this revision is on
        .branches - revision tuple of branches starting at this revision
        .comment - commit message
        .commitid - CVS commitid or None
        .date - the commit date as a (time, tz) tuple
        .dead - true if file revision is dead
        .file - Name of file
        .lines - a tuple (+lines, -lines) or None
        .parent - Previous revision of this entry
        .rcs - name of file as returned from CVS
        .revision - revision number as tuple
        .tags - list of tags on the file
        .synthetic - is this a synthetic "file ... added on ..." revision?
        .mergepoint - the branch that has been merged from (if present in
                      rlog output) or None
        .branchpoints - the branches that start at the current entry or empty
    '''
    def __init__(self, **entries):
        self.synthetic = False
        self.__dict__.update(entries)

    def __repr__(self):
        items = ("%s=%r" % (k, self.__dict__[k]) for k in sorted(self.__dict__))
        return "%s(%s)" % (type(self).__name__, ", ".join(items))

class logerror(Exception):
    pass

def getrepopath(cvspath):
    """Return the repository path from a CVS path.

    >>> getrepopath(b'/foo/bar')
    '/foo/bar'
    >>> getrepopath(b'c:/foo/bar')
    '/foo/bar'
    >>> getrepopath(b':pserver:10/foo/bar')
    '/foo/bar'
    >>> getrepopath(b':pserver:10c:/foo/bar')
    '/foo/bar'
    >>> getrepopath(b':pserver:/foo/bar')
    '/foo/bar'
    >>> getrepopath(b':pserver:c:/foo/bar')
    '/foo/bar'
    >>> getrepopath(b':pserver:truc@foo.bar:/foo/bar')
    '/foo/bar'
    >>> getrepopath(b':pserver:truc@foo.bar:c:/foo/bar')
    '/foo/bar'
    >>> getrepopath(b'user@server/path/to/repository')
    '/path/to/repository'
    """
    # According to the CVS manual, CVS paths are expressed like:
    # [:method:][[user][:password]@]hostname[:[port]]/path/to/repository
    #
    # The CVS path is split into parts; we then locate the first occurrence
    # of '/' after the '@'. The answer is the rest of the string from that
    # '/' on, inclusive.

    parts = cvspath.split(':')
    atposition = parts[-1].find('@')
    start = 0

    if atposition != -1:
        start = atposition

    repopath = parts[-1][parts[-1].find('/', start):]
    return repopath

def createlog(ui, directory=None, root="", rlog=True, cache=None):
    '''Collect the CVS rlog'''

    # Because we store many duplicate commit log messages, reusing strings
    # saves a lot of memory and pickle storage space.
    _scache = {}
    def scache(s):
        "return a shared version of a string"
        return _scache.setdefault(s, s)

    ui.status(_('collecting CVS rlog\n'))

    log = []      # list of logentry objects containing the CVS state

    # patterns to match in CVS (r)log output, by state of use
    re_00 = re.compile('RCS file: (.+)$')
    re_01 = re.compile('cvs \\[r?log aborted\\]: (.+)$')
    re_02 = re.compile('cvs (r?log|server): (.+)\n$')
    re_03 = re.compile("(Cannot access.+CVSROOT)|"
                       "(can't create temporary directory.+)$")
    re_10 = re.compile('Working file: (.+)$')
    re_20 = re.compile('symbolic names:')
    re_30 = re.compile('\t(.+): ([\\d.]+)$')
    re_31 = re.compile('----------------------------$')
    re_32 = re.compile('======================================='
                       '======================================$')
    re_50 = re.compile('revision ([\\d.]+)(\s+locked by:\s+.+;)?$')
    re_60 = re.compile(r'date:\s+(.+);\s+author:\s+(.+);\s+state:\s+(.+?);'
                       r'(\s+lines:\s+(\+\d+)?\s+(-\d+)?;)?'
                       r'(\s+commitid:\s+([^;]+);)?'
                       r'(.*mergepoint:\s+([^;]+);)?')
    re_70 = re.compile('branches: (.+);$')
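    # For orientation, hypothetical rlog lines each key pattern is meant
    # to match:
    #   re_00: 'RCS file: /cvsroot/proj/file.c,v'
    #   re_50: 'revision 1.42'
    #   re_60: 'date: 2008/02/08 10:22:33;  author: frank;  state: Exp;  lines: +2 -1;'
    #   re_70: 'branches: 1.42.2;'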

    file_added_re = re.compile(r'file [^/]+ was (initially )?added on branch')

    prefix = ''   # leading path to strip off what we get from CVS

    if directory is None:
        # Current working directory

        # Get the real directory in the repository
        try:
            prefix = open(os.path.join('CVS','Repository'), 'rb').read().strip()
            directory = prefix
            if prefix == ".":
                prefix = ""
        except IOError:
            raise logerror(_('not a CVS sandbox'))

        if prefix and not prefix.endswith(pycompat.ossep):
            prefix += pycompat.ossep

        # Use the Root file in the sandbox, if it exists
        try:
            root = open(os.path.join('CVS','Root'), 'rb').read().strip()
        except IOError:
            pass

    if not root:
        root = encoding.environ.get('CVSROOT', '')

    # read log cache if one exists
    oldlog = []
    date = None

    if cache:
        cachedir = os.path.expanduser('~/.hg.cvsps')
        if not os.path.exists(cachedir):
            os.mkdir(cachedir)

        # The cvsps cache pickle needs a uniquified name, based on the
        # repository location. The address may have all sorts of nasties
        # in it, slashes, colons and such. So here we take just the
        # alphanumeric characters, concatenated in a way that does not
        # mix up the various components, so that
        #     :pserver:user@server:/path
        # and
        #     /pserver/user/server/path
        # are mapped to different cache file names.
        cachefile = root.split(":") + [directory, "cache"]
        cachefile = ['-'.join(re.findall(br'\w+', s)) for s in cachefile if s]
        cachefile = os.path.join(cachedir,
                                 '.'.join([s for s in cachefile if s]))

    if cache == 'update':
        try:
            ui.note(_('reading cvs log cache %s\n') % cachefile)
            oldlog = pickle.load(open(cachefile, 'rb'))
            for e in oldlog:
                if not (util.safehasattr(e, 'branchpoints') and
                        util.safehasattr(e, 'commitid') and
                        util.safehasattr(e, 'mergepoint')):
                    ui.status(_('ignoring old cache\n'))
                    oldlog = []
                    break

            ui.note(_('cache has %d log entries\n') % len(oldlog))
        except Exception as e:
            ui.note(_('error reading cache: %r\n') % e)

        if oldlog:
            date = oldlog[-1].date    # last commit date as a (time,tz) tuple
            date = dateutil.datestr(date, '%Y/%m/%d %H:%M:%S %1%2')

    # build the CVS commandline
    cmd = ['cvs', '-q']
    if root:
        cmd.append('-d%s' % root)
        p = util.normpath(getrepopath(root))
        if not p.endswith('/'):
            p += '/'
        if prefix:
            # looks like normpath replaces "" by "."
            prefix = p + util.normpath(prefix)
        else:
            prefix = p
    cmd.append(['log', 'rlog'][rlog])
    if date:
        # no space between option and date string
        cmd.append('-d>%s' % date)
    cmd.append(directory)

    # state machine begins here
    tags = {}      # dictionary of revisions on current file with their tags
    branchmap = {} # mapping between branch names and revision numbers
    rcsmap = {}
    state = 0
    store = False  # set when a new record can be appended
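    # State overview (each state names the rlog line it expects next):
    #   0: scanning for 'RCS file:'           5: revision number
    #   1: 'Working file:' (log, not rlog)    6: date/author/state line
    #   2: 'symbolic names:'                  7: branches or comment start
    #   3: tag/branch number definitions      8: rest of the commit message
    #   4: '-----' separator before the first revision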

    cmd = [util.shellquote(arg) for arg in cmd]
    ui.note(_("running %s\n") % (' '.join(cmd)))
    ui.debug("prefix=%r directory=%r root=%r\n" % (prefix, directory, root))

    pfp = util.popen(' '.join(cmd))
    peek = pfp.readline()
    while True:
        line = peek
        if line == '':
            break
        peek = pfp.readline()
        if line.endswith('\n'):
            line = line[:-1]
        #ui.debug('state=%d line=%r\n' % (state, line))

        if state == 0:
            # initial state, consume input until we see 'RCS file'
            match = re_00.match(line)
            if match:
                rcs = match.group(1)
                tags = {}
                if rlog:
                    filename = util.normpath(rcs[:-2])
                    if filename.startswith(prefix):
                        filename = filename[len(prefix):]
                    if filename.startswith('/'):
                        filename = filename[1:]
                    if filename.startswith('Attic/'):
                        filename = filename[6:]
                    else:
                        filename = filename.replace('/Attic/', '/')
                    state = 2
                    continue
                state = 1
                continue
            match = re_01.match(line)
            if match:
                raise logerror(match.group(1))
            match = re_02.match(line)
            if match:
                raise logerror(match.group(2))
            if re_03.match(line):
                raise logerror(line)

        elif state == 1:
            # expect 'Working file' (only when using log instead of rlog)
            match = re_10.match(line)
            assert match, _('RCS file must be followed by working file')
            filename = util.normpath(match.group(1))
            state = 2

        elif state == 2:
            # expect 'symbolic names'
            if re_20.match(line):
                branchmap = {}
                state = 3

        elif state == 3:
            # read the symbolic names and store as tags
            match = re_30.match(line)
            if match:
                rev = [int(x) for x in match.group(2).split('.')]

                # Convert magic branch number to an odd-numbered one
                revn = len(rev)
                if revn > 3 and (revn % 2) == 0 and rev[-2] == 0:
                    rev = rev[:-2] + rev[-1:]
                rev = tuple(rev)
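                # e.g. the magic branch number 1.2.0.4 stored under
                # 'symbolic names' becomes the real branch revision 1.2.4.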

                if rev not in tags:
                    tags[rev] = []
                tags[rev].append(match.group(1))
                branchmap[match.group(1)] = match.group(2)

            elif re_31.match(line):
                state = 5
            elif re_32.match(line):
                state = 0

        elif state == 4:
            # expecting '------' separator before first revision
            if re_31.match(line):
                state = 5
            else:
                assert not re_32.match(line), _('must have at least '
                                                'some revisions')

        elif state == 5:
            # expecting revision number and possibly (ignored) lock indication
            # we create the logentry here from values stored in states 0 to 4,
            # as this state is re-entered for subsequent revisions of a file.
            match = re_50.match(line)
            assert match, _('expected revision number')
            e = logentry(rcs=scache(rcs),
                         file=scache(filename),
                         revision=tuple([int(x) for x in
                                         match.group(1).split('.')]),
                         branches=[],
                         parent=None,
                         commitid=None,
                         mergepoint=None,
                         branchpoints=set())

            state = 6

        elif state == 6:
            # expecting date, author, state, lines changed
            match = re_60.match(line)
            assert match, _('revision must be followed by date line')
            d = match.group(1)
            if d[2] == '/':
                # Y2K
                d = '19' + d
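                # e.g. an old-style date '97/05/12 10:22:33' is widened
                # to '1997/05/12 10:22:33' before parsing.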

            if len(d.split()) != 3:
                # cvs log dates always in GMT
                d = d + ' UTC'
            e.date = dateutil.parsedate(d, ['%y/%m/%d %H:%M:%S',
                                            '%Y/%m/%d %H:%M:%S',
                                            '%Y-%m-%d %H:%M:%S'])
            e.author = scache(match.group(2))
            e.dead = match.group(3).lower() == 'dead'

            if match.group(5):
                if match.group(6):
                    e.lines = (int(match.group(5)), int(match.group(6)))
                else:
                    e.lines = (int(match.group(5)), 0)
            elif match.group(6):
                e.lines = (0, int(match.group(6)))
            else:
                e.lines = None

            if match.group(7): # cvs 1.12 commitid
                e.commitid = match.group(8)

            if match.group(9): # cvsnt mergepoint
                myrev = match.group(10).split('.')
                if len(myrev) == 2: # head
                    e.mergepoint = 'HEAD'
                else:
                    myrev = '.'.join(myrev[:-2] + ['0', myrev[-2]])
                    branches = [b for b in branchmap if branchmap[b] == myrev]
                    assert len(branches) == 1, ('unknown branch: %s'
                                                % e.mergepoint)
                    e.mergepoint = branches[0]

            e.comment = []
            state = 7

        elif state == 7:
            # read the revision numbers of branches that start at this revision
            # or store the commit log message otherwise
            m = re_70.match(line)
            if m:
                e.branches = [tuple([int(y) for y in x.strip().split('.')])
                              for x in m.group(1).split(';')]
                state = 8
            elif re_31.match(line) and re_50.match(peek):
                state = 5
                store = True
            elif re_32.match(line):
                state = 0
                store = True
            else:
                e.comment.append(line)

        elif state == 8:
            # store commit log message
            if re_31.match(line):
                cpeek = peek
                if cpeek.endswith('\n'):
                    cpeek = cpeek[:-1]
                if re_50.match(cpeek):
                    state = 5
                    store = True
                else:
                    e.comment.append(line)
            elif re_32.match(line):
                state = 0
                store = True
            else:
                e.comment.append(line)

        # When a file is added on a branch B1, CVS creates a synthetic
        # dead trunk revision 1.1 so that the branch has a root.
        # Likewise, if you merge such a file to a later branch B2 (one
        # that already existed when the file was added on B1), CVS
        # creates a synthetic dead revision 1.1.x.1 on B2. Don't drop
        # these revisions now, but mark them synthetic so
        # createchangeset() can take care of them.
        if (store and
            e.dead and
            e.revision[-1] == 1 and # 1.1 or 1.1.x.1
            len(e.comment) == 1 and
            file_added_re.match(e.comment[0])):
            ui.debug('found synthetic revision in %s: %r\n'
                     % (e.rcs, e.comment[0]))
            e.synthetic = True

        if store:
            # clean up the results and save in the log.
            store = False
            e.tags = sorted([scache(x) for x in tags.get(e.revision, [])])
            e.comment = scache('\n'.join(e.comment))

            revn = len(e.revision)
            if revn > 3 and (revn % 2) == 0:
                e.branch = tags.get(e.revision[:-1], [None])[0]
            else:
                e.branch = None

            # find the branches starting from this revision
            branchpoints = set()
            for branch, revision in branchmap.iteritems():
                revparts = tuple([int(i) for i in revision.split('.')])
                if len(revparts) < 2: # bad tags
                    continue
                if revparts[-2] == 0 and revparts[-1] % 2 == 0:
                    # normal branch
                    if revparts[:-2] == e.revision:
                        branchpoints.add(branch)
                elif revparts == (1, 1, 1): # vendor branch
                    if revparts in e.branches:
                        branchpoints.add(branch)
            e.branchpoints = branchpoints

            log.append(e)

            rcsmap[e.rcs.replace('/Attic/', '/')] = e.rcs

            if len(log) % 100 == 0:
-                ui.status(util.ellipsis('%d %s' % (len(log), e.file), 80)+'\n')
+                ui.status(stringutil.ellipsis('%d %s' % (len(log), e.file), 80)
+                          + '\n')

    log.sort(key=lambda x: (x.rcs, x.revision))

    # find parent revisions of individual files
    versions = {}
    for e in sorted(oldlog, key=lambda x: (x.rcs, x.revision)):
        rcs = e.rcs.replace('/Attic/', '/')
        if rcs in rcsmap:
            e.rcs = rcsmap[rcs]
        branch = e.revision[:-1]
        versions[(e.rcs, branch)] = e.revision

    for e in log:
        branch = e.revision[:-1]
        p = versions.get((e.rcs, branch), None)
        if p is None:
            p = e.revision[:-2]
        e.parent = p
        versions[(e.rcs, branch)] = e.revision

    # update the log cache
    if cache:
        if log:
            # join up the old and new logs
            log.sort(key=lambda x: x.date)

            if oldlog and oldlog[-1].date >= log[0].date:
                raise logerror(_('log cache overlaps with new log entries,'
                                 ' re-run without cache.'))

            log = oldlog + log

            # write the new cachefile
            ui.note(_('writing cvs log cache %s\n') % cachefile)
            pickle.dump(log, open(cachefile, 'wb'))
        else:
            log = oldlog

    ui.status(_('%d log entries\n') % len(log))

    encodings = ui.configlist('convert', 'cvsps.logencoding')
    if encodings:
        def revstr(r):
            # this is needed, because logentry.revision is a tuple of "int"
            # (e.g. (1, 2) for "1.2")
            return '.'.join(pycompat.maplist(pycompat.bytestr, r))

        for entry in log:
            comment = entry.comment
            for e in encodings:
                try:
                    entry.comment = comment.decode(e).encode('utf-8')
                    if ui.debugflag:
                        ui.debug("transcoding by %s: %s of %s\n" %
                                 (e, revstr(entry.revision), entry.file))
                    break
                except UnicodeDecodeError:
                    pass # try next encoding
                except LookupError as inst: # unknown encoding, maybe
                    raise error.Abort(inst,
                                      hint=_('check convert.cvsps.logencoding'
                                             ' configuration'))
            else:
                raise error.Abort(_("no encoding can transcode"
                                    " CVS log message for %s of %s")
                                  % (revstr(entry.revision), entry.file),
                                  hint=_('check convert.cvsps.logencoding'
                                         ' configuration'))

    hook.hook(ui, None, "cvslog", True, log=log)

    return log


class changeset(object):
    '''Class changeset has the following attributes:
        .id - integer identifying this changeset (list index)
        .author - author name as CVS knows it
        .branch - name of branch this changeset is on, or None
        .comment - commit message
        .commitid - CVS commitid or None
        .date - the commit date as a (time,tz) tuple
        .entries - list of logentry objects in this changeset
        .parents - list of one or two parent changesets
        .tags - list of tags on this changeset
        .synthetic - from synthetic revision "file ... added on branch ..."
        .mergepoint - the branch that has been merged from or None
        .branchpoints - the branches that start at the current entry or empty
    '''
    def __init__(self, **entries):
        self.id = None
        self.synthetic = False
        self.__dict__.update(entries)

    def __repr__(self):
        items = ("%s=%r" % (k, self.__dict__[k]) for k in sorted(self.__dict__))
        return "%s(%s)" % (type(self).__name__, ", ".join(items))

def createchangeset(ui, log, fuzz=60, mergefrom=None, mergeto=None):
    '''Convert log into changesets.'''

    ui.status(_('creating changesets\n'))

    # try to order commitids by date
    mindate = {}
    for e in log:
        if e.commitid:
            mindate[e.commitid] = min(e.date, mindate.get(e.commitid))

    # Merge changesets
    log.sort(key=lambda x: (mindate.get(x.commitid), x.commitid, x.comment,
                            x.author, x.branch, x.date, x.branchpoints))

569 changesets = []
573 changesets = []
570 files = set()
574 files = set()
571 c = None
575 c = None
572 for i, e in enumerate(log):
576 for i, e in enumerate(log):
573
577
574 # Check if log entry belongs to the current changeset or not.
578 # Check if log entry belongs to the current changeset or not.
575
579
576 # Since CVS is file-centric, two different file revisions with
580 # Since CVS is file-centric, two different file revisions with
577 # different branchpoints should be treated as belonging to two
581 # different branchpoints should be treated as belonging to two
578 # different changesets (and the ordering is important and not
582 # different changesets (and the ordering is important and not
579 # honoured by cvsps at this point).
583 # honoured by cvsps at this point).
580 #
584 #
581 # Consider the following case:
585 # Consider the following case:
582 # foo 1.1 branchpoints: [MYBRANCH]
586 # foo 1.1 branchpoints: [MYBRANCH]
583 # bar 1.1 branchpoints: [MYBRANCH, MYBRANCH2]
587 # bar 1.1 branchpoints: [MYBRANCH, MYBRANCH2]
584 #
588 #
585 # Here foo is part only of MYBRANCH, but not MYBRANCH2, e.g. a
589 # Here foo is part only of MYBRANCH, but not MYBRANCH2, e.g. a
586 # later version of foo may be in MYBRANCH2, so foo should be the
590 # later version of foo may be in MYBRANCH2, so foo should be the
587 # first changeset and bar the next and MYBRANCH and MYBRANCH2
591 # first changeset and bar the next and MYBRANCH and MYBRANCH2
588 # should both start off of the bar changeset. No provisions are
592 # should both start off of the bar changeset. No provisions are
589 # made to ensure that this is, in fact, what happens.
593 # made to ensure that this is, in fact, what happens.
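        # For example, with the default fuzz of 60 seconds, two entries
        # by the same author on the same branch with identical comments
        # and dates 45 seconds apart coalesce into one changeset, while
        # the same pair 90 seconds apart starts a new changeset.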
        if not (c and e.branchpoints == c.branchpoints and
                (# cvs commitids
                 (e.commitid is not None and e.commitid == c.commitid) or
                 (# no commitids, use fuzzy commit detection
                  (e.commitid is None or c.commitid is None) and
                  e.comment == c.comment and
                  e.author == c.author and
                  e.branch == c.branch and
                  ((c.date[0] + c.date[1]) <=
                   (e.date[0] + e.date[1]) <=
                   (c.date[0] + c.date[1]) + fuzz) and
                  e.file not in files))):
            c = changeset(comment=e.comment, author=e.author,
                          branch=e.branch, date=e.date,
                          entries=[], mergepoint=e.mergepoint,
                          branchpoints=e.branchpoints, commitid=e.commitid)
            changesets.append(c)

            files = set()
            if len(changesets) % 100 == 0:
                t = '%d %s' % (len(changesets), repr(e.comment)[1:-1])
-                ui.status(util.ellipsis(t, 80) + '\n')
+                ui.status(stringutil.ellipsis(t, 80) + '\n')

        c.entries.append(e)
        files.add(e.file)
        c.date = e.date # changeset date is date of latest commit in it

    # Mark synthetic changesets

    for c in changesets:
        # Synthetic revisions always get their own changeset, because
        # the log message includes the filename. E.g. if you add file3
        # and file4 on a branch, you get four log entries and three
        # changesets:
        #   "File file3 was added on branch ..." (synthetic, 1 entry)
        #   "File file4 was added on branch ..." (synthetic, 1 entry)
        #   "Add file3 and file4 to fix ..."     (real, 2 entries)
        # Hence the check for 1 entry here.
        c.synthetic = len(c.entries) == 1 and c.entries[0].synthetic

    # Sort files in each changeset

    def entitycompare(l, r):
        'Mimic cvsps sorting order'
        l = l.file.split('/')
        r = r.file.split('/')
        nl = len(l)
        nr = len(r)
        n = min(nl, nr)
        for i in range(n):
            if i + 1 == nl and nl < nr:
                return -1
            elif i + 1 == nr and nl > nr:
                return +1
            elif l[i] < r[i]:
                return -1
            elif l[i] > r[i]:
                return +1
        return 0

    for c in changesets:
        c.entries.sort(entitycompare)
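    # (entitycompare is an old-style cmp() comparator returning -1/0/+1,
    # passed positionally to list.sort(); this calling convention only
    # exists on Python 2.)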

    # Sort changesets by date

    odd = set()
    def cscmp(l, r):
        d = sum(l.date) - sum(r.date)
        if d:
            return d

        # detect vendor branches and initial commits on a branch
        le = {}
        for e in l.entries:
            le[e.rcs] = e.revision
        re = {}
        for e in r.entries:
            re[e.rcs] = e.revision

        d = 0
        for e in l.entries:
            if re.get(e.rcs, None) == e.parent:
                assert not d
                d = 1
                break

        for e in r.entries:
            if le.get(e.rcs, None) == e.parent:
                if d:
                    odd.add((l, r))
                d = -1
                break
        # By this point, the changesets are sufficiently compared that
        # we don't really care about ordering. However, this leaves
        # some race conditions in the tests, so we compare on the
        # number of files modified, the files contained in each
        # changeset, and the branchpoints in the change to ensure test
        # output remains stable.

        # recommended replacement for cmp from
        # https://docs.python.org/3.0/whatsnew/3.0.html
        c = lambda x, y: (x > y) - (x < y)
        # Sort bigger changes first.
        if not d:
            d = c(len(l.entries), len(r.entries))
        # Try sorting by filename in the change.
        if not d:
            d = c([e.file for e in l.entries], [e.file for e in r.entries])
        # Try and put changes without a branch point before ones with
        # a branch point.
        if not d:
            d = c(len(l.branchpoints), len(r.branchpoints))
        return d

    changesets.sort(cscmp)

    # Collect tags

    globaltags = {}
    for c in changesets:
        for e in c.entries:
            for tag in e.tags:
                # remember which is the latest changeset to have this tag
                globaltags[tag] = c

    for c in changesets:
        tags = set()
        for e in c.entries:
            tags.update(e.tags)
        # remember tags only if this is the latest changeset to have it
        c.tags = sorted(tag for tag in tags if globaltags[tag] is c)
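        # e.g. if tag RELEASE_1 appears on changesets 3 and 7, only
        # changeset 7 keeps it, matching CVS semantics where a tag names
        # a single revision per file.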

    # Find parent changesets, handle {{mergetobranch BRANCHNAME}}
    # by inserting dummy changesets with two parents, and handle
    # {{mergefrombranch BRANCHNAME}} by setting two parents.

    if mergeto is None:
        mergeto = r'{{mergetobranch ([-\w]+)}}'
    if mergeto:
        mergeto = re.compile(mergeto)

    if mergefrom is None:
        mergefrom = r'{{mergefrombranch ([-\w]+)}}'
    if mergefrom:
        mergefrom = re.compile(mergefrom)

    versions = {} # changeset index where we saw any particular file version
    branches = {} # changeset index where we saw a branch
    n = len(changesets)
    i = 0
    while i < n:
        c = changesets[i]

        for f in c.entries:
            versions[(f.rcs, f.revision)] = i

        p = None
        if c.branch in branches:
            p = branches[c.branch]
        else:
            # first changeset on a new branch
            # the parent is a changeset with the branch in its
            # branchpoints such that it is the latest possible
            # commit without any intervening, unrelated commits.

            for candidate in xrange(i):
                if c.branch not in changesets[candidate].branchpoints:
                    if p is not None:
                        break
                    continue
                p = candidate

        c.parents = []
        if p is not None:
            p = changesets[p]

            # Ensure no changeset has a synthetic changeset as a parent.
            while p.synthetic:
                assert len(p.parents) <= 1, \
                       _('synthetic changeset cannot have multiple parents')
                if p.parents:
                    p = p.parents[0]
                else:
                    p = None
                    break

            if p is not None:
                c.parents.append(p)

        if c.mergepoint:
            if c.mergepoint == 'HEAD':
                c.mergepoint = None
            c.parents.append(changesets[branches[c.mergepoint]])

        if mergefrom:
            m = mergefrom.search(c.comment)
            if m:
                m = m.group(1)
                if m == 'HEAD':
                    m = None
                try:
                    candidate = changesets[branches[m]]
                except KeyError:
                    ui.warn(_("warning: CVS commit message references "
                              "non-existent branch %r:\n%s\n")
                            % (m, c.comment))
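                # ('candidate' is unbound only when the KeyError above
                # fired, in which case 'm in branches' is False and the
                # condition below short-circuits before evaluating it.)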
                if m in branches and c.branch != m and not candidate.synthetic:
                    c.parents.append(candidate)

        if mergeto:
            m = mergeto.search(c.comment)
            if m:
                if m.groups():
                    m = m.group(1)
                    if m == 'HEAD':
                        m = None
                else:
                    m = None # if no group found then merge to HEAD
                if m in branches and c.branch != m:
                    # insert empty changeset for merge
                    cc = changeset(
                        author=c.author, branch=m, date=c.date,
                        comment='convert-repo: CVS merge from branch %s'
                        % c.branch,
                        entries=[], tags=[],
                        parents=[changesets[branches[m]], c])
                    changesets.insert(i + 1, cc)
                    branches[m] = i + 1

                    # adjust our loop counters now we have inserted a new entry
                    n += 1
                    i += 2
                    continue

        branches[c.branch] = i
        i += 1

    # Drop synthetic changesets (safe now that we have ensured no other
    # changesets can have them as parents).
    i = 0
    while i < len(changesets):
        if changesets[i].synthetic:
            del changesets[i]
        else:
            i += 1

    # Number changesets

    for i, c in enumerate(changesets):
        c.id = i + 1

    if odd:
        for l, r in odd:
            if l.id is not None and r.id is not None:
                ui.warn(_('changeset %d is both before and after %d\n')
                        % (l.id, r.id))

    ui.status(_('%d changeset entries\n') % len(changesets))

    hook.hook(ui, None, "cvschangesets", True, changesets=changesets)

    return changesets


def debugcvsps(ui, *args, **opts):
    '''Read CVS rlog for current directory or named path in
    repository, and convert the log to changesets based on matching
    commit log entries and dates.
    '''
    opts = pycompat.byteskwargs(opts)
    if opts["new_cache"]:
        cache = "write"
    elif opts["update_cache"]:
        cache = "update"
    else:
        cache = None

    revisions = opts["revisions"]

    try:
        if args:
            log = []
            for d in args:
                log += createlog(ui, d, root=opts["root"], cache=cache)
        else:
            log = createlog(ui, root=opts["root"], cache=cache)
    except logerror as e:
        ui.write("%r\n"%e)
        return

    changesets = createchangeset(ui, log, opts["fuzz"])
    del log

    # Print changesets (optionally filtered)

    off = len(revisions)
    branches = {}  # latest version number in each branch
    ancestors = {}  # parent branch
    for cs in changesets:

        if opts["ancestors"]:
            if cs.branch not in branches and cs.parents and cs.parents[0].id:
                ancestors[cs.branch] = (changesets[cs.parents[0].id - 1].branch,
                                        cs.parents[0].id)
            branches[cs.branch] = cs.id

        # limit by branches
        if opts["branches"] and (cs.branch or 'HEAD') not in opts["branches"]:
            continue

        if not off:
            # Note: trailing spaces on several lines here are needed to have
            #       bug-for-bug compatibility with cvsps.
            ui.write('---------------------\n')
            ui.write(('PatchSet %d \n' % cs.id))
            ui.write(('Date: %s\n' % dateutil.datestr(cs.date,
                                                      '%Y/%m/%d %H:%M:%S %1%2')))
            ui.write(('Author: %s\n' % cs.author))
            ui.write(('Branch: %s\n' % (cs.branch or 'HEAD')))
            ui.write(('Tag%s: %s \n' % (['', 's'][len(cs.tags) > 1],
                                        ','.join(cs.tags) or '(none)')))
            if cs.branchpoints:
                ui.write(('Branchpoints: %s \n') %
                         ', '.join(sorted(cs.branchpoints)))
            if opts["parents"] and cs.parents:
                if len(cs.parents) > 1:
                    ui.write(('Parents: %s\n' %
                              (','.join([str(p.id) for p in cs.parents]))))
                else:
                    ui.write(('Parent: %d\n' % cs.parents[0].id))

            if opts["ancestors"]:
                b = cs.branch
                r = []
                while b:
                    b, c = ancestors[b]
                    r.append('%s:%d:%d' % (b or "HEAD", c, branches[b]))
                if r:
                    ui.write(('Ancestors: %s\n' % (','.join(r))))

            ui.write(('Log:\n'))
            ui.write('%s\n\n' % cs.comment)
            ui.write(('Members: \n'))
            for f in cs.entries:
                fn = f.file
                if fn.startswith(opts["prefix"]):
                    fn = fn[len(opts["prefix"]):]
                ui.write('\t%s:%s->%s%s \n' % (
                    fn, '.'.join([str(x) for x in f.parent]) or 'INITIAL',
                    '.'.join([str(x) for x in f.revision]),
                    ['', '(DEAD)'][f.dead]))
            ui.write('\n')

        # have we seen the start tag?
        if revisions and off:
            if revisions[0] == str(cs.id) or \
                    revisions[0] in cs.tags:
                off = False

        # see if we reached the end tag
        if len(revisions) > 1 and not off:
            if revisions[1] == str(cs.id) or \
                    revisions[1] in cs.tags:
                break
--- a/hgext/convert/p4.py
+++ b/hgext/convert/p4.py
@@ -1,374 +1,377 @@
# Perforce source for convert extension.
#
# Copyright 2009, Frank Kingswood <frank@kingswood-consulting.co.uk>
#
# This software may be used and distributed according to the terms of the
# GNU General Public License version 2 or any later version.
from __future__ import absolute_import

import marshal
import re

from mercurial.i18n import _
from mercurial import (
    error,
    util,
)
-from mercurial.utils import dateutil
+from mercurial.utils import (
+    dateutil,
+    stringutil,
+)

from . import common

def loaditer(f):
    "Yield the dictionary objects generated by p4"
    try:
        while True:
            d = marshal.load(f)
            if not d:
                break
            yield d
    except EOFError:
        pass

def decodefilename(filename):
    """Perforce escapes special characters @, #, *, or %
    with %40, %23, %2A, or %25 respectively

    >>> decodefilename(b'portable-net45%252Bnetcore45%252Bwp8%252BMonoAndroid')
    'portable-net45%2Bnetcore45%2Bwp8%2BMonoAndroid'
    >>> decodefilename(b'//Depot/Directory/%2525/%2523/%23%40.%2A')
    '//Depot/Directory/%25/%23/#@.*'
    """
    replacements = [('%2A', '*'), ('%23', '#'), ('%40', '@'), ('%25', '%')]
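    # ('%25' is decoded last so that escaped escapes collapse one level
    # per pass: '%252A' becomes '%2A' rather than '*'.)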
    for k, v in replacements:
        filename = filename.replace(k, v)
    return filename

class p4_source(common.converter_source):
    def __init__(self, ui, repotype, path, revs=None):
        # avoid import cycle
        from . import convcmd

        super(p4_source, self).__init__(ui, repotype, path, revs=revs)

        if "/" in path and not path.startswith('//'):
            raise common.NoRepo(_('%s does not look like a P4 repository') %
                                path)

        common.checktool('p4', abort=False)

        self.revmap = {}
        self.encoding = self.ui.config('convert', 'p4.encoding',
                                       convcmd.orig_encoding)
        self.re_type = re.compile(
            "([a-z]+)?(text|binary|symlink|apple|resource|unicode|utf\d+)"
            "(\+\w+)?$")
        self.re_keywords = re.compile(
            r"\$(Id|Header|Date|DateTime|Change|File|Revision|Author)"
            r":[^$\n]*\$")
        self.re_keywords_old = re.compile("\$(Id|Header):[^$\n]*\$")

        if revs and len(revs) > 1:
            raise error.Abort(_("p4 source does not support specifying "
                                "multiple revisions"))

    def setrevmap(self, revmap):
        """Sets the parsed revmap dictionary.

        Revmap stores mappings from a source revision to a target revision.
        It is set in convertcmd.convert and provided by the user as a file
        on the commandline.

        Revisions in the map are considered being present in the
        repository and ignored during _parse(). This allows for incremental
        imports if a revmap is provided.
        """
        self.revmap = revmap

    def _parse_view(self, path):
        "Read changes affecting the path"
        cmd = 'p4 -G changes -s submitted %s' % util.shellquote(path)
        stdout = util.popen(cmd, mode='rb')
        p4changes = {}
        for d in loaditer(stdout):
            c = d.get("change", None)
            if c:
                p4changes[c] = True
        return p4changes

    def _parse(self, ui, path):
        "Prepare list of P4 filenames and revisions to import"
        p4changes = {}
        changeset = {}
        files_map = {}
        copies_map = {}
        localname = {}
        depotname = {}
        heads = []

        ui.status(_('reading p4 views\n'))

        # read client spec or view
        if "/" in path:
            p4changes.update(self._parse_view(path))
            if path.startswith("//") and path.endswith("/..."):
                views = {path[:-3]:""}
            else:
                views = {"//": ""}
        else:
            cmd = 'p4 -G client -o %s' % util.shellquote(path)
            clientspec = marshal.load(util.popen(cmd, mode='rb'))

            views = {}
            for client in clientspec:
                if client.startswith("View"):
                    sview, cview = clientspec[client].split()
                    p4changes.update(self._parse_view(sview))
                    if sview.endswith("...") and cview.endswith("..."):
                        sview = sview[:-3]
                        cview = cview[:-3]
                    cview = cview[2:]
                    cview = cview[cview.find("/") + 1:]
                    views[sview] = cview

        # list of changes that affect our source files
        p4changes = p4changes.keys()
        p4changes.sort(key=int)

        # list with depot pathnames, longest first
        vieworder = views.keys()
        vieworder.sort(key=len, reverse=True)
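        # (dict.keys() returning a list that can be sorted in place is
        # Python 2 behaviour; on Python 3 these would be views.)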

        # handle revision limiting
        startrev = self.ui.config('convert', 'p4.startrev')

        # now read the full changelists to get the list of file revisions
        ui.status(_('collecting p4 changelists\n'))
        lastid = None
        for change in p4changes:
            if startrev and int(change) < int(startrev):
                continue
            if self.revs and int(change) > int(self.revs[0]):
                continue
            if change in self.revmap:
                # Ignore already present revisions, but set the parent pointer.
                lastid = change
                continue

            if lastid:
                parents = [lastid]
            else:
                parents = []

            d = self._fetch_revision(change)
            c = self._construct_commit(d, parents)

            descarr = c.desc.splitlines(True)
            if len(descarr) > 0:
                shortdesc = descarr[0].rstrip('\r\n')
            else:
                shortdesc = '**empty changelist description**'

            t = '%s %s' % (c.rev, repr(shortdesc)[1:-1])
-            ui.status(util.ellipsis(t, 80) + '\n')
+            ui.status(stringutil.ellipsis(t, 80) + '\n')

            files = []
            copies = {}
            copiedfiles = []
            i = 0
            while ("depotFile%d" % i) in d and ("rev%d" % i) in d:
                oldname = d["depotFile%d" % i]
                filename = None
                for v in vieworder:
                    if oldname.lower().startswith(v.lower()):
                        filename = decodefilename(views[v] + oldname[len(v):])
                        break
                if filename:
                    files.append((filename, d["rev%d" % i]))
                    depotname[filename] = oldname
                    if (d.get("action%d" % i) == "move/add"):
                        copiedfiles.append(filename)
                    localname[oldname] = filename
                i += 1

            # Collect information about copied files
            for filename in copiedfiles:
                oldname = depotname[filename]

                flcmd = 'p4 -G filelog %s' \
                      % util.shellquote(oldname)
                flstdout = util.popen(flcmd, mode='rb')

                copiedfilename = None
                for d in loaditer(flstdout):
                    copiedoldname = None

                    i = 0
                    while ("change%d" % i) in d:
                        if (d["change%d" % i] == change and
                            d["action%d" % i] == "move/add"):
                            j = 0
                            while ("file%d,%d" % (i, j)) in d:
                                if d["how%d,%d" % (i, j)] == "moved from":
                                    copiedoldname = d["file%d,%d" % (i, j)]
                                    break
                                j += 1
                        i += 1

                    if copiedoldname and copiedoldname in localname:
                        copiedfilename = localname[copiedoldname]
                        break

                if copiedfilename:
                    copies[filename] = copiedfilename
                else:
                    ui.warn(_("cannot find source for copied file: %s@%s\n")
                            % (filename, change))

            changeset[change] = c
            files_map[change] = files
            copies_map[change] = copies
            lastid = change

        if lastid and len(changeset) > 0:
            heads = [lastid]

        return {
            'changeset': changeset,
            'files': files_map,
            'copies': copies_map,
            'heads': heads,
            'depotname': depotname,
        }

    @util.propertycache
    def _parse_once(self):
        return self._parse(self.ui, self.path)

    @util.propertycache
    def copies(self):
        return self._parse_once['copies']

    @util.propertycache
    def files(self):
        return self._parse_once['files']

    @util.propertycache
    def changeset(self):
        return self._parse_once['changeset']

    @util.propertycache
    def heads(self):
        return self._parse_once['heads']

    @util.propertycache
    def depotname(self):
        return self._parse_once['depotname']

    def getheads(self):
        return self.heads

    def getfile(self, name, rev):
        cmd = 'p4 -G print %s' \
              % util.shellquote("%s#%s" % (self.depotname[name], rev))

        lasterror = None
        while True:
            stdout = util.popen(cmd, mode='rb')

            mode = None
            contents = []
            keywords = None

            for d in loaditer(stdout):
                code = d["code"]
                data = d.get("data")

                if code == "error":
                    # if this is the first time error happened
                    # re-attempt getting the file
                    if not lasterror:
                        lasterror = IOError(d["generic"], data)
                        # this will exit inner-most for-loop
                        break
                    else:
                        raise lasterror

                elif code == "stat":
                    action = d.get("action")
                    if action in ["purge", "delete", "move/delete"]:
                        return None, None
                    p4type = self.re_type.match(d["type"])
                    if p4type:
                        mode = ""
                        flags = ((p4type.group(1) or "")
                               + (p4type.group(3) or ""))
                        if "x" in flags:
                            mode = "x"
                        if p4type.group(2) == "symlink":
                            mode = "l"
                        if "ko" in flags:
                            keywords = self.re_keywords_old
                        elif "k" in flags:
                            keywords = self.re_keywords

                elif code == "text" or code == "binary":
                    contents.append(data)

                lasterror = None

            if not lasterror:
                break

        if mode is None:
            return None, None

        contents = ''.join(contents)

        if keywords:
            contents = keywords.sub("$\\1$", contents)
        if mode == "l" and contents.endswith("\n"):
            contents = contents[:-1]

        return contents, mode

    def getchanges(self, rev, full):
        if full:
            raise error.Abort(_("convert from p4 does not support --full"))
        return self.files[rev], self.copies[rev], set()

    def _construct_commit(self, obj, parents=None):
        """
        Constructs a common.commit object from an unmarshalled
        `p4 describe` output
        """
        desc = self.recode(obj.get("desc", ""))
        date = (int(obj["time"]), 0)     # timezone not set
        if parents is None:
            parents = []

        return common.commit(author=self.recode(obj["user"]),
            date=dateutil.datestr(date, '%Y-%m-%d %H:%M:%S %1%2'),
            parents=parents, desc=desc, branch=None, rev=obj['change'],
            extra={"p4": obj['change'], "convert_revision": obj['change']})

    def _fetch_revision(self, rev):
        """Return an output of `p4 describe` including author, commit date as
        a dictionary."""
        cmd = "p4 -G describe -s %s" % rev
        stdout = util.popen(cmd, mode='rb')
        return marshal.load(stdout)

    def getcommit(self, rev):
        if rev in self.changeset:
            return self.changeset[rev]
        elif rev in self.revmap:
            d = self._fetch_revision(rev)
            return self._construct_commit(d, parents=None)
        raise error.Abort(
            _("cannot find %s in the revmap or parsed changesets") % rev)

    def gettags(self):
        return {}

    def getchangedfiles(self, rev, i):
        return sorted([x[0] for x in self.files[rev]])
--- a/hgext/convert/svn.py
+++ b/hgext/convert/svn.py
@@ -1,1357 +1,1360 @@
# Subversion 1.4/1.5 Python API backend
#
# Copyright(C) 2007 Daniel Holth et al
from __future__ import absolute_import

import os
import re
import tempfile
import xml.dom.minidom

from mercurial.i18n import _
from mercurial import (
    encoding,
    error,
    pycompat,
    util,
    vfs as vfsmod,
)
-from mercurial.utils import dateutil
+from mercurial.utils import (
+    dateutil,
+    stringutil,
+)

from . import common

pickle = util.pickle
stringio = util.stringio
propertycache = util.propertycache
urlerr = util.urlerr
urlreq = util.urlreq

commandline = common.commandline
commit = common.commit
converter_sink = common.converter_sink
converter_source = common.converter_source
decodeargs = common.decodeargs
encodeargs = common.encodeargs
makedatetimestamp = common.makedatetimestamp
mapfile = common.mapfile
MissingTool = common.MissingTool
NoRepo = common.NoRepo

# Subversion stuff. Works best with very recent Python SVN bindings
# e.g. SVN 1.5 or backports. Thanks to the bzr folks for enhancing
# these bindings.

try:
    import svn
    import svn.client
    import svn.core
    import svn.ra
    import svn.delta
    from . import transport
    import warnings
    warnings.filterwarnings('ignore',
            module='svn.core',
            category=DeprecationWarning)
    svn.core.SubversionException # trigger import to catch error

except ImportError:
    svn = None

class SvnPathNotFound(Exception):
    pass

def revsplit(rev):
    """Parse a revision string and return (uuid, path, revnum).
    >>> revsplit(b'svn:a2147622-4a9f-4db4-a8d3-13562ff547b2'
    ...          b'/proj%20B/mytrunk/mytrunk@1')
    ('a2147622-4a9f-4db4-a8d3-13562ff547b2', '/proj%20B/mytrunk/mytrunk', 1)
    >>> revsplit(b'svn:8af66a51-67f5-4354-b62c-98d67cc7be1d@1')
    ('', '', 1)
    >>> revsplit(b'@7')
    ('', '', 7)
    >>> revsplit(b'7')
    ('', '', 0)
    >>> revsplit(b'bad')
    ('', '', 0)
    """
    parts = rev.rsplit('@', 1)
    revnum = 0
    if len(parts) > 1:
        revnum = int(parts[1])
    parts = parts[0].split('/', 1)
    uuid = ''
    mod = ''
    if len(parts) > 1 and parts[0].startswith('svn:'):
        uuid = parts[0][4:]
        mod = '/' + parts[1]
    return uuid, mod, revnum

def quote(s):
    # As of svn 1.7, many svn calls expect "canonical" paths. In
    # theory, we should call svn.core.*canonicalize() on all paths
    # before passing them to the API. Instead, we assume the base url
    # is canonical and copy the behaviour of svn URL encoding function
    # so we can extend it safely with new components. The "safe"
    # characters were taken from the "svn_uri__char_validity" table in
    # libsvn_subr/path.c.
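    # e.g. quote('/proj B/trunk') -> '/proj%20B/trunk'; the characters
    # in the safe set below pass through unescaped.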
    return urlreq.quote(s, "!$&'()*+,-./:=@_~")

def geturl(path):
    try:
        return svn.client.url_from_path(svn.core.svn_path_canonicalize(path))
    except svn.core.SubversionException:
        # svn.client.url_from_path() fails with local repositories
        pass
    if os.path.isdir(path):
        path = os.path.normpath(os.path.abspath(path))
        if pycompat.iswindows:
            path = '/' + util.normpath(path)
        # Module URL is later compared with the repository URL returned
        # by svn API, which is UTF-8.
        path = encoding.tolocal(path)
        path = 'file://%s' % quote(path)
    return svn.core.svn_path_canonicalize(path)

def optrev(number):
    optrev = svn.core.svn_opt_revision_t()
    optrev.kind = svn.core.svn_opt_revision_number
    optrev.value.number = number
    return optrev

class changedpath(object):
    def __init__(self, p):
        self.copyfrom_path = p.copyfrom_path
        self.copyfrom_rev = p.copyfrom_rev
        self.action = p.action

def get_log_child(fp, url, paths, start, end, limit=0,
                  discover_changed_paths=True, strict_node_history=False):
    protocol = -1
    def receiver(orig_paths, revnum, author, date, message, pool):
        paths = {}
        if orig_paths is not None:
            for k, v in orig_paths.iteritems():
                paths[k] = changedpath(v)
        pickle.dump((paths, revnum, author, date, message),
                    fp, protocol)

    try:
        # Use an ra of our own so that our parent can consume
        # our results without confusing the server.
        t = transport.SvnRaTransport(url=url)
        svn.ra.get_log(t.ra, paths, start, end, limit,
                       discover_changed_paths,
                       strict_node_history,
                       receiver)
    except IOError:
        # Caller may interrupt the iteration
        pickle.dump(None, fp, protocol)
    except Exception as inst:
-        pickle.dump(util.forcebytestr(inst), fp, protocol)
+        pickle.dump(stringutil.forcebytestr(inst), fp, protocol)
    else:
        pickle.dump(None, fp, protocol)
    fp.flush()
    # With large history, cleanup process goes crazy and suddenly
    # consumes *huge* amount of memory. The output file being closed,
    # there is no need for clean termination.
    os._exit(0)
159 def debugsvnlog(ui, **opts):
162 def debugsvnlog(ui, **opts):
160 """Fetch SVN log in a subprocess and channel them back to parent to
163 """Fetch SVN log in a subprocess and channel them back to parent to
161 avoid memory collection issues.
164 avoid memory collection issues.
162 """
165 """
163 if svn is None:
166 if svn is None:
164 raise error.Abort(_('debugsvnlog could not load Subversion python '
167 raise error.Abort(_('debugsvnlog could not load Subversion python '
165 'bindings'))
168 'bindings'))
166
169
167 args = decodeargs(ui.fin.read())
170 args = decodeargs(ui.fin.read())
168 get_log_child(ui.fout, *args)
171 get_log_child(ui.fout, *args)
169
172
170 class logstream(object):
173 class logstream(object):
171 """Interruptible revision log iterator."""
174 """Interruptible revision log iterator."""
172 def __init__(self, stdout):
175 def __init__(self, stdout):
173 self._stdout = stdout
176 self._stdout = stdout
174
177
175 def __iter__(self):
178 def __iter__(self):
176 while True:
179 while True:
177 try:
180 try:
178 entry = pickle.load(self._stdout)
181 entry = pickle.load(self._stdout)
179 except EOFError:
182 except EOFError:
180 raise error.Abort(_('Mercurial failed to run itself, check'
183 raise error.Abort(_('Mercurial failed to run itself, check'
181 ' hg executable is in PATH'))
184 ' hg executable is in PATH'))
182 try:
185 try:
183 orig_paths, revnum, author, date, message = entry
186 orig_paths, revnum, author, date, message = entry
184 except (TypeError, ValueError):
187 except (TypeError, ValueError):
185 if entry is None:
188 if entry is None:
186 break
189 break
187 raise error.Abort(_("log stream exception '%s'") % entry)
190 raise error.Abort(_("log stream exception '%s'") % entry)
188 yield entry
191 yield entry
189
192
190 def close(self):
193 def close(self):
191 if self._stdout:
194 if self._stdout:
192 self._stdout.close()
195 self._stdout.close()
193 self._stdout = None
196 self._stdout = None
194
197
195 class directlogstream(list):
198 class directlogstream(list):
196 """Direct revision log iterator.
199 """Direct revision log iterator.
197 This can be used for debugging and development but it will probably leak
200 This can be used for debugging and development but it will probably leak
198 memory and is not suitable for real conversions."""
201 memory and is not suitable for real conversions."""
199 def __init__(self, url, paths, start, end, limit=0,
202 def __init__(self, url, paths, start, end, limit=0,
200 discover_changed_paths=True, strict_node_history=False):
203 discover_changed_paths=True, strict_node_history=False):
201
204
202 def receiver(orig_paths, revnum, author, date, message, pool):
205 def receiver(orig_paths, revnum, author, date, message, pool):
203 paths = {}
206 paths = {}
204 if orig_paths is not None:
207 if orig_paths is not None:
205 for k, v in orig_paths.iteritems():
208 for k, v in orig_paths.iteritems():
206 paths[k] = changedpath(v)
209 paths[k] = changedpath(v)
207 self.append((paths, revnum, author, date, message))
210 self.append((paths, revnum, author, date, message))
208
211
209 # Use an ra of our own so that our parent can consume
212 # Use an ra of our own so that our parent can consume
210 # our results without confusing the server.
213 # our results without confusing the server.
211 t = transport.SvnRaTransport(url=url)
214 t = transport.SvnRaTransport(url=url)
212 svn.ra.get_log(t.ra, paths, start, end, limit,
215 svn.ra.get_log(t.ra, paths, start, end, limit,
213 discover_changed_paths,
216 discover_changed_paths,
214 strict_node_history,
217 strict_node_history,
215 receiver)
218 receiver)
216
219
217 def close(self):
220 def close(self):
218 pass
221 pass

# Check to see if the given path is a local Subversion repo. Verify this by
# looking for several svn-specific files and directories in the given
# directory.
def filecheck(ui, path, proto):
    for x in ('locks', 'hooks', 'format', 'db'):
        if not os.path.exists(os.path.join(path, x)):
            return False
    return True

# Check to see if a given path is the root of an svn repo over http. We verify
# this by requesting a version-controlled URL we know can't exist and looking
# for the svn-specific "not found" XML.
def httpcheck(ui, path, proto):
    try:
        opener = urlreq.buildopener()
        rsp = opener.open('%s://%s/!svn/ver/0/.svn' % (proto, path), 'rb')
        data = rsp.read()
    except urlerr.httperror as inst:
        if inst.code != 404:
            # Except for 404 we cannot know for sure this is not an svn repo
            ui.warn(_('svn: cannot probe remote repository, assume it could '
                      'be a subversion repository. Use --source-type if you '
                      'know better.\n'))
            return True
        data = inst.fp.read()
    except Exception:
        # Could be urlerr.urlerror if the URL is invalid or anything else.
        return False
    return '<m:human-readable errcode="160013">' in data
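# For example, probing 'host/svn/repo' over http (hypothetical server)
# requests 'http://host/svn/repo/!svn/ver/0/.svn'; mod_dav_svn answers such
# impossible version-controlled paths with an XML error body carrying APR
# error 160013 (SVN_ERR_FS_NOT_FOUND), which the string match above detects.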

protomap = {'http': httpcheck,
            'https': httpcheck,
            'file': filecheck,
            }
def issvnurl(ui, url):
    try:
        proto, path = url.split('://', 1)
        if proto == 'file':
            if (pycompat.iswindows and path[:1] == '/'
                  and path[1:2].isalpha() and path[2:6].lower() == '%3a/'):
                path = path[:2] + ':/' + path[6:]
            path = urlreq.url2pathname(path)
    except ValueError:
        proto = 'file'
        path = os.path.abspath(url)
    if proto == 'file':
        path = util.pconvert(path)
    check = protomap.get(proto, lambda *args: False)
    while '/' in path:
        if check(ui, path, proto):
            return True
        path = path.rsplit('/', 1)[0]
    return False
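# For instance, given 'http://host/svn/proj/trunk' (hypothetical), the loop
# above probes 'host/svn/proj/trunk', then 'host/svn/proj', then 'host/svn',
# stopping at the first prefix httpcheck() recognizes as a repository root;
# the bare host name, left with no '/', is never probed.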

# SVN conversion code stolen from bzr-svn and tailor
#
# Subversion looks like a versioned filesystem, branch structures
# are defined by convention and not enforced by the tool. First,
# we define the potential branches (modules) as "trunk" and "branches"
# children directories. Revisions are then identified by their
# module and revision number (and a repository identifier).
#
# The revision graph is really a tree (or a forest). By default, a
# revision parent is the previous revision in the same module. If the
# module directory is copied/moved from another module then the
# revision is the module root and its parent the source revision in
# the parent module. A revision has at most one parent.
#
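# For instance, in a (hypothetical) layout with modules /trunk and
# /branches/1.2, revision 42 of the branch is identified by an id of the
# form 'svn:<uuid>/branches/1.2@42' (see revid() below).
#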
class svn_source(converter_source):
    def __init__(self, ui, repotype, url, revs=None):
        super(svn_source, self).__init__(ui, repotype, url, revs=revs)

        if not (url.startswith('svn://') or url.startswith('svn+ssh://') or
                (os.path.exists(url) and
                 os.path.exists(os.path.join(url, '.svn'))) or
                issvnurl(ui, url)):
            raise NoRepo(_("%s does not look like a Subversion repository")
                         % url)
        if svn is None:
            raise MissingTool(_('could not load Subversion python bindings'))

        try:
            version = svn.core.SVN_VER_MAJOR, svn.core.SVN_VER_MINOR
            if version < (1, 4):
                raise MissingTool(_('Subversion python bindings %d.%d found, '
                                    '1.4 or later required') % version)
        except AttributeError:
            raise MissingTool(_('Subversion python bindings are too old, 1.4 '
                                'or later required'))

        self.lastrevs = {}

        latest = None
        try:
            # Support file://path@rev syntax. Useful e.g. to convert
            # deleted branches.
            at = url.rfind('@')
            if at >= 0:
                latest = int(url[at + 1:])
                url = url[:at]
        except ValueError:
            pass
        self.url = geturl(url)
        self.encoding = 'UTF-8' # Subversion is always nominal UTF-8
        try:
            self.transport = transport.SvnRaTransport(url=self.url)
            self.ra = self.transport.ra
            self.ctx = self.transport.client
            self.baseurl = svn.ra.get_repos_root(self.ra)
            # Module is either empty or a repository path starting with
            # a slash and not ending with a slash.
            self.module = urlreq.unquote(self.url[len(self.baseurl):])
            self.prevmodule = None
            self.rootmodule = self.module
            self.commits = {}
            self.paths = {}
            self.uuid = svn.ra.get_uuid(self.ra)
        except svn.core.SubversionException:
            ui.traceback()
            svnversion = '%d.%d.%d' % (svn.core.SVN_VER_MAJOR,
                                       svn.core.SVN_VER_MINOR,
                                       svn.core.SVN_VER_MICRO)
            raise NoRepo(_("%s does not look like a Subversion repository "
                           "to libsvn version %s")
                         % (self.url, svnversion))

        if revs:
            if len(revs) > 1:
                raise error.Abort(_('subversion source does not support '
                                    'specifying multiple revisions'))
            try:
                latest = int(revs[0])
            except ValueError:
                raise error.Abort(_('svn: revision %s is not an integer') %
                                  revs[0])

        trunkcfg = self.ui.config('convert', 'svn.trunk')
        if trunkcfg is None:
            trunkcfg = 'trunk'
        self.trunkname = trunkcfg.strip('/')
        self.startrev = self.ui.config('convert', 'svn.startrev')
        try:
            self.startrev = int(self.startrev)
            if self.startrev < 0:
                self.startrev = 0
        except ValueError:
            raise error.Abort(_('svn: start revision %s is not an integer')
                              % self.startrev)

        try:
            self.head = self.latest(self.module, latest)
        except SvnPathNotFound:
            self.head = None
        if not self.head:
            raise error.Abort(_('no revision found in module %s')
                              % self.module)
        self.last_changed = self.revnum(self.head)

        self._changescache = (None, None)

        if os.path.exists(os.path.join(url, '.svn/entries')):
            self.wc = url
        else:
            self.wc = None
        self.convertfp = None

    def setrevmap(self, revmap):
        lastrevs = {}
        for revid in revmap:
            uuid, module, revnum = revsplit(revid)
            lastrevnum = lastrevs.setdefault(module, revnum)
            if revnum > lastrevnum:
                lastrevs[module] = revnum
        self.lastrevs = lastrevs

    def exists(self, path, optrev):
        try:
            svn.client.ls(self.url.rstrip('/') + '/' + quote(path),
                          optrev, False, self.ctx)
            return True
        except svn.core.SubversionException:
            return False

    def getheads(self):

        def isdir(path, revnum):
            kind = self._checkpath(path, revnum)
            return kind == svn.core.svn_node_dir

        def getcfgpath(name, rev):
            cfgpath = self.ui.config('convert', 'svn.' + name)
            if cfgpath is not None and cfgpath.strip() == '':
                return None
            path = (cfgpath or name).strip('/')
            if not self.exists(path, rev):
                if self.module.endswith(path) and name == 'trunk':
                    # we are converting from inside this directory
                    return None
                if cfgpath:
                    raise error.Abort(_('expected %s to be at %r, but not found'
                                        ) % (name, path))
                return None
            self.ui.note(_('found %s at %r\n') % (name, path))
            return path

        rev = optrev(self.last_changed)
        oldmodule = ''
        trunk = getcfgpath('trunk', rev)
        self.tags = getcfgpath('tags', rev)
        branches = getcfgpath('branches', rev)

        # If the project has a trunk or branches, we will extract heads
        # from them. We keep the project root otherwise.
        if trunk:
            oldmodule = self.module or ''
            self.module += '/' + trunk
            self.head = self.latest(self.module, self.last_changed)
            if not self.head:
                raise error.Abort(_('no revision found in module %s')
                                  % self.module)

        # First head in the list is the module's head
        self.heads = [self.head]
        if self.tags is not None:
            self.tags = '%s/%s' % (oldmodule, (self.tags or 'tags'))

        # Check if branches bring a few more heads to the list
        if branches:
            rpath = self.url.strip('/')
            branchnames = svn.client.ls(rpath + '/' + quote(branches),
                                        rev, False, self.ctx)
            for branch in sorted(branchnames):
                module = '%s/%s/%s' % (oldmodule, branches, branch)
                if not isdir(module, self.last_changed):
                    continue
                brevid = self.latest(module, self.last_changed)
                if not brevid:
                    self.ui.note(_('ignoring empty branch %s\n') % branch)
                    continue
                self.ui.note(_('found branch %s at %d\n') %
                             (branch, self.revnum(brevid)))
                self.heads.append(brevid)

        if self.startrev and self.heads:
            if len(self.heads) > 1:
                raise error.Abort(_('svn: start revision is not supported '
                                    'with more than one branch'))
            revnum = self.revnum(self.heads[0])
            if revnum < self.startrev:
                raise error.Abort(
                    _('svn: no revision found after start revision %d')
                    % self.startrev)

        return self.heads
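    # For a (hypothetical) project with /trunk and /branches/{1.2,1.3},
    # getheads() returns the trunk head first, then one head per non-empty
    # directory found under 'branches'.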

    def _getchanges(self, rev, full):
        (paths, parents) = self.paths[rev]
        copies = {}
        if parents:
            files, self.removed, copies = self.expandpaths(rev, paths, parents)
        if full or not parents:
            # Perform a full checkout on roots
            uuid, module, revnum = revsplit(rev)
            entries = svn.client.ls(self.baseurl + quote(module),
                                    optrev(revnum), True, self.ctx)
            files = [n for n, e in entries.iteritems()
                     if e.kind == svn.core.svn_node_file]
            self.removed = set()

        files.sort()
        files = zip(files, [rev] * len(files))
        return (files, copies)

    def getchanges(self, rev, full):
        # reuse cache from getchangedfiles
        if self._changescache[0] == rev and not full:
            (files, copies) = self._changescache[1]
        else:
            (files, copies) = self._getchanges(rev, full)
            # caller caches the result, so free it here to release memory
            del self.paths[rev]
        return (files, copies, set())

    def getchangedfiles(self, rev, i):
        # called from filemap - cache computed values for reuse in getchanges
        (files, copies) = self._getchanges(rev, False)
        self._changescache = (rev, (files, copies))
        return [f[0] for f in files]

    def getcommit(self, rev):
        if rev not in self.commits:
            uuid, module, revnum = revsplit(rev)
            self.module = module
            self.reparent(module)
            # We assume that:
            # - requests for revisions after "stop" come from the
            #   revision graph backward traversal. Cache all of them
            #   down to stop, they will be used eventually.
            # - requests for revisions before "stop" come to get
            #   isolated branches parents. Just fetch what is needed.
            stop = self.lastrevs.get(module, 0)
            if revnum < stop:
                stop = revnum + 1
            self._fetch_revisions(revnum, stop)
            if rev not in self.commits:
                raise error.Abort(_('svn: revision %s not found') % revnum)
        revcommit = self.commits[rev]
        # caller caches the result, so free it here to release memory
        del self.commits[rev]
        return revcommit

    def checkrevformat(self, revstr, mapname='splicemap'):
        """fail if revision format does not match the correct format"""
        if not re.match(r'svn:[0-9a-f]{8}-[0-9a-f]{4}-'
                        r'[0-9a-f]{4}-[0-9a-f]{4}-[0-9a-f]'
                        r'{12}(.*)@[0-9]+$', revstr):
            raise error.Abort(_('%s entry %s is not a valid revision'
                                ' identifier') % (mapname, revstr))
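        # A valid (hypothetical) splicemap entry therefore looks like
        #   svn:a1b2c3d4-e5f6-47b8-89d0-e1f2a3b4c5d6/trunk@12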

    def numcommits(self):
        return int(self.head.rsplit('@', 1)[1]) - self.startrev

    def gettags(self):
        tags = {}
        if self.tags is None:
            return tags

        # svn tags are just a convention, project branches left in a
        # 'tags' directory. There is no other relationship than
        # ancestry, which is expensive to discover and makes them hard
        # to update incrementally. Worse, past revisions may be
        # referenced by tags far away in the future, requiring a deep
        # history traversal on every calculation. Current code
        # performs a single backward traversal, tracking moves within
        # the tags directory (tag renaming) and recording a new tag
        # every time a project is copied from outside the tags
        # directory. It also lists deleted tags, this behaviour may
        # change in the future.
        pendings = []
        tagspath = self.tags
        start = svn.ra.get_latest_revnum(self.ra)
        stream = self._getlog([self.tags], start, self.startrev)
        try:
            for entry in stream:
                origpaths, revnum, author, date, message = entry
                if not origpaths:
                    origpaths = {}
                copies = [(e.copyfrom_path, e.copyfrom_rev, p) for p, e
                          in origpaths.iteritems() if e.copyfrom_path]
                # Apply moves/copies from more specific to general
                copies.sort(reverse=True)

                srctagspath = tagspath
                if copies and copies[-1][2] == tagspath:
                    # Track tags directory moves
                    srctagspath = copies.pop()[0]

                for source, sourcerev, dest in copies:
                    if not dest.startswith(tagspath + '/'):
                        continue
                    for tag in pendings:
                        if tag[0].startswith(dest):
                            tagpath = source + tag[0][len(dest):]
                            tag[:2] = [tagpath, sourcerev]
                            break
                    else:
                        pendings.append([source, sourcerev, dest])

                # Filter out tags with children coming from different
                # parts of the repository like:
                # /tags/tag.1 (from /trunk:10)
                # /tags/tag.1/foo (from /branches/foo:12)
                # Here /tags/tag.1 is discarded as well as its children.
                # It happens with tools like cvs2svn. Such tags cannot
                # be represented in mercurial.
                addeds = dict((p, e.copyfrom_path) for p, e
                              in origpaths.iteritems()
                              if e.action == 'A' and e.copyfrom_path)
                badroots = set()
                for destroot in addeds:
                    for source, sourcerev, dest in pendings:
                        if (not dest.startswith(destroot + '/')
                            or source.startswith(addeds[destroot] + '/')):
                            continue
                        badroots.add(destroot)
                        break

                for badroot in badroots:
                    pendings = [p for p in pendings if p[2] != badroot
                                and not p[2].startswith(badroot + '/')]

                # Tell tag renamings from tag creations
                renamings = []
                for source, sourcerev, dest in pendings:
                    tagname = dest.split('/')[-1]
                    if source.startswith(srctagspath):
                        renamings.append([source, sourcerev, tagname])
                        continue
                    if tagname in tags:
                        # Keep the latest tag value
                        continue
                    # From revision may be fake, get one with changes
                    try:
                        tagid = self.latest(source, sourcerev)
                        if tagid and tagname not in tags:
                            tags[tagname] = tagid
                    except SvnPathNotFound:
                        # It happens when we are following directories
                        # we assumed were copied with their parents
                        # but were really created in the tag
                        # directory.
                        pass
                pendings = renamings
                tagspath = srctagspath
        finally:
            stream.close()
        return tags

    def converted(self, rev, destrev):
        if not self.wc:
            return
        if self.convertfp is None:
            self.convertfp = open(os.path.join(self.wc, '.svn', 'hg-shamap'),
                                  'ab')
        self.convertfp.write(util.tonativeeol('%s %d\n'
                                              % (destrev, self.revnum(rev))))
        self.convertfp.flush()

    def revid(self, revnum, module=None):
        return 'svn:%s%s@%s' % (self.uuid, module or self.module, revnum)

    def revnum(self, rev):
        return int(rev.split('@')[-1])
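    # Illustrative only: a (hypothetical) revid built by revid() looks like
    #   'svn:a1b2c3d4-e5f6-47b8-89d0-e1f2a3b4c5d6/branches/1.2@42'
    # revnum() extracts the trailing 42; revsplit() (used throughout this
    # class) recovers the uuid, '/branches/1.2' and 42 separately.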

    def latest(self, path, stop=None):
        """Find the latest revid affecting path, up to stop revision
        number. If stop is None, default to repository latest
        revision. It may return a revision in a different module,
        since a branch may be moved without a change being
        reported. Return None if the computed module does not belong
        to the rootmodule subtree.
        """
        def findchanges(path, start, stop=None):
            stream = self._getlog([path], start, stop or 1)
            try:
                for entry in stream:
                    paths, revnum, author, date, message = entry
                    if stop is None and paths:
                        # We do not know the latest changed revision,
                        # keep the first one with changed paths.
                        break
                    if revnum <= stop:
                        break

                    for p in paths:
                        if (not path.startswith(p) or
                            not paths[p].copyfrom_path):
                            continue
                        newpath = paths[p].copyfrom_path + path[len(p):]
                        self.ui.debug("branch renamed from %s to %s at %d\n" %
                                      (path, newpath, revnum))
                        path = newpath
                        break
                if not paths:
                    revnum = None
                return revnum, path
            finally:
                stream.close()

        if not path.startswith(self.rootmodule):
            # Requests on foreign branches may be forbidden at server level
            self.ui.debug('ignoring foreign branch %r\n' % path)
            return None

        if stop is None:
            stop = svn.ra.get_latest_revnum(self.ra)
        try:
            prevmodule = self.reparent('')
            dirent = svn.ra.stat(self.ra, path.strip('/'), stop)
            self.reparent(prevmodule)
        except svn.core.SubversionException:
            dirent = None
        if not dirent:
            raise SvnPathNotFound(_('%s not found up to revision %d')
                                  % (path, stop))

        # stat() gives us the previous revision on this line of
        # development, but it might be in *another module*. Fetch the
        # log and detect renames down to the latest revision.
        revnum, realpath = findchanges(path, stop, dirent.created_rev)
        if revnum is None:
            # Tools like svnsync can create empty revisions when
            # synchronizing only a subtree, for instance. These empty
            # revisions' created_rev still has its original value
            # despite all changes having disappeared and can be
            # returned by ra.stat(), at least when stating the root
            # module. In that case, do not trust created_rev and scan
            # the whole history.
            revnum, realpath = findchanges(path, stop)
            if revnum is None:
                self.ui.debug('ignoring empty branch %r\n' % realpath)
                return None

        if not realpath.startswith(self.rootmodule):
            self.ui.debug('ignoring foreign branch %r\n' % realpath)
            return None
        return self.revid(revnum, realpath)

    def reparent(self, module):
        """Reparent the svn transport and return the previous parent."""
        if self.prevmodule == module:
            return module
        svnurl = self.baseurl + quote(module)
        prevmodule = self.prevmodule
        if prevmodule is None:
            prevmodule = ''
        self.ui.debug("reparent to %s\n" % svnurl)
        svn.ra.reparent(self.ra, svnurl)
        self.prevmodule = module
        return prevmodule

    def expandpaths(self, rev, paths, parents):
        changed, removed = set(), set()
        copies = {}

        new_module, revnum = revsplit(rev)[1:]
        if new_module != self.module:
            self.module = new_module
            self.reparent(self.module)

        for i, (path, ent) in enumerate(paths):
            self.ui.progress(_('scanning paths'), i, item=path,
                             total=len(paths), unit=_('paths'))
            entrypath = self.getrelpath(path)

            kind = self._checkpath(entrypath, revnum)
            if kind == svn.core.svn_node_file:
                changed.add(self.recode(entrypath))
                if not ent.copyfrom_path or not parents:
                    continue
                # Copy sources not in parent revisions cannot be
                # represented, ignore their origin for now
                pmodule, prevnum = revsplit(parents[0])[1:]
                if ent.copyfrom_rev < prevnum:
                    continue
                copyfrom_path = self.getrelpath(ent.copyfrom_path, pmodule)
                if not copyfrom_path:
                    continue
                self.ui.debug("copied to %s from %s@%s\n" %
                              (entrypath, copyfrom_path, ent.copyfrom_rev))
                copies[self.recode(entrypath)] = self.recode(copyfrom_path)
            elif kind == 0: # gone, but had better be a deleted *file*
                self.ui.debug("gone from %s\n" % ent.copyfrom_rev)
                pmodule, prevnum = revsplit(parents[0])[1:]
                parentpath = pmodule + "/" + entrypath
                fromkind = self._checkpath(entrypath, prevnum, pmodule)

                if fromkind == svn.core.svn_node_file:
                    removed.add(self.recode(entrypath))
                elif fromkind == svn.core.svn_node_dir:
                    oroot = parentpath.strip('/')
                    nroot = path.strip('/')
                    children = self._iterfiles(oroot, prevnum)
                    for childpath in children:
                        childpath = childpath.replace(oroot, nroot)
                        childpath = self.getrelpath("/" + childpath, pmodule)
                        if childpath:
                            removed.add(self.recode(childpath))
                else:
                    self.ui.debug('unknown path in revision %d: %s\n' %
                                  (revnum, path))
            elif kind == svn.core.svn_node_dir:
                if ent.action == 'M':
                    # If the directory just had a prop change,
                    # then we shouldn't need to look for its children.
                    continue
                if ent.action == 'R' and parents:
                    # If a directory is replacing a file, mark the previous
                    # file as deleted
                    pmodule, prevnum = revsplit(parents[0])[1:]
                    pkind = self._checkpath(entrypath, prevnum, pmodule)
                    if pkind == svn.core.svn_node_file:
                        removed.add(self.recode(entrypath))
                    elif pkind == svn.core.svn_node_dir:
                        # We do not know what files were kept or removed,
                        # mark them all as changed.
                        for childpath in self._iterfiles(pmodule, prevnum):
                            childpath = self.getrelpath("/" + childpath)
                            if childpath:
                                changed.add(self.recode(childpath))

                for childpath in self._iterfiles(path, revnum):
                    childpath = self.getrelpath("/" + childpath)
                    if childpath:
                        changed.add(self.recode(childpath))

                # Handle directory copies
                if not ent.copyfrom_path or not parents:
                    continue
                # Copy sources not in parent revisions cannot be
                # represented, ignore their origin for now
                pmodule, prevnum = revsplit(parents[0])[1:]
                if ent.copyfrom_rev < prevnum:
                    continue
                copyfrompath = self.getrelpath(ent.copyfrom_path, pmodule)
                if not copyfrompath:
                    continue
                self.ui.debug("mark %s came from %s:%d\n"
                              % (path, copyfrompath, ent.copyfrom_rev))
                children = self._iterfiles(ent.copyfrom_path, ent.copyfrom_rev)
                for childpath in children:
                    childpath = self.getrelpath("/" + childpath, pmodule)
                    if not childpath:
                        continue
                    copytopath = path + childpath[len(copyfrompath):]
                    copytopath = self.getrelpath(copytopath)
                    copies[self.recode(copytopath)] = self.recode(childpath)

        self.ui.progress(_('scanning paths'), None)
        changed.update(removed)
        return (list(changed), removed, copies)
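    # expandpaths() thus yields every touched file (deletions included, via
    # changed.update(removed) above), the removed subset, and a destination
    # -> source copy map, with all names passed through self.recode().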

    def _fetch_revisions(self, from_revnum, to_revnum):
        if from_revnum < to_revnum:
            from_revnum, to_revnum = to_revnum, from_revnum

        self.child_cset = None

        def parselogentry(orig_paths, revnum, author, date, message):
            """Return the parsed commit object or None, and True if
            the revision is a branch root.
            """
            self.ui.debug("parsing revision %d (%d changes)\n" %
                          (revnum, len(orig_paths)))

            branched = False
            rev = self.revid(revnum)
            # branch log might return entries for a parent we already have

            if rev in self.commits or revnum < to_revnum:
                return None, branched

            parents = []
            # check whether this revision is the start of a branch or part
            # of a branch renaming
            orig_paths = sorted(orig_paths.iteritems())
            root_paths = [(p, e) for p, e in orig_paths
                          if self.module.startswith(p)]
            if root_paths:
                path, ent = root_paths[-1]
                if ent.copyfrom_path:
                    branched = True
                    newpath = ent.copyfrom_path + self.module[len(path):]
                    # ent.copyfrom_rev may not be the actual last revision
                    previd = self.latest(newpath, ent.copyfrom_rev)
                    if previd is not None:
                        prevmodule, prevnum = revsplit(previd)[1:]
                        if prevnum >= self.startrev:
                            parents = [previd]
                            self.ui.note(
                                _('found parent of branch %s at %d: %s\n') %
                                (self.module, prevnum, prevmodule))
                else:
                    self.ui.debug("no copyfrom path, don't know what to do.\n")

            paths = []
            # filter out unrelated paths
            for path, ent in orig_paths:
                if self.getrelpath(path) is None:
                    continue
                paths.append((path, ent))

            # Example SVN datetime. Includes microseconds.
            # ISO-8601 conformant
            # '2007-01-04T17:35:00.902377Z'
            date = dateutil.parsedate(date[:19] + " UTC", ["%Y-%m-%dT%H:%M:%S"])
            if self.ui.configbool('convert', 'localtimezone'):
                date = makedatetimestamp(date[0])

            if message:
                log = self.recode(message)
            else:
                log = ''

            if author:
                author = self.recode(author)
            else:
                author = ''

            try:
                branch = self.module.split("/")[-1]
                if branch == self.trunkname:
                    branch = None
            except IndexError:
                branch = None

            cset = commit(author=author,
                          date=dateutil.datestr(date, '%Y-%m-%d %H:%M:%S %1%2'),
                          desc=log,
                          parents=parents,
                          branch=branch,
                          rev=rev)

            self.commits[rev] = cset
            # The parents list is *shared* among self.paths and the
            # commit object. Both will be updated below.
            self.paths[rev] = (paths, cset.parents)
            if self.child_cset and not self.child_cset.parents:
                self.child_cset.parents[:] = [rev]
            self.child_cset = cset
            return cset, branched

        self.ui.note(_('fetching revision log for "%s" from %d to %d\n') %
                     (self.module, from_revnum, to_revnum))

        try:
            firstcset = None
            lastonbranch = False
            stream = self._getlog([self.module], from_revnum, to_revnum)
            try:
                for entry in stream:
                    paths, revnum, author, date, message = entry
                    if revnum < self.startrev:
                        lastonbranch = True
                        break
                    if not paths:
                        self.ui.debug('revision %d has no entries\n' % revnum)
                        # If we ever leave the loop on an empty
                        # revision, do not try to get a parent branch
                        lastonbranch = lastonbranch or revnum == 0
                        continue
                    cset, lastonbranch = parselogentry(paths, revnum, author,
                                                       date, message)
                    if cset:
                        firstcset = cset
                    if lastonbranch:
                        break
            finally:
                stream.close()

            if not lastonbranch and firstcset and not firstcset.parents:
                # The first revision of the sequence (the last fetched one)
                # has invalid parents if not a branch root. Find the parent
                # revision now, if any.
                try:
                    firstrevnum = self.revnum(firstcset.rev)
                    if firstrevnum > 1:
                        latest = self.latest(self.module, firstrevnum - 1)
                        if latest:
                            firstcset.parents.append(latest)
                except SvnPathNotFound:
                    pass
        except svn.core.SubversionException as e:
            (inst, num) = e.args
            if num == svn.core.SVN_ERR_FS_NO_SUCH_REVISION:
                raise error.Abort(_('svn: branch has no revision %s')
                                  % to_revnum)
            raise

    def getfile(self, file, rev):
        # TODO: ra.get_file transmits the whole file instead of diffs.
        if file in self.removed:
            return None, None
        mode = ''
        try:
            new_module, revnum = revsplit(rev)[1:]
            if self.module != new_module:
                self.module = new_module
                self.reparent(self.module)
            io = stringio()
            info = svn.ra.get_file(self.ra, file, revnum, io)
            data = io.getvalue()
            # ra.get_file() seems to keep a reference on the input buffer
            # preventing collection. Release it explicitly.
            io.close()
            if isinstance(info, list):
                info = info[-1]
            mode = ("svn:executable" in info) and 'x' or ''
            mode = ("svn:special" in info) and 'l' or mode
        except svn.core.SubversionException as e:
            notfound = (svn.core.SVN_ERR_FS_NOT_FOUND,
                        svn.core.SVN_ERR_RA_DAV_PATH_NOT_FOUND)
            if e.apr_err in notfound: # File not found
                return None, None
            raise
        if mode == 'l':
            link_prefix = "link "
            if data.startswith(link_prefix):
                data = data[len(link_prefix):]
        return data, mode
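    # Note: Subversion stores a symlink as a regular file whose content is
    # 'link <target>' with the svn:special property set; the prefix is
    # stripped above so only the link target is returned for mode 'l'.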

    def _iterfiles(self, path, revnum):
        """Enumerate all files in path at revnum, recursively."""
        path = path.strip('/')
        pool = svn.core.Pool()
        rpath = '/'.join([self.baseurl, quote(path)]).strip('/')
        entries = svn.client.ls(rpath, optrev(revnum), True, self.ctx, pool)
        if path:
            path += '/'
        return ((path + p) for p, e in entries.iteritems()
                if e.kind == svn.core.svn_node_file)

    def getrelpath(self, path, module=None):
        if module is None:
            module = self.module
        # Given the repository url of this wc, say
        # "http://server/plone/CMFPlone/branches/Plone-2_0-branch"
        # extract the "entry" portion (a relative path) from what
        # svn log --xml says, i.e.
        # "/CMFPlone/branches/Plone-2_0-branch/tests/PloneTestCase.py"
        # that is to say "tests/PloneTestCase.py"
        if path.startswith(module):
            relative = path.rstrip('/')[len(module):]
            if relative.startswith('/'):
                return relative[1:]
            elif relative == '':
                return relative

        # The path is outside our tracked tree...
        self.ui.debug('%r is not under %r, ignoring\n' % (path, module))
        return None

    def _checkpath(self, path, revnum, module=None):
        if module is not None:
            prevmodule = self.reparent('')
            path = module + '/' + path
        try:
            # ra.check_path does not like leading slashes very much, it leads
            # to PROPFIND subversion errors
            return svn.ra.check_path(self.ra, path.strip('/'), revnum)
        finally:
            if module is not None:
                self.reparent(prevmodule)

    def _getlog(self, paths, start, end, limit=0, discover_changed_paths=True,
                strict_node_history=False):
        # Normalize path names, svn >= 1.5 only wants paths relative to
        # supplied URL
        relpaths = []
        for p in paths:
            if not p.startswith('/'):
                p = self.module + '/' + p
            relpaths.append(p.strip('/'))
        args = [self.baseurl, relpaths, start, end, limit,
                discover_changed_paths, strict_node_history]
        # developer config: convert.svn.debugsvnlog
        if not self.ui.configbool('convert', 'svn.debugsvnlog'):
            return directlogstream(*args)
        arg = encodeargs(args)
        hgexe = util.hgexecutable()
        cmd = '%s debugsvnlog' % util.shellquote(hgexe)
        stdin, stdout = util.popen2(util.quotecommand(cmd))
        stdin.write(arg)
        try:
            stdin.close()
        except IOError:
            raise error.Abort(_('Mercurial failed to run itself, check'
                                ' hg executable is in PATH'))
        return logstream(stdout)
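    # Routing the log through 'hg debugsvnlog' keeps the long-running
    # svn.ra.get_log() call in a child process; presumably this keeps any
    # memory retained by the bindings out of the converting process (compare
    # the caveat in the directlogstream docstring above).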
1079
1082
1080 pre_revprop_change = '''#!/bin/sh
1083 pre_revprop_change = '''#!/bin/sh
1081
1084
1082 REPOS="$1"
1085 REPOS="$1"
1083 REV="$2"
1086 REV="$2"
1084 USER="$3"
1087 USER="$3"
1085 PROPNAME="$4"
1088 PROPNAME="$4"
1086 ACTION="$5"
1089 ACTION="$5"
1087
1090
1088 if [ "$ACTION" = "M" -a "$PROPNAME" = "svn:log" ]; then exit 0; fi
1091 if [ "$ACTION" = "M" -a "$PROPNAME" = "svn:log" ]; then exit 0; fi
1089 if [ "$ACTION" = "A" -a "$PROPNAME" = "hg:convert-branch" ]; then exit 0; fi
1092 if [ "$ACTION" = "A" -a "$PROPNAME" = "hg:convert-branch" ]; then exit 0; fi
1090 if [ "$ACTION" = "A" -a "$PROPNAME" = "hg:convert-rev" ]; then exit 0; fi
1093 if [ "$ACTION" = "A" -a "$PROPNAME" = "hg:convert-rev" ]; then exit 0; fi
1091
1094
1092 echo "Changing prohibited revision property" >&2
1095 echo "Changing prohibited revision property" >&2
1093 exit 1
1096 exit 1
1094 '''
1097 '''
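(Context for the hook script above: Subversion rejects all revision-property changes unless the repository has a pre-revprop-change hook that exits 0. The sink installs this script into repositories it creates itself, so the ``propset`` calls made with ``revprop=True`` in ``putcommit`` below can record ``hg:convert-rev`` and ``hg:convert-branch``, and ``svn:log`` can be amended, while every other revprop change stays forbidden.)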
1095
1098
1096 class svn_sink(converter_sink, commandline):
1099 class svn_sink(converter_sink, commandline):
1097 commit_re = re.compile(r'Committed revision (\d+).', re.M)
1100 commit_re = re.compile(r'Committed revision (\d+).', re.M)
1098 uuid_re = re.compile(r'Repository UUID:\s*(\S+)', re.M)
1101 uuid_re = re.compile(r'Repository UUID:\s*(\S+)', re.M)
1099
1102
1100 def prerun(self):
1103 def prerun(self):
1101 if self.wc:
1104 if self.wc:
1102 os.chdir(self.wc)
1105 os.chdir(self.wc)
1103
1106
1104 def postrun(self):
1107 def postrun(self):
1105 if self.wc:
1108 if self.wc:
1106 os.chdir(self.cwd)
1109 os.chdir(self.cwd)
1107
1110
1108 def join(self, name):
1111 def join(self, name):
1109 return os.path.join(self.wc, '.svn', name)
1112 return os.path.join(self.wc, '.svn', name)
1110
1113
1111 def revmapfile(self):
1114 def revmapfile(self):
1112 return self.join('hg-shamap')
1115 return self.join('hg-shamap')
1113
1116
1114 def authorfile(self):
1117 def authorfile(self):
1115 return self.join('hg-authormap')
1118 return self.join('hg-authormap')
1116
1119
1117 def __init__(self, ui, repotype, path):
1120 def __init__(self, ui, repotype, path):
1118
1121
1119 converter_sink.__init__(self, ui, repotype, path)
1122 converter_sink.__init__(self, ui, repotype, path)
1120 commandline.__init__(self, ui, 'svn')
1123 commandline.__init__(self, ui, 'svn')
1121 self.delete = []
1124 self.delete = []
1122 self.setexec = []
1125 self.setexec = []
1123 self.delexec = []
1126 self.delexec = []
1124 self.copies = []
1127 self.copies = []
1125 self.wc = None
1128 self.wc = None
1126 self.cwd = pycompat.getcwd()
1129 self.cwd = pycompat.getcwd()
1127
1130
1128 created = False
1131 created = False
1129 if os.path.isfile(os.path.join(path, '.svn', 'entries')):
1132 if os.path.isfile(os.path.join(path, '.svn', 'entries')):
1130 self.wc = os.path.realpath(path)
1133 self.wc = os.path.realpath(path)
1131 self.run0('update')
1134 self.run0('update')
1132 else:
1135 else:
1133 if not re.search(br'^(file|http|https|svn|svn\+ssh)\://', path):
1136 if not re.search(br'^(file|http|https|svn|svn\+ssh)\://', path):
1134 path = os.path.realpath(path)
1137 path = os.path.realpath(path)
1135 if os.path.isdir(os.path.dirname(path)):
1138 if os.path.isdir(os.path.dirname(path)):
1136 if not os.path.exists(os.path.join(path, 'db', 'fs-type')):
1139 if not os.path.exists(os.path.join(path, 'db', 'fs-type')):
1137 ui.status(_('initializing svn repository %r\n') %
1140 ui.status(_('initializing svn repository %r\n') %
1138 os.path.basename(path))
1141 os.path.basename(path))
1139 commandline(ui, 'svnadmin').run0('create', path)
1142 commandline(ui, 'svnadmin').run0('create', path)
1140 created = path
1143 created = path
1141 path = util.normpath(path)
1144 path = util.normpath(path)
1142 if not path.startswith('/'):
1145 if not path.startswith('/'):
1143 path = '/' + path
1146 path = '/' + path
1144 path = 'file://' + path
1147 path = 'file://' + path
1145
1148
1146 wcpath = os.path.join(pycompat.getcwd(), os.path.basename(path) +
1149 wcpath = os.path.join(pycompat.getcwd(), os.path.basename(path) +
1147 '-wc')
1150 '-wc')
1148 ui.status(_('initializing svn working copy %r\n')
1151 ui.status(_('initializing svn working copy %r\n')
1149 % os.path.basename(wcpath))
1152 % os.path.basename(wcpath))
1150 self.run0('checkout', path, wcpath)
1153 self.run0('checkout', path, wcpath)
1151
1154
1152 self.wc = wcpath
1155 self.wc = wcpath
1153 self.opener = vfsmod.vfs(self.wc)
1156 self.opener = vfsmod.vfs(self.wc)
1154 self.wopener = vfsmod.vfs(self.wc)
1157 self.wopener = vfsmod.vfs(self.wc)
1155 self.childmap = mapfile(ui, self.join('hg-childmap'))
1158 self.childmap = mapfile(ui, self.join('hg-childmap'))
1156 if util.checkexec(self.wc):
1159 if util.checkexec(self.wc):
1157 self.is_exec = util.isexec
1160 self.is_exec = util.isexec
1158 else:
1161 else:
1159 self.is_exec = None
1162 self.is_exec = None
1160
1163
1161 if created:
1164 if created:
1162 hook = os.path.join(created, 'hooks', 'pre-revprop-change')
1165 hook = os.path.join(created, 'hooks', 'pre-revprop-change')
1163 fp = open(hook, 'wb')
1166 fp = open(hook, 'wb')
1164 fp.write(pre_revprop_change)
1167 fp.write(pre_revprop_change)
1165 fp.close()
1168 fp.close()
1166 util.setflags(hook, False, True)
1169 util.setflags(hook, False, True)
1167
1170
1168 output = self.run0('info')
1171 output = self.run0('info')
1169 self.uuid = self.uuid_re.search(output).group(1).strip()
1172 self.uuid = self.uuid_re.search(output).group(1).strip()
1170
1173
1171 def wjoin(self, *names):
1174 def wjoin(self, *names):
1172 return os.path.join(self.wc, *names)
1175 return os.path.join(self.wc, *names)
1173
1176
1174 @propertycache
1177 @propertycache
1175 def manifest(self):
1178 def manifest(self):
1176 # As of svn 1.7, the "add" command fails when receiving
1179 # As of svn 1.7, the "add" command fails when receiving
1177 # already tracked entries, so we have to track and filter them
1180 # already tracked entries, so we have to track and filter them
1178 # ourselves.
1181 # ourselves.
1179 m = set()
1182 m = set()
1180 output = self.run0('ls', recursive=True, xml=True)
1183 output = self.run0('ls', recursive=True, xml=True)
1181 doc = xml.dom.minidom.parseString(output)
1184 doc = xml.dom.minidom.parseString(output)
1182 for e in doc.getElementsByTagName('entry'):
1185 for e in doc.getElementsByTagName('entry'):
1183 for n in e.childNodes:
1186 for n in e.childNodes:
1184 if n.nodeType != n.ELEMENT_NODE or n.tagName != 'name':
1187 if n.nodeType != n.ELEMENT_NODE or n.tagName != 'name':
1185 continue
1188 continue
1186 name = ''.join(c.data for c in n.childNodes
1189 name = ''.join(c.data for c in n.childNodes
1187 if c.nodeType == c.TEXT_NODE)
1190 if c.nodeType == c.TEXT_NODE)
1188 # Entries are compared with names coming from
1191 # Entries are compared with names coming from
1189 # mercurial, so bytes with undefined encoding. Our
1192 # mercurial, so bytes with undefined encoding. Our
1190 # best bet is to assume they are in local
1193 # best bet is to assume they are in local
1191 # encoding. They will be passed to command line calls
1194 # encoding. They will be passed to command line calls
1192 # later anyway, so they better be.
1195 # later anyway, so they better be.
1193 m.add(encoding.unitolocal(name))
1196 m.add(encoding.unitolocal(name))
1194 break
1197 break
1195 return m
1198 return m
1196
1199
1197 def putfile(self, filename, flags, data):
1200 def putfile(self, filename, flags, data):
1198 if 'l' in flags:
1201 if 'l' in flags:
1199 self.wopener.symlink(data, filename)
1202 self.wopener.symlink(data, filename)
1200 else:
1203 else:
1201 try:
1204 try:
1202 if os.path.islink(self.wjoin(filename)):
1205 if os.path.islink(self.wjoin(filename)):
1203 os.unlink(filename)
1206 os.unlink(filename)
1204 except OSError:
1207 except OSError:
1205 pass
1208 pass
1206 self.wopener.write(filename, data)
1209 self.wopener.write(filename, data)
1207
1210
1208 if self.is_exec:
1211 if self.is_exec:
1209 if self.is_exec(self.wjoin(filename)):
1212 if self.is_exec(self.wjoin(filename)):
1210 if 'x' not in flags:
1213 if 'x' not in flags:
1211 self.delexec.append(filename)
1214 self.delexec.append(filename)
1212 else:
1215 else:
1213 if 'x' in flags:
1216 if 'x' in flags:
1214 self.setexec.append(filename)
1217 self.setexec.append(filename)
1215 util.setflags(self.wjoin(filename), False, 'x' in flags)
1218 util.setflags(self.wjoin(filename), False, 'x' in flags)
1216
1219
1217 def _copyfile(self, source, dest):
1220 def _copyfile(self, source, dest):
1218 # SVN's copy command pukes if the destination file exists, but
1221 # SVN's copy command pukes if the destination file exists, but
1219 # our copyfile method expects to record a copy that has
1222 # our copyfile method expects to record a copy that has
1220 # already occurred. Cross the semantic gap.
1223 # already occurred. Cross the semantic gap.
1221 wdest = self.wjoin(dest)
1224 wdest = self.wjoin(dest)
1222 exists = os.path.lexists(wdest)
1225 exists = os.path.lexists(wdest)
1223 if exists:
1226 if exists:
1224 fd, tempname = tempfile.mkstemp(
1227 fd, tempname = tempfile.mkstemp(
1225 prefix='hg-copy-', dir=os.path.dirname(wdest))
1228 prefix='hg-copy-', dir=os.path.dirname(wdest))
1226 os.close(fd)
1229 os.close(fd)
1227 os.unlink(tempname)
1230 os.unlink(tempname)
1228 os.rename(wdest, tempname)
1231 os.rename(wdest, tempname)
1229 try:
1232 try:
1230 self.run0('copy', source, dest)
1233 self.run0('copy', source, dest)
1231 finally:
1234 finally:
1232 self.manifest.add(dest)
1235 self.manifest.add(dest)
1233 if exists:
1236 if exists:
1234 try:
1237 try:
1235 os.unlink(wdest)
1238 os.unlink(wdest)
1236 except OSError:
1239 except OSError:
1237 pass
1240 pass
1238 os.rename(tempname, wdest)
1241 os.rename(tempname, wdest)
1239
1242
1240 def dirs_of(self, files):
1243 def dirs_of(self, files):
1241 dirs = set()
1244 dirs = set()
1242 for f in files:
1245 for f in files:
1243 if os.path.isdir(self.wjoin(f)):
1246 if os.path.isdir(self.wjoin(f)):
1244 dirs.add(f)
1247 dirs.add(f)
1245 i = len(f)
1248 i = len(f)
1246 for i in iter(lambda: f.rfind('/', 0, i), -1):
1249 for i in iter(lambda: f.rfind('/', 0, i), -1):
1247 dirs.add(f[:i])
1250 dirs.add(f[:i])
1248 return dirs
1251 return dirs
1249
1252
1250 def add_dirs(self, files):
1253 def add_dirs(self, files):
1251 add_dirs = [d for d in sorted(self.dirs_of(files))
1254 add_dirs = [d for d in sorted(self.dirs_of(files))
1252 if d not in self.manifest]
1255 if d not in self.manifest]
1253 if add_dirs:
1256 if add_dirs:
1254 self.manifest.update(add_dirs)
1257 self.manifest.update(add_dirs)
1255 self.xargs(add_dirs, 'add', non_recursive=True, quiet=True)
1258 self.xargs(add_dirs, 'add', non_recursive=True, quiet=True)
1256 return add_dirs
1259 return add_dirs
1257
1260
1258 def add_files(self, files):
1261 def add_files(self, files):
1259 files = [f for f in files if f not in self.manifest]
1262 files = [f for f in files if f not in self.manifest]
1260 if files:
1263 if files:
1261 self.manifest.update(files)
1264 self.manifest.update(files)
1262 self.xargs(files, 'add', quiet=True)
1265 self.xargs(files, 'add', quiet=True)
1263 return files
1266 return files
1264
1267
1265 def addchild(self, parent, child):
1268 def addchild(self, parent, child):
1266 self.childmap[parent] = child
1269 self.childmap[parent] = child
1267
1270
1268 def revid(self, rev):
1271 def revid(self, rev):
1269 return u"svn:%s@%s" % (self.uuid, rev)
1272 return u"svn:%s@%s" % (self.uuid, rev)
1270
1273
1271 def putcommit(self, files, copies, parents, commit, source, revmap, full,
1274 def putcommit(self, files, copies, parents, commit, source, revmap, full,
1272 cleanp2):
1275 cleanp2):
1273 for parent in parents:
1276 for parent in parents:
1274 try:
1277 try:
1275 return self.revid(self.childmap[parent])
1278 return self.revid(self.childmap[parent])
1276 except KeyError:
1279 except KeyError:
1277 pass
1280 pass
1278
1281
1279 # Apply changes to working copy
1282 # Apply changes to working copy
1280 for f, v in files:
1283 for f, v in files:
1281 data, mode = source.getfile(f, v)
1284 data, mode = source.getfile(f, v)
1282 if data is None:
1285 if data is None:
1283 self.delete.append(f)
1286 self.delete.append(f)
1284 else:
1287 else:
1285 self.putfile(f, mode, data)
1288 self.putfile(f, mode, data)
1286 if f in copies:
1289 if f in copies:
1287 self.copies.append([copies[f], f])
1290 self.copies.append([copies[f], f])
1288 if full:
1291 if full:
1289 self.delete.extend(sorted(self.manifest.difference(files)))
1292 self.delete.extend(sorted(self.manifest.difference(files)))
1290 files = [f[0] for f in files]
1293 files = [f[0] for f in files]
1291
1294
1292 entries = set(self.delete)
1295 entries = set(self.delete)
1293 files = frozenset(files)
1296 files = frozenset(files)
1294 entries.update(self.add_dirs(files.difference(entries)))
1297 entries.update(self.add_dirs(files.difference(entries)))
1295 if self.copies:
1298 if self.copies:
1296 for s, d in self.copies:
1299 for s, d in self.copies:
1297 self._copyfile(s, d)
1300 self._copyfile(s, d)
1298 self.copies = []
1301 self.copies = []
1299 if self.delete:
1302 if self.delete:
1300 self.xargs(self.delete, 'delete')
1303 self.xargs(self.delete, 'delete')
1301 for f in self.delete:
1304 for f in self.delete:
1302 self.manifest.remove(f)
1305 self.manifest.remove(f)
1303 self.delete = []
1306 self.delete = []
1304 entries.update(self.add_files(files.difference(entries)))
1307 entries.update(self.add_files(files.difference(entries)))
1305 if self.delexec:
1308 if self.delexec:
1306 self.xargs(self.delexec, 'propdel', 'svn:executable')
1309 self.xargs(self.delexec, 'propdel', 'svn:executable')
1307 self.delexec = []
1310 self.delexec = []
1308 if self.setexec:
1311 if self.setexec:
1309 self.xargs(self.setexec, 'propset', 'svn:executable', '*')
1312 self.xargs(self.setexec, 'propset', 'svn:executable', '*')
1310 self.setexec = []
1313 self.setexec = []
1311
1314
1312 fd, messagefile = tempfile.mkstemp(prefix='hg-convert-')
1315 fd, messagefile = tempfile.mkstemp(prefix='hg-convert-')
1313 fp = os.fdopen(fd, r'wb')
1316 fp = os.fdopen(fd, r'wb')
1314 fp.write(util.tonativeeol(commit.desc))
1317 fp.write(util.tonativeeol(commit.desc))
1315 fp.close()
1318 fp.close()
1316 try:
1319 try:
1317 output = self.run0('commit',
1320 output = self.run0('commit',
1318 username=util.shortuser(commit.author),
1321 username=stringutil.shortuser(commit.author),
1319 file=messagefile,
1322 file=messagefile,
1320 encoding='utf-8')
1323 encoding='utf-8')
1321 try:
1324 try:
1322 rev = self.commit_re.search(output).group(1)
1325 rev = self.commit_re.search(output).group(1)
1323 except AttributeError:
1326 except AttributeError:
1324 if parents and not files:
1327 if parents and not files:
1325 return parents[0]
1328 return parents[0]
1326 self.ui.warn(_('unexpected svn output:\n'))
1329 self.ui.warn(_('unexpected svn output:\n'))
1327 self.ui.warn(output)
1330 self.ui.warn(output)
1328 raise error.Abort(_('unable to cope with svn output'))
1331 raise error.Abort(_('unable to cope with svn output'))
1329 if commit.rev:
1332 if commit.rev:
1330 self.run('propset', 'hg:convert-rev', commit.rev,
1333 self.run('propset', 'hg:convert-rev', commit.rev,
1331 revprop=True, revision=rev)
1334 revprop=True, revision=rev)
1332 if commit.branch and commit.branch != 'default':
1335 if commit.branch and commit.branch != 'default':
1333 self.run('propset', 'hg:convert-branch', commit.branch,
1336 self.run('propset', 'hg:convert-branch', commit.branch,
1334 revprop=True, revision=rev)
1337 revprop=True, revision=rev)
1335 for parent in parents:
1338 for parent in parents:
1336 self.addchild(parent, rev)
1339 self.addchild(parent, rev)
1337 return self.revid(rev)
1340 return self.revid(rev)
1338 finally:
1341 finally:
1339 os.unlink(messagefile)
1342 os.unlink(messagefile)
1340
1343
1341 def puttags(self, tags):
1344 def puttags(self, tags):
1342 self.ui.warn(_('writing Subversion tags is not yet implemented\n'))
1345 self.ui.warn(_('writing Subversion tags is not yet implemented\n'))
1343 return None, None
1346 return None, None
1344
1347
1345 def hascommitfrommap(self, rev):
1348 def hascommitfrommap(self, rev):
1346 # We trust that revisions referenced in a map are still present
1349 # We trust that revisions referenced in a map are still present
1347 # TODO: implement something better if necessary and feasible
1350 # TODO: implement something better if necessary and feasible
1348 return True
1351 return True
1349
1352
1350 def hascommitforsplicemap(self, rev):
1353 def hascommitforsplicemap(self, rev):
1351 # This is not correct as one can convert to an existing subversion
1354 # This is not correct as one can convert to an existing subversion
1352 # repository and childmap would not list all revisions. Too bad.
1355 # repository and childmap would not list all revisions. Too bad.
1353 if rev in self.childmap:
1356 if rev in self.childmap:
1354 return True
1357 return True
1355 raise error.Abort(_('splice map revision %s not found in subversion '
1358 raise error.Abort(_('splice map revision %s not found in subversion '
1356 'child map (revision lookups are not implemented)')
1359 'child map (revision lookups are not implemented)')
1357 % rev)
1360 % rev)
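The substantive change in the hunk above is that ``putcommit`` now calls ``stringutil.shortuser`` instead of ``util.shortuser``, matching this changeset's bulk move of string helpers into ``mercurial.utils.stringutil``. For reference, a minimal sketch of the two helpers these files use, assuming they keep the behavior of the historical ``util`` versions (the authoritative definitions live in mercurial/utils/stringutil.py):

    def binary(s):
        """Return True if the data looks binary (contains a NUL byte)."""
        return bool(s and b'\0' in s)

    def shortuser(user):
        """Trim a committer string to a short, login-like form.

        e.g. shortuser(b'John Doe <jdoe@example.com>') -> b'jdoe'
        """
        f = user.find(b'@')
        if f >= 0:
            user = user[:f]        # drop the domain of an email address
        f = user.find(b'<')
        if f >= 0:
            user = user[f + 1:]    # keep only the <...> address part
        f = user.find(b' ')
        if f >= 0:
            user = user[:f]        # drop anything after the first space
        f = user.find(b'.')
        if f >= 0:
            user = user[:f]        # drop anything after the first dot
        return user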
@@ -1,416 +1,419
1 """automatically manage newlines in repository files
1 """automatically manage newlines in repository files
2
2
3 This extension allows you to manage the type of line endings (CRLF or
3 This extension allows you to manage the type of line endings (CRLF or
4 LF) that are used in the repository and in the local working
4 LF) that are used in the repository and in the local working
5 directory. That way you can get CRLF line endings on Windows and LF on
5 directory. That way you can get CRLF line endings on Windows and LF on
6 Unix/Mac, thereby letting everybody use their OS native line endings.
6 Unix/Mac, thereby letting everybody use their OS native line endings.
7
7
8 The extension reads its configuration from a versioned ``.hgeol``
8 The extension reads its configuration from a versioned ``.hgeol``
9 configuration file found in the root of the working directory. The
9 configuration file found in the root of the working directory. The
10 ``.hgeol`` file uses the same syntax as all other Mercurial
10 ``.hgeol`` file uses the same syntax as all other Mercurial
11 configuration files. It uses two sections, ``[patterns]`` and
11 configuration files. It uses two sections, ``[patterns]`` and
12 ``[repository]``.
12 ``[repository]``.
13
13
14 The ``[patterns]`` section specifies how line endings should be
14 The ``[patterns]`` section specifies how line endings should be
15 converted between the working directory and the repository. The format is
15 converted between the working directory and the repository. The format is
16 specified by a file pattern. The first match is used, so put more
16 specified by a file pattern. The first match is used, so put more
17 specific patterns first. The available line endings are ``LF``,
17 specific patterns first. The available line endings are ``LF``,
18 ``CRLF``, and ``BIN``.
18 ``CRLF``, and ``BIN``.
19
19
20 Files with the declared format of ``CRLF`` or ``LF`` are always
20 Files with the declared format of ``CRLF`` or ``LF`` are always
21 checked out and stored in the repository in that format and files
21 checked out and stored in the repository in that format and files
22 declared to be binary (``BIN``) are left unchanged. Additionally,
22 declared to be binary (``BIN``) are left unchanged. Additionally,
23 ``native`` is an alias for checking out in the platform's default line
23 ``native`` is an alias for checking out in the platform's default line
24 ending: ``LF`` on Unix (including Mac OS X) and ``CRLF`` on
24 ending: ``LF`` on Unix (including Mac OS X) and ``CRLF`` on
25 Windows. Note that ``BIN`` (do nothing to line endings) is Mercurial's
25 Windows. Note that ``BIN`` (do nothing to line endings) is Mercurial's
26 default behavior; it is only needed if you need to override a later,
26 default behavior; it is only needed if you need to override a later,
27 more general pattern.
27 more general pattern.
28
28
29 The optional ``[repository]`` section specifies the line endings to
29 The optional ``[repository]`` section specifies the line endings to
30 use for files stored in the repository. It has a single setting,
30 use for files stored in the repository. It has a single setting,
31 ``native``, which determines the storage line endings for files
31 ``native``, which determines the storage line endings for files
32 declared as ``native`` in the ``[patterns]`` section. It can be set to
32 declared as ``native`` in the ``[patterns]`` section. It can be set to
33 ``LF`` or ``CRLF``. The default is ``LF``. For example, this means
33 ``LF`` or ``CRLF``. The default is ``LF``. For example, this means
34 that on Windows, files configured as ``native`` (``CRLF`` by default)
34 that on Windows, files configured as ``native`` (``CRLF`` by default)
35 will be converted to ``LF`` when stored in the repository. Files
35 will be converted to ``LF`` when stored in the repository. Files
36 declared as ``LF``, ``CRLF``, or ``BIN`` in the ``[patterns]`` section
36 declared as ``LF``, ``CRLF``, or ``BIN`` in the ``[patterns]`` section
37 are always stored as-is in the repository.
37 are always stored as-is in the repository.
38
38
39 Example versioned ``.hgeol`` file::
39 Example versioned ``.hgeol`` file::
40
40
41 [patterns]
41 [patterns]
42 **.py = native
42 **.py = native
43 **.vcproj = CRLF
43 **.vcproj = CRLF
44 **.txt = native
44 **.txt = native
45 Makefile = LF
45 Makefile = LF
46 **.jpg = BIN
46 **.jpg = BIN
47
47
48 [repository]
48 [repository]
49 native = LF
49 native = LF
50
50
51 .. note::
51 .. note::
52
52
53 The rules will first apply when files are touched in the working
53 The rules will first apply when files are touched in the working
54 directory, e.g. by updating to null and back to tip to touch all files.
54 directory, e.g. by updating to null and back to tip to touch all files.
55
55
56 The extension uses an optional ``[eol]`` section read from both the
56 The extension uses an optional ``[eol]`` section read from both the
57 normal Mercurial configuration files and the ``.hgeol`` file, with the
57 normal Mercurial configuration files and the ``.hgeol`` file, with the
58 latter overriding the former. You can use that section to control the
58 latter overriding the former. You can use that section to control the
59 overall behavior. There are three settings:
59 overall behavior. There are three settings:
60
60
61 - ``eol.native`` (default ``os.linesep``) can be set to ``LF`` or
61 - ``eol.native`` (default ``os.linesep``) can be set to ``LF`` or
62 ``CRLF`` to override the default interpretation of ``native`` for
62 ``CRLF`` to override the default interpretation of ``native`` for
63 checkout. This can be used with :hg:`archive` on Unix, say, to
63 checkout. This can be used with :hg:`archive` on Unix, say, to
64 generate an archive where files have line endings for Windows.
64 generate an archive where files have line endings for Windows.
65
65
66 - ``eol.only-consistent`` (default True) can be set to False to make
66 - ``eol.only-consistent`` (default True) can be set to False to make
67 the extension convert files with inconsistent EOLs. Inconsistent
67 the extension convert files with inconsistent EOLs. Inconsistent
68 means that both ``CRLF`` and ``LF`` are present in the file.
68 means that both ``CRLF`` and ``LF`` are present in the file.
69 Such files are normally not touched under the assumption that they
69 Such files are normally not touched under the assumption that they
70 have mixed EOLs on purpose.
70 have mixed EOLs on purpose.
71
71
72 - ``eol.fix-trailing-newline`` (default False) can be set to True to
72 - ``eol.fix-trailing-newline`` (default False) can be set to True to
73 ensure that converted files end with an EOL character (either ``\\n``
73 ensure that converted files end with an EOL character (either ``\\n``
74 or ``\\r\\n`` as per the configured patterns).
74 or ``\\r\\n`` as per the configured patterns).
75
75
76 The extension provides ``cleverencode:`` and ``cleverdecode:`` filters
76 The extension provides ``cleverencode:`` and ``cleverdecode:`` filters
77 like the deprecated win32text extension does. This means that you can
77 like the deprecated win32text extension does. This means that you can
78 disable win32text and enable eol and your filters will still work. You
78 disable win32text and enable eol and your filters will still work. You
79 only need to use these filters until you have prepared a ``.hgeol`` file.
79 only need to use these filters until you have prepared a ``.hgeol`` file.
80
80
81 The ``win32text.forbid*`` hooks provided by the win32text extension
81 The ``win32text.forbid*`` hooks provided by the win32text extension
82 have been unified into a single hook named ``eol.checkheadshook``. The
82 have been unified into a single hook named ``eol.checkheadshook``. The
83 hook will look up the expected line endings from the ``.hgeol`` file,
83 hook will look up the expected line endings from the ``.hgeol`` file,
84 which means you must migrate to a ``.hgeol`` file before using
84 which means you must migrate to a ``.hgeol`` file before using
85 the hook. ``eol.checkheadshook`` only checks heads; intermediate
85 the hook. ``eol.checkheadshook`` only checks heads; intermediate
86 invalid revisions will be pushed. To forbid them completely, use the
86 invalid revisions will be pushed. To forbid them completely, use the
87 ``eol.checkallhook`` hook. These hooks are best used as
87 ``eol.checkallhook`` hook. These hooks are best used as
88 ``pretxnchangegroup`` hooks.
88 ``pretxnchangegroup`` hooks.
89
89
90 See :hg:`help patterns` for more information about the glob patterns
90 See :hg:`help patterns` for more information about the glob patterns
91 used.
91 used.
92 """
92 """
93
93
94 from __future__ import absolute_import
94 from __future__ import absolute_import
95
95
96 import os
96 import os
97 import re
97 import re
98 from mercurial.i18n import _
98 from mercurial.i18n import _
99 from mercurial import (
99 from mercurial import (
100 config,
100 config,
101 error as errormod,
101 error as errormod,
102 extensions,
102 extensions,
103 match,
103 match,
104 pycompat,
104 pycompat,
105 registrar,
105 registrar,
106 util,
106 util,
107 )
107 )
108 from mercurial.utils import (
109 stringutil,
110 )
108
111
109 # Note for extension authors: ONLY specify testedwith = 'ships-with-hg-core' for
112 # Note for extension authors: ONLY specify testedwith = 'ships-with-hg-core' for
110 # extensions which SHIP WITH MERCURIAL. Non-mainline extensions should
113 # extensions which SHIP WITH MERCURIAL. Non-mainline extensions should
111 # be specifying the version(s) of Mercurial they are tested with, or
114 # be specifying the version(s) of Mercurial they are tested with, or
112 # leave the attribute unspecified.
115 # leave the attribute unspecified.
113 testedwith = 'ships-with-hg-core'
116 testedwith = 'ships-with-hg-core'
114
117
115 configtable = {}
118 configtable = {}
116 configitem = registrar.configitem(configtable)
119 configitem = registrar.configitem(configtable)
117
120
118 configitem('eol', 'fix-trailing-newline',
121 configitem('eol', 'fix-trailing-newline',
119 default=False,
122 default=False,
120 )
123 )
121 configitem('eol', 'native',
124 configitem('eol', 'native',
122 default=pycompat.oslinesep,
125 default=pycompat.oslinesep,
123 )
126 )
124 configitem('eol', 'only-consistent',
127 configitem('eol', 'only-consistent',
125 default=True,
128 default=True,
126 )
129 )
127
130
128 # Matches a lone LF, i.e., one that is not part of CRLF.
131 # Matches a lone LF, i.e., one that is not part of CRLF.
129 singlelf = re.compile('(^|[^\r])\n')
132 singlelf = re.compile('(^|[^\r])\n')
130
133
131 def inconsistenteol(data):
134 def inconsistenteol(data):
132 return '\r\n' in data and singlelf.search(data)
135 return '\r\n' in data and singlelf.search(data)
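Because ``singlelf`` only matches an LF that is not preceded by CR, ``inconsistenteol`` is truthy exactly when the data mixes both conventions; a few illustrative inputs:

    inconsistenteol(b'a\r\nb\n')    # truthy: CRLF present plus a lone LF
    inconsistenteol(b'a\r\nb\r\n')  # falsy: consistently CRLF
    inconsistenteol(b'a\nb\n')      # falsy: no CRLF at all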
133
136
134 def tolf(s, params, ui, **kwargs):
137 def tolf(s, params, ui, **kwargs):
135 """Filter to convert to LF EOLs."""
138 """Filter to convert to LF EOLs."""
136 if util.binary(s):
139 if stringutil.binary(s):
137 return s
140 return s
138 if ui.configbool('eol', 'only-consistent') and inconsistenteol(s):
141 if ui.configbool('eol', 'only-consistent') and inconsistenteol(s):
139 return s
142 return s
140 if (ui.configbool('eol', 'fix-trailing-newline')
143 if (ui.configbool('eol', 'fix-trailing-newline')
141 and s and s[-1] != '\n'):
144 and s and s[-1] != '\n'):
142 s = s + '\n'
145 s = s + '\n'
143 return util.tolf(s)
146 return util.tolf(s)
144
147
145 def tocrlf(s, params, ui, **kwargs):
148 def tocrlf(s, params, ui, **kwargs):
146 """Filter to convert to CRLF EOLs."""
149 """Filter to convert to CRLF EOLs."""
147 if util.binary(s):
150 if stringutil.binary(s):
148 return s
151 return s
149 if ui.configbool('eol', 'only-consistent') and inconsistenteol(s):
152 if ui.configbool('eol', 'only-consistent') and inconsistenteol(s):
150 return s
153 return s
151 if (ui.configbool('eol', 'fix-trailing-newline')
154 if (ui.configbool('eol', 'fix-trailing-newline')
152 and s and s[-1] != '\n'):
155 and s and s[-1] != '\n'):
153 s = s + '\n'
156 s = s + '\n'
154 return util.tocrlf(s)
157 return util.tocrlf(s)
155
158
156 def isbinary(s, params):
159 def isbinary(s, params):
157 """Filter to do nothing with the file."""
160 """Filter to do nothing with the file."""
158 return s
161 return s
159
162
160 filters = {
163 filters = {
161 'to-lf': tolf,
164 'to-lf': tolf,
162 'to-crlf': tocrlf,
165 'to-crlf': tocrlf,
163 'is-binary': isbinary,
166 'is-binary': isbinary,
164 # The following provide backwards compatibility with win32text
167 # The following provide backwards compatibility with win32text
165 'cleverencode:': tolf,
168 'cleverencode:': tolf,
166 'cleverdecode:': tocrlf
169 'cleverdecode:': tocrlf
167 }
170 }
168
171
169 class eolfile(object):
172 class eolfile(object):
170 def __init__(self, ui, root, data):
173 def __init__(self, ui, root, data):
171 self._decode = {'LF': 'to-lf', 'CRLF': 'to-crlf', 'BIN': 'is-binary'}
174 self._decode = {'LF': 'to-lf', 'CRLF': 'to-crlf', 'BIN': 'is-binary'}
172 self._encode = {'LF': 'to-lf', 'CRLF': 'to-crlf', 'BIN': 'is-binary'}
175 self._encode = {'LF': 'to-lf', 'CRLF': 'to-crlf', 'BIN': 'is-binary'}
173
176
174 self.cfg = config.config()
177 self.cfg = config.config()
175 # Our files should not be touched. The pattern must be
178 # Our files should not be touched. The pattern must be
176 # inserted first to override a '** = native' pattern.
179 # inserted first to override a '** = native' pattern.
177 self.cfg.set('patterns', '.hg*', 'BIN', 'eol')
180 self.cfg.set('patterns', '.hg*', 'BIN', 'eol')
178 # We can then parse the user's patterns.
181 # We can then parse the user's patterns.
179 self.cfg.parse('.hgeol', data)
182 self.cfg.parse('.hgeol', data)
180
183
181 isrepolf = self.cfg.get('repository', 'native') != 'CRLF'
184 isrepolf = self.cfg.get('repository', 'native') != 'CRLF'
182 self._encode['NATIVE'] = isrepolf and 'to-lf' or 'to-crlf'
185 self._encode['NATIVE'] = isrepolf and 'to-lf' or 'to-crlf'
183 iswdlf = ui.config('eol', 'native') in ('LF', '\n')
186 iswdlf = ui.config('eol', 'native') in ('LF', '\n')
184 self._decode['NATIVE'] = iswdlf and 'to-lf' or 'to-crlf'
187 self._decode['NATIVE'] = iswdlf and 'to-lf' or 'to-crlf'
185
188
186 include = []
189 include = []
187 exclude = []
190 exclude = []
188 self.patterns = []
191 self.patterns = []
189 for pattern, style in self.cfg.items('patterns'):
192 for pattern, style in self.cfg.items('patterns'):
190 key = style.upper()
193 key = style.upper()
191 if key == 'BIN':
194 if key == 'BIN':
192 exclude.append(pattern)
195 exclude.append(pattern)
193 else:
196 else:
194 include.append(pattern)
197 include.append(pattern)
195 m = match.match(root, '', [pattern])
198 m = match.match(root, '', [pattern])
196 self.patterns.append((pattern, key, m))
199 self.patterns.append((pattern, key, m))
197 # This will match the files for which we need to care
200 # This will match the files for which we need to care
198 # about inconsistent newlines.
201 # about inconsistent newlines.
199 self.match = match.match(root, '', [], include, exclude)
202 self.match = match.match(root, '', [], include, exclude)
200
203
201 def copytoui(self, ui):
204 def copytoui(self, ui):
202 for pattern, key, m in self.patterns:
205 for pattern, key, m in self.patterns:
203 try:
206 try:
204 ui.setconfig('decode', pattern, self._decode[key], 'eol')
207 ui.setconfig('decode', pattern, self._decode[key], 'eol')
205 ui.setconfig('encode', pattern, self._encode[key], 'eol')
208 ui.setconfig('encode', pattern, self._encode[key], 'eol')
206 except KeyError:
209 except KeyError:
207 ui.warn(_("ignoring unknown EOL style '%s' from %s\n")
210 ui.warn(_("ignoring unknown EOL style '%s' from %s\n")
208 % (key, self.cfg.source('patterns', pattern)))
211 % (key, self.cfg.source('patterns', pattern)))
209 # eol.only-consistent can be specified in ~/.hgrc or .hgeol
212 # eol.only-consistent can be specified in ~/.hgrc or .hgeol
210 for k, v in self.cfg.items('eol'):
213 for k, v in self.cfg.items('eol'):
211 ui.setconfig('eol', k, v, 'eol')
214 ui.setconfig('eol', k, v, 'eol')
212
215
213 def checkrev(self, repo, ctx, files):
216 def checkrev(self, repo, ctx, files):
214 failed = []
217 failed = []
215 for f in (files or ctx.files()):
218 for f in (files or ctx.files()):
216 if f not in ctx:
219 if f not in ctx:
217 continue
220 continue
218 for pattern, key, m in self.patterns:
221 for pattern, key, m in self.patterns:
219 if not m(f):
222 if not m(f):
220 continue
223 continue
221 target = self._encode[key]
224 target = self._encode[key]
222 data = ctx[f].data()
225 data = ctx[f].data()
223 if (target == "to-lf" and "\r\n" in data
226 if (target == "to-lf" and "\r\n" in data
224 or target == "to-crlf" and singlelf.search(data)):
227 or target == "to-crlf" and singlelf.search(data)):
225 failed.append((f, target, bytes(ctx)))
228 failed.append((f, target, bytes(ctx)))
226 break
229 break
227 return failed
230 return failed
228
231
229 def parseeol(ui, repo, nodes):
232 def parseeol(ui, repo, nodes):
230 try:
233 try:
231 for node in nodes:
234 for node in nodes:
232 try:
235 try:
233 if node is None:
236 if node is None:
234 # Cannot use workingctx.data() since it would load
237 # Cannot use workingctx.data() since it would load
235 # and cache the filters before we configure them.
238 # and cache the filters before we configure them.
236 data = repo.wvfs('.hgeol').read()
239 data = repo.wvfs('.hgeol').read()
237 else:
240 else:
238 data = repo[node]['.hgeol'].data()
241 data = repo[node]['.hgeol'].data()
239 return eolfile(ui, repo.root, data)
242 return eolfile(ui, repo.root, data)
240 except (IOError, LookupError):
243 except (IOError, LookupError):
241 pass
244 pass
242 except errormod.ParseError as inst:
245 except errormod.ParseError as inst:
243 ui.warn(_("warning: ignoring .hgeol file due to parse error "
246 ui.warn(_("warning: ignoring .hgeol file due to parse error "
244 "at %s: %s\n") % (inst.args[1], inst.args[0]))
247 "at %s: %s\n") % (inst.args[1], inst.args[0]))
245 return None
248 return None
246
249
247 def ensureenabled(ui):
250 def ensureenabled(ui):
248 """make sure the extension is enabled when used as hook
251 """make sure the extension is enabled when used as hook
249
252
250 When eol is used through hooks, the extension is never formally loaded and
253 When eol is used through hooks, the extension is never formally loaded and
251 enabled. This has some side effects; for example, the config declaration is
254 enabled. This has some side effects; for example, the config declaration is
252 never loaded. This function ensures the extension is enabled when running
255 never loaded. This function ensures the extension is enabled when running
253 hooks.
256 hooks.
254 """
257 """
255 if 'eol' in ui._knownconfig:
258 if 'eol' in ui._knownconfig:
256 return
259 return
257 ui.setconfig('extensions', 'eol', '', source='internal')
260 ui.setconfig('extensions', 'eol', '', source='internal')
258 extensions.loadall(ui, ['eol'])
261 extensions.loadall(ui, ['eol'])
259
262
260 def _checkhook(ui, repo, node, headsonly):
263 def _checkhook(ui, repo, node, headsonly):
261 # Get revisions to check and touched files at the same time
264 # Get revisions to check and touched files at the same time
262 ensureenabled(ui)
265 ensureenabled(ui)
263 files = set()
266 files = set()
264 revs = set()
267 revs = set()
265 for rev in xrange(repo[node].rev(), len(repo)):
268 for rev in xrange(repo[node].rev(), len(repo)):
266 revs.add(rev)
269 revs.add(rev)
267 if headsonly:
270 if headsonly:
268 ctx = repo[rev]
271 ctx = repo[rev]
269 files.update(ctx.files())
272 files.update(ctx.files())
270 for pctx in ctx.parents():
273 for pctx in ctx.parents():
271 revs.discard(pctx.rev())
274 revs.discard(pctx.rev())
272 failed = []
275 failed = []
273 for rev in revs:
276 for rev in revs:
274 ctx = repo[rev]
277 ctx = repo[rev]
275 eol = parseeol(ui, repo, [ctx.node()])
278 eol = parseeol(ui, repo, [ctx.node()])
276 if eol:
279 if eol:
277 failed.extend(eol.checkrev(repo, ctx, files))
280 failed.extend(eol.checkrev(repo, ctx, files))
278
281
279 if failed:
282 if failed:
280 eols = {'to-lf': 'CRLF', 'to-crlf': 'LF'}
283 eols = {'to-lf': 'CRLF', 'to-crlf': 'LF'}
281 msgs = []
284 msgs = []
282 for f, target, node in sorted(failed):
285 for f, target, node in sorted(failed):
283 msgs.append(_(" %s in %s should not have %s line endings") %
286 msgs.append(_(" %s in %s should not have %s line endings") %
284 (f, node, eols[target]))
287 (f, node, eols[target]))
285 raise errormod.Abort(_("end-of-line check failed:\n") + "\n".join(msgs))
288 raise errormod.Abort(_("end-of-line check failed:\n") + "\n".join(msgs))
286
289
287 def checkallhook(ui, repo, node, hooktype, **kwargs):
290 def checkallhook(ui, repo, node, hooktype, **kwargs):
288 """verify that files have expected EOLs"""
291 """verify that files have expected EOLs"""
289 _checkhook(ui, repo, node, False)
292 _checkhook(ui, repo, node, False)
290
293
291 def checkheadshook(ui, repo, node, hooktype, **kwargs):
294 def checkheadshook(ui, repo, node, hooktype, **kwargs):
292 """verify that files have expected EOLs"""
295 """verify that files have expected EOLs"""
293 _checkhook(ui, repo, node, True)
296 _checkhook(ui, repo, node, True)
294
297
295 # "checkheadshook" used to be called "hook"
298 # "checkheadshook" used to be called "hook"
296 hook = checkheadshook
299 hook = checkheadshook
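As a sketch of how a server could wire these up in its hgrc (the ``.eolcheck`` suffix is an arbitrary label)::

    [hooks]
    # reject pushes whose heads violate .hgeol; intermediate revisions pass
    pretxnchangegroup.eolcheck = python:hgext.eol.checkheadshook
    # stricter variant: reject any offending revision
    #pretxnchangegroup.eolcheck = python:hgext.eol.checkallhook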
297
300
298 def preupdate(ui, repo, hooktype, parent1, parent2):
301 def preupdate(ui, repo, hooktype, parent1, parent2):
299 repo.loadeol([parent1])
302 repo.loadeol([parent1])
300 return False
303 return False
301
304
302 def uisetup(ui):
305 def uisetup(ui):
303 ui.setconfig('hooks', 'preupdate.eol', preupdate, 'eol')
306 ui.setconfig('hooks', 'preupdate.eol', preupdate, 'eol')
304
307
305 def extsetup(ui):
308 def extsetup(ui):
306 try:
309 try:
307 extensions.find('win32text')
310 extensions.find('win32text')
308 ui.warn(_("the eol extension is incompatible with the "
311 ui.warn(_("the eol extension is incompatible with the "
309 "win32text extension\n"))
312 "win32text extension\n"))
310 except KeyError:
313 except KeyError:
311 pass
314 pass
312
315
313
316
314 def reposetup(ui, repo):
317 def reposetup(ui, repo):
315 uisetup(repo.ui)
318 uisetup(repo.ui)
316
319
317 if not repo.local():
320 if not repo.local():
318 return
321 return
319 for name, fn in filters.iteritems():
322 for name, fn in filters.iteritems():
320 repo.adddatafilter(name, fn)
323 repo.adddatafilter(name, fn)
321
324
322 ui.setconfig('patch', 'eol', 'auto', 'eol')
325 ui.setconfig('patch', 'eol', 'auto', 'eol')
323
326
324 class eolrepo(repo.__class__):
327 class eolrepo(repo.__class__):
325
328
326 def loadeol(self, nodes):
329 def loadeol(self, nodes):
327 eol = parseeol(self.ui, self, nodes)
330 eol = parseeol(self.ui, self, nodes)
328 if eol is None:
331 if eol is None:
329 return None
332 return None
330 eol.copytoui(self.ui)
333 eol.copytoui(self.ui)
331 return eol.match
334 return eol.match
332
335
333 def _hgcleardirstate(self):
336 def _hgcleardirstate(self):
334 self._eolmatch = self.loadeol([None, 'tip'])
337 self._eolmatch = self.loadeol([None, 'tip'])
335 if not self._eolmatch:
338 if not self._eolmatch:
336 self._eolmatch = util.never
339 self._eolmatch = util.never
337 return
340 return
338
341
339 oldeol = None
342 oldeol = None
340 try:
343 try:
341 cachemtime = os.path.getmtime(self.vfs.join("eol.cache"))
344 cachemtime = os.path.getmtime(self.vfs.join("eol.cache"))
342 except OSError:
345 except OSError:
343 cachemtime = 0
346 cachemtime = 0
344 else:
347 else:
345 olddata = self.vfs.read("eol.cache")
348 olddata = self.vfs.read("eol.cache")
346 if olddata:
349 if olddata:
347 oldeol = eolfile(self.ui, self.root, olddata)
350 oldeol = eolfile(self.ui, self.root, olddata)
348
351
349 try:
352 try:
350 eolmtime = os.path.getmtime(self.wjoin(".hgeol"))
353 eolmtime = os.path.getmtime(self.wjoin(".hgeol"))
351 except OSError:
354 except OSError:
352 eolmtime = 0
355 eolmtime = 0
353
356
354 if eolmtime > cachemtime:
357 if eolmtime > cachemtime:
355 self.ui.debug("eol: detected change in .hgeol\n")
358 self.ui.debug("eol: detected change in .hgeol\n")
356
359
357 hgeoldata = self.wvfs.read('.hgeol')
360 hgeoldata = self.wvfs.read('.hgeol')
358 neweol = eolfile(self.ui, self.root, hgeoldata)
361 neweol = eolfile(self.ui, self.root, hgeoldata)
359
362
360 wlock = None
363 wlock = None
361 try:
364 try:
362 wlock = self.wlock()
365 wlock = self.wlock()
363 for f in self.dirstate:
366 for f in self.dirstate:
364 if self.dirstate[f] != 'n':
367 if self.dirstate[f] != 'n':
365 continue
368 continue
366 if oldeol is not None:
369 if oldeol is not None:
367 if not oldeol.match(f) and not neweol.match(f):
370 if not oldeol.match(f) and not neweol.match(f):
368 continue
371 continue
369 oldkey = None
372 oldkey = None
370 for pattern, key, m in oldeol.patterns:
373 for pattern, key, m in oldeol.patterns:
371 if m(f):
374 if m(f):
372 oldkey = key
375 oldkey = key
373 break
376 break
374 newkey = None
377 newkey = None
375 for pattern, key, m in neweol.patterns:
378 for pattern, key, m in neweol.patterns:
376 if m(f):
379 if m(f):
377 newkey = key
380 newkey = key
378 break
381 break
379 if oldkey == newkey:
382 if oldkey == newkey:
380 continue
383 continue
381 # all normal files need to be looked at again since
384 # all normal files need to be looked at again since
382 # the new .hgeol file specifies a different filter
385 # the new .hgeol file specifies a different filter
383 self.dirstate.normallookup(f)
386 self.dirstate.normallookup(f)
384 # Write the cache to update mtime and cache .hgeol
387 # Write the cache to update mtime and cache .hgeol
385 with self.vfs("eol.cache", "w") as f:
388 with self.vfs("eol.cache", "w") as f:
386 f.write(hgeoldata)
389 f.write(hgeoldata)
387 except errormod.LockUnavailable:
390 except errormod.LockUnavailable:
388 # If we cannot lock the repository and clear the
391 # If we cannot lock the repository and clear the
389 # dirstate, then a commit might not see all files
392 # dirstate, then a commit might not see all files
390 # as modified. But if we cannot lock the
393 # as modified. But if we cannot lock the
391 # repository, then we can also not make a commit,
394 # repository, then we can also not make a commit,
392 # so ignore the error.
395 # so ignore the error.
393 pass
396 pass
394 finally:
397 finally:
395 if wlock is not None:
398 if wlock is not None:
396 wlock.release()
399 wlock.release()
397
400
398 def commitctx(self, ctx, error=False):
401 def commitctx(self, ctx, error=False):
399 for f in sorted(ctx.added() + ctx.modified()):
402 for f in sorted(ctx.added() + ctx.modified()):
400 if not self._eolmatch(f):
403 if not self._eolmatch(f):
401 continue
404 continue
402 fctx = ctx[f]
405 fctx = ctx[f]
403 if fctx is None:
406 if fctx is None:
404 continue
407 continue
405 data = fctx.data()
408 data = fctx.data()
406 if util.binary(data):
409 if stringutil.binary(data):
407 # We should not abort here, since the user should
410 # We should not abort here, since the user should
408 # be able to say "** = native" to automatically
411 # be able to say "** = native" to automatically
409 # have all non-binary files taken care of.
412 # have all non-binary files taken care of.
410 continue
413 continue
411 if inconsistenteol(data):
414 if inconsistenteol(data):
412 raise errormod.Abort(_("inconsistent newline style "
415 raise errormod.Abort(_("inconsistent newline style "
413 "in %s\n") % f)
416 "in %s\n") % f)
414 return super(eolrepo, self).commitctx(ctx, error)
417 return super(eolrepo, self).commitctx(ctx, error)
415 repo.__class__ = eolrepo
418 repo.__class__ = eolrepo
416 repo._hgcleardirstate()
419 repo._hgcleardirstate()
@@ -1,418 +1,421
1 # extdiff.py - external diff program support for mercurial
1 # extdiff.py - external diff program support for mercurial
2 #
2 #
3 # Copyright 2006 Vadim Gelfer <vadim.gelfer@gmail.com>
3 # Copyright 2006 Vadim Gelfer <vadim.gelfer@gmail.com>
4 #
4 #
5 # This software may be used and distributed according to the terms of the
5 # This software may be used and distributed according to the terms of the
6 # GNU General Public License version 2 or any later version.
6 # GNU General Public License version 2 or any later version.
7
7
8 '''command to allow external programs to compare revisions
8 '''command to allow external programs to compare revisions
9
9
10 The extdiff Mercurial extension allows you to use external programs
10 The extdiff Mercurial extension allows you to use external programs
11 to compare revisions, or revision with working directory. The external
11 to compare revisions, or revision with working directory. The external
12 diff programs are called with a configurable set of options and two
12 diff programs are called with a configurable set of options and two
13 non-option arguments: paths to directories containing snapshots of
13 non-option arguments: paths to directories containing snapshots of
14 files to compare.
14 files to compare.
15
15
16 The extdiff extension also allows you to configure new diff commands, so
16 The extdiff extension also allows you to configure new diff commands, so
17 you do not need to type :hg:`extdiff -p kdiff3` every time. ::
17 you do not need to type :hg:`extdiff -p kdiff3` every time. ::
18
18
19 [extdiff]
19 [extdiff]
20 # add new command that runs GNU diff(1) in 'context diff' mode
20 # add new command that runs GNU diff(1) in 'context diff' mode
21 cdiff = gdiff -Nprc5
21 cdiff = gdiff -Nprc5
22 ## or the old way:
22 ## or the old way:
23 #cmd.cdiff = gdiff
23 #cmd.cdiff = gdiff
24 #opts.cdiff = -Nprc5
24 #opts.cdiff = -Nprc5
25
25
26 # add new command called meld, runs meld (no need to name twice). If
26 # add new command called meld, runs meld (no need to name twice). If
27 # the meld executable is not available, the meld tool in [merge-tools]
27 # the meld executable is not available, the meld tool in [merge-tools]
28 # will be used, if available
28 # will be used, if available
29 meld =
29 meld =
30
30
31 # add new command called vimdiff, runs gvimdiff with DirDiff plugin
31 # add new command called vimdiff, runs gvimdiff with DirDiff plugin
32 # (see http://www.vim.org/scripts/script.php?script_id=102). If you are a
32 # (see http://www.vim.org/scripts/script.php?script_id=102). If you are a
33 # non-English user, be sure to put "let g:DirDiffDynamicDiffText = 1" in
33 # non-English user, be sure to put "let g:DirDiffDynamicDiffText = 1" in
34 # your .vimrc
34 # your .vimrc
35 vimdiff = gvim -f "+next" \\
35 vimdiff = gvim -f "+next" \\
36 "+execute 'DirDiff' fnameescape(argv(0)) fnameescape(argv(1))"
36 "+execute 'DirDiff' fnameescape(argv(0)) fnameescape(argv(1))"
37
37
38 Tool arguments can include variables that are expanded at runtime::
38 Tool arguments can include variables that are expanded at runtime::
39
39
40 $parent1, $plabel1 - filename, descriptive label of first parent
40 $parent1, $plabel1 - filename, descriptive label of first parent
41 $child, $clabel - filename, descriptive label of child revision
41 $child, $clabel - filename, descriptive label of child revision
42 $parent2, $plabel2 - filename, descriptive label of second parent
42 $parent2, $plabel2 - filename, descriptive label of second parent
43 $root - repository root
43 $root - repository root
44 $parent is an alias for $parent1.
44 $parent is an alias for $parent1.
45
45
46 The extdiff extension will look in your [diff-tools] and [merge-tools]
46 The extdiff extension will look in your [diff-tools] and [merge-tools]
47 sections for diff tool arguments when none are specified in [extdiff].
47 sections for diff tool arguments when none are specified in [extdiff].
48
48
49 ::
49 ::
50
50
51 [extdiff]
51 [extdiff]
52 kdiff3 =
52 kdiff3 =
53
53
54 [diff-tools]
54 [diff-tools]
55 kdiff3.diffargs=--L1 '$plabel1' --L2 '$clabel' $parent $child
55 kdiff3.diffargs=--L1 '$plabel1' --L2 '$clabel' $parent $child
56
56
57 You can use -I/-X and a list of file or directory names as with the normal
57 You can use -I/-X and a list of file or directory names as with the normal
58 :hg:`diff` command. The extdiff extension makes snapshots of only
58 :hg:`diff` command. The extdiff extension makes snapshots of only
59 needed files, so running the external diff program will actually be
59 needed files, so running the external diff program will actually be
60 pretty fast (at least faster than having to compare the entire tree).
60 pretty fast (at least faster than having to compare the entire tree).
61 '''
61 '''
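With the ``cdiff`` alias from the docstring in place, extdiff exposes it as a command of its own; an illustrative session (revisions and file name are hypothetical)::

    hg cdiff -r 1.2 -r 1.3 somefile   # snapshot both revisions, run gdiff -Nprc5
    hg cdiff                          # working directory against its parent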
62
62
63 from __future__ import absolute_import
63 from __future__ import absolute_import
64
64
65 import os
65 import os
66 import re
66 import re
67 import shutil
67 import shutil
68 import stat
68 import stat
69 import tempfile
69 import tempfile
70 from mercurial.i18n import _
70 from mercurial.i18n import _
71 from mercurial.node import (
71 from mercurial.node import (
72 nullid,
72 nullid,
73 short,
73 short,
74 )
74 )
75 from mercurial import (
75 from mercurial import (
76 archival,
76 archival,
77 cmdutil,
77 cmdutil,
78 error,
78 error,
79 filemerge,
79 filemerge,
80 pycompat,
80 pycompat,
81 registrar,
81 registrar,
82 scmutil,
82 scmutil,
83 util,
83 util,
84 )
84 )
85 from mercurial.utils import (
86 stringutil,
87 )
85
88
86 cmdtable = {}
89 cmdtable = {}
87 command = registrar.command(cmdtable)
90 command = registrar.command(cmdtable)
88
91
89 configtable = {}
92 configtable = {}
90 configitem = registrar.configitem(configtable)
93 configitem = registrar.configitem(configtable)
91
94
92 configitem('extdiff', br'opts\..*',
95 configitem('extdiff', br'opts\..*',
93 default='',
96 default='',
94 generic=True,
97 generic=True,
95 )
98 )
96
99
97 configitem('diff-tools', br'.*\.diffargs$',
100 configitem('diff-tools', br'.*\.diffargs$',
98 default=None,
101 default=None,
99 generic=True,
102 generic=True,
100 )
103 )
101
104
102 # Note for extension authors: ONLY specify testedwith = 'ships-with-hg-core' for
105 # Note for extension authors: ONLY specify testedwith = 'ships-with-hg-core' for
103 # extensions which SHIP WITH MERCURIAL. Non-mainline extensions should
106 # extensions which SHIP WITH MERCURIAL. Non-mainline extensions should
104 # be specifying the version(s) of Mercurial they are tested with, or
107 # be specifying the version(s) of Mercurial they are tested with, or
105 # leave the attribute unspecified.
108 # leave the attribute unspecified.
106 testedwith = 'ships-with-hg-core'
109 testedwith = 'ships-with-hg-core'
107
110
108 def snapshot(ui, repo, files, node, tmproot, listsubrepos):
111 def snapshot(ui, repo, files, node, tmproot, listsubrepos):
109 '''snapshot files as of some revision
112 '''snapshot files as of some revision
110 if we did not use snapshots, -I/-X would not work and recursive diffs
113 if we did not use snapshots, -I/-X would not work and recursive diffs
111 in tools like kdiff3 and meld would display too many files.'''
114 in tools like kdiff3 and meld would display too many files.'''
112 dirname = os.path.basename(repo.root)
115 dirname = os.path.basename(repo.root)
113 if dirname == "":
116 if dirname == "":
114 dirname = "root"
117 dirname = "root"
115 if node is not None:
118 if node is not None:
116 dirname = '%s.%s' % (dirname, short(node))
119 dirname = '%s.%s' % (dirname, short(node))
117 base = os.path.join(tmproot, dirname)
120 base = os.path.join(tmproot, dirname)
118 os.mkdir(base)
121 os.mkdir(base)
119 fnsandstat = []
122 fnsandstat = []
120
123
121 if node is not None:
124 if node is not None:
122 ui.note(_('making snapshot of %d files from rev %s\n') %
125 ui.note(_('making snapshot of %d files from rev %s\n') %
123 (len(files), short(node)))
126 (len(files), short(node)))
124 else:
127 else:
125 ui.note(_('making snapshot of %d files from working directory\n') %
128 ui.note(_('making snapshot of %d files from working directory\n') %
126 (len(files)))
129 (len(files)))
127
130
128 if files:
131 if files:
129 repo.ui.setconfig("ui", "archivemeta", False)
132 repo.ui.setconfig("ui", "archivemeta", False)
130
133
131 archival.archive(repo, base, node, 'files',
134 archival.archive(repo, base, node, 'files',
132 matchfn=scmutil.matchfiles(repo, files),
135 matchfn=scmutil.matchfiles(repo, files),
133 subrepos=listsubrepos)
136 subrepos=listsubrepos)
134
137
135 for fn in sorted(files):
138 for fn in sorted(files):
136 wfn = util.pconvert(fn)
139 wfn = util.pconvert(fn)
137 ui.note(' %s\n' % wfn)
140 ui.note(' %s\n' % wfn)
138
141
139 if node is None:
142 if node is None:
140 dest = os.path.join(base, wfn)
143 dest = os.path.join(base, wfn)
141
144
142 fnsandstat.append((dest, repo.wjoin(fn), os.lstat(dest)))
145 fnsandstat.append((dest, repo.wjoin(fn), os.lstat(dest)))
143 return dirname, fnsandstat
146 return dirname, fnsandstat
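A note on the second return value: for working-directory snapshots, ``fnsandstat`` records ``(snapshot path, working-copy path, lstat result)`` triples. After the external tool exits, ``dodiff`` compares the recorded stats against the snapshot copies to detect files the tool edited and copies those edits back into the working directory; that comparison happens later in this file, past the end of this excerpt.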
144
147
145 def dodiff(ui, repo, cmdline, pats, opts):
148 def dodiff(ui, repo, cmdline, pats, opts):
146 '''Do the actual diff:
149 '''Do the actual diff:
147
150
148 - copy to a temp structure if diffing 2 internal revisions
151 - copy to a temp structure if diffing 2 internal revisions
149 - copy to a temp structure if diffing working revision with
152 - copy to a temp structure if diffing working revision with
150 another one and more than 1 file is changed
153 another one and more than 1 file is changed
151 - just invoke the diff for a single file in the working dir
154 - just invoke the diff for a single file in the working dir
152 '''
155 '''
153
156
154 revs = opts.get('rev')
157 revs = opts.get('rev')
155 change = opts.get('change')
158 change = opts.get('change')
156 do3way = '$parent2' in cmdline
159 do3way = '$parent2' in cmdline
157
160
158 if revs and change:
161 if revs and change:
159 msg = _('cannot specify --rev and --change at the same time')
162 msg = _('cannot specify --rev and --change at the same time')
160 raise error.Abort(msg)
163 raise error.Abort(msg)
161 elif change:
164 elif change:
162 node2 = scmutil.revsingle(repo, change, None).node()
165 node2 = scmutil.revsingle(repo, change, None).node()
163 node1a, node1b = repo.changelog.parents(node2)
166 node1a, node1b = repo.changelog.parents(node2)
164 else:
167 else:
165 node1a, node2 = scmutil.revpair(repo, revs)
168 node1a, node2 = scmutil.revpair(repo, revs)
166 if not revs:
169 if not revs:
167 node1b = repo.dirstate.p2()
170 node1b = repo.dirstate.p2()
168 else:
171 else:
169 node1b = nullid
172 node1b = nullid
170
173
171 # Disable 3-way merge if there is only one parent
174 # Disable 3-way merge if there is only one parent
172 if do3way:
175 if do3way:
173 if node1b == nullid:
176 if node1b == nullid:
174 do3way = False
177 do3way = False
175
178
176 subrepos = opts.get('subrepos')
179 subrepos = opts.get('subrepos')
177
180
178 matcher = scmutil.match(repo[node2], pats, opts)
181 matcher = scmutil.match(repo[node2], pats, opts)
179
182
180 if opts.get('patch'):
183 if opts.get('patch'):
181 if subrepos:
184 if subrepos:
182 raise error.Abort(_('--patch cannot be used with --subrepos'))
185 raise error.Abort(_('--patch cannot be used with --subrepos'))
183 if node2 is None:
186 if node2 is None:
184 raise error.Abort(_('--patch requires two revisions'))
187 raise error.Abort(_('--patch requires two revisions'))
185 else:
188 else:
186 mod_a, add_a, rem_a = map(set, repo.status(node1a, node2, matcher,
189 mod_a, add_a, rem_a = map(set, repo.status(node1a, node2, matcher,
187 listsubrepos=subrepos)[:3])
190 listsubrepos=subrepos)[:3])
188 if do3way:
191 if do3way:
189 mod_b, add_b, rem_b = map(set,
192 mod_b, add_b, rem_b = map(set,
190 repo.status(node1b, node2, matcher,
193 repo.status(node1b, node2, matcher,
191 listsubrepos=subrepos)[:3])
194 listsubrepos=subrepos)[:3])
192 else:
195 else:
193 mod_b, add_b, rem_b = set(), set(), set()
196 mod_b, add_b, rem_b = set(), set(), set()
194 modadd = mod_a | add_a | mod_b | add_b
197 modadd = mod_a | add_a | mod_b | add_b
195 common = modadd | rem_a | rem_b
198 common = modadd | rem_a | rem_b
196 if not common:
199 if not common:
197 return 0
200 return 0
198
201
199 tmproot = tempfile.mkdtemp(prefix='extdiff.')
202 tmproot = tempfile.mkdtemp(prefix='extdiff.')
200 try:
203 try:
201 if not opts.get('patch'):
204 if not opts.get('patch'):
202 # Always make a copy of node1a (and node1b, if applicable)
205 # Always make a copy of node1a (and node1b, if applicable)
203 dir1a_files = mod_a | rem_a | ((mod_b | add_b) - add_a)
206 dir1a_files = mod_a | rem_a | ((mod_b | add_b) - add_a)
204 dir1a = snapshot(ui, repo, dir1a_files, node1a, tmproot,
207 dir1a = snapshot(ui, repo, dir1a_files, node1a, tmproot,
205 subrepos)[0]
208 subrepos)[0]
206 rev1a = '@%d' % repo[node1a].rev()
209 rev1a = '@%d' % repo[node1a].rev()
207 if do3way:
210 if do3way:
208 dir1b_files = mod_b | rem_b | ((mod_a | add_a) - add_b)
211 dir1b_files = mod_b | rem_b | ((mod_a | add_a) - add_b)
209 dir1b = snapshot(ui, repo, dir1b_files, node1b, tmproot,
212 dir1b = snapshot(ui, repo, dir1b_files, node1b, tmproot,
210 subrepos)[0]
213 subrepos)[0]
211 rev1b = '@%d' % repo[node1b].rev()
214 rev1b = '@%d' % repo[node1b].rev()
212 else:
215 else:
213 dir1b = None
216 dir1b = None
214 rev1b = ''
217 rev1b = ''
215
218
216 fnsandstat = []
219 fnsandstat = []
217
220
218 # If node2 is not the wc or there is >1 change, copy it
221 # If node2 is not the wc or there is >1 change, copy it
219 dir2root = ''
222 dir2root = ''
220 rev2 = ''
223 rev2 = ''
221 if node2:
224 if node2:
222 dir2 = snapshot(ui, repo, modadd, node2, tmproot, subrepos)[0]
225 dir2 = snapshot(ui, repo, modadd, node2, tmproot, subrepos)[0]
223 rev2 = '@%d' % repo[node2].rev()
226 rev2 = '@%d' % repo[node2].rev()
224 elif len(common) > 1:
227 elif len(common) > 1:
225 # we only actually need to get the files to copy back to
228 # we only actually need to get the files to copy back to
226 # the working dir in this case (because the other cases
229 # the working dir in this case (because the other cases
227 # are: diffing 2 revisions or single file -- in which case
230 # are: diffing 2 revisions or single file -- in which case
228 # the file is already directly passed to the diff tool).
231 # the file is already directly passed to the diff tool).
229 dir2, fnsandstat = snapshot(ui, repo, modadd, None, tmproot,
232 dir2, fnsandstat = snapshot(ui, repo, modadd, None, tmproot,
230 subrepos)
233 subrepos)
231 else:
234 else:
232 # This lets the diff tool open the changed file directly
235 # This lets the diff tool open the changed file directly
233 dir2 = ''
236 dir2 = ''
234 dir2root = repo.root
237 dir2root = repo.root
235
238
236 label1a = rev1a
239 label1a = rev1a
237 label1b = rev1b
240 label1b = rev1b
238 label2 = rev2
241 label2 = rev2
239
242
240 # If only one change, diff the files instead of the directories
243 # If only one change, diff the files instead of the directories
241 # Handle bogus modifies correctly by checking if the files exist
244 # Handle bogus modifies correctly by checking if the files exist
242 if len(common) == 1:
245 if len(common) == 1:
243 common_file = util.localpath(common.pop())
246 common_file = util.localpath(common.pop())
244 dir1a = os.path.join(tmproot, dir1a, common_file)
247 dir1a = os.path.join(tmproot, dir1a, common_file)
245 label1a = common_file + rev1a
248 label1a = common_file + rev1a
246 if not os.path.isfile(dir1a):
249 if not os.path.isfile(dir1a):
247 dir1a = os.devnull
250 dir1a = os.devnull
248 if do3way:
251 if do3way:
249 dir1b = os.path.join(tmproot, dir1b, common_file)
252 dir1b = os.path.join(tmproot, dir1b, common_file)
250 label1b = common_file + rev1b
253 label1b = common_file + rev1b
251 if not os.path.isfile(dir1b):
254 if not os.path.isfile(dir1b):
252 dir1b = os.devnull
255 dir1b = os.devnull
253 dir2 = os.path.join(dir2root, dir2, common_file)
256 dir2 = os.path.join(dir2root, dir2, common_file)
254 label2 = common_file + rev2
257 label2 = common_file + rev2
255 else:
258 else:
256 template = 'hg-%h.patch'
259 template = 'hg-%h.patch'
257 cmdutil.export(repo, [repo[node1a].rev(), repo[node2].rev()],
260 cmdutil.export(repo, [repo[node1a].rev(), repo[node2].rev()],
258 fntemplate=repo.vfs.reljoin(tmproot, template),
261 fntemplate=repo.vfs.reljoin(tmproot, template),
259 match=matcher)
262 match=matcher)
260 label1a = cmdutil.makefilename(repo[node1a], template)
263 label1a = cmdutil.makefilename(repo[node1a], template)
261 label2 = cmdutil.makefilename(repo[node2], template)
264 label2 = cmdutil.makefilename(repo[node2], template)
262 dir1a = repo.vfs.reljoin(tmproot, label1a)
265 dir1a = repo.vfs.reljoin(tmproot, label1a)
263 dir2 = repo.vfs.reljoin(tmproot, label2)
266 dir2 = repo.vfs.reljoin(tmproot, label2)
264 dir1b = None
267 dir1b = None
265 label1b = None
268 label1b = None
266 fnsandstat = []
269 fnsandstat = []
267
270
268 # Function to quote file/dir names in the argument string.
271 # Function to quote file/dir names in the argument string.
269 # When not operating in 3-way mode, an empty string is
272 # When not operating in 3-way mode, an empty string is
270 # returned for parent2
273 # returned for parent2
271 replace = {'parent': dir1a, 'parent1': dir1a, 'parent2': dir1b,
274 replace = {'parent': dir1a, 'parent1': dir1a, 'parent2': dir1b,
272 'plabel1': label1a, 'plabel2': label1b,
275 'plabel1': label1a, 'plabel2': label1b,
273 'clabel': label2, 'child': dir2,
276 'clabel': label2, 'child': dir2,
274 'root': repo.root}
277 'root': repo.root}
275 def quote(match):
278 def quote(match):
276 pre = match.group(2)
279 pre = match.group(2)
277 key = match.group(3)
280 key = match.group(3)
278 if not do3way and key == 'parent2':
281 if not do3way and key == 'parent2':
279 return pre
282 return pre
280 return pre + util.shellquote(replace[key])
283 return pre + util.shellquote(replace[key])
281
284
282 # Match parent2 first, so 'parent1?' will match both parent1 and parent
285 # Match parent2 first, so 'parent1?' will match both parent1 and parent
283 regex = (br'''(['"]?)([^\s'"$]*)'''
286 regex = (br'''(['"]?)([^\s'"$]*)'''
284 br'\$(parent2|parent1?|child|plabel1|plabel2|clabel|root)\1')
287 br'\$(parent2|parent1?|child|plabel1|plabel2|clabel|root)\1')
285 if not do3way and not re.search(regex, cmdline):
288 if not do3way and not re.search(regex, cmdline):
286 cmdline += ' $parent1 $child'
289 cmdline += ' $parent1 $child'
287 cmdline = re.sub(regex, quote, cmdline)
290 cmdline = re.sub(regex, quote, cmdline)
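# Editor's note: a minimal standalone sketch of the placeholder
# substitution performed just above (illustrative only -- the tool name
# and directories are made up, and Mercurial additionally shell-quotes
# each substituted path via util.shellquote):
#
#   import re
#
#   replace = {b'parent1': b'/tmp/extdiff.abc/left',
#              b'child': b'/tmp/extdiff.abc/right'}
#   regex = (br'''(['"]?)([^\s'"$]*)'''
#            br'\$(parent2|parent1?|child|plabel1|plabel2|clabel|root)\1')
#
#   def quote(m):
#       pre, key = m.group(2), m.group(3)
#       if key == b'parent2':   # 2-way mode: drop $parent2 entirely
#           return pre
#       return pre + replace[key]
#
#   re.sub(regex, quote, b'meld $parent1 $child')
#   # => b'meld /tmp/extdiff.abc/left /tmp/extdiff.abc/right'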
288
291
289 ui.debug('running %r in %s\n' % (pycompat.bytestr(cmdline), tmproot))
292 ui.debug('running %r in %s\n' % (pycompat.bytestr(cmdline), tmproot))
290 ui.system(cmdline, cwd=tmproot, blockedtag='extdiff')
293 ui.system(cmdline, cwd=tmproot, blockedtag='extdiff')
291
294
292 for copy_fn, working_fn, st in fnsandstat:
295 for copy_fn, working_fn, st in fnsandstat:
293 cpstat = os.lstat(copy_fn)
296 cpstat = os.lstat(copy_fn)
294 # Some tools copy the file and attributes, so mtime may not detect
297 # Some tools copy the file and attributes, so mtime may not detect
295 # all changes. A size check will detect more cases, but not all.
298 # all changes. A size check will detect more cases, but not all.
296 # The only certain way to detect every case is to diff all files,
299 # The only certain way to detect every case is to diff all files,
297 # which could be expensive.
300 # which could be expensive.
298 # copyfile() carries over the permission, so the mode check could
301 # copyfile() carries over the permission, so the mode check could
299 # be in an 'elif' branch, but it is kept separate to catch the case
302 # be in an 'elif' branch, but it is kept separate to catch the case
300 # where the file has changed without affecting mtime or size.
303 # where the file has changed without affecting mtime or size.
301 if (cpstat[stat.ST_MTIME] != st[stat.ST_MTIME]
304 if (cpstat[stat.ST_MTIME] != st[stat.ST_MTIME]
302 or cpstat.st_size != st.st_size
305 or cpstat.st_size != st.st_size
303 or (cpstat.st_mode & 0o100) != (st.st_mode & 0o100)):
306 or (cpstat.st_mode & 0o100) != (st.st_mode & 0o100)):
304 ui.debug('file changed while diffing. '
307 ui.debug('file changed while diffing. '
305 'Overwriting: %s (src: %s)\n' % (working_fn, copy_fn))
308 'Overwriting: %s (src: %s)\n' % (working_fn, copy_fn))
306 util.copyfile(copy_fn, working_fn)
309 util.copyfile(copy_fn, working_fn)
307
310
308 return 1
311 return 1
309 finally:
312 finally:
310 ui.note(_('cleaning up temp directory\n'))
313 ui.note(_('cleaning up temp directory\n'))
311 shutil.rmtree(tmproot)
314 shutil.rmtree(tmproot)
312
315
313 extdiffopts = [
316 extdiffopts = [
314 ('o', 'option', [],
317 ('o', 'option', [],
315 _('pass option to comparison program'), _('OPT')),
318 _('pass option to comparison program'), _('OPT')),
316 ('r', 'rev', [], _('revision'), _('REV')),
319 ('r', 'rev', [], _('revision'), _('REV')),
317 ('c', 'change', '', _('change made by revision'), _('REV')),
320 ('c', 'change', '', _('change made by revision'), _('REV')),
318 ('', 'patch', None, _('compare patches for two revisions'))
321 ('', 'patch', None, _('compare patches for two revisions'))
319 ] + cmdutil.walkopts + cmdutil.subrepoopts
322 ] + cmdutil.walkopts + cmdutil.subrepoopts
320
323
321 @command('extdiff',
324 @command('extdiff',
322 [('p', 'program', '', _('comparison program to run'), _('CMD')),
325 [('p', 'program', '', _('comparison program to run'), _('CMD')),
323 ] + extdiffopts,
326 ] + extdiffopts,
324 _('hg extdiff [OPT]... [FILE]...'),
327 _('hg extdiff [OPT]... [FILE]...'),
325 inferrepo=True)
328 inferrepo=True)
326 def extdiff(ui, repo, *pats, **opts):
329 def extdiff(ui, repo, *pats, **opts):
327 '''use external program to diff repository (or selected files)
330 '''use external program to diff repository (or selected files)
328
331
329 Show differences between revisions for the specified files, using
332 Show differences between revisions for the specified files, using
330 an external program. The default program used is diff, with
333 an external program. The default program used is diff, with
331 default options "-Npru".
334 default options "-Npru".
332
335
333 To select a different program, use the -p/--program option. The
336 To select a different program, use the -p/--program option. The
334 program will be passed the names of two directories to compare. To
337 program will be passed the names of two directories to compare. To
335 pass additional options to the program, use -o/--option. These
338 pass additional options to the program, use -o/--option. These
336 will be passed before the names of the directories to compare.
339 will be passed before the names of the directories to compare.
337
340
338 When two revision arguments are given, then changes are shown
341 When two revision arguments are given, then changes are shown
339 between those revisions. If only one revision is specified then
342 between those revisions. If only one revision is specified then
340 that revision is compared to the working directory, and, when no
343 that revision is compared to the working directory, and, when no
341 revisions are specified, the working directory files are compared
344 revisions are specified, the working directory files are compared
342 to its parent.'''
345 to its parent.'''
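# Editor's note -- illustrative invocations of the command documented
# above (revision numbers and the tool name are hypothetical; -p, -r
# and -c are the options declared in extdiffopts earlier in this file):
#
#   hg extdiff -p meld -r 1 -r 2   # compare two revisions with meld
#   hg extdiff -c tip              # inspect the change made by tip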
343 opts = pycompat.byteskwargs(opts)
346 opts = pycompat.byteskwargs(opts)
344 program = opts.get('program')
347 program = opts.get('program')
345 option = opts.get('option')
348 option = opts.get('option')
346 if not program:
349 if not program:
347 program = 'diff'
350 program = 'diff'
348 option = option or ['-Npru']
351 option = option or ['-Npru']
349 cmdline = ' '.join(map(util.shellquote, [program] + option))
352 cmdline = ' '.join(map(util.shellquote, [program] + option))
350 return dodiff(ui, repo, cmdline, pats, opts)
353 return dodiff(ui, repo, cmdline, pats, opts)
351
354
352 class savedcmd(object):
355 class savedcmd(object):
353 """use external program to diff repository (or selected files)
356 """use external program to diff repository (or selected files)
354
357
355 Show differences between revisions for the specified files, using
358 Show differences between revisions for the specified files, using
356 the following program::
359 the following program::
357
360
358 %(path)s
361 %(path)s
359
362
360 When two revision arguments are given, then changes are shown
363 When two revision arguments are given, then changes are shown
361 between those revisions. If only one revision is specified then
364 between those revisions. If only one revision is specified then
362 that revision is compared to the working directory, and, when no
365 that revision is compared to the working directory, and, when no
363 revisions are specified, the working directory files are compared
366 revisions are specified, the working directory files are compared
364 to its parent.
367 to its parent.
365 """
368 """
366
369
367 def __init__(self, path, cmdline):
370 def __init__(self, path, cmdline):
368 # We can't pass non-ASCII through docstrings (and path is
371 # We can't pass non-ASCII through docstrings (and path is
369 # in an unknown encoding anyway)
372 # in an unknown encoding anyway)
370 docpath = util.escapestr(path)
373 docpath = stringutil.escapestr(path)
371 self.__doc__ %= {r'path': pycompat.sysstr(util.uirepr(docpath))}
374 self.__doc__ %= {r'path': pycompat.sysstr(stringutil.uirepr(docpath))}
372 self._cmdline = cmdline
375 self._cmdline = cmdline
373
376
374 def __call__(self, ui, repo, *pats, **opts):
377 def __call__(self, ui, repo, *pats, **opts):
375 opts = pycompat.byteskwargs(opts)
378 opts = pycompat.byteskwargs(opts)
376 options = ' '.join(map(util.shellquote, opts['option']))
379 options = ' '.join(map(util.shellquote, opts['option']))
377 if options:
380 if options:
378 options = ' ' + options
381 options = ' ' + options
379 return dodiff(ui, repo, self._cmdline + options, pats, opts)
382 return dodiff(ui, repo, self._cmdline + options, pats, opts)
380
383
381 def uisetup(ui):
384 def uisetup(ui):
382 for cmd, path in ui.configitems('extdiff'):
385 for cmd, path in ui.configitems('extdiff'):
383 path = util.expandpath(path)
386 path = util.expandpath(path)
384 if cmd.startswith('cmd.'):
387 if cmd.startswith('cmd.'):
385 cmd = cmd[4:]
388 cmd = cmd[4:]
386 if not path:
389 if not path:
387 path = util.findexe(cmd)
390 path = util.findexe(cmd)
388 if path is None:
391 if path is None:
389 path = filemerge.findexternaltool(ui, cmd) or cmd
392 path = filemerge.findexternaltool(ui, cmd) or cmd
390 diffopts = ui.config('extdiff', 'opts.' + cmd)
393 diffopts = ui.config('extdiff', 'opts.' + cmd)
391 cmdline = util.shellquote(path)
394 cmdline = util.shellquote(path)
392 if diffopts:
395 if diffopts:
393 cmdline += ' ' + diffopts
396 cmdline += ' ' + diffopts
394 elif cmd.startswith('opts.'):
397 elif cmd.startswith('opts.'):
395 continue
398 continue
396 else:
399 else:
397 if path:
400 if path:
398 # case "cmd = path opts"
401 # case "cmd = path opts"
399 cmdline = path
402 cmdline = path
400 diffopts = len(pycompat.shlexsplit(cmdline)) > 1
403 diffopts = len(pycompat.shlexsplit(cmdline)) > 1
401 else:
404 else:
402 # case "cmd ="
405 # case "cmd ="
403 path = util.findexe(cmd)
406 path = util.findexe(cmd)
404 if path is None:
407 if path is None:
405 path = filemerge.findexternaltool(ui, cmd) or cmd
408 path = filemerge.findexternaltool(ui, cmd) or cmd
406 cmdline = util.shellquote(path)
409 cmdline = util.shellquote(path)
407 diffopts = False
410 diffopts = False
408 # look for diff arguments in [diff-tools] then [merge-tools]
411 # look for diff arguments in [diff-tools] then [merge-tools]
409 if not diffopts:
412 if not diffopts:
410 args = ui.config('diff-tools', cmd+'.diffargs') or \
413 args = ui.config('diff-tools', cmd+'.diffargs') or \
411 ui.config('merge-tools', cmd+'.diffargs')
414 ui.config('merge-tools', cmd+'.diffargs')
412 if args:
415 if args:
413 cmdline += ' ' + args
416 cmdline += ' ' + args
414 command(cmd, extdiffopts[:], _('hg %s [OPTION]... [FILE]...') % cmd,
417 command(cmd, extdiffopts[:], _('hg %s [OPTION]... [FILE]...') % cmd,
415 inferrepo=True)(savedcmd(path, cmdline))
418 inferrepo=True)(savedcmd(path, cmdline))
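# Editor's note -- a hypothetical [extdiff] hgrc section exercising the
# three configuration forms handled by the loop above (tool names,
# paths and options are illustrative only):
#
#   [extdiff]
#   cmd.meld =                           # "cmd." form; empty value: find "meld" on PATH
#   opts.meld = --newtab                 # "opts." form; extra options for cmd.meld
#   kdiff3 = /usr/bin/kdiff3 --L1 base   # bare form, parsed as "cmd = path opts"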
416
419
417 # tell hggettext to extract docstrings from these functions:
420 # tell hggettext to extract docstrings from these functions:
418 i18nfunctions = [savedcmd]
421 i18nfunctions = [savedcmd]
@@ -1,93 +1,96
1 # highlight.py - highlight extension implementation file
1 # highlight.py - highlight extension implementation file
2 #
2 #
3 # Copyright 2007-2009 Adam Hupp <adam@hupp.org> and others
3 # Copyright 2007-2009 Adam Hupp <adam@hupp.org> and others
4 #
4 #
5 # This software may be used and distributed according to the terms of the
5 # This software may be used and distributed according to the terms of the
6 # GNU General Public License version 2 or any later version.
6 # GNU General Public License version 2 or any later version.
7 #
7 #
8 # The original module was split in an interface and an implementation
8 # The original module was split in an interface and an implementation
9 # file to defer pygments loading and speedup extension setup.
9 # file to defer pygments loading and speedup extension setup.
10
10
11 from __future__ import absolute_import
11 from __future__ import absolute_import
12
12
13 from mercurial import demandimport
13 from mercurial import demandimport
14 demandimport.ignore.extend(['pkgutil', 'pkg_resources', '__main__'])
14 demandimport.ignore.extend(['pkgutil', 'pkg_resources', '__main__'])
15
15
16 from mercurial import (
16 from mercurial import (
17 encoding,
17 encoding,
18 util,
18 )
19
20 from mercurial.utils import (
21 stringutil,
19 )
22 )
20
23
21 with demandimport.deactivated():
24 with demandimport.deactivated():
22 import pygments
25 import pygments
23 import pygments.formatters
26 import pygments.formatters
24 import pygments.lexers
27 import pygments.lexers
25 import pygments.plugin
28 import pygments.plugin
26 import pygments.util
29 import pygments.util
27
30
28 for unused in pygments.plugin.find_plugin_lexers():
31 for unused in pygments.plugin.find_plugin_lexers():
29 pass
32 pass
30
33
31 highlight = pygments.highlight
34 highlight = pygments.highlight
32 ClassNotFound = pygments.util.ClassNotFound
35 ClassNotFound = pygments.util.ClassNotFound
33 guess_lexer = pygments.lexers.guess_lexer
36 guess_lexer = pygments.lexers.guess_lexer
34 guess_lexer_for_filename = pygments.lexers.guess_lexer_for_filename
37 guess_lexer_for_filename = pygments.lexers.guess_lexer_for_filename
35 TextLexer = pygments.lexers.TextLexer
38 TextLexer = pygments.lexers.TextLexer
36 HtmlFormatter = pygments.formatters.HtmlFormatter
39 HtmlFormatter = pygments.formatters.HtmlFormatter
37
40
38 SYNTAX_CSS = ('\n<link rel="stylesheet" href="{url}highlightcss" '
41 SYNTAX_CSS = ('\n<link rel="stylesheet" href="{url}highlightcss" '
39 'type="text/css" />')
42 'type="text/css" />')
40
43
41 def pygmentize(field, fctx, style, tmpl, guessfilenameonly=False):
44 def pygmentize(field, fctx, style, tmpl, guessfilenameonly=False):
42
45
43 # append a <link ...> to the syntax highlighting css
46 # append a <link ...> to the syntax highlighting css
44 old_header = tmpl.load('header')
47 old_header = tmpl.load('header')
45 if SYNTAX_CSS not in old_header:
48 if SYNTAX_CSS not in old_header:
46 new_header = old_header + SYNTAX_CSS
49 new_header = old_header + SYNTAX_CSS
47 tmpl.cache['header'] = new_header
50 tmpl.cache['header'] = new_header
48
51
49 text = fctx.data()
52 text = fctx.data()
50 if util.binary(text):
53 if stringutil.binary(text):
51 return
54 return
52
55
53 # str.splitlines() != unicode.splitlines() because "reasons"
56 # str.splitlines() != unicode.splitlines() because "reasons"
54 for c in "\x0c\x1c\x1d\x1e":
57 for c in "\x0c\x1c\x1d\x1e":
55 if c in text:
58 if c in text:
56 text = text.replace(c, '')
59 text = text.replace(c, '')
57
60
58 # Pygments is best used with Unicode strings:
61 # Pygments is best used with Unicode strings:
59 # <http://pygments.org/docs/unicode/>
62 # <http://pygments.org/docs/unicode/>
60 text = text.decode(encoding.encoding, 'replace')
63 text = text.decode(encoding.encoding, 'replace')
61
64
62 # To get multi-line strings right, we can't format line-by-line
65 # To get multi-line strings right, we can't format line-by-line
63 try:
66 try:
64 lexer = guess_lexer_for_filename(fctx.path(), text[:1024],
67 lexer = guess_lexer_for_filename(fctx.path(), text[:1024],
65 stripnl=False)
68 stripnl=False)
66 except (ClassNotFound, ValueError):
69 except (ClassNotFound, ValueError):
67 # guess_lexer will return a lexer if *any* lexer matches. There is
70 # guess_lexer will return a lexer if *any* lexer matches. There is
68 # no way to specify a minimum match score. This can give a high rate of
71 # no way to specify a minimum match score. This can give a high rate of
69 # false positives on files with an unknown filename pattern.
72 # false positives on files with an unknown filename pattern.
70 if guessfilenameonly:
73 if guessfilenameonly:
71 return
74 return
72
75
73 try:
76 try:
74 lexer = guess_lexer(text[:1024], stripnl=False)
77 lexer = guess_lexer(text[:1024], stripnl=False)
75 except (ClassNotFound, ValueError):
78 except (ClassNotFound, ValueError):
76 # Don't highlight unknown files
79 # Don't highlight unknown files
77 return
80 return
78
81
79 # Don't highlight text files
82 # Don't highlight text files
80 if isinstance(lexer, TextLexer):
83 if isinstance(lexer, TextLexer):
81 return
84 return
82
85
83 formatter = HtmlFormatter(nowrap=True, style=style)
86 formatter = HtmlFormatter(nowrap=True, style=style)
84
87
85 colorized = highlight(text, lexer, formatter)
88 colorized = highlight(text, lexer, formatter)
86 coloriter = (s.encode(encoding.encoding, 'replace')
89 coloriter = (s.encode(encoding.encoding, 'replace')
87 for s in colorized.splitlines())
90 for s in colorized.splitlines())
88
91
89 tmpl.filters['colorize'] = lambda x: next(coloriter)
92 tmpl.filters['colorize'] = lambda x: next(coloriter)
90
93
91 oldl = tmpl.cache[field]
94 oldl = tmpl.cache[field]
92 newl = oldl.replace('line|escape', 'line|colorize')
95 newl = oldl.replace('line|escape', 'line|colorize')
93 tmpl.cache[field] = newl
96 tmpl.cache[field] = newl
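# Editor's note: a minimal standalone sketch of the pygmentize() flow
# above, driving the same pygments names bound at the top of this file
# (the file name, content and style are made up; requires pygments):
if __name__ == '__main__':
    sampletext = u'def greet():\n    return "hi"\n'
    try:
        # guess from the file name first, as pygmentize() does
        samplelexer = guess_lexer_for_filename('example.py', sampletext[:1024],
                                               stripnl=False)
    except (ClassNotFound, ValueError):
        samplelexer = TextLexer()
    sampleformatter = HtmlFormatter(nowrap=True, style='colorful')
    print(highlight(sampletext, samplelexer, sampleformatter))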
@@ -1,1651 +1,1654
1 # histedit.py - interactive history editing for mercurial
1 # histedit.py - interactive history editing for mercurial
2 #
2 #
3 # Copyright 2009 Augie Fackler <raf@durin42.com>
3 # Copyright 2009 Augie Fackler <raf@durin42.com>
4 #
4 #
5 # This software may be used and distributed according to the terms of the
5 # This software may be used and distributed according to the terms of the
6 # GNU General Public License version 2 or any later version.
6 # GNU General Public License version 2 or any later version.
7 """interactive history editing
7 """interactive history editing
8
8
9 With this extension installed, Mercurial gains one new command: histedit. Usage
9 With this extension installed, Mercurial gains one new command: histedit. Usage
10 is as follows, assuming the following history::
10 is as follows, assuming the following history::
11
11
12 @ 3[tip] 7c2fd3b9020c 2009-04-27 18:04 -0500 durin42
12 @ 3[tip] 7c2fd3b9020c 2009-04-27 18:04 -0500 durin42
13 | Add delta
13 | Add delta
14 |
14 |
15 o 2 030b686bedc4 2009-04-27 18:04 -0500 durin42
15 o 2 030b686bedc4 2009-04-27 18:04 -0500 durin42
16 | Add gamma
16 | Add gamma
17 |
17 |
18 o 1 c561b4e977df 2009-04-27 18:04 -0500 durin42
18 o 1 c561b4e977df 2009-04-27 18:04 -0500 durin42
19 | Add beta
19 | Add beta
20 |
20 |
21 o 0 d8d2fcd0e319 2009-04-27 18:04 -0500 durin42
21 o 0 d8d2fcd0e319 2009-04-27 18:04 -0500 durin42
22 Add alpha
22 Add alpha
23
23
24 If you were to run ``hg histedit c561b4e977df``, you would see the following
24 If you were to run ``hg histedit c561b4e977df``, you would see the following
25 file open in your editor::
25 file open in your editor::
26
26
27 pick c561b4e977df Add beta
27 pick c561b4e977df Add beta
28 pick 030b686bedc4 Add gamma
28 pick 030b686bedc4 Add gamma
29 pick 7c2fd3b9020c Add delta
29 pick 7c2fd3b9020c Add delta
30
30
31 # Edit history between c561b4e977df and 7c2fd3b9020c
31 # Edit history between c561b4e977df and 7c2fd3b9020c
32 #
32 #
33 # Commits are listed from least to most recent
33 # Commits are listed from least to most recent
34 #
34 #
35 # Commands:
35 # Commands:
36 # p, pick = use commit
36 # p, pick = use commit
37 # e, edit = use commit, but stop for amending
37 # e, edit = use commit, but stop for amending
38 # f, fold = use commit, but combine it with the one above
38 # f, fold = use commit, but combine it with the one above
39 # r, roll = like fold, but discard this commit's description and date
39 # r, roll = like fold, but discard this commit's description and date
40 # d, drop = remove commit from history
40 # d, drop = remove commit from history
41 # m, mess = edit commit message without changing commit content
41 # m, mess = edit commit message without changing commit content
42 # b, base = checkout changeset and apply further changesets from there
42 # b, base = checkout changeset and apply further changesets from there
43 #
43 #
44
44
45 In this file, lines beginning with ``#`` are ignored. You must specify a rule
45 In this file, lines beginning with ``#`` are ignored. You must specify a rule
46 for each revision in your history. For example, if you had meant to add gamma
46 for each revision in your history. For example, if you had meant to add gamma
47 before beta, and then wanted to add delta in the same revision as beta, you
47 before beta, and then wanted to add delta in the same revision as beta, you
48 would reorganize the file to look like this::
48 would reorganize the file to look like this::
49
49
50 pick 030b686bedc4 Add gamma
50 pick 030b686bedc4 Add gamma
51 pick c561b4e977df Add beta
51 pick c561b4e977df Add beta
52 fold 7c2fd3b9020c Add delta
52 fold 7c2fd3b9020c Add delta
53
53
54 # Edit history between c561b4e977df and 7c2fd3b9020c
54 # Edit history between c561b4e977df and 7c2fd3b9020c
55 #
55 #
56 # Commits are listed from least to most recent
56 # Commits are listed from least to most recent
57 #
57 #
58 # Commands:
58 # Commands:
59 # p, pick = use commit
59 # p, pick = use commit
60 # e, edit = use commit, but stop for amending
60 # e, edit = use commit, but stop for amending
61 # f, fold = use commit, but combine it with the one above
61 # f, fold = use commit, but combine it with the one above
62 # r, roll = like fold, but discard this commit's description and date
62 # r, roll = like fold, but discard this commit's description and date
63 # d, drop = remove commit from history
63 # d, drop = remove commit from history
64 # m, mess = edit commit message without changing commit content
64 # m, mess = edit commit message without changing commit content
65 # b, base = checkout changeset and apply further changesets from there
65 # b, base = checkout changeset and apply further changesets from there
66 #
66 #
67
67
68 At which point you close the editor and ``histedit`` starts working. When you
68 At which point you close the editor and ``histedit`` starts working. When you
69 specify a ``fold`` operation, ``histedit`` will open an editor when it folds
69 specify a ``fold`` operation, ``histedit`` will open an editor when it folds
70 those revisions together, offering you a chance to clean up the commit message::
70 those revisions together, offering you a chance to clean up the commit message::
71
71
72 Add beta
72 Add beta
73 ***
73 ***
74 Add delta
74 Add delta
75
75
76 Edit the commit message to your liking, then close the editor. The date used
76 Edit the commit message to your liking, then close the editor. The date used
77 for the commit will be the later of the two commits' dates. For this example,
77 for the commit will be the later of the two commits' dates. For this example,
78 let's assume that the commit message was changed to ``Add beta and delta.``
78 let's assume that the commit message was changed to ``Add beta and delta.``
79 After histedit has run and had a chance to remove any old or temporary
79 After histedit has run and had a chance to remove any old or temporary
80 revisions it needed, the history looks like this::
80 revisions it needed, the history looks like this::
81
81
82 @ 2[tip] 989b4d060121 2009-04-27 18:04 -0500 durin42
82 @ 2[tip] 989b4d060121 2009-04-27 18:04 -0500 durin42
83 | Add beta and delta.
83 | Add beta and delta.
84 |
84 |
85 o 1 081603921c3f 2009-04-27 18:04 -0500 durin42
85 o 1 081603921c3f 2009-04-27 18:04 -0500 durin42
86 | Add gamma
86 | Add gamma
87 |
87 |
88 o 0 d8d2fcd0e319 2009-04-27 18:04 -0500 durin42
88 o 0 d8d2fcd0e319 2009-04-27 18:04 -0500 durin42
89 Add alpha
89 Add alpha
90
90
91 Note that ``histedit`` does *not* remove any revisions (even its own temporary
91 Note that ``histedit`` does *not* remove any revisions (even its own temporary
92 ones) until after it has completed all the editing operations, so it will
92 ones) until after it has completed all the editing operations, so it will
93 probably perform several strip operations when it's done. For the above example,
93 probably perform several strip operations when it's done. For the above example,
94 it had to run strip twice. Strip can be slow depending on a variety of factors,
94 it had to run strip twice. Strip can be slow depending on a variety of factors,
95 so you might need to be a little patient. You can choose to keep the original
95 so you might need to be a little patient. You can choose to keep the original
96 revisions by passing the ``--keep`` flag.
96 revisions by passing the ``--keep`` flag.
97
97
98 The ``edit`` operation will drop you back to a command prompt,
98 The ``edit`` operation will drop you back to a command prompt,
99 allowing you to edit files freely, or even use ``hg record`` to commit
99 allowing you to edit files freely, or even use ``hg record`` to commit
100 some changes as a separate commit. When you're done, any remaining
100 some changes as a separate commit. When you're done, any remaining
101 uncommitted changes will be committed as well. When done, run ``hg
101 uncommitted changes will be committed as well. When done, run ``hg
102 histedit --continue`` to finish this step. If there are uncommitted
102 histedit --continue`` to finish this step. If there are uncommitted
103 changes, you'll be prompted for a new commit message, but the default
103 changes, you'll be prompted for a new commit message, but the default
104 commit message will be the original message for the ``edit`` ed
104 commit message will be the original message for the ``edit`` ed
105 revision, and the date of the original commit will be preserved.
105 revision, and the date of the original commit will be preserved.
106
106
107 The ``message`` operation will give you a chance to revise a commit
107 The ``message`` operation will give you a chance to revise a commit
108 message without changing the contents. It's a shortcut for doing
108 message without changing the contents. It's a shortcut for doing
109 ``edit`` immediately followed by `hg histedit --continue``.
109 ``edit`` immediately followed by `hg histedit --continue``.
110
110
111 If ``histedit`` encounters a conflict when moving a revision (while
111 If ``histedit`` encounters a conflict when moving a revision (while
112 handling ``pick`` or ``fold``), it'll stop in a similar manner to
112 handling ``pick`` or ``fold``), it'll stop in a similar manner to
113 ``edit`` with the difference that it won't prompt you for a commit
113 ``edit`` with the difference that it won't prompt you for a commit
114 message when done. If you decide at this point that you don't like how
114 message when done. If you decide at this point that you don't like how
115 much work it will be to rearrange history, or that you made a mistake,
115 much work it will be to rearrange history, or that you made a mistake,
116 you can use ``hg histedit --abort`` to abandon the new changes you
116 you can use ``hg histedit --abort`` to abandon the new changes you
117 have made and return to the state before you attempted to edit your
117 have made and return to the state before you attempted to edit your
118 history.
118 history.
119
119
120 If we clone the histedit-ed example repository above and add four more
120 If we clone the histedit-ed example repository above and add four more
121 changes, such that we have the following history::
121 changes, such that we have the following history::
122
122
123 @ 6[tip] 038383181893 2009-04-27 18:04 -0500 stefan
123 @ 6[tip] 038383181893 2009-04-27 18:04 -0500 stefan
124 | Add theta
124 | Add theta
125 |
125 |
126 o 5 140988835471 2009-04-27 18:04 -0500 stefan
126 o 5 140988835471 2009-04-27 18:04 -0500 stefan
127 | Add eta
127 | Add eta
128 |
128 |
129 o 4 122930637314 2009-04-27 18:04 -0500 stefan
129 o 4 122930637314 2009-04-27 18:04 -0500 stefan
130 | Add zeta
130 | Add zeta
131 |
131 |
132 o 3 836302820282 2009-04-27 18:04 -0500 stefan
132 o 3 836302820282 2009-04-27 18:04 -0500 stefan
133 | Add epsilon
133 | Add epsilon
134 |
134 |
135 o 2 989b4d060121 2009-04-27 18:04 -0500 durin42
135 o 2 989b4d060121 2009-04-27 18:04 -0500 durin42
136 | Add beta and delta.
136 | Add beta and delta.
137 |
137 |
138 o 1 081603921c3f 2009-04-27 18:04 -0500 durin42
138 o 1 081603921c3f 2009-04-27 18:04 -0500 durin42
139 | Add gamma
139 | Add gamma
140 |
140 |
141 o 0 d8d2fcd0e319 2009-04-27 18:04 -0500 durin42
141 o 0 d8d2fcd0e319 2009-04-27 18:04 -0500 durin42
142 Add alpha
142 Add alpha
143
143
144 If you run ``hg histedit --outgoing`` on the clone then it is the same
144 If you run ``hg histedit --outgoing`` on the clone then it is the same
145 as running ``hg histedit 836302820282``. If you plan to push to a
145 as running ``hg histedit 836302820282``. If you plan to push to a
146 repository that Mercurial does not detect to be related to the source
146 repository that Mercurial does not detect to be related to the source
147 repo, you can add a ``--force`` option.
147 repo, you can add a ``--force`` option.
148
148
149 Config
149 Config
150 ------
150 ------
151
151
152 Histedit rule lines are truncated to 80 characters by default. You
152 Histedit rule lines are truncated to 80 characters by default. You
153 can customize this behavior by setting a different length in your
153 can customize this behavior by setting a different length in your
154 configuration file::
154 configuration file::
155
155
156 [histedit]
156 [histedit]
157 linelen = 120 # truncate rule lines at 120 characters
157 linelen = 120 # truncate rule lines at 120 characters
158
158
159 ``hg histedit`` attempts to automatically choose an appropriate base
159 ``hg histedit`` attempts to automatically choose an appropriate base
160 revision to use. To change which base revision is used, define a
160 revision to use. To change which base revision is used, define a
161 revset in your configuration file::
161 revset in your configuration file::
162
162
163 [histedit]
163 [histedit]
164 defaultrev = only(.) & draft()
164 defaultrev = only(.) & draft()
165
165
166 By default each edited revision needs to be present in histedit commands.
166 By default each edited revision needs to be present in histedit commands.
167 To remove a revision you need to use the ``drop`` operation. You can configure
167 To remove a revision you need to use the ``drop`` operation. You can configure
168 the drop to be implicit for missing commits by adding::
168 the drop to be implicit for missing commits by adding::
169
169
170 [histedit]
170 [histedit]
171 dropmissing = True
171 dropmissing = True
172
172
173 By default, histedit will close the transaction after each action. For
173 By default, histedit will close the transaction after each action. For
174 performance purposes, you can configure histedit to use a single transaction
174 performance purposes, you can configure histedit to use a single transaction
175 across the entire histedit. WARNING: This setting introduces a significant risk
175 across the entire histedit. WARNING: This setting introduces a significant risk
176 of losing the work you've done in a histedit if the histedit aborts
176 of losing the work you've done in a histedit if the histedit aborts
177 unexpectedly::
177 unexpectedly::
178
178
179 [histedit]
179 [histedit]
180 singletransaction = True
180 singletransaction = True
181
181
182 """
182 """
183
183
184 from __future__ import absolute_import
184 from __future__ import absolute_import
185
185
186 import errno
186 import errno
187 import os
187 import os
188
188
189 from mercurial.i18n import _
189 from mercurial.i18n import _
190 from mercurial import (
190 from mercurial import (
191 bundle2,
191 bundle2,
192 cmdutil,
192 cmdutil,
193 context,
193 context,
194 copies,
194 copies,
195 destutil,
195 destutil,
196 discovery,
196 discovery,
197 error,
197 error,
198 exchange,
198 exchange,
199 extensions,
199 extensions,
200 hg,
200 hg,
201 lock,
201 lock,
202 merge as mergemod,
202 merge as mergemod,
203 mergeutil,
203 mergeutil,
204 node,
204 node,
205 obsolete,
205 obsolete,
206 pycompat,
206 pycompat,
207 registrar,
207 registrar,
208 repair,
208 repair,
209 scmutil,
209 scmutil,
210 util,
210 util,
211 )
211 )
212 from mercurial.utils import (
213 stringutil,
214 )
212
215
213 pickle = util.pickle
216 pickle = util.pickle
214 release = lock.release
217 release = lock.release
215 cmdtable = {}
218 cmdtable = {}
216 command = registrar.command(cmdtable)
219 command = registrar.command(cmdtable)
217
220
218 configtable = {}
221 configtable = {}
219 configitem = registrar.configitem(configtable)
222 configitem = registrar.configitem(configtable)
220 configitem('experimental', 'histedit.autoverb',
223 configitem('experimental', 'histedit.autoverb',
221 default=False,
224 default=False,
222 )
225 )
223 configitem('histedit', 'defaultrev',
226 configitem('histedit', 'defaultrev',
224 default=None,
227 default=None,
225 )
228 )
226 configitem('histedit', 'dropmissing',
229 configitem('histedit', 'dropmissing',
227 default=False,
230 default=False,
228 )
231 )
229 configitem('histedit', 'linelen',
232 configitem('histedit', 'linelen',
230 default=80,
233 default=80,
231 )
234 )
232 configitem('histedit', 'singletransaction',
235 configitem('histedit', 'singletransaction',
233 default=False,
236 default=False,
234 )
237 )
235
238
236 # Note for extension authors: ONLY specify testedwith = 'ships-with-hg-core' for
239 # Note for extension authors: ONLY specify testedwith = 'ships-with-hg-core' for
237 # extensions which SHIP WITH MERCURIAL. Non-mainline extensions should
240 # extensions which SHIP WITH MERCURIAL. Non-mainline extensions should
238 # be specifying the version(s) of Mercurial they are tested with, or
241 # be specifying the version(s) of Mercurial they are tested with, or
239 # leave the attribute unspecified.
242 # leave the attribute unspecified.
240 testedwith = 'ships-with-hg-core'
243 testedwith = 'ships-with-hg-core'
241
244
242 actiontable = {}
245 actiontable = {}
243 primaryactions = set()
246 primaryactions = set()
244 secondaryactions = set()
247 secondaryactions = set()
245 tertiaryactions = set()
248 tertiaryactions = set()
246 internalactions = set()
249 internalactions = set()
247
250
248 def geteditcomment(ui, first, last):
251 def geteditcomment(ui, first, last):
249 """ construct the editor comment
252 """ construct the editor comment
250 The comment includes::
253 The comment includes::
251 - an intro
254 - an intro
252 - sorted primary commands
255 - sorted primary commands
253 - sorted short commands
256 - sorted short commands
254 - sorted long commands
257 - sorted long commands
255 - additional hints
258 - additional hints
256
259
257 Commands are only included once.
260 Commands are only included once.
258 """
261 """
259 intro = _("""Edit history between %s and %s
262 intro = _("""Edit history between %s and %s
260
263
261 Commits are listed from least to most recent
264 Commits are listed from least to most recent
262
265
263 You can reorder changesets by reordering the lines
266 You can reorder changesets by reordering the lines
264
267
265 Commands:
268 Commands:
266 """)
269 """)
267 actions = []
270 actions = []
268 def addverb(v):
271 def addverb(v):
269 a = actiontable[v]
272 a = actiontable[v]
270 lines = a.message.split("\n")
273 lines = a.message.split("\n")
271 if len(a.verbs):
274 if len(a.verbs):
272 v = ', '.join(sorted(a.verbs, key=lambda v: len(v)))
275 v = ', '.join(sorted(a.verbs, key=lambda v: len(v)))
273 actions.append(" %s = %s" % (v, lines[0]))
276 actions.append(" %s = %s" % (v, lines[0]))
274 actions.extend([' %s' for l in lines[1:]])
277 actions.extend([' %s' for l in lines[1:]])
275
278
276 for v in (
279 for v in (
277 sorted(primaryactions) +
280 sorted(primaryactions) +
278 sorted(secondaryactions) +
281 sorted(secondaryactions) +
279 sorted(tertiaryactions)
282 sorted(tertiaryactions)
280 ):
283 ):
281 addverb(v)
284 addverb(v)
282 actions.append('')
285 actions.append('')
283
286
284 hints = []
287 hints = []
285 if ui.configbool('histedit', 'dropmissing'):
288 if ui.configbool('histedit', 'dropmissing'):
286 hints.append("Deleting a changeset from the list "
289 hints.append("Deleting a changeset from the list "
287 "will DISCARD it from the edited history!")
290 "will DISCARD it from the edited history!")
288
291
289 lines = (intro % (first, last)).split('\n') + actions + hints
292 lines = (intro % (first, last)).split('\n') + actions + hints
290
293
291 return ''.join(['# %s\n' % l if l else '#\n' for l in lines])
294 return ''.join(['# %s\n' % l if l else '#\n' for l in lines])
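# Editor's note -- a standalone rendition of the join above: blank
# lines become bare "#" markers while other lines get a "# " prefix
# (the sample lines are made up):
#
#   >>> lines = ['Edit history between a and b', '', 'Commands:']
#   >>> print(''.join(['# %s\n' % l if l else '#\n' for l in lines]))
#   # Edit history between a and b
#   #
#   # Commands: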
292
295
293 class histeditstate(object):
296 class histeditstate(object):
294 def __init__(self, repo, parentctxnode=None, actions=None, keep=None,
297 def __init__(self, repo, parentctxnode=None, actions=None, keep=None,
295 topmost=None, replacements=None, lock=None, wlock=None):
298 topmost=None, replacements=None, lock=None, wlock=None):
296 self.repo = repo
299 self.repo = repo
297 self.actions = actions
300 self.actions = actions
298 self.keep = keep
301 self.keep = keep
299 self.topmost = topmost
302 self.topmost = topmost
300 self.parentctxnode = parentctxnode
303 self.parentctxnode = parentctxnode
301 self.lock = lock
304 self.lock = lock
302 self.wlock = wlock
305 self.wlock = wlock
303 self.backupfile = None
306 self.backupfile = None
304 if replacements is None:
307 if replacements is None:
305 self.replacements = []
308 self.replacements = []
306 else:
309 else:
307 self.replacements = replacements
310 self.replacements = replacements
308
311
309 def read(self):
312 def read(self):
310 """Load histedit state from disk and set fields appropriately."""
313 """Load histedit state from disk and set fields appropriately."""
311 try:
314 try:
312 state = self.repo.vfs.read('histedit-state')
315 state = self.repo.vfs.read('histedit-state')
313 except IOError as err:
316 except IOError as err:
314 if err.errno != errno.ENOENT:
317 if err.errno != errno.ENOENT:
315 raise
318 raise
316 cmdutil.wrongtooltocontinue(self.repo, _('histedit'))
319 cmdutil.wrongtooltocontinue(self.repo, _('histedit'))
317
320
318 if state.startswith('v1\n'):
321 if state.startswith('v1\n'):
319 data = self._load()
322 data = self._load()
320 parentctxnode, rules, keep, topmost, replacements, backupfile = data
323 parentctxnode, rules, keep, topmost, replacements, backupfile = data
321 else:
324 else:
322 data = pickle.loads(state)
325 data = pickle.loads(state)
323 parentctxnode, rules, keep, topmost, replacements = data
326 parentctxnode, rules, keep, topmost, replacements = data
324 backupfile = None
327 backupfile = None
325
328
326 self.parentctxnode = parentctxnode
329 self.parentctxnode = parentctxnode
327 rules = "\n".join(["%s %s" % (verb, rest) for [verb, rest] in rules])
330 rules = "\n".join(["%s %s" % (verb, rest) for [verb, rest] in rules])
328 actions = parserules(rules, self)
331 actions = parserules(rules, self)
329 self.actions = actions
332 self.actions = actions
330 self.keep = keep
333 self.keep = keep
331 self.topmost = topmost
334 self.topmost = topmost
332 self.replacements = replacements
335 self.replacements = replacements
333 self.backupfile = backupfile
336 self.backupfile = backupfile
334
337
335 def write(self, tr=None):
338 def write(self, tr=None):
336 if tr:
339 if tr:
337 tr.addfilegenerator('histedit-state', ('histedit-state',),
340 tr.addfilegenerator('histedit-state', ('histedit-state',),
338 self._write, location='plain')
341 self._write, location='plain')
339 else:
342 else:
340 with self.repo.vfs("histedit-state", "w") as f:
343 with self.repo.vfs("histedit-state", "w") as f:
341 self._write(f)
344 self._write(f)
342
345
343 def _write(self, fp):
346 def _write(self, fp):
344 fp.write('v1\n')
347 fp.write('v1\n')
345 fp.write('%s\n' % node.hex(self.parentctxnode))
348 fp.write('%s\n' % node.hex(self.parentctxnode))
346 fp.write('%s\n' % node.hex(self.topmost))
349 fp.write('%s\n' % node.hex(self.topmost))
347 fp.write('%s\n' % ('True' if self.keep else 'False'))
350 fp.write('%s\n' % ('True' if self.keep else 'False'))
348 fp.write('%d\n' % len(self.actions))
351 fp.write('%d\n' % len(self.actions))
349 for action in self.actions:
352 for action in self.actions:
350 fp.write('%s\n' % action.tostate())
353 fp.write('%s\n' % action.tostate())
351 fp.write('%d\n' % len(self.replacements))
354 fp.write('%d\n' % len(self.replacements))
352 for replacement in self.replacements:
355 for replacement in self.replacements:
353 fp.write('%s%s\n' % (node.hex(replacement[0]), ''.join(node.hex(r)
356 fp.write('%s%s\n' % (node.hex(replacement[0]), ''.join(node.hex(r)
354 for r in replacement[1])))
357 for r in replacement[1])))
355 backupfile = self.backupfile
358 backupfile = self.backupfile
356 if not backupfile:
359 if not backupfile:
357 backupfile = ''
360 backupfile = ''
358 fp.write('%s\n' % backupfile)
361 fp.write('%s\n' % backupfile)
359
362
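# Editor's note -- for orientation, a v1 state file as serialized by
# _write() above (hashes abbreviated; real files carry full 40-hex
# nodes, and the annotations on the right are not part of the file):
#
#   v1
#   1f0dee641bb7...        parentctxnode
#   049cb312c4cb...        topmost
#   False                  keep
#   2                      number of actions; each action takes 2 lines
#   pick
#   617f94f13c0f...
#   mess
#   43bd75069608...
#   0                      number of replacement records
#                          backup file path (may be empty)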
360 def _load(self):
363 def _load(self):
361 fp = self.repo.vfs('histedit-state', 'r')
364 fp = self.repo.vfs('histedit-state', 'r')
362 lines = [l[:-1] for l in fp.readlines()]
365 lines = [l[:-1] for l in fp.readlines()]
363
366
364 index = 0
367 index = 0
365 lines[index] # version number
368 lines[index] # version number
366 index += 1
369 index += 1
367
370
368 parentctxnode = node.bin(lines[index])
371 parentctxnode = node.bin(lines[index])
369 index += 1
372 index += 1
370
373
371 topmost = node.bin(lines[index])
374 topmost = node.bin(lines[index])
372 index += 1
375 index += 1
373
376
374 keep = lines[index] == 'True'
377 keep = lines[index] == 'True'
375 index += 1
378 index += 1
376
379
377 # Rules
380 # Rules
378 rules = []
381 rules = []
379 rulelen = int(lines[index])
382 rulelen = int(lines[index])
380 index += 1
383 index += 1
381 for i in xrange(rulelen):
384 for i in xrange(rulelen):
382 ruleaction = lines[index]
385 ruleaction = lines[index]
383 index += 1
386 index += 1
384 rule = lines[index]
387 rule = lines[index]
385 index += 1
388 index += 1
386 rules.append((ruleaction, rule))
389 rules.append((ruleaction, rule))
387
390
388 # Replacements
391 # Replacements
389 replacements = []
392 replacements = []
390 replacementlen = int(lines[index])
393 replacementlen = int(lines[index])
391 index += 1
394 index += 1
392 for i in xrange(replacementlen):
395 for i in xrange(replacementlen):
393 replacement = lines[index]
396 replacement = lines[index]
394 original = node.bin(replacement[:40])
397 original = node.bin(replacement[:40])
395 succ = [node.bin(replacement[i:i + 40]) for i in
398 succ = [node.bin(replacement[i:i + 40]) for i in
396 range(40, len(replacement), 40)]
399 range(40, len(replacement), 40)]
397 replacements.append((original, succ))
400 replacements.append((original, succ))
398 index += 1
401 index += 1
399
402
400 backupfile = lines[index]
403 backupfile = lines[index]
401 index += 1
404 index += 1
402
405
403 fp.close()
406 fp.close()
404
407
405 return parentctxnode, rules, keep, topmost, replacements, backupfile
408 return parentctxnode, rules, keep, topmost, replacements, backupfile
406
409
407 def clear(self):
410 def clear(self):
408 if self.inprogress():
411 if self.inprogress():
409 self.repo.vfs.unlink('histedit-state')
412 self.repo.vfs.unlink('histedit-state')
410
413
411 def inprogress(self):
414 def inprogress(self):
412 return self.repo.vfs.exists('histedit-state')
415 return self.repo.vfs.exists('histedit-state')
413
416
414
417
415 class histeditaction(object):
418 class histeditaction(object):
416 def __init__(self, state, node):
419 def __init__(self, state, node):
417 self.state = state
420 self.state = state
418 self.repo = state.repo
421 self.repo = state.repo
419 self.node = node
422 self.node = node
420
423
421 @classmethod
424 @classmethod
422 def fromrule(cls, state, rule):
425 def fromrule(cls, state, rule):
423 """Parses the given rule, returning an instance of the histeditaction.
426 """Parses the given rule, returning an instance of the histeditaction.
424 """
427 """
425 rulehash = rule.strip().split(' ', 1)[0]
428 rulehash = rule.strip().split(' ', 1)[0]
426 try:
429 try:
427 rev = node.bin(rulehash)
430 rev = node.bin(rulehash)
428 except TypeError:
431 except TypeError:
429 raise error.ParseError("invalid changeset %s" % rulehash)
432 raise error.ParseError("invalid changeset %s" % rulehash)
430 return cls(state, rev)
433 return cls(state, rev)
431
434
432 def verify(self, prev, expected, seen):
435 def verify(self, prev, expected, seen):
433 """ Verifies semantic correctness of the rule"""
436 """ Verifies semantic correctness of the rule"""
434 repo = self.repo
437 repo = self.repo
435 ha = node.hex(self.node)
438 ha = node.hex(self.node)
436 try:
439 try:
437 self.node = repo[ha].node()
440 self.node = repo[ha].node()
438 except error.RepoError:
441 except error.RepoError:
439 raise error.ParseError(_('unknown changeset %s listed')
442 raise error.ParseError(_('unknown changeset %s listed')
440 % ha[:12])
443 % ha[:12])
441 if self.node is not None:
444 if self.node is not None:
442 self._verifynodeconstraints(prev, expected, seen)
445 self._verifynodeconstraints(prev, expected, seen)
443
446
444 def _verifynodeconstraints(self, prev, expected, seen):
447 def _verifynodeconstraints(self, prev, expected, seen):
445 # by default commands need a node in the edited list
448 # by default commands need a node in the edited list
446 if self.node not in expected:
449 if self.node not in expected:
447 raise error.ParseError(_('%s "%s" changeset was not a candidate')
450 raise error.ParseError(_('%s "%s" changeset was not a candidate')
448 % (self.verb, node.short(self.node)),
451 % (self.verb, node.short(self.node)),
449 hint=_('only use listed changesets'))
452 hint=_('only use listed changesets'))
450 # and only one command per node
453 # and only one command per node
451 if self.node in seen:
454 if self.node in seen:
452 raise error.ParseError(_('duplicated command for changeset %s') %
455 raise error.ParseError(_('duplicated command for changeset %s') %
453 node.short(self.node))
456 node.short(self.node))
454
457
455 def torule(self):
458 def torule(self):
456 """build a histedit rule line for an action
459 """build a histedit rule line for an action
457
460
458 by default lines are in the form:
461 by default lines are in the form:
459 <hash> <rev> <summary>
462 <hash> <rev> <summary>
460 """
463 """
461 ctx = self.repo[self.node]
464 ctx = self.repo[self.node]
462 summary = _getsummary(ctx)
465 summary = _getsummary(ctx)
463 line = '%s %s %d %s' % (self.verb, ctx, ctx.rev(), summary)
466 line = '%s %s %d %s' % (self.verb, ctx, ctx.rev(), summary)
464 # trim to 75 columns by default so it's not stupidly wide in my editor
467 # trim to 75 columns by default so it's not stupidly wide in my editor
465 # (the 5 more are left for verb)
468 # (the 5 more are left for verb)
466 maxlen = self.repo.ui.configint('histedit', 'linelen')
469 maxlen = self.repo.ui.configint('histedit', 'linelen')
467 maxlen = max(maxlen, 22) # avoid truncating hash
470 maxlen = max(maxlen, 22) # avoid truncating hash
468 return util.ellipsis(line, maxlen)
471 return stringutil.ellipsis(line, maxlen)
469
472
470 def tostate(self):
473 def tostate(self):
471 """Print an action in format used by histedit state files
474 """Print an action in format used by histedit state files
472 (the first line is the verb, the second line is the changeset hash)
475 (the first line is the verb, the second line is the changeset hash)
473 """
476 """
474 return "%s\n%s" % (self.verb, node.hex(self.node))
477 return "%s\n%s" % (self.verb, node.hex(self.node))
475
478
476 def run(self):
479 def run(self):
477 """Runs the action. The default behavior is simply to apply the action's
480 """Runs the action. The default behavior is simply to apply the action's
478 rulectx onto the current parentctx."""
481 rulectx onto the current parentctx."""
479 self.applychange()
482 self.applychange()
480 self.continuedirty()
483 self.continuedirty()
481 return self.continueclean()
484 return self.continueclean()
482
485
483 def applychange(self):
486 def applychange(self):
484 """Applies the changes from this action's rulectx onto the current
487 """Applies the changes from this action's rulectx onto the current
485 parentctx, but does not commit them."""
488 parentctx, but does not commit them."""
486 repo = self.repo
489 repo = self.repo
487 rulectx = repo[self.node]
490 rulectx = repo[self.node]
488 repo.ui.pushbuffer(error=True, labeled=True)
491 repo.ui.pushbuffer(error=True, labeled=True)
489 hg.update(repo, self.state.parentctxnode, quietempty=True)
492 hg.update(repo, self.state.parentctxnode, quietempty=True)
490 stats = applychanges(repo.ui, repo, rulectx, {})
493 stats = applychanges(repo.ui, repo, rulectx, {})
491 repo.dirstate.setbranch(rulectx.branch())
494 repo.dirstate.setbranch(rulectx.branch())
492 if stats and stats[3] > 0:
495 if stats and stats[3] > 0:
493 buf = repo.ui.popbuffer()
496 buf = repo.ui.popbuffer()
494 repo.ui.write(buf)
497 repo.ui.write(buf)
495 raise error.InterventionRequired(
498 raise error.InterventionRequired(
496 _('Fix up the change (%s %s)') %
499 _('Fix up the change (%s %s)') %
497 (self.verb, node.short(self.node)),
500 (self.verb, node.short(self.node)),
498 hint=_('hg histedit --continue to resume'))
501 hint=_('hg histedit --continue to resume'))
499 else:
502 else:
500 repo.ui.popbuffer()
503 repo.ui.popbuffer()
501
504
502 def continuedirty(self):
505 def continuedirty(self):
503 """Continues the action when changes have been applied to the working
506 """Continues the action when changes have been applied to the working
504 copy. The default behavior is to commit the dirty changes."""
507 copy. The default behavior is to commit the dirty changes."""
505 repo = self.repo
508 repo = self.repo
506 rulectx = repo[self.node]
509 rulectx = repo[self.node]
507
510
508 editor = self.commiteditor()
511 editor = self.commiteditor()
509 commit = commitfuncfor(repo, rulectx)
512 commit = commitfuncfor(repo, rulectx)
510
513
511 commit(text=rulectx.description(), user=rulectx.user(),
514 commit(text=rulectx.description(), user=rulectx.user(),
512 date=rulectx.date(), extra=rulectx.extra(), editor=editor)
515 date=rulectx.date(), extra=rulectx.extra(), editor=editor)
513
516
514 def commiteditor(self):
517 def commiteditor(self):
515 """The editor to be used to edit the commit message."""
518 """The editor to be used to edit the commit message."""
516 return False
519 return False
517
520
518 def continueclean(self):
521 def continueclean(self):
519 """Continues the action when the working copy is clean. The default
522 """Continues the action when the working copy is clean. The default
520 behavior is to accept the current commit as the new version of the
523 behavior is to accept the current commit as the new version of the
521 rulectx."""
524 rulectx."""
522 ctx = self.repo['.']
525 ctx = self.repo['.']
523 if ctx.node() == self.state.parentctxnode:
526 if ctx.node() == self.state.parentctxnode:
524 self.repo.ui.warn(_('%s: skipping changeset (no changes)\n') %
527 self.repo.ui.warn(_('%s: skipping changeset (no changes)\n') %
525 node.short(self.node))
528 node.short(self.node))
526 return ctx, [(self.node, tuple())]
529 return ctx, [(self.node, tuple())]
527 if ctx.node() == self.node:
530 if ctx.node() == self.node:
528 # Nothing changed
531 # Nothing changed
529 return ctx, []
532 return ctx, []
530 return ctx, [(self.node, (ctx.node(),))]
533 return ctx, [(self.node, (ctx.node(),))]
531
534
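# Illustrative note (editorial, not part of the module): the default action
# lifecycle above is run() -> applychange() -> continuedirty() ->
# continueclean(). Each run() returns (parentctx, replacements), where each
# replacement maps an old node to a tuple of successor nodes; an empty tuple
# means the changeset was dropped.
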
def commitfuncfor(repo, src):
    """Build a commit function for the replacement of <src>

    This function ensures we apply the same treatment to all changesets.

    - Add a 'histedit_source' entry in extra.

    Note that fold has its own separate logic because its handling is a bit
    different and not easily factored out of the fold method.
    """
    phasemin = src.phase()
    def commitfunc(**kwargs):
        overrides = {('phases', 'new-commit'): phasemin}
        with repo.ui.configoverride(overrides, 'histedit'):
            extra = kwargs.get(r'extra', {}).copy()
            extra['histedit_source'] = src.hex()
            kwargs[r'extra'] = extra
            return repo.commit(**kwargs)
    return commitfunc

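# Illustrative sketch (hypothetical usage, not part of the module): a commit
# function built by commitfuncfor() behaves like repo.commit(), but the
# rewritten commit defaults to the source's phase and records its origin:
#
#     src = repo['0123abc']          # '0123abc' is a placeholder revision
#     commit = commitfuncfor(repo, src)
#     n = commit(text=src.description(), user=src.user())
#     # repo[n].extra()['histedit_source'] == src.hex()
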
def applychanges(ui, repo, ctx, opts):
    """Merge changeset from ctx (only) in the current working directory"""
    wcpar = repo.dirstate.parents()[0]
    if ctx.p1().node() == wcpar:
        # edits are "in place": we do not need to make any merge,
        # just apply the changes on the parent for editing
        cmdutil.revert(ui, repo, ctx, (wcpar, node.nullid), all=True)
        stats = None
    else:
        try:
            # ui.forcemerge is an internal variable, do not document
            repo.ui.setconfig('ui', 'forcemerge', opts.get('tool', ''),
                              'histedit')
            stats = mergemod.graft(repo, ctx, ctx.p1(), ['local', 'histedit'])
        finally:
            repo.ui.setconfig('ui', 'forcemerge', '', 'histedit')
    return stats

def collapse(repo, firstctx, lastctx, commitopts, skipprompt=False):
    """collapse the set of revisions from first to last as a new one.

    Expected commit options are:
        - message
        - date
        - username
    Commit message is edited in all cases.

    This function works in memory."""
    ctxs = list(repo.set('%d::%d', firstctx.rev(), lastctx.rev()))
    if not ctxs:
        return None
    for c in ctxs:
        if not c.mutable():
            raise error.ParseError(
                _("cannot fold into public change %s") % node.short(c.node()))
    base = firstctx.parents()[0]

    # commit a new version of the old changeset, including the update
    # collect all files which might be affected
    files = set()
    for ctx in ctxs:
        files.update(ctx.files())

    # Recompute copies (avoid recording a -> b -> a)
    copied = copies.pathcopies(base, lastctx)

    # prune files which were reverted by the updates
    files = [f for f in files if not cmdutil.samefile(f, lastctx, base)]
    # commit version of these files as defined by head
    headmf = lastctx.manifest()
    def filectxfn(repo, ctx, path):
        if path in headmf:
            fctx = lastctx[path]
            flags = fctx.flags()
            mctx = context.memfilectx(repo, ctx,
                                      fctx.path(), fctx.data(),
                                      islink='l' in flags,
                                      isexec='x' in flags,
                                      copied=copied.get(path))
            return mctx
        return None

    if commitopts.get('message'):
        message = commitopts['message']
    else:
        message = firstctx.description()
    user = commitopts.get('user')
    date = commitopts.get('date')
    extra = commitopts.get('extra')

    parents = (firstctx.p1().node(), firstctx.p2().node())
    editor = None
    if not skipprompt:
        editor = cmdutil.getcommiteditor(edit=True, editform='histedit.fold')
    new = context.memctx(repo,
                         parents=parents,
                         text=message,
                         files=files,
                         filectxfn=filectxfn,
                         user=user,
                         date=date,
                         extra=extra,
                         editor=editor)
    return repo.commitctx(new)

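# Illustrative sketch (hypothetical usage, not part of the module): folding
# revisions 3::5 into a single in-memory commit, bypassing the editor:
#
#     first, last = repo[3], repo[5]
#     opts = {'message': 'combined changes', 'user': first.user()}
#     newnode = collapse(repo, first, last, opts, skipprompt=True)
#
# collapse() returns None when the '%d::%d' revset is empty, and raises
# error.ParseError if any revision in the range is public.
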
def _isdirtywc(repo):
    return repo[None].dirty(missing=True)

def abortdirty():
    raise error.Abort(_('working copy has pending changes'),
                      hint=_('amend, commit, or revert them and run histedit '
                             '--continue, or abort with histedit --abort'))

def action(verbs, message, priority=False, internal=False):
    def wrap(cls):
        assert not priority or not internal
        verb = verbs[0]
        if priority:
            primaryactions.add(verb)
        elif internal:
            internalactions.add(verb)
        elif len(verbs) > 1:
            secondaryactions.add(verb)
        else:
            tertiaryactions.add(verb)

        cls.verb = verb
        cls.verbs = verbs
        cls.message = message
        for verb in verbs:
            actiontable[verb] = cls
        return cls
    return wrap

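# Illustrative sketch (hypothetical verb, not part of the module): the
# decorator above is how every rule below registers itself. A single
# non-priority, non-internal verb lands in tertiaryactions; providing an
# abbreviation moves it to secondaryactions:
#
#     @action(['frobnicate', 'fr'],
#             _('frobnicate the changeset'))
#     class frobnicate(histeditaction):
#         def run(self):
#             return super(frobnicate, self).run()
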
@action(['pick', 'p'],
        _('use commit'),
        priority=True)
class pick(histeditaction):
    def run(self):
        rulectx = self.repo[self.node]
        if rulectx.parents()[0].node() == self.state.parentctxnode:
            self.repo.ui.debug('node %s unchanged\n' % node.short(self.node))
            return rulectx, []

        return super(pick, self).run()

@action(['edit', 'e'],
        _('use commit, but stop for amending'),
        priority=True)
class edit(histeditaction):
    def run(self):
        repo = self.repo
        rulectx = repo[self.node]
        hg.update(repo, self.state.parentctxnode, quietempty=True)
        applychanges(repo.ui, repo, rulectx, {})
        raise error.InterventionRequired(
            _('Editing (%s), you may commit or record as needed now.')
            % node.short(self.node),
            hint=_('hg histedit --continue to resume'))

    def commiteditor(self):
        return cmdutil.getcommiteditor(edit=True, editform='histedit.edit')

@action(['fold', 'f'],
        _('use commit, but combine it with the one above'))
class fold(histeditaction):
    def verify(self, prev, expected, seen):
        """Verifies semantic correctness of the fold rule"""
        super(fold, self).verify(prev, expected, seen)
        repo = self.repo
        if not prev:
            c = repo[self.node].parents()[0]
        elif prev.verb not in ('pick', 'base'):
            return
        else:
            c = repo[prev.node]
        if not c.mutable():
            raise error.ParseError(
                _("cannot fold into public change %s") % node.short(c.node()))

    def continuedirty(self):
        repo = self.repo
        rulectx = repo[self.node]

        commit = commitfuncfor(repo, rulectx)
        commit(text='fold-temp-revision %s' % node.short(self.node),
               user=rulectx.user(), date=rulectx.date(),
               extra=rulectx.extra())

    def continueclean(self):
        repo = self.repo
        ctx = repo['.']
        rulectx = repo[self.node]
        parentctxnode = self.state.parentctxnode
        if ctx.node() == parentctxnode:
            repo.ui.warn(_('%s: empty changeset\n') %
                         node.short(self.node))
            return ctx, [(self.node, (parentctxnode,))]

        parentctx = repo[parentctxnode]
        newcommits = set(c.node() for c in repo.set('(%d::. - %d)',
                                                    parentctx.rev(),
                                                    parentctx.rev()))
        if not newcommits:
            repo.ui.warn(_('%s: cannot fold - working copy is not a '
                           'descendant of previous commit %s\n') %
                         (node.short(self.node), node.short(parentctxnode)))
            return ctx, [(self.node, (ctx.node(),))]

        middlecommits = newcommits.copy()
        middlecommits.discard(ctx.node())

        return self.finishfold(repo.ui, repo, parentctx, rulectx, ctx.node(),
                               middlecommits)

    def skipprompt(self):
        """Returns true if the rule should skip the message editor.

        For example, 'fold' wants to show an editor, but 'rollup'
        doesn't want to.
        """
        return False

    def mergedescs(self):
        """Returns true if the rule should merge messages of multiple changes.

        This exists mainly so that 'rollup' rules can be a subclass of
        'fold'.
        """
        return True

    def firstdate(self):
        """Returns true if the rule should preserve the date of the first
        change.

        This exists mainly so that 'rollup' rules can be a subclass of
        'fold'.
        """
        return False

    def finishfold(self, ui, repo, ctx, oldctx, newnode, internalchanges):
        parent = ctx.parents()[0].node()
        repo.ui.pushbuffer()
        hg.update(repo, parent)
        repo.ui.popbuffer()
        ### prepare new commit data
        commitopts = {}
        commitopts['user'] = ctx.user()
        # commit message
        if not self.mergedescs():
            newmessage = ctx.description()
        else:
            newmessage = '\n***\n'.join(
                [ctx.description()] +
                [repo[r].description() for r in internalchanges] +
                [oldctx.description()]) + '\n'
        commitopts['message'] = newmessage
        # date
        if self.firstdate():
            commitopts['date'] = ctx.date()
        else:
            commitopts['date'] = max(ctx.date(), oldctx.date())
        extra = ctx.extra().copy()
        # histedit_source
        # note: ctx is likely a temporary commit but that is the best we can
        # do here. This is sufficient to solve issue3681 anyway.
        extra['histedit_source'] = '%s,%s' % (ctx.hex(), oldctx.hex())
        commitopts['extra'] = extra
        phasemin = max(ctx.phase(), oldctx.phase())
        overrides = {('phases', 'new-commit'): phasemin}
        with repo.ui.configoverride(overrides, 'histedit'):
            n = collapse(repo, ctx, repo[newnode], commitopts,
                         skipprompt=self.skipprompt())
        if n is None:
            return ctx, []
        repo.ui.pushbuffer()
        hg.update(repo, n)
        repo.ui.popbuffer()
        replacements = [(oldctx.node(), (newnode,)),
                        (ctx.node(), (n,)),
                        (newnode, (n,)),
                        ]
        for ich in internalchanges:
            replacements.append((ich, (n,)))
        return repo[n], replacements

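# Illustrative note (editorial, not part of the module): finishfold() reports
# both folded changesets, the temporary fold commit, and any intermediate
# commits as replaced by the single collapsed node n; state.replacements
# later drives bookmark moves and node cleanup via processreplacement().
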
@action(['base', 'b'],
        _('checkout changeset and apply further changesets from there'))
class base(histeditaction):

    def run(self):
        if self.repo['.'].node() != self.node:
            mergemod.update(self.repo, self.node, False, True)
            # (branchmerge=False, force=True)
        return self.continueclean()

    def continuedirty(self):
        abortdirty()

    def continueclean(self):
        basectx = self.repo['.']
        return basectx, []

    def _verifynodeconstraints(self, prev, expected, seen):
        # base can only be used with a node not in the edited set
        if self.node in expected:
            msg = _('%s "%s" changeset was an edited list candidate')
            raise error.ParseError(
                msg % (self.verb, node.short(self.node)),
                hint=_('base must only use unlisted changesets'))

@action(['_multifold'],
        _(
    """fold subclass used for when multiple folds happen in a row

    We only want to fire the editor for the folded message once when
    (say) four changes are folded down into a single change. This is
    similar to rollup, but we should preserve both messages so that
    when the last fold operation runs we can show the user all the
    commit messages in their editor.
    """),
        internal=True)
class _multifold(fold):
    def skipprompt(self):
        return True

859 @action(["roll", "r"],
862 @action(["roll", "r"],
860 _("like fold, but discard this commit's description and date"))
863 _("like fold, but discard this commit's description and date"))
861 class rollup(fold):
864 class rollup(fold):
862 def mergedescs(self):
865 def mergedescs(self):
863 return False
866 return False
864
867
865 def skipprompt(self):
868 def skipprompt(self):
866 return True
869 return True
867
870
868 def firstdate(self):
871 def firstdate(self):
869 return True
872 return True
870
873
871 @action(["drop", "d"],
874 @action(["drop", "d"],
872 _('remove commit from history'))
875 _('remove commit from history'))
873 class drop(histeditaction):
876 class drop(histeditaction):
874 def run(self):
877 def run(self):
875 parentctx = self.repo[self.state.parentctxnode]
878 parentctx = self.repo[self.state.parentctxnode]
876 return parentctx, [(self.node, tuple())]
879 return parentctx, [(self.node, tuple())]
877
880
878 @action(["mess", "m"],
881 @action(["mess", "m"],
879 _('edit commit message without changing commit content'),
882 _('edit commit message without changing commit content'),
880 priority=True)
883 priority=True)
881 class message(histeditaction):
884 class message(histeditaction):
882 def commiteditor(self):
885 def commiteditor(self):
883 return cmdutil.getcommiteditor(edit=True, editform='histedit.mess')
886 return cmdutil.getcommiteditor(edit=True, editform='histedit.mess')
884
887
def findoutgoing(ui, repo, remote=None, force=False, opts=None):
    """utility function to find the first outgoing changeset

    Used by initialization code"""
    if opts is None:
        opts = {}
    dest = ui.expandpath(remote or 'default-push', remote or 'default')
    dest, revs = hg.parseurl(dest, None)[:2]
    ui.status(_('comparing with %s\n') % util.hidepassword(dest))

    revs, checkout = hg.addbranchrevs(repo, repo, revs, None)
    other = hg.peer(repo, opts, dest)

    if revs:
        revs = [repo.lookup(rev) for rev in revs]

    outgoing = discovery.findcommonoutgoing(repo, other, revs, force=force)
    if not outgoing.missing:
        raise error.Abort(_('no outgoing ancestors'))
    roots = list(repo.revs("roots(%ln)", outgoing.missing))
    if len(roots) > 1:
        msg = _('there are ambiguous outgoing revisions')
        hint = _("see 'hg help histedit' for more detail")
        raise error.Abort(msg, hint=hint)
    return repo.lookup(roots[0])

@command('histedit',
    [('', 'commands', '',
      _('read history edits from the specified file'), _('FILE')),
     ('c', 'continue', False, _('continue an edit already in progress')),
     ('', 'edit-plan', False, _('edit remaining actions list')),
     ('k', 'keep', False,
      _("don't strip old nodes after edit is complete")),
     ('', 'abort', False, _('abort an edit in progress')),
     ('o', 'outgoing', False, _('changesets not found in destination')),
     ('f', 'force', False,
      _('force outgoing even for unrelated repositories')),
     ('r', 'rev', [], _('first revision to be edited'), _('REV'))] +
    cmdutil.formatteropts,
    _("[OPTIONS] ([ANCESTOR] | --outgoing [URL])"))
def histedit(ui, repo, *freeargs, **opts):
    """interactively edit changeset history

    This command lets you edit a linear series of changesets (up to
    and including the working directory, which should be clean).
    You can:

    - `pick` to [re]order a changeset

    - `drop` to omit a changeset

    - `mess` to reword the changeset commit message

    - `fold` to combine it with the preceding changeset (using the later date)

    - `roll` like fold, but discarding this commit's description and date

    - `edit` to edit this changeset (preserving date)

    - `base` to checkout changeset and apply further changesets from there

    There are a number of ways to select the root changeset:

    - Specify ANCESTOR directly

    - Use --outgoing -- it will be the first linear changeset not
      included in destination. (See :hg:`help config.paths.default-push`)

    - Otherwise, the value from the "histedit.defaultrev" config option
      is used as a revset to select the base revision when ANCESTOR is not
      specified. The first revision returned by the revset is used. By
      default, this selects the editable history that is unique to the
      ancestry of the working directory.

    .. container:: verbose

       If you use --outgoing, this command will abort if there are ambiguous
       outgoing revisions. For example, if there are multiple branches
       containing outgoing revisions.

       Use "min(outgoing() and ::.)" or similar revset specification
       instead of --outgoing to specify edit target revision exactly in
       such ambiguous situation. See :hg:`help revsets` for detail about
       selecting revisions.

    .. container:: verbose

       Examples:

         - A number of changes have been made.
           Revision 3 is no longer needed.

           Start history editing from revision 3::

             hg histedit -r 3

           An editor opens, containing the list of revisions,
           with specific actions specified::

             pick 5339bf82f0ca 3 Zworgle the foobar
             pick 8ef592ce7cc4 4 Bedazzle the zerlog
             pick 0a9639fcda9d 5 Morgify the cromulancy

           Additional information about the possible actions
           to take appears below the list of revisions.

           To remove revision 3 from the history,
           its action (at the beginning of the relevant line)
           is changed to 'drop'::

             drop 5339bf82f0ca 3 Zworgle the foobar
             pick 8ef592ce7cc4 4 Bedazzle the zerlog
             pick 0a9639fcda9d 5 Morgify the cromulancy

         - A number of changes have been made.
           Revisions 2 and 4 need to be swapped.

           Start history editing from revision 2::

             hg histedit -r 2

           An editor opens, containing the list of revisions,
           with specific actions specified::

             pick 252a1af424ad 2 Blorb a morgwazzle
             pick 5339bf82f0ca 3 Zworgle the foobar
             pick 8ef592ce7cc4 4 Bedazzle the zerlog

           To swap revisions 2 and 4, their lines are swapped
           in the editor::

             pick 8ef592ce7cc4 4 Bedazzle the zerlog
             pick 5339bf82f0ca 3 Zworgle the foobar
             pick 252a1af424ad 2 Blorb a morgwazzle

    Returns 0 on success, 1 if user intervention is required (not only
    for intentional "edit" command, but also for resolving unexpected
    conflicts).
    """
    state = histeditstate(repo)
    try:
        state.wlock = repo.wlock()
        state.lock = repo.lock()
        _histedit(ui, repo, state, *freeargs, **opts)
    finally:
        release(state.lock, state.wlock)

goalcontinue = 'continue'
goalabort = 'abort'
goaleditplan = 'edit-plan'
goalnew = 'new'

def _getgoal(opts):
    if opts.get('continue'):
        return goalcontinue
    if opts.get('abort'):
        return goalabort
    if opts.get('edit_plan'):
        return goaleditplan
    return goalnew

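# Illustrative sketch (editorial, not part of the module): _getgoal() resolves
# the mode flags in priority order, e.g.:
#
#     _getgoal({'continue': True})   -> 'continue'
#     _getgoal({'abort': True})      -> 'abort'
#     _getgoal({'edit_plan': True})  -> 'edit-plan'
#     _getgoal({})                   -> 'new'
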
def _readfile(ui, path):
    if path == '-':
        with ui.timeblockedsection('histedit'):
            return ui.fin.read()
    else:
        with open(path, 'rb') as f:
            return f.read()

def _validateargs(ui, repo, state, freeargs, opts, goal, rules, revs):
    # TODO only abort if we try to histedit mq patches, not just
    # blanket if mq patches are applied somewhere
    mq = getattr(repo, 'mq', None)
    if mq and mq.applied:
        raise error.Abort(_('source has mq patches applied'))

    # basic argument incompatibility processing
    outg = opts.get('outgoing')
    editplan = opts.get('edit_plan')
    abort = opts.get('abort')
    force = opts.get('force')
    if force and not outg:
        raise error.Abort(_('--force only allowed with --outgoing'))
    if goal == 'continue':
        if any((outg, abort, revs, freeargs, rules, editplan)):
            raise error.Abort(_('no arguments allowed with --continue'))
    elif goal == 'abort':
        if any((outg, revs, freeargs, rules, editplan)):
            raise error.Abort(_('no arguments allowed with --abort'))
    elif goal == 'edit-plan':
        if any((outg, revs, freeargs)):
            raise error.Abort(_('only --commands argument allowed with '
                                '--edit-plan'))
    else:
        if os.path.exists(os.path.join(repo.path, 'histedit-state')):
            raise error.Abort(_('history edit already in progress, try '
                                '--continue or --abort'))
        if outg:
            if revs:
                raise error.Abort(_('no revisions allowed with --outgoing'))
            if len(freeargs) > 1:
                raise error.Abort(
                    _('only one repo argument allowed with --outgoing'))
        else:
            revs.extend(freeargs)
            if len(revs) == 0:
                defaultrev = destutil.desthistedit(ui, repo)
                if defaultrev is not None:
                    revs.append(defaultrev)

            if len(revs) != 1:
                raise error.Abort(
                    _('histedit requires exactly one ancestor revision'))

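# Illustrative summary (editorial, not part of the module) of the rules
# enforced above:
#
#   --continue   : no other arguments allowed
#   --abort      : no other arguments allowed
#   --edit-plan  : only --commands allowed
#   new session  : refused while a histedit-state file exists; requires
#                  exactly one ancestor revision, taken from ANCESTOR/--rev,
#                  --outgoing, or the histedit.defaultrev revset
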
def _histedit(ui, repo, state, *freeargs, **opts):
    opts = pycompat.byteskwargs(opts)
    fm = ui.formatter('histedit', opts)
    fm.startitem()
    goal = _getgoal(opts)
    revs = opts.get('rev', [])
    rules = opts.get('commands', '')
    state.keep = opts.get('keep', False)

    _validateargs(ui, repo, state, freeargs, opts, goal, rules, revs)

    # rebuild state
    if goal == goalcontinue:
        state.read()
        state = bootstrapcontinue(ui, state, opts)
    elif goal == goaleditplan:
        _edithisteditplan(ui, repo, state, rules)
        return
    elif goal == goalabort:
        _aborthistedit(ui, repo, state)
        return
    else:
        # goal == goalnew
        _newhistedit(ui, repo, state, revs, freeargs, opts)

    _continuehistedit(ui, repo, state)
    _finishhistedit(ui, repo, state, fm)
    fm.end()

def _continuehistedit(ui, repo, state):
    """This function runs after either:
    - bootstrapcontinue (if the goal is 'continue')
    - _newhistedit (if the goal is 'new')
    """
    # preprocess rules so that we can hide inner folds from the user
    # and only show one editor
    actions = state.actions[:]
    for idx, (action, nextact) in enumerate(
            zip(actions, actions[1:] + [None])):
        if action.verb == 'fold' and nextact and nextact.verb == 'fold':
            state.actions[idx].__class__ = _multifold

    # Force an initial state file write, so the user can run --abort/continue
    # even if there's an exception before the first transaction serialize.
    state.write()

    total = len(state.actions)
    pos = 0
    tr = None
    # Don't use singletransaction by default since it rolls the entire
    # transaction back if an unexpected exception happens (like a
    # pretxncommit hook throws, or the user aborts the commit msg editor).
    if ui.configbool("histedit", "singletransaction"):
        # Don't use a 'with' for the transaction, since actions may close
        # and reopen a transaction. For example, if the action executes an
        # external process it may choose to commit the transaction first.
        tr = repo.transaction('histedit')
    with util.acceptintervention(tr):
        while state.actions:
            state.write(tr=tr)
            actobj = state.actions[0]
            pos += 1
            ui.progress(_("editing"), pos, actobj.torule(),
                        _('changes'), total)
            ui.debug('histedit: processing %s %s\n' % (actobj.verb,
                                                       actobj.torule()))
            parentctx, replacement_ = actobj.run()
            state.parentctxnode = parentctx.node()
            state.replacements.extend(replacement_)
            state.actions.pop(0)

    state.write()
    ui.progress(_("editing"), None)

def _finishhistedit(ui, repo, state, fm):
    """This action runs when histedit is finishing its session"""
    repo.ui.pushbuffer()
    hg.update(repo, state.parentctxnode, quietempty=True)
    repo.ui.popbuffer()

    mapping, tmpnodes, created, ntm = processreplacement(state)
    if mapping:
        for prec, succs in mapping.iteritems():
            if not succs:
                ui.debug('histedit: %s is dropped\n' % node.short(prec))
            else:
                ui.debug('histedit: %s is replaced by %s\n' % (
                    node.short(prec), node.short(succs[0])))
                if len(succs) > 1:
                    m = 'histedit: %s'
                    for n in succs[1:]:
                        ui.debug(m % node.short(n))

    if not state.keep:
        if mapping:
            movetopmostbookmarks(repo, state.topmost, ntm)
            # TODO update mq state
    else:
        mapping = {}

    for n in tmpnodes:
        mapping[n] = ()

    # remove entries about unknown nodes
    nodemap = repo.unfiltered().changelog.nodemap
    mapping = {k: v for k, v in mapping.items()
               if k in nodemap and all(n in nodemap for n in v)}
    scmutil.cleanupnodes(repo, mapping, 'histedit')
    hf = fm.hexfunc
    fl = fm.formatlist
    fd = fm.formatdict
    nodechanges = fd({hf(oldn): fl([hf(n) for n in newn], name='node')
                      for oldn, newn in mapping.iteritems()},
                     key="oldnode", value="newnodes")
    fm.data(nodechanges=nodechanges)

    state.clear()
    if os.path.exists(repo.sjoin('undo')):
        os.unlink(repo.sjoin('undo'))
    if repo.vfs.exists('histedit-last-edit.txt'):
        repo.vfs.unlink('histedit-last-edit.txt')

def _aborthistedit(ui, repo, state):
    try:
        state.read()
        __, leafs, tmpnodes, __ = processreplacement(state)
        ui.debug('restore wc to old parent %s\n'
                 % node.short(state.topmost))

        # Recover our old commits if necessary
        if state.topmost not in repo and state.backupfile:
            backupfile = repo.vfs.join(state.backupfile)
            f = hg.openpath(ui, backupfile)
            gen = exchange.readbundle(ui, f, backupfile)
            with repo.transaction('histedit.abort') as tr:
                bundle2.applybundle(repo, gen, tr, source='histedit',
                                    url='bundle:' + backupfile)

            os.remove(backupfile)

        # check whether we should update away
        if repo.unfiltered().revs('parents() and (%n or %ln::)',
                                  state.parentctxnode, leafs | tmpnodes):
            hg.clean(repo, state.topmost, show_stats=True, quietempty=True)
        cleanupnode(ui, repo, tmpnodes)
        cleanupnode(ui, repo, leafs)
    except Exception:
        if state.inprogress():
            ui.warn(_('warning: encountered an exception during histedit '
                      '--abort; the repository may not have been completely '
                      'cleaned up\n'))
        raise
    finally:
        state.clear()

def _edithisteditplan(ui, repo, state, rules):
    state.read()
    if not rules:
        comment = geteditcomment(ui,
                                 node.short(state.parentctxnode),
                                 node.short(state.topmost))
        rules = ruleeditor(repo, ui, state.actions, comment)
    else:
        rules = _readfile(ui, rules)
    actions = parserules(rules, state)
    ctxs = [repo[act.node]
            for act in state.actions if act.node]
    warnverifyactions(ui, repo, actions, state, ctxs)
    state.actions = actions
    state.write()

def _newhistedit(ui, repo, state, revs, freeargs, opts):
    outg = opts.get('outgoing')
    rules = opts.get('commands', '')
    force = opts.get('force')

    cmdutil.checkunfinished(repo)
    cmdutil.bailifchanged(repo)

    topmost, empty = repo.dirstate.parents()
    if outg:
        if freeargs:
            remote = freeargs[0]
        else:
            remote = None
        root = findoutgoing(ui, repo, remote, force, opts)
    else:
        rr = list(repo.set('roots(%ld)', scmutil.revrange(repo, revs)))
        if len(rr) != 1:
            raise error.Abort(_('The specified revisions must have '
                                'exactly one common root'))
        root = rr[0].node()

    revs = between(repo, root, topmost, state.keep)
    if not revs:
        raise error.Abort(_('%s is not an ancestor of working directory') %
                          node.short(root))

    ctxs = [repo[r] for r in revs]
    if not rules:
        comment = geteditcomment(ui, node.short(root), node.short(topmost))
        actions = [pick(state, r) for r in revs]
        rules = ruleeditor(repo, ui, actions, comment)
    else:
        rules = _readfile(ui, rules)
    actions = parserules(rules, state)
    warnverifyactions(ui, repo, actions, state, ctxs)

    parentctxnode = repo[root].parents()[0].node()

    state.parentctxnode = parentctxnode
    state.actions = actions
    state.topmost = topmost
    state.replacements = []

    ui.log("histedit", "%d actions to histedit", len(actions),
           histedit_num_actions=len(actions))

    # Create a backup so we can always abort completely.
    backupfile = None
    if not obsolete.isenabled(repo, obsolete.createmarkersopt):
        backupfile = repair.backupbundle(repo, [parentctxnode],
                                         [topmost], root, 'histedit')
    state.backupfile = backupfile

def _getsummary(ctx):
    # a common pattern is to extract the summary but default to the empty
    # string
    summary = ctx.description() or ''
    if summary:
        summary = summary.splitlines()[0]
    return summary

def bootstrapcontinue(ui, state, opts):
    repo = state.repo

    ms = mergemod.mergestate.read(repo)
    mergeutil.checkunresolved(ms)

    if state.actions:
        actobj = state.actions.pop(0)

        if _isdirtywc(repo):
            actobj.continuedirty()
            if _isdirtywc(repo):
                abortdirty()

        parentctx, replacements = actobj.continueclean()

        state.parentctxnode = parentctx.node()
        state.replacements.extend(replacements)

    return state

def between(repo, old, new, keep):
    """select and validate the set of revisions to edit

    When keep is false, the specified set can't have children."""
    revs = repo.revs('%n::%n', old, new)
    if revs and not keep:
        if (not obsolete.isenabled(repo, obsolete.allowunstableopt) and
            repo.revs('(%ld::) - (%ld)', revs, revs)):
            raise error.Abort(_('can only histedit a changeset together '
                                'with all its descendants'))
        if repo.revs('(%ld) and merge()', revs):
            raise error.Abort(_('cannot edit history that contains merges'))
        root = repo[revs.first()]  # list is already sorted by repo.revs()
        if not root.mutable():
            raise error.Abort(_('cannot edit public changeset: %s') % root,
                              hint=_("see 'hg help phases' for details"))
    return pycompat.maplist(repo.changelog.node, revs)

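# Illustrative sketch (hypothetical usage, not part of the module):
#
#     nodes = between(repo, repo[3].node(), repo['.'].node(), keep=False)
#
# returns the nodes of '3::.' oldest-first, aborting if the range would
# orphan descendants (unless the allowunstable option is enabled), contains
# a merge, or starts at a public changeset.
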
1371 def ruleeditor(repo, ui, actions, editcomment=""):
1374 def ruleeditor(repo, ui, actions, editcomment=""):
1372 """open an editor to edit rules
1375 """open an editor to edit rules
1373
1376
1374 rules are in the format [ [act, ctx], ...] like in state.rules
1377 rules are in the format [ [act, ctx], ...] like in state.rules
1375 """
1378 """
1376 if repo.ui.configbool("experimental", "histedit.autoverb"):
1379 if repo.ui.configbool("experimental", "histedit.autoverb"):
1377 newact = util.sortdict()
1380 newact = util.sortdict()
1378 for act in actions:
1381 for act in actions:
1379 ctx = repo[act.node]
1382 ctx = repo[act.node]
1380 summary = _getsummary(ctx)
1383 summary = _getsummary(ctx)
1381 fword = summary.split(' ', 1)[0].lower()
1384 fword = summary.split(' ', 1)[0].lower()
1382 added = False
1385 added = False
1383
1386
1384 # if it doesn't end with the special character '!' just skip this
1387 # if it doesn't end with the special character '!' just skip this
1385 if fword.endswith('!'):
1388 if fword.endswith('!'):
1386 fword = fword[:-1]
1389 fword = fword[:-1]
1387 if fword in primaryactions | secondaryactions | tertiaryactions:
1390 if fword in primaryactions | secondaryactions | tertiaryactions:
1388 act.verb = fword
1391 act.verb = fword
1389 # get the target summary
1392 # get the target summary
1390 tsum = summary[len(fword) + 1:].lstrip()
1393 tsum = summary[len(fword) + 1:].lstrip()
1391 # safe but slow: reverse iterate over the actions so we
1394 # safe but slow: reverse iterate over the actions so we
1392 # don't clash on two commits having the same summary
1395 # don't clash on two commits having the same summary
1393 for na, l in reversed(list(newact.iteritems())):
1396 for na, l in reversed(list(newact.iteritems())):
1394 actx = repo[na.node]
1397 actx = repo[na.node]
1395 asum = _getsummary(actx)
1398 asum = _getsummary(actx)
1396 if asum == tsum:
1399 if asum == tsum:
1397 added = True
1400 added = True
1398 l.append(act)
1401 l.append(act)
1399 break
1402 break
1400
1403
1401 if not added:
1404 if not added:
1402 newact[act] = []
1405 newact[act] = []
1403
1406
1404 # copy over and flatten the new list
1407 # copy over and flatten the new list
1405 actions = []
1408 actions = []
1406 for na, l in newact.iteritems():
1409 for na, l in newact.iteritems():
1407 actions.append(na)
1410 actions.append(na)
1408 actions += l
1411 actions += l
1409
1412
1410 rules = '\n'.join([act.torule() for act in actions])
1413 rules = '\n'.join([act.torule() for act in actions])
1411 rules += '\n\n'
1414 rules += '\n\n'
1412 rules += editcomment
1415 rules += editcomment
1413 rules = ui.edit(rules, ui.username(), {'prefix': 'histedit'},
1416 rules = ui.edit(rules, ui.username(), {'prefix': 'histedit'},
1414 repopath=repo.path, action='histedit')
1417 repopath=repo.path, action='histedit')
1415
1418
1416 # Save edit rules in .hg/histedit-last-edit.txt in case
1419 # Save edit rules in .hg/histedit-last-edit.txt in case
1417 # the user needs to ask for help after something
1420 # the user needs to ask for help after something
1418 # surprising happens.
1421 # surprising happens.
1419 with repo.vfs('histedit-last-edit.txt', 'wb') as f:
1422 with repo.vfs('histedit-last-edit.txt', 'wb') as f:
1420 f.write(rules)
1423 f.write(rules)
1421
1424
1422 return rules
1425 return rules
1423
1426
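The autoverb branch above only rewrites a rule's verb when the first word of the summary is a known action ending in '!'. A standalone sketch of that parsing, with the verb set abbreviated (the real sets are primaryactions, secondaryactions and tertiaryactions defined earlier in this file):

    knownverbs = {'pick', 'drop', 'edit', 'fold', 'roll', 'mess', 'base'}

    def autoverb(summary, default='pick'):
        # first word of the summary, e.g. 'roll!' in 'roll! fix typo'
        fword = summary.split(' ', 1)[0].lower()
        if fword.endswith('!') and fword[:-1] in knownverbs:
            return fword[:-1]
        return default

    assert autoverb('roll! fix typo') == 'roll'
    assert autoverb('fix typo') == 'pick'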
1424 def parserules(rules, state):
1427 def parserules(rules, state):
1425 """Read the histedit rules string and return list of action objects """
1428 """Read the histedit rules string and return list of action objects """
1426 rules = [l for l in (r.strip() for r in rules.splitlines())
1429 rules = [l for l in (r.strip() for r in rules.splitlines())
1427 if l and not l.startswith('#')]
1430 if l and not l.startswith('#')]
1428 actions = []
1431 actions = []
1429 for r in rules:
1432 for r in rules:
1430 if ' ' not in r:
1433 if ' ' not in r:
1431 raise error.ParseError(_('malformed line "%s"') % r)
1434 raise error.ParseError(_('malformed line "%s"') % r)
1432 verb, rest = r.split(' ', 1)
1435 verb, rest = r.split(' ', 1)
1433
1436
1434 if verb not in actiontable:
1437 if verb not in actiontable:
1435 raise error.ParseError(_('unknown action "%s"') % verb)
1438 raise error.ParseError(_('unknown action "%s"') % verb)
1436
1439
1437 action = actiontable[verb].fromrule(state, rest)
1440 action = actiontable[verb].fromrule(state, rest)
1438 actions.append(action)
1441 actions.append(action)
1439 return actions
1442 return actions
1440
1443
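Each surviving line handed to parserules() must therefore contain at least one space: the verb, then everything else (typically a short hash plus the old summary). A self-contained walk over a hypothetical rules buffer (the hashes are invented):

    rules = """\
    # lines starting with '#' are ignored
    pick 6f2f1c1c39a3 1 add feature
    drop b5d61d4b2a3c 2 remove debug cruft
    """

    actions = []
    for line in (r.strip() for r in rules.splitlines()):
        if not line or line.startswith('#'):
            continue                       # blanks and comments are skipped
        verb, rest = line.split(' ', 1)    # malformed lines would fail here
        actions.append((verb, rest))

    print(actions)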
1441 def warnverifyactions(ui, repo, actions, state, ctxs):
1444 def warnverifyactions(ui, repo, actions, state, ctxs):
1442 try:
1445 try:
1443 verifyactions(actions, state, ctxs)
1446 verifyactions(actions, state, ctxs)
1444 except error.ParseError:
1447 except error.ParseError:
1445 if repo.vfs.exists('histedit-last-edit.txt'):
1448 if repo.vfs.exists('histedit-last-edit.txt'):
1446 ui.warn(_('warning: histedit rules saved '
1449 ui.warn(_('warning: histedit rules saved '
1447 'to: .hg/histedit-last-edit.txt\n'))
1450 'to: .hg/histedit-last-edit.txt\n'))
1448 raise
1451 raise
1449
1452
1450 def verifyactions(actions, state, ctxs):
1453 def verifyactions(actions, state, ctxs):
1451 """Verify that there exists exactly one action per given changeset and
1454 """Verify that there exists exactly one action per given changeset and
1452 other constraints.
1455 other constraints.
1453
1456
1454 Will abort if there are too many or too few rules, a malformed rule,
1457 Will abort if there are too many or too few rules, a malformed rule,
1455 or a rule on a changeset outside of the user-given range.
1458 or a rule on a changeset outside of the user-given range.
1456 """
1459 """
1457 expected = set(c.node() for c in ctxs)
1460 expected = set(c.node() for c in ctxs)
1458 seen = set()
1461 seen = set()
1459 prev = None
1462 prev = None
1460
1463
1461 if actions and actions[0].verb in ['roll', 'fold']:
1464 if actions and actions[0].verb in ['roll', 'fold']:
1462 raise error.ParseError(_('first changeset cannot use verb "%s"') %
1465 raise error.ParseError(_('first changeset cannot use verb "%s"') %
1463 actions[0].verb)
1466 actions[0].verb)
1464
1467
1465 for action in actions:
1468 for action in actions:
1466 action.verify(prev, expected, seen)
1469 action.verify(prev, expected, seen)
1467 prev = action
1470 prev = action
1468 if action.node is not None:
1471 if action.node is not None:
1469 seen.add(action.node)
1472 seen.add(action.node)
1470 missing = sorted(expected - seen) # sort to stabilize output
1473 missing = sorted(expected - seen) # sort to stabilize output
1471
1474
1472 if state.repo.ui.configbool('histedit', 'dropmissing'):
1475 if state.repo.ui.configbool('histedit', 'dropmissing'):
1473 if len(actions) == 0:
1476 if len(actions) == 0:
1474 raise error.ParseError(_('no rules provided'),
1477 raise error.ParseError(_('no rules provided'),
1475 hint=_('use strip extension to remove commits'))
1478 hint=_('use strip extension to remove commits'))
1476
1479
1477 drops = [drop(state, n) for n in missing]
1480 drops = [drop(state, n) for n in missing]
1478 # put them at the beginning so they execute immediately and
1481 # put them at the beginning so they execute immediately and
1479 # don't show up in the edit-plan in the future
1482 # don't show up in the edit-plan in the future
1480 actions[:0] = drops
1483 actions[:0] = drops
1481 elif missing:
1484 elif missing:
1482 raise error.ParseError(_('missing rules for changeset %s') %
1485 raise error.ParseError(_('missing rules for changeset %s') %
1483 node.short(missing[0]),
1486 node.short(missing[0]),
1484 hint=_('use "drop %s" to discard, see also: '
1487 hint=_('use "drop %s" to discard, see also: '
1485 "'hg help -e histedit.config'")
1488 "'hg help -e histedit.config'")
1486 % node.short(missing[0]))
1489 % node.short(missing[0]))
1487
1490
1488 def adjustreplacementsfrommarkers(repo, oldreplacements):
1491 def adjustreplacementsfrommarkers(repo, oldreplacements):
1489 """Adjust replacements from obsolescence markers
1492 """Adjust replacements from obsolescence markers
1490
1493
1491 The replacements structure is originally generated based on
1494 The replacements structure is originally generated based on
1492 histedit's state and does not account for changes that are
1495 histedit's state and does not account for changes that are
1493 not recorded there. This function fixes that by adding
1496 not recorded there. This function fixes that by adding
1494 data read from obsolescence markers"""
1497 data read from obsolescence markers"""
1495 if not obsolete.isenabled(repo, obsolete.createmarkersopt):
1498 if not obsolete.isenabled(repo, obsolete.createmarkersopt):
1496 return oldreplacements
1499 return oldreplacements
1497
1500
1498 unfi = repo.unfiltered()
1501 unfi = repo.unfiltered()
1499 nm = unfi.changelog.nodemap
1502 nm = unfi.changelog.nodemap
1500 obsstore = repo.obsstore
1503 obsstore = repo.obsstore
1501 newreplacements = list(oldreplacements)
1504 newreplacements = list(oldreplacements)
1502 oldsuccs = [r[1] for r in oldreplacements]
1505 oldsuccs = [r[1] for r in oldreplacements]
1503 # successors that have already been added to succstocheck once
1506 # successors that have already been added to succstocheck once
1504 seensuccs = set().union(*oldsuccs) # create a set from an iterable of tuples
1507 seensuccs = set().union(*oldsuccs) # create a set from an iterable of tuples
1505 succstocheck = list(seensuccs)
1508 succstocheck = list(seensuccs)
1506 while succstocheck:
1509 while succstocheck:
1507 n = succstocheck.pop()
1510 n = succstocheck.pop()
1508 missing = nm.get(n) is None
1511 missing = nm.get(n) is None
1509 markers = obsstore.successors.get(n, ())
1512 markers = obsstore.successors.get(n, ())
1510 if missing and not markers:
1513 if missing and not markers:
1511 # dead end, mark it as such
1514 # dead end, mark it as such
1512 newreplacements.append((n, ()))
1515 newreplacements.append((n, ()))
1513 for marker in markers:
1516 for marker in markers:
1514 nsuccs = marker[1]
1517 nsuccs = marker[1]
1515 newreplacements.append((n, nsuccs))
1518 newreplacements.append((n, nsuccs))
1516 for nsucc in nsuccs:
1519 for nsucc in nsuccs:
1517 if nsucc not in seensuccs:
1520 if nsucc not in seensuccs:
1518 seensuccs.add(nsucc)
1521 seensuccs.add(nsucc)
1519 succstocheck.append(nsucc)
1522 succstocheck.append(nsucc)
1520
1523
1521 return newreplacements
1524 return newreplacements
1522
1525
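The walk above expands successors transitively: any successor that was itself obsoleted is pushed back onto the work list until only live (or missing) nodes remain. The same traversal over plain dictionaries, with single letters standing in for binary node ids (an illustration, not the extension's API):

    markers = {'a': [('b',)], 'b': [('c', 'd')]}   # node -> successor tuples
    replacements = [('x', ('a',))]                 # histedit recorded x -> a

    newreplacements = list(replacements)
    seen = {'a'}
    tocheck = ['a']
    while tocheck:
        n = tocheck.pop()
        for succs in markers.get(n, []):
            newreplacements.append((n, succs))
            for s in succs:
                if s not in seen:
                    seen.add(s)
                    tocheck.append(s)

    print(newreplacements)   # [('x', ('a',)), ('a', ('b',)), ('b', ('c', 'd'))]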
1523 def processreplacement(state):
1526 def processreplacement(state):
1524 """process the list of replacements to return
1527 """process the list of replacements to return
1525
1528
1526 1) the final mapping between original and created nodes
1529 1) the final mapping between original and created nodes
1527 2) the list of temporary nodes created by histedit
1530 2) the list of temporary nodes created by histedit
1528 3) the list of new commits created by histedit"""
1531 3) the list of new commits created by histedit"""
1529 replacements = adjustreplacementsfrommarkers(state.repo, state.replacements)
1532 replacements = adjustreplacementsfrommarkers(state.repo, state.replacements)
1530 allsuccs = set()
1533 allsuccs = set()
1531 replaced = set()
1534 replaced = set()
1532 fullmapping = {}
1535 fullmapping = {}
1533 # initialize basic set
1536 # initialize basic set
1534 # fullmapping records all operations recorded in replacements
1537 # fullmapping records all operations recorded in replacements
1535 for rep in replacements:
1538 for rep in replacements:
1536 allsuccs.update(rep[1])
1539 allsuccs.update(rep[1])
1537 replaced.add(rep[0])
1540 replaced.add(rep[0])
1538 fullmapping.setdefault(rep[0], set()).update(rep[1])
1541 fullmapping.setdefault(rep[0], set()).update(rep[1])
1539 new = allsuccs - replaced
1542 new = allsuccs - replaced
1540 tmpnodes = allsuccs & replaced
1543 tmpnodes = allsuccs & replaced
1541 # Reduce the content of fullmapping into a direct relation between original
1544 # Reduce the content of fullmapping into a direct relation between original
1542 # nodes and the final nodes created during history editing.
1545 # nodes and the final nodes created during history editing.
1543 # Dropped changesets are replaced by an empty list.
1546 # Dropped changesets are replaced by an empty list.
1544 toproceed = set(fullmapping)
1547 toproceed = set(fullmapping)
1545 final = {}
1548 final = {}
1546 while toproceed:
1549 while toproceed:
1547 for x in list(toproceed):
1550 for x in list(toproceed):
1548 succs = fullmapping[x]
1551 succs = fullmapping[x]
1549 for s in list(succs):
1552 for s in list(succs):
1550 if s in toproceed:
1553 if s in toproceed:
1551 # non final node with unknown closure
1554 # non final node with unknown closure
1552 # We can't process this now
1555 # We can't process this now
1553 break
1556 break
1554 elif s in final:
1557 elif s in final:
1555 # non final node, replace with closure
1558 # non final node, replace with closure
1556 succs.remove(s)
1559 succs.remove(s)
1557 succs.update(final[s])
1560 succs.update(final[s])
1558 else:
1561 else:
1559 final[x] = succs
1562 final[x] = succs
1560 toproceed.remove(x)
1563 toproceed.remove(x)
1561 # remove tmpnodes from final mapping
1564 # remove tmpnodes from final mapping
1562 for n in tmpnodes:
1565 for n in tmpnodes:
1563 del final[n]
1566 del final[n]
1564 # we expect all changes involved in final to exist in the repo
1567 # we expect all changes involved in final to exist in the repo
1565 # turn `final` into list (topologically sorted)
1568 # turn `final` into list (topologically sorted)
1566 nm = state.repo.changelog.nodemap
1569 nm = state.repo.changelog.nodemap
1567 for prec, succs in final.items():
1570 for prec, succs in final.items():
1568 final[prec] = sorted(succs, key=nm.get)
1571 final[prec] = sorted(succs, key=nm.get)
1569
1572
1570 # compute the topmost element (necessary for bookmarks)
1573 # compute the topmost element (necessary for bookmarks)
1571 if new:
1574 if new:
1572 newtopmost = sorted(new, key=state.repo.changelog.rev)[-1]
1575 newtopmost = sorted(new, key=state.repo.changelog.rev)[-1]
1573 elif not final:
1576 elif not final:
1574 # Nothing rewritten at all. We won't need `newtopmost`;
1577 # Nothing rewritten at all. We won't need `newtopmost`;
1575 # it is the same as `oldtopmost` and `processreplacement` knows it
1578 # it is the same as `oldtopmost` and `processreplacement` knows it
1576 newtopmost = None
1579 newtopmost = None
1577 else:
1580 else:
1578 # everybody died. The newtopmost is the parent of the root.
1581 # everybody died. The newtopmost is the parent of the root.
1579 r = state.repo.changelog.rev
1582 r = state.repo.changelog.rev
1580 newtopmost = state.repo[sorted(final, key=r)[0]].p1().node()
1583 newtopmost = state.repo[sorted(final, key=r)[0]].p1().node()
1581
1584
1582 return final, tmpnodes, new, newtopmost
1585 return final, tmpnodes, new, newtopmost
1583
1586
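The reduction loop in processreplacement() keeps substituting intermediate successors until every original node maps straight to its final replacements, then deletes the temporary nodes. In miniature, with letters for nodes: a was rewritten to b, b was rewritten again to c, so a must end up mapping to c (a minimal sketch of the same loop):

    fullmapping = {'a': {'b'}, 'b': {'c'}}
    tmpnodes = {'b'}              # successors that were themselves replaced

    toproceed = set(fullmapping)
    final = {}
    while toproceed:
        for x in list(toproceed):
            succs = set(fullmapping[x])
            for s in list(succs):
                if s in toproceed:
                    break                  # closure not known yet, retry later
                elif s in final:
                    succs.remove(s)        # splice in the known closure
                    succs.update(final[s])
            else:
                final[x] = succs
                toproceed.remove(x)

    for n in tmpnodes:
        del final[n]

    print(final)                  # {'a': {'c'}}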
1584 def movetopmostbookmarks(repo, oldtopmost, newtopmost):
1587 def movetopmostbookmarks(repo, oldtopmost, newtopmost):
1585 """Move bookmark from oldtopmost to newly created topmost
1588 """Move bookmark from oldtopmost to newly created topmost
1586
1589
1587 This is arguably a feature and we may only want that for the active
1590 This is arguably a feature and we may only want that for the active
1588 bookmark. But the behavior is kept compatible with the old version for now.
1591 bookmark. But the behavior is kept compatible with the old version for now.
1589 """
1592 """
1590 if not oldtopmost or not newtopmost:
1593 if not oldtopmost or not newtopmost:
1591 return
1594 return
1592 oldbmarks = repo.nodebookmarks(oldtopmost)
1595 oldbmarks = repo.nodebookmarks(oldtopmost)
1593 if oldbmarks:
1596 if oldbmarks:
1594 with repo.lock(), repo.transaction('histedit') as tr:
1597 with repo.lock(), repo.transaction('histedit') as tr:
1595 marks = repo._bookmarks
1598 marks = repo._bookmarks
1596 changes = []
1599 changes = []
1597 for name in oldbmarks:
1600 for name in oldbmarks:
1598 changes.append((name, newtopmost))
1601 changes.append((name, newtopmost))
1599 marks.applychanges(repo, tr, changes)
1602 marks.applychanges(repo, tr, changes)
1600
1603
1601 def cleanupnode(ui, repo, nodes):
1604 def cleanupnode(ui, repo, nodes):
1602 """strip a group of nodes from the repository
1605 """strip a group of nodes from the repository
1603
1606
1604 The set of nodes to strip may contain unknown nodes."""
1607 The set of nodes to strip may contain unknown nodes."""
1605 with repo.lock():
1608 with repo.lock():
1606 # do not let filtering get in the way of the cleanse
1609 # do not let filtering get in the way of the cleanse
1607 # we should probably get rid of obsolescence markers created during the
1610 # we should probably get rid of obsolescence markers created during the
1608 # histedit, but we currently do not have such information.
1611 # histedit, but we currently do not have such information.
1609 repo = repo.unfiltered()
1612 repo = repo.unfiltered()
1610 # Find all nodes that need to be stripped
1613 # Find all nodes that need to be stripped
1611 # (we use %lr instead of %ln to silently ignore unknown items)
1614 # (we use %lr instead of %ln to silently ignore unknown items)
1612 nm = repo.changelog.nodemap
1615 nm = repo.changelog.nodemap
1613 nodes = sorted(n for n in nodes if n in nm)
1616 nodes = sorted(n for n in nodes if n in nm)
1614 roots = [c.node() for c in repo.set("roots(%ln)", nodes)]
1617 roots = [c.node() for c in repo.set("roots(%ln)", nodes)]
1615 if roots:
1618 if roots:
1616 repair.strip(ui, repo, roots)
1619 repair.strip(ui, repo, roots)
1617
1620
1618 def stripwrapper(orig, ui, repo, nodelist, *args, **kwargs):
1621 def stripwrapper(orig, ui, repo, nodelist, *args, **kwargs):
1619 if isinstance(nodelist, str):
1622 if isinstance(nodelist, str):
1620 nodelist = [nodelist]
1623 nodelist = [nodelist]
1621 if os.path.exists(os.path.join(repo.path, 'histedit-state')):
1624 if os.path.exists(os.path.join(repo.path, 'histedit-state')):
1622 state = histeditstate(repo)
1625 state = histeditstate(repo)
1623 state.read()
1626 state.read()
1624 histedit_nodes = {action.node for action
1627 histedit_nodes = {action.node for action
1625 in state.actions if action.node}
1628 in state.actions if action.node}
1626 common_nodes = histedit_nodes & set(nodelist)
1629 common_nodes = histedit_nodes & set(nodelist)
1627 if common_nodes:
1630 if common_nodes:
1628 raise error.Abort(_("histedit in progress, can't strip %s")
1631 raise error.Abort(_("histedit in progress, can't strip %s")
1629 % ', '.join(node.short(x) for x in common_nodes))
1632 % ', '.join(node.short(x) for x in common_nodes))
1630 return orig(ui, repo, nodelist, *args, **kwargs)
1633 return orig(ui, repo, nodelist, *args, **kwargs)
1631
1634
1632 extensions.wrapfunction(repair, 'strip', stripwrapper)
1635 extensions.wrapfunction(repair, 'strip', stripwrapper)
1633
1636
1634 def summaryhook(ui, repo):
1637 def summaryhook(ui, repo):
1635 if not os.path.exists(repo.vfs.join('histedit-state')):
1638 if not os.path.exists(repo.vfs.join('histedit-state')):
1636 return
1639 return
1637 state = histeditstate(repo)
1640 state = histeditstate(repo)
1638 state.read()
1641 state.read()
1639 if state.actions:
1642 if state.actions:
1640 # i18n: column positioning for "hg summary"
1643 # i18n: column positioning for "hg summary"
1641 ui.write(_('hist: %s (histedit --continue)\n') %
1644 ui.write(_('hist: %s (histedit --continue)\n') %
1642 (ui.label(_('%d remaining'), 'histedit.remaining') %
1645 (ui.label(_('%d remaining'), 'histedit.remaining') %
1643 len(state.actions)))
1646 len(state.actions)))
1644
1647
1645 def extsetup(ui):
1648 def extsetup(ui):
1646 cmdutil.summaryhooks.add('histedit', summaryhook)
1649 cmdutil.summaryhooks.add('histedit', summaryhook)
1647 cmdutil.unfinishedstates.append(
1650 cmdutil.unfinishedstates.append(
1648 ['histedit-state', False, True, _('histedit in progress'),
1651 ['histedit-state', False, True, _('histedit in progress'),
1649 _("use 'hg histedit --continue' or 'hg histedit --abort'")])
1652 _("use 'hg histedit --continue' or 'hg histedit --abort'")])
1650 cmdutil.afterresolvedstates.append(
1653 cmdutil.afterresolvedstates.append(
1651 ['histedit-state', _('hg histedit --continue')])
1654 ['histedit-state', _('hg histedit --continue')])
@@ -1,520 +1,523
1 # journal.py
1 # journal.py
2 #
2 #
3 # Copyright 2014-2016 Facebook, Inc.
3 # Copyright 2014-2016 Facebook, Inc.
4 #
4 #
5 # This software may be used and distributed according to the terms of the
5 # This software may be used and distributed according to the terms of the
6 # GNU General Public License version 2 or any later version.
6 # GNU General Public License version 2 or any later version.
7 """track previous positions of bookmarks (EXPERIMENTAL)
7 """track previous positions of bookmarks (EXPERIMENTAL)
8
8
9 This extension adds a new command: `hg journal`, which shows you where
9 This extension adds a new command: `hg journal`, which shows you where
10 bookmarks were previously located.
10 bookmarks were previously located.
11
11
12 """
12 """
13
13
14 from __future__ import absolute_import
14 from __future__ import absolute_import
15
15
16 import collections
16 import collections
17 import errno
17 import errno
18 import os
18 import os
19 import weakref
19 import weakref
20
20
21 from mercurial.i18n import _
21 from mercurial.i18n import _
22
22
23 from mercurial import (
23 from mercurial import (
24 bookmarks,
24 bookmarks,
25 cmdutil,
25 cmdutil,
26 dispatch,
26 dispatch,
27 encoding,
27 encoding,
28 error,
28 error,
29 extensions,
29 extensions,
30 hg,
30 hg,
31 localrepo,
31 localrepo,
32 lock,
32 lock,
33 logcmdutil,
33 logcmdutil,
34 node,
34 node,
35 pycompat,
35 pycompat,
36 registrar,
36 registrar,
37 util,
37 util,
38 )
38 )
39 from mercurial.utils import dateutil
39 from mercurial.utils import (
40 dateutil,
41 stringutil,
42 )
40
43
41 cmdtable = {}
44 cmdtable = {}
42 command = registrar.command(cmdtable)
45 command = registrar.command(cmdtable)
43
46
44 # Note for extension authors: ONLY specify testedwith = 'ships-with-hg-core' for
47 # Note for extension authors: ONLY specify testedwith = 'ships-with-hg-core' for
45 # extensions which SHIP WITH MERCURIAL. Non-mainline extensions should
48 # extensions which SHIP WITH MERCURIAL. Non-mainline extensions should
46 # be specifying the version(s) of Mercurial they are tested with, or
49 # be specifying the version(s) of Mercurial they are tested with, or
47 # leave the attribute unspecified.
50 # leave the attribute unspecified.
48 testedwith = 'ships-with-hg-core'
51 testedwith = 'ships-with-hg-core'
49
52
50 # storage format version; increment when the format changes
53 # storage format version; increment when the format changes
51 storageversion = 0
54 storageversion = 0
52
55
53 # namespaces
56 # namespaces
54 bookmarktype = 'bookmark'
57 bookmarktype = 'bookmark'
55 wdirparenttype = 'wdirparent'
58 wdirparenttype = 'wdirparent'
56 # In a shared repository, what shared feature name is used
59 # In a shared repository, what shared feature name is used
57 # to indicate this namespace is shared with the source?
60 # to indicate this namespace is shared with the source?
58 sharednamespaces = {
61 sharednamespaces = {
59 bookmarktype: hg.sharedbookmarks,
62 bookmarktype: hg.sharedbookmarks,
60 }
63 }
61
64
62 # Journal recording, register hooks and storage object
65 # Journal recording, register hooks and storage object
63 def extsetup(ui):
66 def extsetup(ui):
64 extensions.wrapfunction(dispatch, 'runcommand', runcommand)
67 extensions.wrapfunction(dispatch, 'runcommand', runcommand)
65 extensions.wrapfunction(bookmarks.bmstore, '_write', recordbookmarks)
68 extensions.wrapfunction(bookmarks.bmstore, '_write', recordbookmarks)
66 extensions.wrapfilecache(
69 extensions.wrapfilecache(
67 localrepo.localrepository, 'dirstate', wrapdirstate)
70 localrepo.localrepository, 'dirstate', wrapdirstate)
68 extensions.wrapfunction(hg, 'postshare', wrappostshare)
71 extensions.wrapfunction(hg, 'postshare', wrappostshare)
69 extensions.wrapfunction(hg, 'copystore', unsharejournal)
72 extensions.wrapfunction(hg, 'copystore', unsharejournal)
70
73
71 def reposetup(ui, repo):
74 def reposetup(ui, repo):
72 if repo.local():
75 if repo.local():
73 repo.journal = journalstorage(repo)
76 repo.journal = journalstorage(repo)
74 repo._wlockfreeprefix.add('namejournal')
77 repo._wlockfreeprefix.add('namejournal')
75
78
76 dirstate, cached = localrepo.isfilecached(repo, 'dirstate')
79 dirstate, cached = localrepo.isfilecached(repo, 'dirstate')
77 if cached:
80 if cached:
78 # an already instantiated dirstate isn't yet marked as
81 # an already instantiated dirstate isn't yet marked as
79 # "journal"-ing, even though repo.dirstate() was already
82 # "journal"-ing, even though repo.dirstate() was already
80 # wrapped by our own wrapdirstate()
83 # wrapped by our own wrapdirstate()
81 _setupdirstate(repo, dirstate)
84 _setupdirstate(repo, dirstate)
82
85
83 def runcommand(orig, lui, repo, cmd, fullargs, *args):
86 def runcommand(orig, lui, repo, cmd, fullargs, *args):
84 """Track the command line options for recording in the journal"""
87 """Track the command line options for recording in the journal"""
85 journalstorage.recordcommand(*fullargs)
88 journalstorage.recordcommand(*fullargs)
86 return orig(lui, repo, cmd, fullargs, *args)
89 return orig(lui, repo, cmd, fullargs, *args)
87
90
88 def _setupdirstate(repo, dirstate):
91 def _setupdirstate(repo, dirstate):
89 dirstate.journalstorage = repo.journal
92 dirstate.journalstorage = repo.journal
90 dirstate.addparentchangecallback('journal', recorddirstateparents)
93 dirstate.addparentchangecallback('journal', recorddirstateparents)
91
94
92 # hooks to record dirstate changes
95 # hooks to record dirstate changes
93 def wrapdirstate(orig, repo):
96 def wrapdirstate(orig, repo):
94 """Make journal storage available to the dirstate object"""
97 """Make journal storage available to the dirstate object"""
95 dirstate = orig(repo)
98 dirstate = orig(repo)
96 if util.safehasattr(repo, 'journal'):
99 if util.safehasattr(repo, 'journal'):
97 _setupdirstate(repo, dirstate)
100 _setupdirstate(repo, dirstate)
98 return dirstate
101 return dirstate
99
102
100 def recorddirstateparents(dirstate, old, new):
103 def recorddirstateparents(dirstate, old, new):
101 """Records all dirstate parent changes in the journal."""
104 """Records all dirstate parent changes in the journal."""
102 old = list(old)
105 old = list(old)
103 new = list(new)
106 new = list(new)
104 if util.safehasattr(dirstate, 'journalstorage'):
107 if util.safehasattr(dirstate, 'journalstorage'):
105 # only record two hashes if there was a merge
108 # only record two hashes if there was a merge
106 oldhashes = old[:1] if old[1] == node.nullid else old
109 oldhashes = old[:1] if old[1] == node.nullid else old
107 newhashes = new[:1] if new[1] == node.nullid else new
110 newhashes = new[:1] if new[1] == node.nullid else new
108 dirstate.journalstorage.record(
111 dirstate.journalstorage.record(
109 wdirparenttype, '.', oldhashes, newhashes)
112 wdirparenttype, '.', oldhashes, newhashes)
110
113
111 # hooks to record bookmark changes (both local and remote)
114 # hooks to record bookmark changes (both local and remote)
112 def recordbookmarks(orig, store, fp):
115 def recordbookmarks(orig, store, fp):
113 """Records all bookmark changes in the journal."""
116 """Records all bookmark changes in the journal."""
114 repo = store._repo
117 repo = store._repo
115 if util.safehasattr(repo, 'journal'):
118 if util.safehasattr(repo, 'journal'):
116 oldmarks = bookmarks.bmstore(repo)
119 oldmarks = bookmarks.bmstore(repo)
117 for mark, value in store.iteritems():
120 for mark, value in store.iteritems():
118 oldvalue = oldmarks.get(mark, node.nullid)
121 oldvalue = oldmarks.get(mark, node.nullid)
119 if value != oldvalue:
122 if value != oldvalue:
120 repo.journal.record(bookmarktype, mark, oldvalue, value)
123 repo.journal.record(bookmarktype, mark, oldvalue, value)
121 return orig(store, fp)
124 return orig(store, fp)
122
125
123 # shared repository support
126 # shared repository support
124 def _readsharedfeatures(repo):
127 def _readsharedfeatures(repo):
125 """A set of shared features for this repository"""
128 """A set of shared features for this repository"""
126 try:
129 try:
127 return set(repo.vfs.read('shared').splitlines())
130 return set(repo.vfs.read('shared').splitlines())
128 except IOError as inst:
131 except IOError as inst:
129 if inst.errno != errno.ENOENT:
132 if inst.errno != errno.ENOENT:
130 raise
133 raise
131 return set()
134 return set()
132
135
133 def _mergeentriesiter(*iterables, **kwargs):
136 def _mergeentriesiter(*iterables, **kwargs):
134 """Given a set of sorted iterables, yield the next entry in merged order
137 """Given a set of sorted iterables, yield the next entry in merged order
135
138
136 Note that by default entries go from most recent to oldest.
139 Note that by default entries go from most recent to oldest.
137 """
140 """
138 order = kwargs.pop(r'order', max)
141 order = kwargs.pop(r'order', max)
139 iterables = [iter(it) for it in iterables]
142 iterables = [iter(it) for it in iterables]
140 # this tracks still active iterables; iterables are deleted as they are
143 # this tracks still active iterables; iterables are deleted as they are
141 # exhausted, which is why this is a dictionary and why each entry also
144 # exhausted, which is why this is a dictionary and why each entry also
142 # stores the key. Entries are mutable so we can store the next value each
145 # stores the key. Entries are mutable so we can store the next value each
143 # time.
146 # time.
144 iterable_map = {}
147 iterable_map = {}
145 for key, it in enumerate(iterables):
148 for key, it in enumerate(iterables):
146 try:
149 try:
147 iterable_map[key] = [next(it), key, it]
150 iterable_map[key] = [next(it), key, it]
148 except StopIteration:
151 except StopIteration:
149 # empty entry, can be ignored
152 # empty entry, can be ignored
150 pass
153 pass
151
154
152 while iterable_map:
155 while iterable_map:
153 value, key, it = order(iterable_map.itervalues())
156 value, key, it = order(iterable_map.itervalues())
154 yield value
157 yield value
155 try:
158 try:
156 iterable_map[key][0] = next(it)
159 iterable_map[key][0] = next(it)
157 except StopIteration:
160 except StopIteration:
158 # this iterable is empty, remove it from consideration
161 # this iterable is empty, remove it from consideration
159 del iterable_map[key]
162 del iterable_map[key]
160
163
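_mergeentriesiter() is a k-way merge: with the default order=max it always yields the largest (newest) head among the still-active iterators. The same idea in a self-contained Python 3 spelling (the original above is Python 2 and uses dict.itervalues):

    def mergeentries(*iterables, order=max):
        heads = {}                            # key -> [value, key, iterator]
        for key, it in enumerate(map(iter, iterables)):
            try:
                heads[key] = [next(it), key, it]
            except StopIteration:
                pass                          # empty input, ignore it
        while heads:
            value, key, it = order(heads.values())
            yield value
            try:
                heads[key][0] = next(it)
            except StopIteration:
                del heads[key]                # exhausted, drop it

    # two newest-first journals merged into one newest-first stream
    print(list(mergeentries([9, 5, 1], [8, 6, 2])))   # [9, 8, 6, 5, 2, 1]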
161 def wrappostshare(orig, sourcerepo, destrepo, **kwargs):
164 def wrappostshare(orig, sourcerepo, destrepo, **kwargs):
162 """Mark this shared working copy as sharing journal information"""
165 """Mark this shared working copy as sharing journal information"""
163 with destrepo.wlock():
166 with destrepo.wlock():
164 orig(sourcerepo, destrepo, **kwargs)
167 orig(sourcerepo, destrepo, **kwargs)
165 with destrepo.vfs('shared', 'a') as fp:
168 with destrepo.vfs('shared', 'a') as fp:
166 fp.write('journal\n')
169 fp.write('journal\n')
167
170
168 def unsharejournal(orig, ui, repo, repopath):
171 def unsharejournal(orig, ui, repo, repopath):
169 """Copy shared journal entries into this repo when unsharing"""
172 """Copy shared journal entries into this repo when unsharing"""
170 if (repo.path == repopath and repo.shared() and
173 if (repo.path == repopath and repo.shared() and
171 util.safehasattr(repo, 'journal')):
174 util.safehasattr(repo, 'journal')):
172 sharedrepo = hg.sharedreposource(repo)
175 sharedrepo = hg.sharedreposource(repo)
173 sharedfeatures = _readsharedfeatures(repo)
176 sharedfeatures = _readsharedfeatures(repo)
174 if sharedrepo and sharedfeatures > {'journal'}:
177 if sharedrepo and sharedfeatures > {'journal'}:
175 # there is a shared repository and there are shared journal entries
178 # there is a shared repository and there are shared journal entries
176 # to copy. move shared data over from source to destination but
179 # to copy. move shared data over from source to destination but
177 # move the local file first
180 # move the local file first
178 if repo.vfs.exists('namejournal'):
181 if repo.vfs.exists('namejournal'):
179 journalpath = repo.vfs.join('namejournal')
182 journalpath = repo.vfs.join('namejournal')
180 util.rename(journalpath, journalpath + '.bak')
183 util.rename(journalpath, journalpath + '.bak')
181 storage = repo.journal
184 storage = repo.journal
182 local = storage._open(
185 local = storage._open(
183 repo.vfs, filename='namejournal.bak', _newestfirst=False)
186 repo.vfs, filename='namejournal.bak', _newestfirst=False)
184 shared = (
187 shared = (
185 e for e in storage._open(sharedrepo.vfs, _newestfirst=False)
188 e for e in storage._open(sharedrepo.vfs, _newestfirst=False)
186 if sharednamespaces.get(e.namespace) in sharedfeatures)
189 if sharednamespaces.get(e.namespace) in sharedfeatures)
187 for entry in _mergeentriesiter(local, shared, order=min):
190 for entry in _mergeentriesiter(local, shared, order=min):
188 storage._write(repo.vfs, entry)
191 storage._write(repo.vfs, entry)
189
192
190 return orig(ui, repo, repopath)
193 return orig(ui, repo, repopath)
191
194
192 class journalentry(collections.namedtuple(
195 class journalentry(collections.namedtuple(
193 u'journalentry',
196 u'journalentry',
194 u'timestamp user command namespace name oldhashes newhashes')):
197 u'timestamp user command namespace name oldhashes newhashes')):
195 """Individual journal entry
198 """Individual journal entry
196
199
197 * timestamp: a mercurial (time, timezone) tuple
200 * timestamp: a mercurial (time, timezone) tuple
198 * user: the username that ran the command
201 * user: the username that ran the command
199 * namespace: the entry namespace, an opaque string
202 * namespace: the entry namespace, an opaque string
200 * name: the name of the changed item, opaque string with meaning in the
203 * name: the name of the changed item, opaque string with meaning in the
201 namespace
204 namespace
202 * command: the hg command that triggered this record
205 * command: the hg command that triggered this record
203 * oldhashes: a tuple of one or more binary hashes for the old location
206 * oldhashes: a tuple of one or more binary hashes for the old location
204 * newhashes: a tuple of one or more binary hashes for the new location
207 * newhashes: a tuple of one or more binary hashes for the new location
205
208
206 Handles serialisation from and to the storage format. Fields are
209 Handles serialisation from and to the storage format. Fields are
207 separated by newlines, hashes are written out in hex separated by commas,
210 separated by newlines, hashes are written out in hex separated by commas,
208 timestamp and timezone are separated by a space.
211 timestamp and timezone are separated by a space.
209
212
210 """
213 """
211 @classmethod
214 @classmethod
212 def fromstorage(cls, line):
215 def fromstorage(cls, line):
213 (time, user, command, namespace, name,
216 (time, user, command, namespace, name,
214 oldhashes, newhashes) = line.split('\n')
217 oldhashes, newhashes) = line.split('\n')
215 timestamp, tz = time.split()
218 timestamp, tz = time.split()
216 timestamp, tz = float(timestamp), int(tz)
219 timestamp, tz = float(timestamp), int(tz)
217 oldhashes = tuple(node.bin(hash) for hash in oldhashes.split(','))
220 oldhashes = tuple(node.bin(hash) for hash in oldhashes.split(','))
218 newhashes = tuple(node.bin(hash) for hash in newhashes.split(','))
221 newhashes = tuple(node.bin(hash) for hash in newhashes.split(','))
219 return cls(
222 return cls(
220 (timestamp, tz), user, command, namespace, name,
223 (timestamp, tz), user, command, namespace, name,
221 oldhashes, newhashes)
224 oldhashes, newhashes)
222
225
223 def __bytes__(self):
226 def __bytes__(self):
224 """bytes representation for storage"""
227 """bytes representation for storage"""
225 time = ' '.join(map(str, self.timestamp))
228 time = ' '.join(map(str, self.timestamp))
226 oldhashes = ','.join([node.hex(hash) for hash in self.oldhashes])
229 oldhashes = ','.join([node.hex(hash) for hash in self.oldhashes])
227 newhashes = ','.join([node.hex(hash) for hash in self.newhashes])
230 newhashes = ','.join([node.hex(hash) for hash in self.newhashes])
228 return '\n'.join((
231 return '\n'.join((
229 time, self.user, self.command, self.namespace, self.name,
232 time, self.user, self.command, self.namespace, self.name,
230 oldhashes, newhashes))
233 oldhashes, newhashes))
231
234
232 __str__ = encoding.strmethod(__bytes__)
235 __str__ = encoding.strmethod(__bytes__)
233
236
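Concretely, one stored record is seven newline-separated fields, with the timestamp and timezone sharing the first field and hex hashes joined by commas. A round-trip sketch with invented values (the real class works on bytes and binary node ids):

    record = '\n'.join([
        '1516123456.0 -3600',      # timestamp and timezone, space separated
        'alice',
        'commit -m fix',
        'bookmark',
        'feature',
        '11223344aabb',            # old hex hashes, comma separated if several
        '55667788ccdd',            # new hex hashes
    ])

    (time_, user, command, namespace,
     name, oldhashes, newhashes) = record.split('\n')
    timestamp, tz = time_.split()
    print(float(timestamp), int(tz), namespace, name)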
234 class journalstorage(object):
237 class journalstorage(object):
235 """Storage for journal entries
238 """Storage for journal entries
236
239
237 Entries are divided over two files; one with entries that pertain to the
240 Entries are divided over two files; one with entries that pertain to the
238 local working copy *only*, and one with entries that are shared across
241 local working copy *only*, and one with entries that are shared across
239 multiple working copies when shared using the share extension.
242 multiple working copies when shared using the share extension.
240
243
241 Entries are stored with NUL bytes as separators. See the journalentry
244 Entries are stored with NUL bytes as separators. See the journalentry
242 class for the per-entry structure.
245 class for the per-entry structure.
243
246
244 The file format starts with an integer version, delimited by a NUL.
247 The file format starts with an integer version, delimited by a NUL.
245
248
246 This storage uses a dedicated lock; this makes it easier to avoid issues
249 This storage uses a dedicated lock; this makes it easier to avoid issues
247 with entries that are added while the regular wlock is unlocked (e.g.
250 with entries that are added while the regular wlock is unlocked (e.g.
248 the dirstate).
251 the dirstate).
249
252
250 """
253 """
251 _currentcommand = ()
254 _currentcommand = ()
252 _lockref = None
255 _lockref = None
253
256
254 def __init__(self, repo):
257 def __init__(self, repo):
255 self.user = util.getuser()
258 self.user = util.getuser()
256 self.ui = repo.ui
259 self.ui = repo.ui
257 self.vfs = repo.vfs
260 self.vfs = repo.vfs
258
261
259 # is this working copy using a shared storage?
262 # is this working copy using a shared storage?
260 self.sharedfeatures = self.sharedvfs = None
263 self.sharedfeatures = self.sharedvfs = None
261 if repo.shared():
264 if repo.shared():
262 features = _readsharedfeatures(repo)
265 features = _readsharedfeatures(repo)
263 sharedrepo = hg.sharedreposource(repo)
266 sharedrepo = hg.sharedreposource(repo)
264 if sharedrepo is not None and 'journal' in features:
267 if sharedrepo is not None and 'journal' in features:
265 self.sharedvfs = sharedrepo.vfs
268 self.sharedvfs = sharedrepo.vfs
266 self.sharedfeatures = features
269 self.sharedfeatures = features
267
270
268 # track the current command for recording in journal entries
271 # track the current command for recording in journal entries
269 @property
272 @property
270 def command(self):
273 def command(self):
271 commandstr = ' '.join(
274 commandstr = ' '.join(
272 map(util.shellquote, journalstorage._currentcommand))
275 map(util.shellquote, journalstorage._currentcommand))
273 if '\n' in commandstr:
276 if '\n' in commandstr:
274 # truncate multi-line commands
277 # truncate multi-line commands
275 commandstr = commandstr.partition('\n')[0] + ' ...'
278 commandstr = commandstr.partition('\n')[0] + ' ...'
276 return commandstr
279 return commandstr
277
280
278 @classmethod
281 @classmethod
279 def recordcommand(cls, *fullargs):
282 def recordcommand(cls, *fullargs):
280 """Set the current hg arguments, stored with recorded entries"""
283 """Set the current hg arguments, stored with recorded entries"""
281 # Set the current command on the class because we may have started
284 # Set the current command on the class because we may have started
282 # with a non-local repo (cloning for example).
285 # with a non-local repo (cloning for example).
283 cls._currentcommand = fullargs
286 cls._currentcommand = fullargs
284
287
285 def _currentlock(self, lockref):
288 def _currentlock(self, lockref):
286 """Returns the lock if it's held, or None if it's not.
289 """Returns the lock if it's held, or None if it's not.
287
290
288 (This is copied from the localrepo class)
291 (This is copied from the localrepo class)
289 """
292 """
290 if lockref is None:
293 if lockref is None:
291 return None
294 return None
292 l = lockref()
295 l = lockref()
293 if l is None or not l.held:
296 if l is None or not l.held:
294 return None
297 return None
295 return l
298 return l
296
299
297 def jlock(self, vfs):
300 def jlock(self, vfs):
298 """Create a lock for the journal file"""
301 """Create a lock for the journal file"""
299 if self._currentlock(self._lockref) is not None:
302 if self._currentlock(self._lockref) is not None:
300 raise error.Abort(_('journal lock does not support nesting'))
303 raise error.Abort(_('journal lock does not support nesting'))
301 desc = _('journal of %s') % vfs.base
304 desc = _('journal of %s') % vfs.base
302 try:
305 try:
303 l = lock.lock(vfs, 'namejournal.lock', 0, desc=desc)
306 l = lock.lock(vfs, 'namejournal.lock', 0, desc=desc)
304 except error.LockHeld as inst:
307 except error.LockHeld as inst:
305 self.ui.warn(
308 self.ui.warn(
306 _("waiting for lock on %s held by %r\n") % (desc, inst.locker))
309 _("waiting for lock on %s held by %r\n") % (desc, inst.locker))
307 # default to 600 seconds timeout
310 # default to 600 seconds timeout
308 l = lock.lock(
311 l = lock.lock(
309 vfs, 'namejournal.lock',
312 vfs, 'namejournal.lock',
310 self.ui.configint("ui", "timeout"), desc=desc)
313 self.ui.configint("ui", "timeout"), desc=desc)
311 self.ui.warn(_("got lock after %s seconds\n") % l.delay)
314 self.ui.warn(_("got lock after %s seconds\n") % l.delay)
312 self._lockref = weakref.ref(l)
315 self._lockref = weakref.ref(l)
313 return l
316 return l
314
317
315 def record(self, namespace, name, oldhashes, newhashes):
318 def record(self, namespace, name, oldhashes, newhashes):
316 """Record a new journal entry
319 """Record a new journal entry
317
320
318 * namespace: an opaque string; this can be used to filter on the type
321 * namespace: an opaque string; this can be used to filter on the type
319 of recorded entries.
322 of recorded entries.
320 * name: the name defining this entry; for bookmarks, this is the
323 * name: the name defining this entry; for bookmarks, this is the
321 bookmark name. Can be filtered on when retrieving entries.
324 bookmark name. Can be filtered on when retrieving entries.
322 * oldhashes and newhashes: each a single binary hash, or a list of
325 * oldhashes and newhashes: each a single binary hash, or a list of
323 binary hashes. These represent the old and new position of the named
326 binary hashes. These represent the old and new position of the named
324 item.
327 item.
325
328
326 """
329 """
327 if not isinstance(oldhashes, list):
330 if not isinstance(oldhashes, list):
328 oldhashes = [oldhashes]
331 oldhashes = [oldhashes]
329 if not isinstance(newhashes, list):
332 if not isinstance(newhashes, list):
330 newhashes = [newhashes]
333 newhashes = [newhashes]
331
334
332 entry = journalentry(
335 entry = journalentry(
333 dateutil.makedate(), self.user, self.command, namespace, name,
336 dateutil.makedate(), self.user, self.command, namespace, name,
334 oldhashes, newhashes)
337 oldhashes, newhashes)
335
338
336 vfs = self.vfs
339 vfs = self.vfs
337 if self.sharedvfs is not None:
340 if self.sharedvfs is not None:
338 # write to the shared repository if this feature is being
341 # write to the shared repository if this feature is being
339 # shared between working copies.
342 # shared between working copies.
340 if sharednamespaces.get(namespace) in self.sharedfeatures:
343 if sharednamespaces.get(namespace) in self.sharedfeatures:
341 vfs = self.sharedvfs
344 vfs = self.sharedvfs
342
345
343 self._write(vfs, entry)
346 self._write(vfs, entry)
344
347
345 def _write(self, vfs, entry):
348 def _write(self, vfs, entry):
346 with self.jlock(vfs):
349 with self.jlock(vfs):
347 version = None
350 version = None
348 # open file in append mode to ensure it is created if missing
351 # open file in append mode to ensure it is created if missing
349 with vfs('namejournal', mode='a+b') as f:
352 with vfs('namejournal', mode='a+b') as f:
350 f.seek(0, os.SEEK_SET)
353 f.seek(0, os.SEEK_SET)
351 # Read just enough bytes to get a version number (up to 2
354 # Read just enough bytes to get a version number (up to 2
352 # digits plus separator)
355 # digits plus separator)
353 version = f.read(3).partition('\0')[0]
356 version = f.read(3).partition('\0')[0]
354 if version and version != "%d" % storageversion:
357 if version and version != "%d" % storageversion:
355 # different version of the storage. Exit early (and not
358 # different version of the storage. Exit early (and not
356 # write anything) if this is not a version we can handle or
359 # write anything) if this is not a version we can handle or
357 # the file is corrupt. In future, perhaps rotate the file
360 # the file is corrupt. In future, perhaps rotate the file
358 # instead?
361 # instead?
359 self.ui.warn(
362 self.ui.warn(
360 _("unsupported journal file version '%s'\n") % version)
363 _("unsupported journal file version '%s'\n") % version)
361 return
364 return
362 if not version:
365 if not version:
363 # empty file, write version first
366 # empty file, write version first
364 f.write(("%d" % storageversion) + '\0')
367 f.write(("%d" % storageversion) + '\0')
365 f.seek(0, os.SEEK_END)
368 f.seek(0, os.SEEK_END)
366 f.write(bytes(entry) + '\0')
369 f.write(bytes(entry) + '\0')
367
370
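The resulting 'namejournal' file is nothing more than the version number followed by every record, all NUL-terminated. A byte-level sketch of that layout, writing to an in-memory buffer with invented record contents:

    import io

    storageversion = 0
    entries = [b'entry-one', b'entry-two']    # stand-ins for journalentry bytes

    f = io.BytesIO()
    f.write(b'%d\x00' % storageversion)       # version comes first
    for e in entries:
        f.write(e + b'\x00')                  # one NUL after every record

    fields = f.getvalue().split(b'\x00')
    assert fields[0] == b'0'                  # version check, as in _open()
    print([e for e in fields[1:] if e])       # [b'entry-one', b'entry-two']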
368 def filtered(self, namespace=None, name=None):
371 def filtered(self, namespace=None, name=None):
369 """Yield all journal entries with the given namespace or name
372 """Yield all journal entries with the given namespace or name
370
373
371 Both the namespace and the name are optional; if neither is given all
374 Both the namespace and the name are optional; if neither is given all
372 entries in the journal are produced.
375 entries in the journal are produced.
373
376
374 Matching supports regular expressions by using the `re:` prefix
377 Matching supports regular expressions by using the `re:` prefix
375 (use `literal:` to match names or namespaces that start with `re:`)
378 (use `literal:` to match names or namespaces that start with `re:`)
376
379
377 """
380 """
378 if namespace is not None:
381 if namespace is not None:
379 namespace = util.stringmatcher(namespace)[-1]
382 namespace = stringutil.stringmatcher(namespace)[-1]
380 if name is not None:
383 if name is not None:
381 name = util.stringmatcher(name)[-1]
384 name = stringutil.stringmatcher(name)[-1]
382 for entry in self:
385 for entry in self:
383 if namespace is not None and not namespace(entry.namespace):
386 if namespace is not None and not namespace(entry.namespace):
384 continue
387 continue
385 if name is not None and not name(entry.name):
388 if name is not None and not name(entry.name):
386 continue
389 continue
387 yield entry
390 yield entry
388
391
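The `re:`/`literal:` prefixes are resolved by stringutil.stringmatcher, the function this changeset repoints these call sites to; it returns a (kind, pattern, matcher) triple and filtered() keeps only the matcher. A simplified approximation of those semantics (the real implementation also handles bytes and case folding):

    import re

    def stringmatcher(pattern):
        if pattern.startswith('re:'):
            regex = re.compile(pattern[3:])
            return 're', pattern[3:], lambda s: bool(regex.search(s))
        if pattern.startswith('literal:'):
            pattern = pattern[len('literal:'):]
        return 'literal', pattern, lambda s: s == pattern

    kind, pattern, match = stringmatcher('re:feat.*')
    assert match('feature') and not match('bugfix')
    # 'literal:re:odd' matches the name 're:odd' exactly
    assert stringmatcher('literal:re:odd')[-1]('re:odd')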
389 def __iter__(self):
392 def __iter__(self):
390 """Iterate over the storage
393 """Iterate over the storage
391
394
392 Yields journalentry instances for each contained journal record.
395 Yields journalentry instances for each contained journal record.
393
396
394 """
397 """
395 local = self._open(self.vfs)
398 local = self._open(self.vfs)
396
399
397 if self.sharedvfs is None:
400 if self.sharedvfs is None:
398 return local
401 return local
399
402
400 # iterate over both local and shared entries, but only those
403 # iterate over both local and shared entries, but only those
401 # shared entries that are among the currently shared features
404 # shared entries that are among the currently shared features
402 shared = (
405 shared = (
403 e for e in self._open(self.sharedvfs)
406 e for e in self._open(self.sharedvfs)
404 if sharednamespaces.get(e.namespace) in self.sharedfeatures)
407 if sharednamespaces.get(e.namespace) in self.sharedfeatures)
405 return _mergeentriesiter(local, shared)
408 return _mergeentriesiter(local, shared)
406
409
407 def _open(self, vfs, filename='namejournal', _newestfirst=True):
410 def _open(self, vfs, filename='namejournal', _newestfirst=True):
408 if not vfs.exists(filename):
411 if not vfs.exists(filename):
409 return
412 return
410
413
411 with vfs(filename) as f:
414 with vfs(filename) as f:
412 raw = f.read()
415 raw = f.read()
413
416
414 lines = raw.split('\0')
417 lines = raw.split('\0')
415 version = lines and lines[0]
418 version = lines and lines[0]
416 if version != "%d" % storageversion:
419 if version != "%d" % storageversion:
417 version = version or _('not available')
420 version = version or _('not available')
418 raise error.Abort(_("unknown journal file version '%s'") % version)
421 raise error.Abort(_("unknown journal file version '%s'") % version)
419
422
420 # Skip the first line, it's a version number. Normally we iterate over
423 # Skip the first line, it's a version number. Normally we iterate over
421 # these in reverse order to list newest first; only when copying across
424 # these in reverse order to list newest first; only when copying across
422 # a shared storage do we forgo reversing.
425 # a shared storage do we forgo reversing.
423 lines = lines[1:]
426 lines = lines[1:]
424 if _newestfirst:
427 if _newestfirst:
425 lines = reversed(lines)
428 lines = reversed(lines)
426 for line in lines:
429 for line in lines:
427 if not line:
430 if not line:
428 continue
431 continue
429 yield journalentry.fromstorage(line)
432 yield journalentry.fromstorage(line)
430
433
431 # journal reading
434 # journal reading
432 # log options that don't make sense for journal
435 # log options that don't make sense for journal
433 _ignoreopts = ('no-merges', 'graph')
436 _ignoreopts = ('no-merges', 'graph')
434 @command(
437 @command(
435 'journal', [
438 'journal', [
436 ('', 'all', None, 'show history for all names'),
439 ('', 'all', None, 'show history for all names'),
437 ('c', 'commits', None, 'show commit metadata'),
440 ('c', 'commits', None, 'show commit metadata'),
438 ] + [opt for opt in cmdutil.logopts if opt[1] not in _ignoreopts],
441 ] + [opt for opt in cmdutil.logopts if opt[1] not in _ignoreopts],
439 '[OPTION]... [BOOKMARKNAME]')
442 '[OPTION]... [BOOKMARKNAME]')
440 def journal(ui, repo, *args, **opts):
443 def journal(ui, repo, *args, **opts):
441 """show the previous position of bookmarks and the working copy
444 """show the previous position of bookmarks and the working copy
442
445
443 The journal is used to see the previous commits that bookmarks and the
446 The journal is used to see the previous commits that bookmarks and the
444 working copy pointed to. By default the previous locations of the working
447 working copy pointed to. By default the previous locations of the working
445 copy are shown. Passing a bookmark name will show all the previous positions of
448 copy are shown. Passing a bookmark name will show all the previous positions of
446 that bookmark. Use the --all switch to show previous locations for all
449 that bookmark. Use the --all switch to show previous locations for all
447 bookmarks and the working copy; each line will then include the bookmark
450 bookmarks and the working copy; each line will then include the bookmark
448 name, or '.' for the working copy, as well.
451 name, or '.' for the working copy, as well.
449
452
450 If `name` starts with `re:`, the remainder of the name is treated as
453 If `name` starts with `re:`, the remainder of the name is treated as
451 a regular expression. To match a name that actually starts with `re:`,
454 a regular expression. To match a name that actually starts with `re:`,
452 use the prefix `literal:`.
455 use the prefix `literal:`.
453
456
454 By default hg journal only shows the commit hash and the command that was
457 By default hg journal only shows the commit hash and the command that was
455 running at that time. -v/--verbose will show the prior hash, the user, and
458 running at that time. -v/--verbose will show the prior hash, the user, and
456 the time at which it happened.
459 the time at which it happened.
457
460
458 Use -c/--commits to output log information on each commit hash; at this
461 Use -c/--commits to output log information on each commit hash; at this
459 point you can use the usual `--patch`, `--git`, `--stat` and `--template`
462 point you can use the usual `--patch`, `--git`, `--stat` and `--template`
460 switches to alter the log output for these.
463 switches to alter the log output for these.
461
464
462 `hg journal -T json` can be used to produce machine readable output.
465 `hg journal -T json` can be used to produce machine readable output.
463
466
464 """
467 """
465 opts = pycompat.byteskwargs(opts)
468 opts = pycompat.byteskwargs(opts)
466 name = '.'
469 name = '.'
467 if opts.get('all'):
470 if opts.get('all'):
468 if args:
471 if args:
469 raise error.Abort(
472 raise error.Abort(
470 _("You can't combine --all and filtering on a name"))
473 _("You can't combine --all and filtering on a name"))
471 name = None
474 name = None
472 if args:
475 if args:
473 name = args[0]
476 name = args[0]
474
477
475 fm = ui.formatter('journal', opts)
478 fm = ui.formatter('journal', opts)
476
479
477 if opts.get("template") != "json":
480 if opts.get("template") != "json":
478 if name is None:
481 if name is None:
479 displayname = _('the working copy and bookmarks')
482 displayname = _('the working copy and bookmarks')
480 else:
483 else:
481 displayname = "'%s'" % name
484 displayname = "'%s'" % name
482 ui.status(_("previous locations of %s:\n") % displayname)
485 ui.status(_("previous locations of %s:\n") % displayname)
483
486
484 limit = logcmdutil.getlimit(opts)
487 limit = logcmdutil.getlimit(opts)
485 entry = None
488 entry = None
486 ui.pager('journal')
489 ui.pager('journal')
487 for count, entry in enumerate(repo.journal.filtered(name=name)):
490 for count, entry in enumerate(repo.journal.filtered(name=name)):
488 if count == limit:
491 if count == limit:
489 break
492 break
490 newhashesstr = fm.formatlist(map(fm.hexfunc, entry.newhashes),
493 newhashesstr = fm.formatlist(map(fm.hexfunc, entry.newhashes),
491 name='node', sep=',')
494 name='node', sep=',')
492 oldhashesstr = fm.formatlist(map(fm.hexfunc, entry.oldhashes),
495 oldhashesstr = fm.formatlist(map(fm.hexfunc, entry.oldhashes),
493 name='node', sep=',')
496 name='node', sep=',')
494
497
495 fm.startitem()
498 fm.startitem()
496 fm.condwrite(ui.verbose, 'oldhashes', '%s -> ', oldhashesstr)
499 fm.condwrite(ui.verbose, 'oldhashes', '%s -> ', oldhashesstr)
497 fm.write('newhashes', '%s', newhashesstr)
500 fm.write('newhashes', '%s', newhashesstr)
498 fm.condwrite(ui.verbose, 'user', ' %-8s', entry.user)
501 fm.condwrite(ui.verbose, 'user', ' %-8s', entry.user)
499 fm.condwrite(
502 fm.condwrite(
500 opts.get('all') or name.startswith('re:'),
503 opts.get('all') or name.startswith('re:'),
501 'name', ' %-8s', entry.name)
504 'name', ' %-8s', entry.name)
502
505
503 timestring = fm.formatdate(entry.timestamp, '%Y-%m-%d %H:%M %1%2')
506 timestring = fm.formatdate(entry.timestamp, '%Y-%m-%d %H:%M %1%2')
504 fm.condwrite(ui.verbose, 'date', ' %s', timestring)
507 fm.condwrite(ui.verbose, 'date', ' %s', timestring)
505 fm.write('command', ' %s\n', entry.command)
508 fm.write('command', ' %s\n', entry.command)
506
509
507 if opts.get("commits"):
510 if opts.get("commits"):
508 displayer = logcmdutil.changesetdisplayer(ui, repo, opts)
511 displayer = logcmdutil.changesetdisplayer(ui, repo, opts)
509 for hash in entry.newhashes:
512 for hash in entry.newhashes:
510 try:
513 try:
511 ctx = repo[hash]
514 ctx = repo[hash]
512 displayer.show(ctx)
515 displayer.show(ctx)
513 except error.RepoLookupError as e:
516 except error.RepoLookupError as e:
514 fm.write('repolookuperror', "%s\n\n", pycompat.bytestr(e))
517 fm.write('repolookuperror', "%s\n\n", pycompat.bytestr(e))
515 displayer.close()
518 displayer.close()
516
519
517 fm.end()
520 fm.end()
518
521
519 if entry is None:
522 if entry is None:
520 ui.status(_("no recorded locations\n"))
523 ui.status(_("no recorded locations\n"))
@@ -1,811 +1,815
1 # keyword.py - $Keyword$ expansion for Mercurial
1 # keyword.py - $Keyword$ expansion for Mercurial
2 #
2 #
3 # Copyright 2007-2015 Christian Ebert <blacktrash@gmx.net>
3 # Copyright 2007-2015 Christian Ebert <blacktrash@gmx.net>
4 #
4 #
5 # This software may be used and distributed according to the terms of the
5 # This software may be used and distributed according to the terms of the
6 # GNU General Public License version 2 or any later version.
6 # GNU General Public License version 2 or any later version.
7 #
7 #
8 # $Id$
8 # $Id$
9 #
9 #
10 # Keyword expansion hack against the grain of a Distributed SCM
10 # Keyword expansion hack against the grain of a Distributed SCM
11 #
11 #
12 # There are many good reasons why this is not needed in a distributed
12 # There are many good reasons why this is not needed in a distributed
13 # SCM; still, it may be useful in very small projects based on single
13 # SCM; still, it may be useful in very small projects based on single
14 # files (like LaTeX packages) that are mostly addressed to an
14 # files (like LaTeX packages) that are mostly addressed to an
15 # audience not running a version control system.
15 # audience not running a version control system.
16 #
16 #
17 # For in-depth discussion refer to
17 # For in-depth discussion refer to
18 # <https://mercurial-scm.org/wiki/KeywordPlan>.
18 # <https://mercurial-scm.org/wiki/KeywordPlan>.
19 #
19 #
20 # Keyword expansion is based on Mercurial's changeset template mappings.
20 # Keyword expansion is based on Mercurial's changeset template mappings.
21 #
21 #
22 # Binary files are not touched.
22 # Binary files are not touched.
23 #
23 #
24 # Files to act upon/ignore are specified in the [keyword] section.
24 # Files to act upon/ignore are specified in the [keyword] section.
25 # Customized keyword template mappings in the [keywordmaps] section.
25 # Customized keyword template mappings in the [keywordmaps] section.
26 #
26 #
27 # Run 'hg help keyword' and 'hg kwdemo' to get info on configuration.
27 # Run 'hg help keyword' and 'hg kwdemo' to get info on configuration.
28
28
29 '''expand keywords in tracked files
29 '''expand keywords in tracked files
30
30
31 This extension expands RCS/CVS-like or self-customized $Keywords$ in
31 This extension expands RCS/CVS-like or self-customized $Keywords$ in
32 tracked text files selected by your configuration.
32 tracked text files selected by your configuration.
33
33
34 Keywords are only expanded in local repositories and not stored in the
34 Keywords are only expanded in local repositories and not stored in the
35 change history. The mechanism can be regarded as a convenience for the
35 change history. The mechanism can be regarded as a convenience for the
36 current user or for archive distribution.
36 current user or for archive distribution.
37
37
38 Keywords expand to the changeset data pertaining to the latest change
38 Keywords expand to the changeset data pertaining to the latest change
39 relative to the working directory parent of each file.
39 relative to the working directory parent of each file.
40
40
41 Configuration is done in the [keyword], [keywordset] and [keywordmaps]
41 Configuration is done in the [keyword], [keywordset] and [keywordmaps]
42 sections of hgrc files.
42 sections of hgrc files.
43
43
44 Example::
44 Example::
45
45
46 [keyword]
46 [keyword]
47 # expand keywords in every python file except those matching "x*"
47 # expand keywords in every python file except those matching "x*"
48 **.py =
48 **.py =
49 x* = ignore
49 x* = ignore
50
50
51 [keywordset]
51 [keywordset]
52 # prefer svn- over cvs-like default keywordmaps
52 # prefer svn- over cvs-like default keywordmaps
53 svn = True
53 svn = True
54
54
55 .. note::
55 .. note::
56
56
57 The more specific you are in your filename patterns the less you
57 The more specific you are in your filename patterns the less you
58 lose speed in huge repositories.
58 lose speed in huge repositories.
59
59
60 For [keywordmaps] template mapping and expansion demonstration and
60 For [keywordmaps] template mapping and expansion demonstration and
61 control run :hg:`kwdemo`. See :hg:`help templates` for a list of
61 control run :hg:`kwdemo`. See :hg:`help templates` for a list of
62 available templates and filters.
62 available templates and filters.
63
63
64 Three additional date template filters are provided:
64 Three additional date template filters are provided:
65
65
66 :``utcdate``: "2006/09/18 15:13:13"
66 :``utcdate``: "2006/09/18 15:13:13"
67 :``svnutcdate``: "2006-09-18 15:13:13Z"
67 :``svnutcdate``: "2006-09-18 15:13:13Z"
68 :``svnisodate``: "2006-09-18 08:13:13 -700 (Mon, 18 Sep 2006)"
68 :``svnisodate``: "2006-09-18 08:13:13 -700 (Mon, 18 Sep 2006)"
69
69
70 The default template mappings (view with :hg:`kwdemo -d`) can be
70 The default template mappings (view with :hg:`kwdemo -d`) can be
71 replaced with customized keywords and templates. Again, run
71 replaced with customized keywords and templates. Again, run
72 :hg:`kwdemo` to control the results of your configuration changes.
72 :hg:`kwdemo` to control the results of your configuration changes.
73
73
74 Before changing/disabling active keywords, you must run :hg:`kwshrink`
74 Before changing/disabling active keywords, you must run :hg:`kwshrink`
75 to avoid storing expanded keywords in the change history.
75 to avoid storing expanded keywords in the change history.
76
76
77 To force expansion after enabling it, or a configuration change, run
77 To force expansion after enabling it, or a configuration change, run
78 :hg:`kwexpand`.
78 :hg:`kwexpand`.
79
79
80 Expansions spanning more than one line and incremental expansions,
80 Expansions spanning more than one line and incremental expansions,
81 like CVS' $Log$, are not supported. A keyword template map "Log =
81 like CVS' $Log$, are not supported. A keyword template map "Log =
82 {desc}" expands to the first line of the changeset description.
82 {desc}" expands to the first line of the changeset description.
83 '''
83 '''
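Mechanically, expansion is a plain regex substitution driven by the template maps defined below. A minimal standalone sketch of that step, with a hard-coded result dictionary standing in for Mercurial's templater and hypothetical changeset values:

    import re

    # hypothetical expansion results; kwtemplater renders these via templates
    templates = {'Revision': 'ba4b8eea393a', 'Author': 'alice'}
    rekw = re.compile(r'\$(%s)\$' % '|'.join(map(re.escape, templates)))

    def kwsub(mobj):
        kw = mobj.group(1)
        return '$%s: %s $' % (kw, templates[kw])

    print(rekw.sub(kwsub, 'released as $Revision$ by $Author$'))
    # released as $Revision: ba4b8eea393a $ by $Author: alice $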


from __future__ import absolute_import

import os
import re
import tempfile
import weakref

from mercurial.i18n import _
from mercurial.hgweb import webcommands

from mercurial import (
    cmdutil,
    context,
    dispatch,
    error,
    extensions,
    filelog,
    localrepo,
    logcmdutil,
    match,
    patch,
    pathutil,
    pycompat,
    registrar,
    scmutil,
    templatefilters,
    util,
)
-from mercurial.utils import dateutil
+from mercurial.utils import (
+    dateutil,
+    stringutil,
+)

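The import hunk above is the heart of this changeset: string helpers such as binary() and forcebytestr() are now reached through mercurial.utils.stringutil instead of mercurial.util, so call sites change only in their module prefix. A quick sketch of the behavior the call sites below rely on, namely that a NUL byte marks data as binary:

    from mercurial.utils import stringutil

    print(stringutil.binary(b'plain text'))    # False
    print(stringutil.binary(b'BLOB\x00DATA'))  # True: NUL byte present
    print(stringutil.binary(b''))              # False: empty data is not binary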
cmdtable = {}
command = registrar.command(cmdtable)
# Note for extension authors: ONLY specify testedwith = 'ships-with-hg-core' for
# extensions which SHIP WITH MERCURIAL. Non-mainline extensions should
# be specifying the version(s) of Mercurial they are tested with, or
# leave the attribute unspecified.
testedwith = 'ships-with-hg-core'

# hg commands that do not act on keywords
nokwcommands = ('add addremove annotate bundle export grep incoming init log'
                ' outgoing push tip verify convert email glog')

# webcommands that do not act on keywords
nokwwebcommands = ('annotate changeset rev filediff diff comparison')

# hg commands that trigger expansion only when writing to working dir,
# not when reading filelog, and unexpand when reading from working dir
restricted = ('merge kwexpand kwshrink record qrecord resolve transplant'
              ' unshelve rebase graft backout histedit fetch')

# names of extensions using dorecord
recordextensions = 'record'

colortable = {
    'kwfiles.enabled': 'green bold',
    'kwfiles.deleted': 'cyan bold underline',
    'kwfiles.enabledunknown': 'green',
    'kwfiles.ignored': 'bold',
    'kwfiles.ignoredunknown': 'none'
}

templatefilter = registrar.templatefilter()

configtable = {}
configitem = registrar.configitem(configtable)

configitem('keywordset', 'svn',
    default=False,
)
# date like in cvs' $Date
@templatefilter('utcdate')
def utcdate(text):
    '''Date. Returns a UTC-date in this format: "2009/08/18 11:00:13".
    '''
    dateformat = '%Y/%m/%d %H:%M:%S'
    return dateutil.datestr((dateutil.parsedate(text)[0], 0), dateformat)
# date like in svn's $Date
@templatefilter('svnisodate')
def svnisodate(text):
    '''Date. Returns a date in this format: "2009-08-18 13:00:13
    +0200 (Tue, 18 Aug 2009)".
    '''
    return dateutil.datestr(text, '%Y-%m-%d %H:%M:%S %1%2 (%a, %d %b %Y)')
# date like in svn's $Id
@templatefilter('svnutcdate')
def svnutcdate(text):
    '''Date. Returns a UTC-date in this format: "2009-08-18
    11:00:13Z".
    '''
    dateformat = '%Y-%m-%d %H:%M:%SZ'
    return dateutil.datestr((dateutil.parsedate(text)[0], 0), dateformat)

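A quick standalone check of the two UTC filters, assuming a Mercurial checkout on the path; the input date is hypothetical:

    from mercurial.utils import dateutil

    # parsedate yields a (timestamp, tzoffset) pair; forcing offset 0 renders UTC
    ts = dateutil.parsedate(b'2009-08-18 13:00:13 +0200')[0]
    print(dateutil.datestr((ts, 0), b'%Y/%m/%d %H:%M:%S'))   # 2009/08/18 11:00:13
    print(dateutil.datestr((ts, 0), b'%Y-%m-%d %H:%M:%SZ'))  # 2009-08-18 11:00:13Z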
# make keyword tools accessible
kwtools = {'hgcmd': ''}

def _defaultkwmaps(ui):
    '''Returns default keywordmaps according to keywordset configuration.'''
    templates = {
        'Revision': '{node|short}',
        'Author': '{author|user}',
    }
    kwsets = ({
        'Date': '{date|utcdate}',
        'RCSfile': '{file|basename},v',
        'RCSFile': '{file|basename},v', # kept for backwards compatibility
                                        # with hg-keyword
        'Source': '{root}/{file},v',
        'Id': '{file|basename},v {node|short} {date|utcdate} {author|user}',
        'Header': '{root}/{file},v {node|short} {date|utcdate} {author|user}',
    }, {
        'Date': '{date|svnisodate}',
        'Id': '{file|basename},v {node|short} {date|svnutcdate} {author|user}',
        'LastChangedRevision': '{node|short}',
        'LastChangedBy': '{author|user}',
        'LastChangedDate': '{date|svnisodate}',
    })
    templates.update(kwsets[ui.configbool('keywordset', 'svn')])
    return templates

def _shrinktext(text, subfunc):
    '''Helper for keyword expansion removal in text.
    Depending on subfunc also returns number of substitutions.'''
    return subfunc(r'$\1$', text)

def _preselect(wstatus, changed):
    '''Retrieves modified and added files from a working directory state
    and returns the subset of each contained in given changed files
    retrieved from a change context.'''
    modified = [f for f in wstatus.modified if f in changed]
    added = [f for f in wstatus.added if f in changed]
    return modified, added


class kwtemplater(object):
    '''
    Sets up keyword templates, corresponding keyword regex, and
    provides keyword substitution functions.
    '''

    def __init__(self, ui, repo, inc, exc):
        self.ui = ui
        self._repo = weakref.ref(repo)
        self.match = match.match(repo.root, '', [], inc, exc)
        self.restrict = kwtools['hgcmd'] in restricted.split()
        self.postcommit = False

        kwmaps = self.ui.configitems('keywordmaps')
        if kwmaps: # override default templates
            self.templates = dict(kwmaps)
        else:
            self.templates = _defaultkwmaps(self.ui)

    @property
    def repo(self):
        return self._repo()

    @util.propertycache
    def escape(self):
        '''Returns bar-separated and escaped keywords.'''
        return '|'.join(map(re.escape, self.templates.keys()))

    @util.propertycache
    def rekw(self):
        '''Returns regex for unexpanded keywords.'''
        return re.compile(r'\$(%s)\$' % self.escape)

    @util.propertycache
    def rekwexp(self):
        '''Returns regex for expanded keywords.'''
        return re.compile(r'\$(%s): [^$\n\r]*? \$' % self.escape)

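These two patterns are the inverse halves of the machinery: rekw matches bare $Keyword$ markers for expansion, while rekwexp matches expanded $Keyword: value $ forms so they can be shrunk again. A small sketch of the shrink direction, with a hypothetical keyword set:

    import re

    escaped = 'Revision|Author'  # what the escape property would produce
    rekwexp = re.compile(r'\$(%s): [^$\n\r]*? \$' % escaped)

    # _shrinktext() reduces each expanded form back to the bare marker
    print(rekwexp.sub(r'$\1$', '$Revision: ba4b8eea393a $ by $Author: alice $'))
    # $Revision$ by $Author$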
    def substitute(self, data, path, ctx, subfunc):
        '''Replaces keywords in data with expanded template.'''
        def kwsub(mobj):
            kw = mobj.group(1)
            ct = logcmdutil.maketemplater(self.ui, self.repo,
                                          self.templates[kw])
            self.ui.pushbuffer()
            ct.show(ctx, root=self.repo.root, file=path)
            ekw = templatefilters.firstline(self.ui.popbuffer())
            return '$%s: %s $' % (kw, ekw)
        return subfunc(kwsub, data)

    def linkctx(self, path, fileid):
        '''Similar to filelog.linkrev, but returns a changectx.'''
        return self.repo.filectx(path, fileid=fileid).changectx()

    def expand(self, path, node, data):
        '''Returns data with keywords expanded.'''
-        if not self.restrict and self.match(path) and not util.binary(data):
+        if (not self.restrict and self.match(path)
+            and not stringutil.binary(data)):
            ctx = self.linkctx(path, node)
            return self.substitute(data, path, ctx, self.rekw.sub)
        return data

    def iskwfile(self, cand, ctx):
        '''Returns subset of candidates which are configured for keyword
        expansion but are not symbolic links.'''
        return [f for f in cand if self.match(f) and 'l' not in ctx.flags(f)]

    def overwrite(self, ctx, candidates, lookup, expand, rekw=False):
        '''Overwrites selected files expanding/shrinking keywords.'''
        if self.restrict or lookup or self.postcommit: # exclude kw_copy
            candidates = self.iskwfile(candidates, ctx)
        if not candidates:
            return
        kwcmd = self.restrict and lookup # kwexpand/kwshrink
        if self.restrict or expand and lookup:
            mf = ctx.manifest()
        if self.restrict or rekw:
            re_kw = self.rekw
        else:
            re_kw = self.rekwexp
        if expand:
            msg = _('overwriting %s expanding keywords\n')
        else:
            msg = _('overwriting %s shrinking keywords\n')
        for f in candidates:
            if self.restrict:
                data = self.repo.file(f).read(mf[f])
            else:
                data = self.repo.wread(f)
-            if util.binary(data):
+            if stringutil.binary(data):
                continue
            if expand:
                parents = ctx.parents()
                if lookup:
                    ctx = self.linkctx(f, mf[f])
                elif self.restrict and len(parents) > 1:
                    # merge commit
                    # in case of conflict f is in modified state during
                    # merge, even if f does not differ from f in parent
                    for p in parents:
                        if f in p and not p[f].cmp(ctx[f]):
                            ctx = p[f].changectx()
                            break
                data, found = self.substitute(data, f, ctx, re_kw.subn)
            elif self.restrict:
                found = re_kw.search(data)
            else:
                data, found = _shrinktext(data, re_kw.subn)
            if found:
                self.ui.note(msg % f)
                fp = self.repo.wvfs(f, "wb", atomictemp=True)
                fp.write(data)
                fp.close()
                if kwcmd:
                    self.repo.dirstate.normal(f)
                elif self.postcommit:
                    self.repo.dirstate.normallookup(f)

    def shrink(self, fname, text):
        '''Returns text with all keyword substitutions removed.'''
-        if self.match(fname) and not util.binary(text):
+        if self.match(fname) and not stringutil.binary(text):
            return _shrinktext(text, self.rekwexp.sub)
        return text

    def shrinklines(self, fname, lines):
        '''Returns lines with keyword substitutions removed.'''
        if self.match(fname):
            text = ''.join(lines)
-            if not util.binary(text):
+            if not stringutil.binary(text):
                return _shrinktext(text, self.rekwexp.sub).splitlines(True)
        return lines

    def wread(self, fname, data):
        '''If in restricted mode returns data read from wdir with
        keyword substitutions removed.'''
        if self.restrict:
            return self.shrink(fname, data)
        return data

class kwfilelog(filelog.filelog):
    '''
    Subclass of filelog to hook into its read, add, cmp methods.
    Keywords are "stored" unexpanded, and processed on reading.
    '''
    def __init__(self, opener, kwt, path):
        super(kwfilelog, self).__init__(opener, path)
        self.kwt = kwt
        self.path = path

    def read(self, node):
        '''Expands keywords when reading filelog.'''
        data = super(kwfilelog, self).read(node)
        if self.renamed(node):
            return data
        return self.kwt.expand(self.path, node, data)

    def add(self, text, meta, tr, link, p1=None, p2=None):
        '''Removes keyword substitutions when adding to filelog.'''
        text = self.kwt.shrink(self.path, text)
        return super(kwfilelog, self).add(text, meta, tr, link, p1, p2)

    def cmp(self, node, text):
        '''Removes keyword substitutions for comparison.'''
        text = self.kwt.shrink(self.path, text)
        return super(kwfilelog, self).cmp(node, text)

def _status(ui, repo, wctx, kwt, *pats, **opts):
    '''Bails out if [keyword] configuration is not active.
    Returns status of working directory.'''
    if kwt:
        opts = pycompat.byteskwargs(opts)
        return repo.status(match=scmutil.match(wctx, pats, opts), clean=True,
                           unknown=opts.get('unknown') or opts.get('all'))
    if ui.configitems('keyword'):
        raise error.Abort(_('[keyword] patterns cannot match'))
    raise error.Abort(_('no [keyword] patterns configured'))

def _kwfwrite(ui, repo, expand, *pats, **opts):
    '''Selects files and passes them to kwtemplater.overwrite.'''
    wctx = repo[None]
    if len(wctx.parents()) > 1:
        raise error.Abort(_('outstanding uncommitted merge'))
    kwt = getattr(repo, '_keywordkwt', None)
    with repo.wlock():
        status = _status(ui, repo, wctx, kwt, *pats, **opts)
        if status.modified or status.added or status.removed or status.deleted:
            raise error.Abort(_('outstanding uncommitted changes'))
        kwt.overwrite(wctx, status.clean, True, expand)

@command('kwdemo',
         [('d', 'default', None, _('show default keyword template maps')),
          ('f', 'rcfile', '',
           _('read maps from rcfile'), _('FILE'))],
         _('hg kwdemo [-d] [-f RCFILE] [TEMPLATEMAP]...'),
         optionalrepo=True)
def demo(ui, repo, *args, **opts):
    '''print [keywordmaps] configuration and an expansion example

    Show current, custom, or default keyword template maps and their
    expansions.

    Extend the current configuration by specifying maps as arguments
    and using -f/--rcfile to source an external hgrc file.

    Use -d/--default to disable current configuration.

    See :hg:`help templates` for information on templates and filters.
    '''
    def demoitems(section, items):
        ui.write('[%s]\n' % section)
        for k, v in sorted(items):
            ui.write('%s = %s\n' % (k, v))

    fn = 'demo.txt'
    tmpdir = tempfile.mkdtemp('', 'kwdemo.')
    ui.note(_('creating temporary repository at %s\n') % tmpdir)
    if repo is None:
        baseui = ui
    else:
        baseui = repo.baseui
    repo = localrepo.localrepository(baseui, tmpdir, True)
    ui.setconfig('keyword', fn, '', 'keyword')
    svn = ui.configbool('keywordset', 'svn')
    # explicitly set keywordset for demo output
    ui.setconfig('keywordset', 'svn', svn, 'keyword')

    uikwmaps = ui.configitems('keywordmaps')
    if args or opts.get(r'rcfile'):
        ui.status(_('\n\tconfiguration using custom keyword template maps\n'))
        if uikwmaps:
            ui.status(_('\textending current template maps\n'))
        if opts.get(r'default') or not uikwmaps:
            if svn:
                ui.status(_('\toverriding default svn keywordset\n'))
            else:
                ui.status(_('\toverriding default cvs keywordset\n'))
        if opts.get(r'rcfile'):
            ui.readconfig(opts.get('rcfile'))
        if args:
            # simulate hgrc parsing
            rcmaps = '[keywordmaps]\n%s\n' % '\n'.join(args)
            repo.vfs.write('hgrc', rcmaps)
            ui.readconfig(repo.vfs.join('hgrc'))
        kwmaps = dict(ui.configitems('keywordmaps'))
    elif opts.get(r'default'):
        if svn:
            ui.status(_('\n\tconfiguration using default svn keywordset\n'))
        else:
            ui.status(_('\n\tconfiguration using default cvs keywordset\n'))
        kwmaps = _defaultkwmaps(ui)
        if uikwmaps:
            ui.status(_('\tdisabling current template maps\n'))
            for k, v in kwmaps.iteritems():
                ui.setconfig('keywordmaps', k, v, 'keyword')
    else:
        ui.status(_('\n\tconfiguration using current keyword template maps\n'))
        if uikwmaps:
            kwmaps = dict(uikwmaps)
        else:
            kwmaps = _defaultkwmaps(ui)

    uisetup(ui)
    reposetup(ui, repo)
    ui.write(('[extensions]\nkeyword =\n'))
    demoitems('keyword', ui.configitems('keyword'))
    demoitems('keywordset', ui.configitems('keywordset'))
    demoitems('keywordmaps', kwmaps.iteritems())
    keywords = '$' + '$\n$'.join(sorted(kwmaps.keys())) + '$\n'
    repo.wvfs.write(fn, keywords)
    repo[None].add([fn])
    ui.note(_('\nkeywords written to %s:\n') % fn)
    ui.note(keywords)
    with repo.wlock():
        repo.dirstate.setbranch('demobranch')
    for name, cmd in ui.configitems('hooks'):
        if name.split('.', 1)[0].find('commit') > -1:
            repo.ui.setconfig('hooks', name, '', 'keyword')
    msg = _('hg keyword configuration and expansion example')
    ui.note(("hg ci -m '%s'\n" % msg))
    repo.commit(text=msg)
    ui.status(_('\n\tkeywords expanded\n'))
    ui.write(repo.wread(fn))
    repo.wvfs.rmtree(repo.root)

@command('kwexpand',
         cmdutil.walkopts,
         _('hg kwexpand [OPTION]... [FILE]...'),
         inferrepo=True)
def expand(ui, repo, *pats, **opts):
    '''expand keywords in the working directory

    Run after (re)enabling keyword expansion.

    kwexpand refuses to run if given files contain local changes.
    '''
    # 3rd argument sets expansion to True
    _kwfwrite(ui, repo, True, *pats, **opts)

@command('kwfiles',
         [('A', 'all', None, _('show keyword status flags of all files')),
          ('i', 'ignore', None, _('show files excluded from expansion')),
          ('u', 'unknown', None, _('only show unknown (not tracked) files')),
         ] + cmdutil.walkopts,
         _('hg kwfiles [OPTION]... [FILE]...'),
         inferrepo=True)
def files(ui, repo, *pats, **opts):
    '''show files configured for keyword expansion

    List which files in the working directory are matched by the
    [keyword] configuration patterns.

    Useful to prevent inadvertent keyword expansion and to speed up
    execution by including only files that are actual candidates for
    expansion.

    See :hg:`help keyword` on how to construct patterns both for
    inclusion and exclusion of files.

    With -A/--all and -v/--verbose the codes used to show the status
    of files are::

      K = keyword expansion candidate
      k = keyword expansion candidate (not tracked)
      I = ignored
      i = ignored (not tracked)
    '''
    kwt = getattr(repo, '_keywordkwt', None)
    wctx = repo[None]
    status = _status(ui, repo, wctx, kwt, *pats, **opts)
    if pats:
        cwd = repo.getcwd()
    else:
        cwd = ''
    files = []
    opts = pycompat.byteskwargs(opts)
    if not opts.get('unknown') or opts.get('all'):
        files = sorted(status.modified + status.added + status.clean)
    kwfiles = kwt.iskwfile(files, wctx)
    kwdeleted = kwt.iskwfile(status.deleted, wctx)
    kwunknown = kwt.iskwfile(status.unknown, wctx)
    if not opts.get('ignore') or opts.get('all'):
        showfiles = kwfiles, kwdeleted, kwunknown
    else:
        showfiles = [], [], []
    if opts.get('all') or opts.get('ignore'):
        showfiles += ([f for f in files if f not in kwfiles],
                      [f for f in status.unknown if f not in kwunknown])
    kwlabels = 'enabled deleted enabledunknown ignored ignoredunknown'.split()
    kwstates = zip(kwlabels, 'K!kIi', showfiles)
    fm = ui.formatter('kwfiles', opts)
    fmt = '%.0s%s\n'
    if opts.get('all') or ui.verbose:
        fmt = '%s %s\n'
    for kwstate, char, filenames in kwstates:
        label = 'kwfiles.' + kwstate
        for f in filenames:
            fm.startitem()
            fm.write('kwstatus path', fmt, char,
                     repo.pathto(f, cwd), label=label)
    fm.end()

@command('kwshrink',
         cmdutil.walkopts,
         _('hg kwshrink [OPTION]... [FILE]...'),
         inferrepo=True)
def shrink(ui, repo, *pats, **opts):
    '''revert expanded keywords in the working directory

    Must be run before changing/disabling active keywords.

    kwshrink refuses to run if given files contain local changes.
    '''
    # 3rd argument sets expansion to False
    _kwfwrite(ui, repo, False, *pats, **opts)

# monkeypatches

def kwpatchfile_init(orig, self, ui, gp, backend, store, eolmode=None):
    '''Monkeypatch/wrap patch.patchfile.__init__ to avoid
    rejects or conflicts due to expanded keywords in working dir.'''
    orig(self, ui, gp, backend, store, eolmode)
    kwt = getattr(getattr(backend, 'repo', None), '_keywordkwt', None)
    if kwt:
        # shrink keywords read from working dir
        self.lines = kwt.shrinklines(self.fname, self.lines)

def kwdiff(orig, repo, *args, **kwargs):
    '''Monkeypatch patch.diff to avoid expansion.'''
    kwt = getattr(repo, '_keywordkwt', None)
    if kwt:
        restrict = kwt.restrict
        kwt.restrict = True
    try:
        for chunk in orig(repo, *args, **kwargs):
            yield chunk
    finally:
        if kwt:
            kwt.restrict = restrict

def kwweb_skip(orig, web):
    '''Wraps webcommands.x turning off keyword expansion.'''
    kwt = getattr(web.repo, '_keywordkwt', None)
    if kwt:
        origmatch = kwt.match
        kwt.match = util.never
    try:
        for chunk in orig(web):
            yield chunk
    finally:
        if kwt:
            kwt.match = origmatch

def kw_amend(orig, ui, repo, old, extra, pats, opts):
    '''Wraps cmdutil.amend expanding keywords after amend.'''
    kwt = getattr(repo, '_keywordkwt', None)
    if kwt is None:
        return orig(ui, repo, old, extra, pats, opts)
    with repo.wlock():
        kwt.postcommit = True
        newid = orig(ui, repo, old, extra, pats, opts)
        if newid != old.node():
            ctx = repo[newid]
            kwt.restrict = True
            kwt.overwrite(ctx, ctx.files(), False, True)
            kwt.restrict = False
        return newid

def kw_copy(orig, ui, repo, pats, opts, rename=False):
    '''Wraps cmdutil.copy so that copy/rename destinations do not
    contain expanded keywords.
    Note that the source of a regular file destination may also be a
    symlink:
    hg cp sym x -> x is symlink
    cp sym x; hg cp -A sym x -> x is file (maybe expanded keywords)
    For the latter we have to follow the symlink to find out whether its
    target is configured for expansion and we therefore must unexpand the
    keywords in the destination.'''
    kwt = getattr(repo, '_keywordkwt', None)
    if kwt is None:
        return orig(ui, repo, pats, opts, rename)
    with repo.wlock():
        orig(ui, repo, pats, opts, rename)
        if opts.get('dry_run'):
            return
        wctx = repo[None]
        cwd = repo.getcwd()

        def haskwsource(dest):
            '''Returns true if dest is a regular file and configured for
            expansion or a symlink which points to a file configured for
            expansion. '''
            source = repo.dirstate.copied(dest)
            if 'l' in wctx.flags(source):
                source = pathutil.canonpath(repo.root, cwd,
                                            os.path.realpath(source))
            return kwt.match(source)

        candidates = [f for f in repo.dirstate.copies() if
                      'l' not in wctx.flags(f) and haskwsource(f)]
        kwt.overwrite(wctx, candidates, False, False)

def kw_dorecord(orig, ui, repo, commitfunc, *pats, **opts):
    '''Wraps record.dorecord expanding keywords after recording.'''
    kwt = getattr(repo, '_keywordkwt', None)
    if kwt is None:
        return orig(ui, repo, commitfunc, *pats, **opts)
    with repo.wlock():
        # record returns 0 even when nothing has changed
        # therefore compare nodes before and after
        kwt.postcommit = True
        ctx = repo['.']
        wstatus = ctx.status()
        ret = orig(ui, repo, commitfunc, *pats, **opts)
        recctx = repo['.']
        if ctx != recctx:
            modified, added = _preselect(wstatus, recctx.files())
            kwt.restrict = False
            kwt.overwrite(recctx, modified, False, True)
            kwt.overwrite(recctx, added, False, True, True)
            kwt.restrict = True
        return ret

def kwfilectx_cmp(orig, self, fctx):
    if fctx._customcmp:
        return fctx.cmp(self)
    kwt = getattr(self._repo, '_keywordkwt', None)
    if kwt is None:
        return orig(self, fctx)
    # keyword affects data size, comparing wdir and filelog size does
    # not make sense
    if (fctx._filenode is None and
        (self._repo._encodefilterpats or
         kwt.match(fctx.path()) and 'l' not in fctx.flags() or
         self.size() - 4 == fctx.size()) or
        self.size() == fctx.size()):
        return self._filelog.cmp(self._filenode, fctx.data())
    return True

def uisetup(ui):
    ''' Monkeypatches dispatch._parse to retrieve user command.
    Overrides file method to return kwfilelog instead of filelog
    if file matches user configuration.
    Wraps commit to overwrite configured files with updated
    keyword substitutions.
    Monkeypatches patch and webcommands.'''

    def kwdispatch_parse(orig, ui, args):
        '''Monkeypatch dispatch._parse to obtain running hg command.'''
        cmd, func, args, options, cmdoptions = orig(ui, args)
        kwtools['hgcmd'] = cmd
        return cmd, func, args, options, cmdoptions

    extensions.wrapfunction(dispatch, '_parse', kwdispatch_parse)

    extensions.wrapfunction(context.filectx, 'cmp', kwfilectx_cmp)
    extensions.wrapfunction(patch.patchfile, '__init__', kwpatchfile_init)
    extensions.wrapfunction(patch, 'diff', kwdiff)
    extensions.wrapfunction(cmdutil, 'amend', kw_amend)
    extensions.wrapfunction(cmdutil, 'copy', kw_copy)
    extensions.wrapfunction(cmdutil, 'dorecord', kw_dorecord)
    for c in nokwwebcommands.split():
        extensions.wrapfunction(webcommands, c, kwweb_skip)

def reposetup(ui, repo):
    '''Sets up repo as kwrepo for keyword substitution.'''

    try:
        if (not repo.local() or kwtools['hgcmd'] in nokwcommands.split()
            or '.hg' in util.splitpath(repo.root)
            or repo._url.startswith('bundle:')):
            return
    except AttributeError:
        pass

    inc, exc = [], ['.hg*']
    for pat, opt in ui.configitems('keyword'):
        if opt != 'ignore':
            inc.append(pat)
        else:
            exc.append(pat)
    if not inc:
        return

    kwt = kwtemplater(ui, repo, inc, exc)

    class kwrepo(repo.__class__):
        def file(self, f):
            if f[0] == '/':
                f = f[1:]
            return kwfilelog(self.svfs, kwt, f)

        def wread(self, filename):
            data = super(kwrepo, self).wread(filename)
            return kwt.wread(filename, data)

        def commit(self, *args, **opts):
            # use custom commitctx for user commands
            # other extensions can still wrap repo.commitctx directly
            self.commitctx = self.kwcommitctx
            try:
                return super(kwrepo, self).commit(*args, **opts)
            finally:
                del self.commitctx

        def kwcommitctx(self, ctx, error=False):
            n = super(kwrepo, self).commitctx(ctx, error)
            # no lock needed, only called from repo.commit() which already locks
            if not kwt.postcommit:
                restrict = kwt.restrict
                kwt.restrict = True
                kwt.overwrite(self[n], sorted(ctx.added() + ctx.modified()),
                              False, True)
                kwt.restrict = restrict
            return n

        def rollback(self, dryrun=False, force=False):
            with self.wlock():
                origrestrict = kwt.restrict
                try:
                    if not dryrun:
                        changed = self['.'].files()
                    ret = super(kwrepo, self).rollback(dryrun, force)
                    if not dryrun:
                        ctx = self['.']
                        modified, added = _preselect(ctx.status(), changed)
                        kwt.restrict = False
                        kwt.overwrite(ctx, modified, True, True)
                        kwt.overwrite(ctx, added, True, False)
                    return ret
                finally:
                    kwt.restrict = origrestrict

    repo.__class__ = kwrepo
    repo._keywordkwt = kwt
diff --git a/hgext/largefiles/remotestore.py b/hgext/largefiles/remotestore.py
@@ -1,128 +1,132 @@
# Copyright 2010-2011 Fog Creek Software
# Copyright 2010-2011 Unity Technologies
#
# This software may be used and distributed according to the terms of the
# GNU General Public License version 2 or any later version.

'''remote largefile store; the base class for wirestore'''
from __future__ import absolute_import

from mercurial.i18n import _

from mercurial import (
    error,
    util,
)

+from mercurial.utils import (
+    stringutil,
+)
+
from . import (
    basestore,
    lfutil,
    localstore,
)

urlerr = util.urlerr
urlreq = util.urlreq

class remotestore(basestore.basestore):
    '''a largefile store accessed over a network'''
    def __init__(self, ui, repo, url):
        super(remotestore, self).__init__(ui, repo, url)
        self._lstore = None
        if repo is not None:
            self._lstore = localstore.localstore(self.ui, self.repo, self.repo)

    def put(self, source, hash):
        if self.sendfile(source, hash):
            raise error.Abort(
                _('remotestore: could not put %s to remote store %s')
                % (source, util.hidepassword(self.url)))
        self.ui.debug(
            _('remotestore: put %s to remote store %s\n')
            % (source, util.hidepassword(self.url)))

    def exists(self, hashes):
        return dict((h, s == 0) for (h, s) in # dict-from-generator
                    self._stat(hashes).iteritems())

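exists() flattens the per-hash status codes from _stat() into booleans, counting only code 0 as present. A sketch with hypothetical hashes and codes (the nonzero meanings are backend-specific; _verifyfiles() below treats 1 as "contents differ"):

    stats = {'d0f9a7a1' * 5: 0, '8ba23cfe' * 5: 2}  # hypothetical _stat() result
    print(dict((h, s == 0) for h, s in stats.items()))
    # {'d0f9a7a1d0f9a7a1...': True, '8ba23cfe8ba23cfe...': False}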
    def sendfile(self, filename, hash):
        self.ui.debug('remotestore: sendfile(%s, %s)\n' % (filename, hash))
        try:
            with lfutil.httpsendfile(self.ui, filename) as fd:
                return self._put(hash, fd)
        except IOError as e:
            raise error.Abort(
                _('remotestore: could not open file %s: %s')
-                % (filename, util.forcebytestr(e)))
+                % (filename, stringutil.forcebytestr(e)))

    def _getfile(self, tmpfile, filename, hash):
        try:
            chunks = self._get(hash)
        except urlerr.httperror as e:
            # 401s get converted to error.Aborts; everything else is fine being
            # turned into a StoreError
            raise basestore.StoreError(filename, hash, self.url,
-                                       util.forcebytestr(e))
+                                       stringutil.forcebytestr(e))
        except urlerr.urlerror as e:
            # This usually indicates a connection problem, so don't
            # keep trying with the other files... they will probably
            # all fail too.
            raise error.Abort('%s: %s' %
                              (util.hidepassword(self.url), e.reason))
        except IOError as e:
            raise basestore.StoreError(filename, hash, self.url,
-                                       util.forcebytestr(e))
+                                       stringutil.forcebytestr(e))

        return lfutil.copyandhash(chunks, tmpfile)

    def _hashesavailablelocally(self, hashes):
        existslocallymap = self._lstore.exists(hashes)
        localhashes = [hash for hash in hashes if existslocallymap[hash]]
        return localhashes

    def _verifyfiles(self, contents, filestocheck):
        failed = False
        expectedhashes = [expectedhash
                          for cset, filename, expectedhash in filestocheck]
        localhashes = self._hashesavailablelocally(expectedhashes)
        stats = self._stat([expectedhash for expectedhash in expectedhashes
                            if expectedhash not in localhashes])

        for cset, filename, expectedhash in filestocheck:
            if expectedhash in localhashes:
                filetocheck = (cset, filename, expectedhash)
                verifyresult = self._lstore._verifyfiles(contents,
                                                         [filetocheck])
                if verifyresult:
                    failed = True
            else:
                stat = stats[expectedhash]
                if stat:
                    if stat == 1:
                        self.ui.warn(
                            _('changeset %s: %s: contents differ\n')
106 _('changeset %s: %s: contents differ\n')
103 % (cset, filename))
107 % (cset, filename))
104 failed = True
108 failed = True
105 elif stat == 2:
109 elif stat == 2:
106 self.ui.warn(
110 self.ui.warn(
107 _('changeset %s: %s missing\n')
111 _('changeset %s: %s missing\n')
108 % (cset, filename))
112 % (cset, filename))
109 failed = True
113 failed = True
110 else:
114 else:
111 raise RuntimeError('verify failed: unexpected response '
115 raise RuntimeError('verify failed: unexpected response '
112 'from statlfile (%r)' % stat)
116 'from statlfile (%r)' % stat)
113 return failed
117 return failed
114
118
115 def _put(self, hash, fd):
119 def _put(self, hash, fd):
116 '''Put file with the given hash in the remote store.'''
120 '''Put file with the given hash in the remote store.'''
117 raise NotImplementedError('abstract method')
121 raise NotImplementedError('abstract method')
118
122
119 def _get(self, hash):
123 def _get(self, hash):
120 '''Get an iterator for content with the given hash.'''
124 '''Get an iterator for content with the given hash.'''
121 raise NotImplementedError('abstract method')
125 raise NotImplementedError('abstract method')
122
126
123 def _stat(self, hashes):
127 def _stat(self, hashes):
124 '''Get information about availability of files specified by
128 '''Get information about availability of files specified by
125 hashes in the remote store. Return a dictionary mapping each
129 hashes in the remote store. Return a dictionary mapping each
126 hash to a return code, where 0 means the file is available and
130 hash to a return code, where 0 means the file is available and
127 any other value means it is not.'''
131 any other value means it is not.'''
128 raise NotImplementedError('abstract method')
132 raise NotImplementedError('abstract method')
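To make the abstract contract above concrete, here is a hypothetical in-memory subclass (memorystore and its dict backing are invented for illustration; the real network-backed implementation is largefiles' wirestore). It shows the semantics the base class relies on: _put returning a falsy value signals success to sendfile(), _get yields content chunks, and _stat maps each hash to 0 for available and 2 for missing, with 1 reserved for "contents differ", matching what exists() and _verifyfiles() check:

    class memorystore(remotestore):
        '''hypothetical dict-backed store, for illustration only'''
        def __init__(self, ui, repo, url):
            super(memorystore, self).__init__(ui, repo, url)
            self._blobs = {}

        def _put(self, hash, fd):
            self._blobs[hash] = fd.read()
            return 0                  # falsy: sendfile() treats this as success

        def _get(self, hash):
            if hash not in self._blobs:
                raise IOError('unknown hash %s' % hash)
            yield self._blobs[hash]   # an iterator of content chunks

        def _stat(self, hashes):
            # 0: available, 2: missing (1 means "contents differ")
            return dict((h, 0 if h in self._blobs else 2) for h in hashes)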
@@ -1,391 +1,395
1 # wrapper.py - methods wrapping core mercurial logic
1 # wrapper.py - methods wrapping core mercurial logic
2 #
2 #
3 # Copyright 2017 Facebook, Inc.
3 # Copyright 2017 Facebook, Inc.
4 #
4 #
5 # This software may be used and distributed according to the terms of the
5 # This software may be used and distributed according to the terms of the
6 # GNU General Public License version 2 or any later version.
6 # GNU General Public License version 2 or any later version.
7
7
8 from __future__ import absolute_import
8 from __future__ import absolute_import
9
9
10 import hashlib
10 import hashlib
11
11
12 from mercurial.i18n import _
12 from mercurial.i18n import _
13 from mercurial.node import bin, hex, nullid, short
13 from mercurial.node import bin, hex, nullid, short
14
14
15 from mercurial import (
15 from mercurial import (
16 error,
16 error,
17 filelog,
17 filelog,
18 revlog,
18 revlog,
19 util,
19 util,
20 )
20 )
21
21
22 from mercurial.utils import (
23 stringutil,
24 )
25
22 from ..largefiles import lfutil
26 from ..largefiles import lfutil
23
27
24 from . import (
28 from . import (
25 blobstore,
29 blobstore,
26 pointer,
30 pointer,
27 )
31 )
28
32
29 def supportedoutgoingversions(orig, repo):
33 def supportedoutgoingversions(orig, repo):
30 versions = orig(repo)
34 versions = orig(repo)
31 if 'lfs' in repo.requirements:
35 if 'lfs' in repo.requirements:
32 versions.discard('01')
36 versions.discard('01')
33 versions.discard('02')
37 versions.discard('02')
34 versions.add('03')
38 versions.add('03')
35 return versions
39 return versions
36
40
37 def allsupportedversions(orig, ui):
41 def allsupportedversions(orig, ui):
38 versions = orig(ui)
42 versions = orig(ui)
39 versions.add('03')
43 versions.add('03')
40 return versions
44 return versions
41
45
42 def _capabilities(orig, repo, proto):
46 def _capabilities(orig, repo, proto):
43 '''Wrap server command to announce lfs server capability'''
47 '''Wrap server command to announce lfs server capability'''
44 caps = orig(repo, proto)
48 caps = orig(repo, proto)
45 # XXX: change to 'lfs=serve' when separate git server isn't required?
49 # XXX: change to 'lfs=serve' when separate git server isn't required?
46 caps.append('lfs')
50 caps.append('lfs')
47 return caps
51 return caps
48
52
49 def bypasscheckhash(self, text):
53 def bypasscheckhash(self, text):
50 return False
54 return False
51
55
52 def readfromstore(self, text):
56 def readfromstore(self, text):
53 """Read filelog content from local blobstore transform for flagprocessor.
57 """Read filelog content from local blobstore transform for flagprocessor.
54
58
55 Default tranform for flagprocessor, returning contents from blobstore.
59 Default tranform for flagprocessor, returning contents from blobstore.
56 Returns a 2-typle (text, validatehash) where validatehash is True as the
60 Returns a 2-typle (text, validatehash) where validatehash is True as the
57 contents of the blobstore should be checked using checkhash.
61 contents of the blobstore should be checked using checkhash.
58 """
62 """
59 p = pointer.deserialize(text)
63 p = pointer.deserialize(text)
60 oid = p.oid()
64 oid = p.oid()
61 store = self.opener.lfslocalblobstore
65 store = self.opener.lfslocalblobstore
62 if not store.has(oid):
66 if not store.has(oid):
63 p.filename = self.filename
67 p.filename = self.filename
64 self.opener.lfsremoteblobstore.readbatch([p], store)
68 self.opener.lfsremoteblobstore.readbatch([p], store)
65
69
66 # The caller will validate the content
70 # The caller will validate the content
67 text = store.read(oid, verify=False)
71 text = store.read(oid, verify=False)
68
72
69 # pack hg filelog metadata
73 # pack hg filelog metadata
70 hgmeta = {}
74 hgmeta = {}
71 for k in p.keys():
75 for k in p.keys():
72 if k.startswith('x-hg-'):
76 if k.startswith('x-hg-'):
73 name = k[len('x-hg-'):]
77 name = k[len('x-hg-'):]
74 hgmeta[name] = p[k]
78 hgmeta[name] = p[k]
75 if hgmeta or text.startswith('\1\n'):
79 if hgmeta or text.startswith('\1\n'):
76 text = filelog.packmeta(hgmeta, text)
80 text = filelog.packmeta(hgmeta, text)
77
81
78 return (text, True)
82 return (text, True)
79
83
80 def writetostore(self, text):
84 def writetostore(self, text):
81 # hg filelog metadata (includes rename, etc)
85 # hg filelog metadata (includes rename, etc)
82 hgmeta, offset = filelog.parsemeta(text)
86 hgmeta, offset = filelog.parsemeta(text)
83 if offset and offset > 0:
87 if offset and offset > 0:
84 # lfs blob does not contain hg filelog metadata
88 # lfs blob does not contain hg filelog metadata
85 text = text[offset:]
89 text = text[offset:]
86
90
87 # git-lfs only supports sha256
91 # git-lfs only supports sha256
88 oid = hex(hashlib.sha256(text).digest())
92 oid = hex(hashlib.sha256(text).digest())
89 self.opener.lfslocalblobstore.write(oid, text)
93 self.opener.lfslocalblobstore.write(oid, text)
90
94
91 # replace contents with metadata
95 # replace contents with metadata
92 longoid = 'sha256:%s' % oid
96 longoid = 'sha256:%s' % oid
93 metadata = pointer.gitlfspointer(oid=longoid, size='%d' % len(text))
97 metadata = pointer.gitlfspointer(oid=longoid, size='%d' % len(text))
94
98
95 # by default, we expect the content to be binary. however, LFS could also
99 # by default, we expect the content to be binary. however, LFS could also
96 # be used for non-binary content. add a special entry for non-binary data.
100 # be used for non-binary content. add a special entry for non-binary data.
97 # this will be used by filectx.isbinary().
101 # this will be used by filectx.isbinary().
98 if not util.binary(text):
102 if not stringutil.binary(text):
99 # not hg filelog metadata (affecting commit hash), no "x-hg-" prefix
103 # not hg filelog metadata (affecting commit hash), no "x-hg-" prefix
100 metadata['x-is-binary'] = '0'
104 metadata['x-is-binary'] = '0'
101
105
102 # translate hg filelog metadata to lfs metadata with "x-hg-" prefix
106 # translate hg filelog metadata to lfs metadata with "x-hg-" prefix
103 if hgmeta is not None:
107 if hgmeta is not None:
104 for k, v in hgmeta.iteritems():
108 for k, v in hgmeta.iteritems():
105 metadata['x-hg-%s' % k] = v
109 metadata['x-hg-%s' % k] = v
106
110
107 rawtext = metadata.serialize()
111 rawtext = metadata.serialize()
108 return (rawtext, False)
112 return (rawtext, False)
109
113
110 def _islfs(rlog, node=None, rev=None):
114 def _islfs(rlog, node=None, rev=None):
111 if rev is None:
115 if rev is None:
112 if node is None:
116 if node is None:
113 # both None - likely working copy content where node is not ready
117 # both None - likely working copy content where node is not ready
114 return False
118 return False
115 rev = rlog.rev(node)
119 rev = rlog.rev(node)
116 else:
120 else:
117 node = rlog.node(rev)
121 node = rlog.node(rev)
118 if node == nullid:
122 if node == nullid:
119 return False
123 return False
120 flags = rlog.flags(rev)
124 flags = rlog.flags(rev)
121 return bool(flags & revlog.REVIDX_EXTSTORED)
125 return bool(flags & revlog.REVIDX_EXTSTORED)
122
126
123 def filelogaddrevision(orig, self, text, transaction, link, p1, p2,
127 def filelogaddrevision(orig, self, text, transaction, link, p1, p2,
124 cachedelta=None, node=None,
128 cachedelta=None, node=None,
125 flags=revlog.REVIDX_DEFAULT_FLAGS, **kwds):
129 flags=revlog.REVIDX_DEFAULT_FLAGS, **kwds):
126 textlen = len(text)
130 textlen = len(text)
127 # exclude hg rename meta from file size
131 # exclude hg rename meta from file size
128 meta, offset = filelog.parsemeta(text)
132 meta, offset = filelog.parsemeta(text)
129 if offset:
133 if offset:
130 textlen -= offset
134 textlen -= offset
131
135
132 lfstrack = self.opener.options['lfstrack']
136 lfstrack = self.opener.options['lfstrack']
133
137
134 if lfstrack(self.filename, textlen):
138 if lfstrack(self.filename, textlen):
135 flags |= revlog.REVIDX_EXTSTORED
139 flags |= revlog.REVIDX_EXTSTORED
136
140
137 return orig(self, text, transaction, link, p1, p2, cachedelta=cachedelta,
141 return orig(self, text, transaction, link, p1, p2, cachedelta=cachedelta,
138 node=node, flags=flags, **kwds)
142 node=node, flags=flags, **kwds)
139
143
140 def filelogrenamed(orig, self, node):
144 def filelogrenamed(orig, self, node):
141 if _islfs(self, node):
145 if _islfs(self, node):
142 rawtext = self.revision(node, raw=True)
146 rawtext = self.revision(node, raw=True)
143 if not rawtext:
147 if not rawtext:
144 return False
148 return False
145 metadata = pointer.deserialize(rawtext)
149 metadata = pointer.deserialize(rawtext)
146 if 'x-hg-copy' in metadata and 'x-hg-copyrev' in metadata:
150 if 'x-hg-copy' in metadata and 'x-hg-copyrev' in metadata:
147 return metadata['x-hg-copy'], bin(metadata['x-hg-copyrev'])
151 return metadata['x-hg-copy'], bin(metadata['x-hg-copyrev'])
148 else:
152 else:
149 return False
153 return False
150 return orig(self, node)
154 return orig(self, node)
151
155
152 def filelogsize(orig, self, rev):
156 def filelogsize(orig, self, rev):
153 if _islfs(self, rev=rev):
157 if _islfs(self, rev=rev):
154 # fast path: use lfs metadata to answer size
158 # fast path: use lfs metadata to answer size
155 rawtext = self.revision(rev, raw=True)
159 rawtext = self.revision(rev, raw=True)
156 metadata = pointer.deserialize(rawtext)
160 metadata = pointer.deserialize(rawtext)
157 return int(metadata['size'])
161 return int(metadata['size'])
158 return orig(self, rev)
162 return orig(self, rev)
159
163
160 def filectxcmp(orig, self, fctx):
164 def filectxcmp(orig, self, fctx):
161 """returns True if text is different than fctx"""
165 """returns True if text is different than fctx"""
162 # some fctx (e.g. hg-git) are not based on basefilectx and do not have islfs
166 # some fctx (e.g. hg-git) are not based on basefilectx and do not have islfs
163 if self.islfs() and getattr(fctx, 'islfs', lambda: False)():
167 if self.islfs() and getattr(fctx, 'islfs', lambda: False)():
164 # fast path: check LFS oid
168 # fast path: check LFS oid
165 p1 = pointer.deserialize(self.rawdata())
169 p1 = pointer.deserialize(self.rawdata())
166 p2 = pointer.deserialize(fctx.rawdata())
170 p2 = pointer.deserialize(fctx.rawdata())
167 return p1.oid() != p2.oid()
171 return p1.oid() != p2.oid()
168 return orig(self, fctx)
172 return orig(self, fctx)
169
173
170 def filectxisbinary(orig, self):
174 def filectxisbinary(orig, self):
171 if self.islfs():
175 if self.islfs():
172 # fast path: use lfs metadata to answer isbinary
176 # fast path: use lfs metadata to answer isbinary
173 metadata = pointer.deserialize(self.rawdata())
177 metadata = pointer.deserialize(self.rawdata())
174 # if lfs metadata says nothing, assume it's binary by default
178 # if lfs metadata says nothing, assume it's binary by default
175 return bool(int(metadata.get('x-is-binary', 1)))
179 return bool(int(metadata.get('x-is-binary', 1)))
176 return orig(self)
180 return orig(self)
177
181
178 def filectxislfs(self):
182 def filectxislfs(self):
179 return _islfs(self.filelog(), self.filenode())
183 return _islfs(self.filelog(), self.filenode())
180
184
181 def _updatecatformatter(orig, fm, ctx, matcher, path, decode):
185 def _updatecatformatter(orig, fm, ctx, matcher, path, decode):
182 orig(fm, ctx, matcher, path, decode)
186 orig(fm, ctx, matcher, path, decode)
183 fm.data(rawdata=ctx[path].rawdata())
187 fm.data(rawdata=ctx[path].rawdata())
184
188
185 def convertsink(orig, sink):
189 def convertsink(orig, sink):
186 sink = orig(sink)
190 sink = orig(sink)
187 if sink.repotype == 'hg':
191 if sink.repotype == 'hg':
188 class lfssink(sink.__class__):
192 class lfssink(sink.__class__):
189 def putcommit(self, files, copies, parents, commit, source, revmap,
193 def putcommit(self, files, copies, parents, commit, source, revmap,
190 full, cleanp2):
194 full, cleanp2):
191 pc = super(lfssink, self).putcommit
195 pc = super(lfssink, self).putcommit
192 node = pc(files, copies, parents, commit, source, revmap, full,
196 node = pc(files, copies, parents, commit, source, revmap, full,
193 cleanp2)
197 cleanp2)
194
198
195 if 'lfs' not in self.repo.requirements:
199 if 'lfs' not in self.repo.requirements:
196 ctx = self.repo[node]
200 ctx = self.repo[node]
197
201
198 # The file list may contain removed files, so check for
202 # The file list may contain removed files, so check for
199 # membership before assuming it is in the context.
203 # membership before assuming it is in the context.
200 if any(f in ctx and ctx[f].islfs() for f, n in files):
204 if any(f in ctx and ctx[f].islfs() for f, n in files):
201 self.repo.requirements.add('lfs')
205 self.repo.requirements.add('lfs')
202 self.repo._writerequirements()
206 self.repo._writerequirements()
203
207
204 # Permanently enable lfs locally
208 # Permanently enable lfs locally
205 self.repo.vfs.append(
209 self.repo.vfs.append(
206 'hgrc', util.tonativeeol('\n[extensions]\nlfs=\n'))
210 'hgrc', util.tonativeeol('\n[extensions]\nlfs=\n'))
207
211
208 return node
212 return node
209
213
210 sink.__class__ = lfssink
214 sink.__class__ = lfssink
211
215
212 return sink
216 return sink
213
217
214 def vfsinit(orig, self, othervfs):
218 def vfsinit(orig, self, othervfs):
215 orig(self, othervfs)
219 orig(self, othervfs)
216 # copy lfs related options
220 # copy lfs related options
217 for k, v in othervfs.options.items():
221 for k, v in othervfs.options.items():
218 if k.startswith('lfs'):
222 if k.startswith('lfs'):
219 self.options[k] = v
223 self.options[k] = v
220 # also copy lfs blobstores. note: this can run before reposetup, so lfs
224 # also copy lfs blobstores. note: this can run before reposetup, so lfs
221 # blobstore attributes are not always ready at this time.
225 # blobstore attributes are not always ready at this time.
222 for name in ['lfslocalblobstore', 'lfsremoteblobstore']:
226 for name in ['lfslocalblobstore', 'lfsremoteblobstore']:
223 if util.safehasattr(othervfs, name):
227 if util.safehasattr(othervfs, name):
224 setattr(self, name, getattr(othervfs, name))
228 setattr(self, name, getattr(othervfs, name))
225
229
226 def hgclone(orig, ui, opts, *args, **kwargs):
230 def hgclone(orig, ui, opts, *args, **kwargs):
227 result = orig(ui, opts, *args, **kwargs)
231 result = orig(ui, opts, *args, **kwargs)
228
232
229 if result is not None:
233 if result is not None:
230 sourcerepo, destrepo = result
234 sourcerepo, destrepo = result
231 repo = destrepo.local()
235 repo = destrepo.local()
232
236
233 # When cloning to a remote repo (like through SSH), no repo is available
237 # When cloning to a remote repo (like through SSH), no repo is available
234 # from the peer. Therefore the hgrc can't be updated.
238 # from the peer. Therefore the hgrc can't be updated.
235 if not repo:
239 if not repo:
236 return result
240 return result
237
241
238 # If lfs is required for this repo, permanently enable it locally
242 # If lfs is required for this repo, permanently enable it locally
239 if 'lfs' in repo.requirements:
243 if 'lfs' in repo.requirements:
240 repo.vfs.append('hgrc',
244 repo.vfs.append('hgrc',
241 util.tonativeeol('\n[extensions]\nlfs=\n'))
245 util.tonativeeol('\n[extensions]\nlfs=\n'))
242
246
243 return result
247 return result
244
248
245 def hgpostshare(orig, sourcerepo, destrepo, bookmarks=True, defaultpath=None):
249 def hgpostshare(orig, sourcerepo, destrepo, bookmarks=True, defaultpath=None):
246 orig(sourcerepo, destrepo, bookmarks, defaultpath)
250 orig(sourcerepo, destrepo, bookmarks, defaultpath)
247
251
248 # If lfs is required for this repo, permanently enable it locally
252 # If lfs is required for this repo, permanently enable it locally
249 if 'lfs' in destrepo.requirements:
253 if 'lfs' in destrepo.requirements:
250 destrepo.vfs.append('hgrc', util.tonativeeol('\n[extensions]\nlfs=\n'))
254 destrepo.vfs.append('hgrc', util.tonativeeol('\n[extensions]\nlfs=\n'))
251
255
252 def _prefetchfiles(repo, ctx, files):
256 def _prefetchfiles(repo, ctx, files):
253 """Ensure that required LFS blobs are present, fetching them as a group if
257 """Ensure that required LFS blobs are present, fetching them as a group if
254 needed."""
258 needed."""
255 pointers = []
259 pointers = []
256 localstore = repo.svfs.lfslocalblobstore
260 localstore = repo.svfs.lfslocalblobstore
257
261
258 for f in files:
262 for f in files:
259 p = pointerfromctx(ctx, f)
263 p = pointerfromctx(ctx, f)
260 if p and not localstore.has(p.oid()):
264 if p and not localstore.has(p.oid()):
261 p.filename = f
265 p.filename = f
262 pointers.append(p)
266 pointers.append(p)
263
267
264 if pointers:
268 if pointers:
265 repo.svfs.lfsremoteblobstore.readbatch(pointers, localstore)
269 repo.svfs.lfsremoteblobstore.readbatch(pointers, localstore)
266
270
267 def _canskipupload(repo):
271 def _canskipupload(repo):
268 # if remotestore is a null store, upload is a no-op and can be skipped
272 # if remotestore is a null store, upload is a no-op and can be skipped
269 return isinstance(repo.svfs.lfsremoteblobstore, blobstore._nullremote)
273 return isinstance(repo.svfs.lfsremoteblobstore, blobstore._nullremote)
270
274
271 def candownload(repo):
275 def candownload(repo):
272 # if remotestore is a null store, downloads will lead to nothing
276 # if remotestore is a null store, downloads will lead to nothing
273 return not isinstance(repo.svfs.lfsremoteblobstore, blobstore._nullremote)
277 return not isinstance(repo.svfs.lfsremoteblobstore, blobstore._nullremote)
274
278
275 def uploadblobsfromrevs(repo, revs):
279 def uploadblobsfromrevs(repo, revs):
276 '''upload lfs blobs introduced by revs
280 '''upload lfs blobs introduced by revs
277
281
278 Note: also used by other extensions, e.g. infinitepush. Avoid renaming.
282 Note: also used by other extensions, e.g. infinitepush. Avoid renaming.
279 '''
283 '''
280 if _canskipupload(repo):
284 if _canskipupload(repo):
281 return
285 return
282 pointers = extractpointers(repo, revs)
286 pointers = extractpointers(repo, revs)
283 uploadblobs(repo, pointers)
287 uploadblobs(repo, pointers)
284
288
285 def prepush(pushop):
289 def prepush(pushop):
286 """Prepush hook.
290 """Prepush hook.
287
291
288 Read through the revisions to push, looking for filelog entries that can be
292 Read through the revisions to push, looking for filelog entries that can be
289 deserialized into metadata so that we can block the push on their upload to
293 deserialized into metadata so that we can block the push on their upload to
290 the remote blobstore.
294 the remote blobstore.
291 """
295 """
292 return uploadblobsfromrevs(pushop.repo, pushop.outgoing.missing)
296 return uploadblobsfromrevs(pushop.repo, pushop.outgoing.missing)
293
297
294 def push(orig, repo, remote, *args, **kwargs):
298 def push(orig, repo, remote, *args, **kwargs):
295 """bail on push if the extension isn't enabled on remote when needed"""
299 """bail on push if the extension isn't enabled on remote when needed"""
296 if 'lfs' in repo.requirements:
300 if 'lfs' in repo.requirements:
297 # If the remote peer is for a local repo, the requirement tests in the
301 # If the remote peer is for a local repo, the requirement tests in the
298 # base class method enforce lfs support. Otherwise, some revisions in
302 # base class method enforce lfs support. Otherwise, some revisions in
299 # this repo use lfs, and the remote repo needs the extension loaded.
303 # this repo use lfs, and the remote repo needs the extension loaded.
300 if not remote.local() and not remote.capable('lfs'):
304 if not remote.local() and not remote.capable('lfs'):
301 # This is a copy of the message in exchange.push() when requirements
305 # This is a copy of the message in exchange.push() when requirements
302 # are missing between local repos.
306 # are missing between local repos.
303 m = _("required features are not supported in the destination: %s")
307 m = _("required features are not supported in the destination: %s")
304 raise error.Abort(m % 'lfs',
308 raise error.Abort(m % 'lfs',
305 hint=_('enable the lfs extension on the server'))
309 hint=_('enable the lfs extension on the server'))
306 return orig(repo, remote, *args, **kwargs)
310 return orig(repo, remote, *args, **kwargs)
307
311
308 def writenewbundle(orig, ui, repo, source, filename, bundletype, outgoing,
312 def writenewbundle(orig, ui, repo, source, filename, bundletype, outgoing,
309 *args, **kwargs):
313 *args, **kwargs):
310 """upload LFS blobs added by outgoing revisions on 'hg bundle'"""
314 """upload LFS blobs added by outgoing revisions on 'hg bundle'"""
311 uploadblobsfromrevs(repo, outgoing.missing)
315 uploadblobsfromrevs(repo, outgoing.missing)
312 return orig(ui, repo, source, filename, bundletype, outgoing, *args,
316 return orig(ui, repo, source, filename, bundletype, outgoing, *args,
313 **kwargs)
317 **kwargs)
314
318
315 def extractpointers(repo, revs):
319 def extractpointers(repo, revs):
316 """return a list of lfs pointers added by given revs"""
320 """return a list of lfs pointers added by given revs"""
317 repo.ui.debug('lfs: computing set of blobs to upload\n')
321 repo.ui.debug('lfs: computing set of blobs to upload\n')
318 pointers = {}
322 pointers = {}
319 for r in revs:
323 for r in revs:
320 ctx = repo[r]
324 ctx = repo[r]
321 for p in pointersfromctx(ctx).values():
325 for p in pointersfromctx(ctx).values():
322 pointers[p.oid()] = p
326 pointers[p.oid()] = p
323 return sorted(pointers.values())
327 return sorted(pointers.values())
324
328
325 def pointerfromctx(ctx, f, removed=False):
329 def pointerfromctx(ctx, f, removed=False):
326 """return a pointer for the named file from the given changectx, or None if
330 """return a pointer for the named file from the given changectx, or None if
327 the file isn't LFS.
331 the file isn't LFS.
328
332
329 Optionally, the pointer for a file deleted from the context can be returned.
333 Optionally, the pointer for a file deleted from the context can be returned.
330 Since no such pointer is actually stored, and to distinguish from a non-LFS
334 Since no such pointer is actually stored, and to distinguish from a non-LFS
331 file, this pointer is represented by an empty dict.
335 file, this pointer is represented by an empty dict.
332 """
336 """
333 _ctx = ctx
337 _ctx = ctx
334 if f not in ctx:
338 if f not in ctx:
335 if not removed:
339 if not removed:
336 return None
340 return None
337 if f in ctx.p1():
341 if f in ctx.p1():
338 _ctx = ctx.p1()
342 _ctx = ctx.p1()
339 elif f in ctx.p2():
343 elif f in ctx.p2():
340 _ctx = ctx.p2()
344 _ctx = ctx.p2()
341 else:
345 else:
342 return None
346 return None
343 fctx = _ctx[f]
347 fctx = _ctx[f]
344 if not _islfs(fctx.filelog(), fctx.filenode()):
348 if not _islfs(fctx.filelog(), fctx.filenode()):
345 return None
349 return None
346 try:
350 try:
347 p = pointer.deserialize(fctx.rawdata())
351 p = pointer.deserialize(fctx.rawdata())
348 if ctx == _ctx:
352 if ctx == _ctx:
349 return p
353 return p
350 return {}
354 return {}
351 except pointer.InvalidPointer as ex:
355 except pointer.InvalidPointer as ex:
352 raise error.Abort(_('lfs: corrupted pointer (%s@%s): %s\n')
356 raise error.Abort(_('lfs: corrupted pointer (%s@%s): %s\n')
353 % (f, short(_ctx.node()), ex))
357 % (f, short(_ctx.node()), ex))
354
358
355 def pointersfromctx(ctx, removed=False):
359 def pointersfromctx(ctx, removed=False):
356 """return a dict {path: pointer} for given single changectx.
360 """return a dict {path: pointer} for given single changectx.
357
361
358 If ``removed`` == True and the LFS file was removed from ``ctx``, the value
362 If ``removed`` == True and the LFS file was removed from ``ctx``, the value
359 stored for the path is an empty dict.
363 stored for the path is an empty dict.
360 """
364 """
361 result = {}
365 result = {}
362 for f in ctx.files():
366 for f in ctx.files():
363 p = pointerfromctx(ctx, f, removed=removed)
367 p = pointerfromctx(ctx, f, removed=removed)
364 if p is not None:
368 if p is not None:
365 result[f] = p
369 result[f] = p
366 return result
370 return result
367
371
368 def uploadblobs(repo, pointers):
372 def uploadblobs(repo, pointers):
369 """upload given pointers from local blobstore"""
373 """upload given pointers from local blobstore"""
370 if not pointers:
374 if not pointers:
371 return
375 return
372
376
373 remoteblob = repo.svfs.lfsremoteblobstore
377 remoteblob = repo.svfs.lfsremoteblobstore
374 remoteblob.writebatch(pointers, repo.svfs.lfslocalblobstore)
378 remoteblob.writebatch(pointers, repo.svfs.lfslocalblobstore)
375
379
376 def upgradefinishdatamigration(orig, ui, srcrepo, dstrepo, requirements):
380 def upgradefinishdatamigration(orig, ui, srcrepo, dstrepo, requirements):
377 orig(ui, srcrepo, dstrepo, requirements)
381 orig(ui, srcrepo, dstrepo, requirements)
378
382
379 srclfsvfs = srcrepo.svfs.lfslocalblobstore.vfs
383 srclfsvfs = srcrepo.svfs.lfslocalblobstore.vfs
380 dstlfsvfs = dstrepo.svfs.lfslocalblobstore.vfs
384 dstlfsvfs = dstrepo.svfs.lfslocalblobstore.vfs
381
385
382 for dirpath, dirs, files in srclfsvfs.walk():
386 for dirpath, dirs, files in srclfsvfs.walk():
383 for oid in files:
387 for oid in files:
384 ui.write(_('copying lfs blob %s\n') % oid)
388 ui.write(_('copying lfs blob %s\n') % oid)
385 lfutil.link(srclfsvfs.join(oid), dstlfsvfs.join(oid))
389 lfutil.link(srclfsvfs.join(oid), dstlfsvfs.join(oid))
386
390
387 def upgraderequirements(orig, repo):
391 def upgraderequirements(orig, repo):
388 reqs = orig(repo)
392 reqs = orig(repo)
389 if 'lfs' in repo.requirements:
393 if 'lfs' in repo.requirements:
390 reqs.add('lfs')
394 reqs.add('lfs')
391 return reqs
395 return reqs
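A quick sketch of the pointer payload that writetostore() records in the filelog in place of the real content. The on-disk shape below follows the published git-lfs pointer format, and the sha256 oid computation mirrors the one in writetostore(); serialization in the extension itself goes through pointer.gitlfspointer rather than this hand-rolled string:

    import hashlib

    text = b'some large file content'
    oid = hashlib.sha256(text).hexdigest()
    pointer = ('version https://git-lfs.github.com/spec/v1\n'
               'oid sha256:%s\n'
               'size %d\n') % (oid, len(text))
    print(pointer)
    # version https://git-lfs.github.com/spec/v1
    # oid sha256:<64 hex digits of the content hash>
    # size 23

The extra 'x-hg-*' keys (rename metadata) and the 'x-is-binary' hint seen above ride along in the same key-value format.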
@@ -1,3656 +1,3659
1 # mq.py - patch queues for mercurial
1 # mq.py - patch queues for mercurial
2 #
2 #
3 # Copyright 2005, 2006 Chris Mason <mason@suse.com>
3 # Copyright 2005, 2006 Chris Mason <mason@suse.com>
4 #
4 #
5 # This software may be used and distributed according to the terms of the
5 # This software may be used and distributed according to the terms of the
6 # GNU General Public License version 2 or any later version.
6 # GNU General Public License version 2 or any later version.
7
7
8 '''manage a stack of patches
8 '''manage a stack of patches
9
9
10 This extension lets you work with a stack of patches in a Mercurial
10 This extension lets you work with a stack of patches in a Mercurial
11 repository. It manages two stacks of patches - all known patches, and
11 repository. It manages two stacks of patches - all known patches, and
12 applied patches (a subset of known patches).
12 applied patches (a subset of known patches).
13
13
14 Known patches are represented as patch files in the .hg/patches
14 Known patches are represented as patch files in the .hg/patches
15 directory. Applied patches are both patch files and changesets.
15 directory. Applied patches are both patch files and changesets.
16
16
17 Common tasks (use :hg:`help COMMAND` for more details)::
17 Common tasks (use :hg:`help COMMAND` for more details)::
18
18
19 create new patch qnew
19 create new patch qnew
20 import existing patch qimport
20 import existing patch qimport
21
21
22 print patch series qseries
22 print patch series qseries
23 print applied patches qapplied
23 print applied patches qapplied
24
24
25 add known patch to applied stack qpush
25 add known patch to applied stack qpush
26 remove patch from applied stack qpop
26 remove patch from applied stack qpop
27 refresh contents of top applied patch qrefresh
27 refresh contents of top applied patch qrefresh
28
28
29 By default, mq will automatically use git patches when required to
29 By default, mq will automatically use git patches when required to
30 avoid losing file mode changes, copy records, binary files or empty
30 avoid losing file mode changes, copy records, binary files or empty
31 file creations or deletions. This behavior can be configured with::
31 file creations or deletions. This behavior can be configured with::
32
32
33 [mq]
33 [mq]
34 git = auto/keep/yes/no
34 git = auto/keep/yes/no
35
35
36 If set to 'keep', mq will obey the [diff] section configuration while
36 If set to 'keep', mq will obey the [diff] section configuration while
37 preserving existing git patches upon qrefresh. If set to 'yes' or
37 preserving existing git patches upon qrefresh. If set to 'yes' or
38 'no', mq will override the [diff] section and always generate git or
38 'no', mq will override the [diff] section and always generate git or
39 regular patches, possibly losing data in the second case.
39 regular patches, possibly losing data in the second case.
40
40
41 It may be desirable for mq changesets to be kept in the secret phase (see
41 It may be desirable for mq changesets to be kept in the secret phase (see
42 :hg:`help phases`), which can be enabled with the following setting::
42 :hg:`help phases`), which can be enabled with the following setting::
43
43
44 [mq]
44 [mq]
45 secret = True
45 secret = True
46
46
47 You will by default be managing a patch queue named "patches". You can
47 You will by default be managing a patch queue named "patches". You can
48 create other, independent patch queues with the :hg:`qqueue` command.
48 create other, independent patch queues with the :hg:`qqueue` command.
49
49
50 If the working directory contains uncommitted files, qpush, qpop and
50 If the working directory contains uncommitted files, qpush, qpop and
51 qgoto abort immediately. If -f/--force is used, the changes are
51 qgoto abort immediately. If -f/--force is used, the changes are
52 discarded. Setting::
52 discarded. Setting::
53
53
54 [mq]
54 [mq]
55 keepchanges = True
55 keepchanges = True
56
56
57 makes them behave as if --keep-changes were passed, and non-conflicting
57 makes them behave as if --keep-changes were passed, and non-conflicting
58 local changes will be tolerated and preserved. If incompatible options
58 local changes will be tolerated and preserved. If incompatible options
59 such as -f/--force or --exact are passed, this setting is ignored.
59 such as -f/--force or --exact are passed, this setting is ignored.
60
60
61 This extension used to provide a strip command. This command now lives
61 This extension used to provide a strip command. This command now lives
62 in the strip extension.
62 in the strip extension.
63 '''
63 '''
64
64
65 from __future__ import absolute_import, print_function
65 from __future__ import absolute_import, print_function
66
66
67 import errno
67 import errno
68 import os
68 import os
69 import re
69 import re
70 import shutil
70 import shutil
71 from mercurial.i18n import _
71 from mercurial.i18n import _
72 from mercurial.node import (
72 from mercurial.node import (
73 bin,
73 bin,
74 hex,
74 hex,
75 nullid,
75 nullid,
76 nullrev,
76 nullrev,
77 short,
77 short,
78 )
78 )
79 from mercurial import (
79 from mercurial import (
80 cmdutil,
80 cmdutil,
81 commands,
81 commands,
82 dirstateguard,
82 dirstateguard,
83 encoding,
83 encoding,
84 error,
84 error,
85 extensions,
85 extensions,
86 hg,
86 hg,
87 localrepo,
87 localrepo,
88 lock as lockmod,
88 lock as lockmod,
89 logcmdutil,
89 logcmdutil,
90 patch as patchmod,
90 patch as patchmod,
91 phases,
91 phases,
92 pycompat,
92 pycompat,
93 registrar,
93 registrar,
94 revsetlang,
94 revsetlang,
95 scmutil,
95 scmutil,
96 smartset,
96 smartset,
97 subrepoutil,
97 subrepoutil,
98 util,
98 util,
99 vfs as vfsmod,
99 vfs as vfsmod,
100 )
100 )
101 from mercurial.utils import dateutil
101 from mercurial.utils import (
102 dateutil,
103 stringutil,
104 )
102
105
103 release = lockmod.release
106 release = lockmod.release
104 seriesopts = [('s', 'summary', None, _('print first line of patch header'))]
107 seriesopts = [('s', 'summary', None, _('print first line of patch header'))]
105
108
106 cmdtable = {}
109 cmdtable = {}
107 command = registrar.command(cmdtable)
110 command = registrar.command(cmdtable)
108 # Note for extension authors: ONLY specify testedwith = 'ships-with-hg-core' for
111 # Note for extension authors: ONLY specify testedwith = 'ships-with-hg-core' for
109 # extensions which SHIP WITH MERCURIAL. Non-mainline extensions should
112 # extensions which SHIP WITH MERCURIAL. Non-mainline extensions should
110 # be specifying the version(s) of Mercurial they are tested with, or
113 # be specifying the version(s) of Mercurial they are tested with, or
111 # leave the attribute unspecified.
114 # leave the attribute unspecified.
112 testedwith = 'ships-with-hg-core'
115 testedwith = 'ships-with-hg-core'
113
116
114 configtable = {}
117 configtable = {}
115 configitem = registrar.configitem(configtable)
118 configitem = registrar.configitem(configtable)
116
119
117 configitem('mq', 'git',
120 configitem('mq', 'git',
118 default='auto',
121 default='auto',
119 )
122 )
120 configitem('mq', 'keepchanges',
123 configitem('mq', 'keepchanges',
121 default=False,
124 default=False,
122 )
125 )
123 configitem('mq', 'plain',
126 configitem('mq', 'plain',
124 default=False,
127 default=False,
125 )
128 )
126 configitem('mq', 'secret',
129 configitem('mq', 'secret',
127 default=False,
130 default=False,
128 )
131 )
129
132
130 # force load strip extension formerly included in mq and import some utility
133 # force load strip extension formerly included in mq and import some utility
131 try:
134 try:
132 stripext = extensions.find('strip')
135 stripext = extensions.find('strip')
133 except KeyError:
136 except KeyError:
134 # note: load is lazy so we could avoid the try-except,
137 # note: load is lazy so we could avoid the try-except,
135 # but I (marmoute) prefer this explicit code.
138 # but I (marmoute) prefer this explicit code.
136 class dummyui(object):
139 class dummyui(object):
137 def debug(self, msg):
140 def debug(self, msg):
138 pass
141 pass
139 stripext = extensions.load(dummyui(), 'strip', '')
142 stripext = extensions.load(dummyui(), 'strip', '')
140
143
141 strip = stripext.strip
144 strip = stripext.strip
142 checksubstate = stripext.checksubstate
145 checksubstate = stripext.checksubstate
143 checklocalchanges = stripext.checklocalchanges
146 checklocalchanges = stripext.checklocalchanges
144
147
145
148
146 # Patch names look like Unix file names.
149 # Patch names look like Unix file names.
147 # They must be joinable with the queue directory and result in the patch path.
150 # They must be joinable with the queue directory and result in the patch path.
148 normname = util.normpath
151 normname = util.normpath
149
152
150 class statusentry(object):
153 class statusentry(object):
151 def __init__(self, node, name):
154 def __init__(self, node, name):
152 self.node, self.name = node, name
155 self.node, self.name = node, name
153
156
154 def __bytes__(self):
157 def __bytes__(self):
155 return hex(self.node) + ':' + self.name
158 return hex(self.node) + ':' + self.name
156
159
157 __str__ = encoding.strmethod(__bytes__)
160 __str__ = encoding.strmethod(__bytes__)
158 __repr__ = encoding.strmethod(__bytes__)
161 __repr__ = encoding.strmethod(__bytes__)
159
162
160 # The order of the headers in 'hg export' HG patches:
163 # The order of the headers in 'hg export' HG patches:
161 HGHEADERS = [
164 HGHEADERS = [
162 # '# HG changeset patch',
165 # '# HG changeset patch',
163 '# User ',
166 '# User ',
164 '# Date ',
167 '# Date ',
165 '# ',
168 '# ',
166 '# Branch ',
169 '# Branch ',
167 '# Node ID ',
170 '# Node ID ',
168 '# Parent ', # can occur twice for merges - but that is not relevant for mq
171 '# Parent ', # can occur twice for merges - but that is not relevant for mq
169 ]
172 ]
170 # The order of headers in plain 'mail style' patches:
173 # The order of headers in plain 'mail style' patches:
171 PLAINHEADERS = {
174 PLAINHEADERS = {
172 'from': 0,
175 'from': 0,
173 'date': 1,
176 'date': 1,
174 'subject': 2,
177 'subject': 2,
175 }
178 }
176
179
177 def inserthgheader(lines, header, value):
180 def inserthgheader(lines, header, value):
178 """Assuming lines contains a HG patch header, add a header line with value.
181 """Assuming lines contains a HG patch header, add a header line with value.
179 >>> try: inserthgheader([], b'# Date ', b'z')
182 >>> try: inserthgheader([], b'# Date ', b'z')
180 ... except ValueError as inst: print("oops")
183 ... except ValueError as inst: print("oops")
181 oops
184 oops
182 >>> inserthgheader([b'# HG changeset patch'], b'# Date ', b'z')
185 >>> inserthgheader([b'# HG changeset patch'], b'# Date ', b'z')
183 ['# HG changeset patch', '# Date z']
186 ['# HG changeset patch', '# Date z']
184 >>> inserthgheader([b'# HG changeset patch', b''], b'# Date ', b'z')
187 >>> inserthgheader([b'# HG changeset patch', b''], b'# Date ', b'z')
185 ['# HG changeset patch', '# Date z', '']
188 ['# HG changeset patch', '# Date z', '']
186 >>> inserthgheader([b'# HG changeset patch', b'# User y'], b'# Date ', b'z')
189 >>> inserthgheader([b'# HG changeset patch', b'# User y'], b'# Date ', b'z')
187 ['# HG changeset patch', '# User y', '# Date z']
190 ['# HG changeset patch', '# User y', '# Date z']
188 >>> inserthgheader([b'# HG changeset patch', b'# Date x', b'# User y'],
191 >>> inserthgheader([b'# HG changeset patch', b'# Date x', b'# User y'],
189 ... b'# User ', b'z')
192 ... b'# User ', b'z')
190 ['# HG changeset patch', '# Date x', '# User z']
193 ['# HG changeset patch', '# Date x', '# User z']
191 >>> inserthgheader([b'# HG changeset patch', b'# Date y'], b'# Date ', b'z')
194 >>> inserthgheader([b'# HG changeset patch', b'# Date y'], b'# Date ', b'z')
192 ['# HG changeset patch', '# Date z']
195 ['# HG changeset patch', '# Date z']
193 >>> inserthgheader([b'# HG changeset patch', b'', b'# Date y'],
196 >>> inserthgheader([b'# HG changeset patch', b'', b'# Date y'],
194 ... b'# Date ', b'z')
197 ... b'# Date ', b'z')
195 ['# HG changeset patch', '# Date z', '', '# Date y']
198 ['# HG changeset patch', '# Date z', '', '# Date y']
196 >>> inserthgheader([b'# HG changeset patch', b'# Parent y'],
199 >>> inserthgheader([b'# HG changeset patch', b'# Parent y'],
197 ... b'# Date ', b'z')
200 ... b'# Date ', b'z')
198 ['# HG changeset patch', '# Date z', '# Parent y']
201 ['# HG changeset patch', '# Date z', '# Parent y']
199 """
202 """
200 start = lines.index('# HG changeset patch') + 1
203 start = lines.index('# HG changeset patch') + 1
201 newindex = HGHEADERS.index(header)
204 newindex = HGHEADERS.index(header)
202 bestpos = len(lines)
205 bestpos = len(lines)
203 for i in range(start, len(lines)):
206 for i in range(start, len(lines)):
204 line = lines[i]
207 line = lines[i]
205 if not line.startswith('# '):
208 if not line.startswith('# '):
206 bestpos = min(bestpos, i)
209 bestpos = min(bestpos, i)
207 break
210 break
208 for lineindex, h in enumerate(HGHEADERS):
211 for lineindex, h in enumerate(HGHEADERS):
209 if line.startswith(h):
212 if line.startswith(h):
210 if lineindex == newindex:
213 if lineindex == newindex:
211 lines[i] = header + value
214 lines[i] = header + value
212 return lines
215 return lines
213 if lineindex > newindex:
216 if lineindex > newindex:
214 bestpos = min(bestpos, i)
217 bestpos = min(bestpos, i)
215 break # next line
218 break # next line
216 lines.insert(bestpos, header + value)
219 lines.insert(bestpos, header + value)
217 return lines
220 return lines
218
221
219 def insertplainheader(lines, header, value):
222 def insertplainheader(lines, header, value):
220 """For lines containing a plain patch header, add a header line with value.
223 """For lines containing a plain patch header, add a header line with value.
221 >>> insertplainheader([], b'Date', b'z')
224 >>> insertplainheader([], b'Date', b'z')
222 ['Date: z']
225 ['Date: z']
223 >>> insertplainheader([b''], b'Date', b'z')
226 >>> insertplainheader([b''], b'Date', b'z')
224 ['Date: z', '']
227 ['Date: z', '']
225 >>> insertplainheader([b'x'], b'Date', b'z')
228 >>> insertplainheader([b'x'], b'Date', b'z')
226 ['Date: z', '', 'x']
229 ['Date: z', '', 'x']
227 >>> insertplainheader([b'From: y', b'x'], b'Date', b'z')
230 >>> insertplainheader([b'From: y', b'x'], b'Date', b'z')
228 ['From: y', 'Date: z', '', 'x']
231 ['From: y', 'Date: z', '', 'x']
229 >>> insertplainheader([b' date : x', b' from : y', b''], b'From', b'z')
232 >>> insertplainheader([b' date : x', b' from : y', b''], b'From', b'z')
230 [' date : x', 'From: z', '']
233 [' date : x', 'From: z', '']
231 >>> insertplainheader([b'', b'Date: y'], b'Date', b'z')
234 >>> insertplainheader([b'', b'Date: y'], b'Date', b'z')
232 ['Date: z', '', 'Date: y']
235 ['Date: z', '', 'Date: y']
233 >>> insertplainheader([b'foo: bar', b'DATE: z', b'x'], b'From', b'y')
236 >>> insertplainheader([b'foo: bar', b'DATE: z', b'x'], b'From', b'y')
234 ['From: y', 'foo: bar', 'DATE: z', '', 'x']
237 ['From: y', 'foo: bar', 'DATE: z', '', 'x']
235 """
238 """
236 newprio = PLAINHEADERS[header.lower()]
239 newprio = PLAINHEADERS[header.lower()]
237 bestpos = len(lines)
240 bestpos = len(lines)
238 for i, line in enumerate(lines):
241 for i, line in enumerate(lines):
239 if ':' in line:
242 if ':' in line:
240 lheader = line.split(':', 1)[0].strip().lower()
243 lheader = line.split(':', 1)[0].strip().lower()
241 lprio = PLAINHEADERS.get(lheader, newprio + 1)
244 lprio = PLAINHEADERS.get(lheader, newprio + 1)
242 if lprio == newprio:
245 if lprio == newprio:
243 lines[i] = '%s: %s' % (header, value)
246 lines[i] = '%s: %s' % (header, value)
244 return lines
247 return lines
245 if lprio > newprio and i < bestpos:
248 if lprio > newprio and i < bestpos:
246 bestpos = i
249 bestpos = i
247 else:
250 else:
248 if line:
251 if line:
249 lines.insert(i, '')
252 lines.insert(i, '')
250 if i < bestpos:
253 if i < bestpos:
251 bestpos = i
254 bestpos = i
252 break
255 break
253 lines.insert(bestpos, '%s: %s' % (header, value))
256 lines.insert(bestpos, '%s: %s' % (header, value))
254 return lines
257 return lines
255
258
256 class patchheader(object):
259 class patchheader(object):
257 def __init__(self, pf, plainmode=False):
260 def __init__(self, pf, plainmode=False):
258 def eatdiff(lines):
261 def eatdiff(lines):
259 while lines:
262 while lines:
260 l = lines[-1]
263 l = lines[-1]
261 if (l.startswith("diff -") or
264 if (l.startswith("diff -") or
262 l.startswith("Index:") or
265 l.startswith("Index:") or
263 l.startswith("===========")):
266 l.startswith("===========")):
264 del lines[-1]
267 del lines[-1]
265 else:
268 else:
266 break
269 break
267 def eatempty(lines):
270 def eatempty(lines):
268 while lines:
271 while lines:
269 if not lines[-1].strip():
272 if not lines[-1].strip():
270 del lines[-1]
273 del lines[-1]
271 else:
274 else:
272 break
275 break
273
276
274 message = []
277 message = []
275 comments = []
278 comments = []
276 user = None
279 user = None
277 date = None
280 date = None
278 parent = None
281 parent = None
279 format = None
282 format = None
280 subject = None
283 subject = None
281 branch = None
284 branch = None
282 nodeid = None
285 nodeid = None
283 diffstart = 0
286 diffstart = 0
284
287
285 for line in open(pf, 'rb'):
288 for line in open(pf, 'rb'):
286 line = line.rstrip()
289 line = line.rstrip()
287 if (line.startswith('diff --git')
290 if (line.startswith('diff --git')
288 or (diffstart and line.startswith('+++ '))):
291 or (diffstart and line.startswith('+++ '))):
289 diffstart = 2
292 diffstart = 2
290 break
293 break
291 diffstart = 0 # reset
294 diffstart = 0 # reset
292 if line.startswith("--- "):
295 if line.startswith("--- "):
293 diffstart = 1
296 diffstart = 1
294 continue
297 continue
295 elif format == "hgpatch":
298 elif format == "hgpatch":
296 # parse values when importing the result of an hg export
299 # parse values when importing the result of an hg export
297 if line.startswith("# User "):
300 if line.startswith("# User "):
298 user = line[7:]
301 user = line[7:]
299 elif line.startswith("# Date "):
302 elif line.startswith("# Date "):
300 date = line[7:]
303 date = line[7:]
301 elif line.startswith("# Parent "):
304 elif line.startswith("# Parent "):
302 parent = line[9:].lstrip() # handle double trailing space
305 parent = line[9:].lstrip() # handle double trailing space
303 elif line.startswith("# Branch "):
306 elif line.startswith("# Branch "):
304 branch = line[9:]
307 branch = line[9:]
305 elif line.startswith("# Node ID "):
308 elif line.startswith("# Node ID "):
306 nodeid = line[10:]
309 nodeid = line[10:]
307 elif not line.startswith("# ") and line:
310 elif not line.startswith("# ") and line:
308 message.append(line)
311 message.append(line)
309 format = None
312 format = None
310 elif line == '# HG changeset patch':
313 elif line == '# HG changeset patch':
311 message = []
314 message = []
312 format = "hgpatch"
315 format = "hgpatch"
313 elif (format != "tagdone" and (line.startswith("Subject: ") or
316 elif (format != "tagdone" and (line.startswith("Subject: ") or
314 line.startswith("subject: "))):
317 line.startswith("subject: "))):
315 subject = line[9:]
318 subject = line[9:]
316 format = "tag"
319 format = "tag"
317 elif (format != "tagdone" and (line.startswith("From: ") or
320 elif (format != "tagdone" and (line.startswith("From: ") or
318 line.startswith("from: "))):
321 line.startswith("from: "))):
319 user = line[6:]
322 user = line[6:]
320 format = "tag"
323 format = "tag"
321 elif (format != "tagdone" and (line.startswith("Date: ") or
324 elif (format != "tagdone" and (line.startswith("Date: ") or
322 line.startswith("date: "))):
325 line.startswith("date: "))):
323 date = line[6:]
326 date = line[6:]
324 format = "tag"
327 format = "tag"
325 elif format == "tag" and line == "":
328 elif format == "tag" and line == "":
326 # when looking for tags (subject: from: etc) they
329 # when looking for tags (subject: from: etc) they
327 # end once you find a blank line in the source
330 # end once you find a blank line in the source
328 format = "tagdone"
331 format = "tagdone"
329 elif message or line:
332 elif message or line:
330 message.append(line)
333 message.append(line)
331 comments.append(line)
334 comments.append(line)
332
335
333 eatdiff(message)
336 eatdiff(message)
334 eatdiff(comments)
337 eatdiff(comments)
335 # Remember the exact starting line of the patch diffs before consuming
338 # Remember the exact starting line of the patch diffs before consuming
336 # empty lines, for external use by TortoiseHg and others
339 # empty lines, for external use by TortoiseHg and others
337 self.diffstartline = len(comments)
340 self.diffstartline = len(comments)
338 eatempty(message)
341 eatempty(message)
339 eatempty(comments)
342 eatempty(comments)
340
343
341 # make sure message isn't empty
344 # make sure message isn't empty
342 if format and format.startswith("tag") and subject:
345 if format and format.startswith("tag") and subject:
343 message.insert(0, subject)
346 message.insert(0, subject)
344
347
345 self.message = message
348 self.message = message
346 self.comments = comments
349 self.comments = comments
347 self.user = user
350 self.user = user
348 self.date = date
351 self.date = date
349 self.parent = parent
352 self.parent = parent
350 # nodeid and branch are for external use by TortoiseHg and others
353 # nodeid and branch are for external use by TortoiseHg and others
351 self.nodeid = nodeid
354 self.nodeid = nodeid
352 self.branch = branch
355 self.branch = branch
353 self.haspatch = diffstart > 1
356 self.haspatch = diffstart > 1
354 self.plainmode = (plainmode or
357 self.plainmode = (plainmode or
355 '# HG changeset patch' not in self.comments and
358 '# HG changeset patch' not in self.comments and
356 any(c.startswith('Date: ') or
359 any(c.startswith('Date: ') or
357 c.startswith('From: ')
360 c.startswith('From: ')
358 for c in self.comments))
361 for c in self.comments))
359
362
360 def setuser(self, user):
363 def setuser(self, user):
361 try:
364 try:
362 inserthgheader(self.comments, '# User ', user)
365 inserthgheader(self.comments, '# User ', user)
363 except ValueError:
366 except ValueError:
364 if self.plainmode:
367 if self.plainmode:
365 insertplainheader(self.comments, 'From', user)
368 insertplainheader(self.comments, 'From', user)
366 else:
369 else:
367 tmp = ['# HG changeset patch', '# User ' + user]
370 tmp = ['# HG changeset patch', '# User ' + user]
368 self.comments = tmp + self.comments
371 self.comments = tmp + self.comments
369 self.user = user
372 self.user = user
370
373
371 def setdate(self, date):
374 def setdate(self, date):
372 try:
375 try:
373 inserthgheader(self.comments, '# Date ', date)
376 inserthgheader(self.comments, '# Date ', date)
374 except ValueError:
377 except ValueError:
375 if self.plainmode:
378 if self.plainmode:
376 insertplainheader(self.comments, 'Date', date)
379 insertplainheader(self.comments, 'Date', date)
377 else:
380 else:
378 tmp = ['# HG changeset patch', '# Date ' + date]
381 tmp = ['# HG changeset patch', '# Date ' + date]
379 self.comments = tmp + self.comments
382 self.comments = tmp + self.comments
380 self.date = date
383 self.date = date
381
384
382 def setparent(self, parent):
385 def setparent(self, parent):
383 try:
386 try:
384 inserthgheader(self.comments, '# Parent ', parent)
387 inserthgheader(self.comments, '# Parent ', parent)
385 except ValueError:
388 except ValueError:
386 if not self.plainmode:
389 if not self.plainmode:
387 tmp = ['# HG changeset patch', '# Parent ' + parent]
390 tmp = ['# HG changeset patch', '# Parent ' + parent]
388 self.comments = tmp + self.comments
391 self.comments = tmp + self.comments
389 self.parent = parent
392 self.parent = parent
390
393
391 def setmessage(self, message):
394 def setmessage(self, message):
392 if self.comments:
395 if self.comments:
393 self._delmsg()
396 self._delmsg()
394 self.message = [message]
397 self.message = [message]
395 if message:
398 if message:
396 if self.plainmode and self.comments and self.comments[-1]:
399 if self.plainmode and self.comments and self.comments[-1]:
397 self.comments.append('')
400 self.comments.append('')
398 self.comments.append(message)
401 self.comments.append(message)
399
402
400 def __bytes__(self):
403 def __bytes__(self):
401 s = '\n'.join(self.comments).rstrip()
404 s = '\n'.join(self.comments).rstrip()
402 if not s:
405 if not s:
403 return ''
406 return ''
404 return s + '\n\n'
407 return s + '\n\n'
405
408
406 __str__ = encoding.strmethod(__bytes__)
409 __str__ = encoding.strmethod(__bytes__)
407
410
408 def _delmsg(self):
411 def _delmsg(self):
409 '''Remove existing message, keeping the rest of the comments fields.
412 '''Remove existing message, keeping the rest of the comments fields.
410 If comments contains 'subject: ', message will prepend
413 If comments contains 'subject: ', message will prepend
411 the field and a blank line.'''
414 the field and a blank line.'''
412 if self.message:
415 if self.message:
413 subj = 'subject: ' + self.message[0].lower()
416 subj = 'subject: ' + self.message[0].lower()
414 for i in xrange(len(self.comments)):
417 for i in xrange(len(self.comments)):
415 if subj == self.comments[i].lower():
418 if subj == self.comments[i].lower():
416 del self.comments[i]
419 del self.comments[i]
417 self.message = self.message[2:]
420 self.message = self.message[2:]
418 break
421 break
419 ci = 0
422 ci = 0
420 for mi in self.message:
423 for mi in self.message:
421 while mi != self.comments[ci]:
424 while mi != self.comments[ci]:
422 ci += 1
425 ci += 1
423 del self.comments[ci]
426 del self.comments[ci]
424
427
425 def newcommit(repo, phase, *args, **kwargs):
428 def newcommit(repo, phase, *args, **kwargs):
426 """helper dedicated to ensure a commit respect mq.secret setting
429 """helper dedicated to ensure a commit respect mq.secret setting
427
430
428 It should be used instead of repo.commit inside the mq source for operation
431 It should be used instead of repo.commit inside the mq source for operation
429 creating new changeset.
432 creating new changeset.
430 """
433 """
431 repo = repo.unfiltered()
434 repo = repo.unfiltered()
432 if phase is None:
435 if phase is None:
433 if repo.ui.configbool('mq', 'secret'):
436 if repo.ui.configbool('mq', 'secret'):
434 phase = phases.secret
437 phase = phases.secret
435 overrides = {('ui', 'allowemptycommit'): True}
438 overrides = {('ui', 'allowemptycommit'): True}
436 if phase is not None:
439 if phase is not None:
437 overrides[('phases', 'new-commit')] = phase
440 overrides[('phases', 'new-commit')] = phase
438 with repo.ui.configoverride(overrides, 'mq'):
441 with repo.ui.configoverride(overrides, 'mq'):
439 repo.ui.setconfig('ui', 'allowemptycommit', True)
442 repo.ui.setconfig('ui', 'allowemptycommit', True)
440 return repo.commit(*args, **kwargs)
443 return repo.commit(*args, **kwargs)
441
444
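# A hedged usage sketch (hypothetical values, not part of this changeset):
# pushing a patch while mq.secret=True ends up calling something like
#   n = newcommit(repo, None, '[mq]: my-patch', force=True)
# and the configoverride above makes the resulting changeset secret without
# permanently touching the user's phases.new-commit setting.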
class AbortNoCleanup(error.Abort):
    pass

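# AbortNoCleanup is raised (e.g. by _apply() with --keep-changes) when the
# already-applied patches should be kept: apply() below closes the
# transaction and saves state before re-raising, instead of aborting it.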
class queue(object):
    def __init__(self, ui, baseui, path, patchdir=None):
        self.basepath = path
        try:
            with open(os.path.join(path, 'patches.queue'), r'rb') as fh:
                cur = fh.read().rstrip()

            if not cur:
                curpath = os.path.join(path, 'patches')
            else:
                curpath = os.path.join(path, 'patches-' + cur)
        except IOError:
            curpath = os.path.join(path, 'patches')
        self.path = patchdir or curpath
        self.opener = vfsmod.vfs(self.path)
        self.ui = ui
        self.baseui = baseui
        self.applieddirty = False
        self.seriesdirty = False
        self.added = []
        self.seriespath = "series"
        self.statuspath = "status"
        self.guardspath = "guards"
        self.activeguards = None
        self.guardsdirty = False
        # Handle mq.git as a bool with extended values
        gitmode = ui.config('mq', 'git').lower()
-        boolmode = util.parsebool(gitmode)
+        boolmode = stringutil.parsebool(gitmode)
        if boolmode is not None:
            if boolmode:
                gitmode = 'yes'
            else:
                gitmode = 'no'
        self.gitmode = gitmode
        # deprecated config: mq.plain
        self.plainmode = ui.configbool('mq', 'plain')
        self.checkapplied = True

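    # stringutil.parsebool is tri-state: true-ish spellings ('1', 'yes',
    # 'true', 'on') give True, false-ish give False, and anything else
    # (e.g. 'auto' or 'keep') gives None, which is what lets those extended
    # mq.git values fall through untouched to diffopts() below.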
    @util.propertycache
    def applied(self):
        def parselines(lines):
            for l in lines:
                entry = l.split(':', 1)
                if len(entry) > 1:
                    n, name = entry
                    yield statusentry(bin(n), name)
                elif l.strip():
                    self.ui.warn(_('malformatted mq status line: %s\n') % entry)
                # else we ignore empty lines
        try:
            lines = self.opener.read(self.statuspath).splitlines()
            return list(parselines(lines))
        except IOError as e:
            if e.errno == errno.ENOENT:
                return []
            raise

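    # The status file records one applied patch per line as
    # "<hex-node>:<patchname>"; parselines() above turns each entry into a
    # statusentry, tolerates blank lines, and warns on anything malformed.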
    @util.propertycache
    def fullseries(self):
        try:
            return self.opener.read(self.seriespath).splitlines()
        except IOError as e:
            if e.errno == errno.ENOENT:
                return []
            raise

    @util.propertycache
    def series(self):
        self.parseseries()
        return self.series

    @util.propertycache
    def seriesguards(self):
        self.parseseries()
        return self.seriesguards

    def invalidate(self):
        for a in 'applied fullseries series seriesguards'.split():
            if a in self.__dict__:
                delattr(self, a)
        self.applieddirty = False
        self.seriesdirty = False
        self.guardsdirty = False
        self.activeguards = None

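    # parseseries() (below) assigns self.series and self.seriesguards
    # directly, overwriting these propertycache slots; invalidate() deletes
    # the cached attributes so the next access re-reads everything from disk.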
    def diffopts(self, opts=None, patchfn=None, plain=False):
        """Return diff options tweaked for this mq use, possibly upgrading to
        git format, and possibly plain and without lossy options."""
        diffopts = patchmod.difffeatureopts(self.ui, opts,
            git=True, whitespace=not plain, formatchanging=not plain)
        if self.gitmode == 'auto':
            diffopts.upgrade = True
        elif self.gitmode == 'keep':
            pass
        elif self.gitmode in ('yes', 'no'):
            diffopts.git = self.gitmode == 'yes'
        else:
            raise error.Abort(_('mq.git option can be auto/keep/yes/no'
                                ' got %s') % self.gitmode)
        if patchfn:
            diffopts = self.patchopts(diffopts, patchfn)
        return diffopts

    def patchopts(self, diffopts, *patches):
        """Return a copy of input diff options with git set to true if
        referenced patch is a git patch and should be preserved as such.
        """
        diffopts = diffopts.copy()
        if not diffopts.git and self.gitmode == 'keep':
            for patchfn in patches:
                patchf = self.opener(patchfn, 'r')
                # if the patch was a git patch, refresh it as a git patch
                diffopts.git = any(line.startswith('diff --git')
                                   for line in patchf)
                patchf.close()
        return diffopts

    def join(self, *p):
        return os.path.join(self.path, *p)

    def findseries(self, patch):
        def matchpatch(l):
            l = l.split('#', 1)[0]
            return l.strip() == patch
        for index, l in enumerate(self.fullseries):
            if matchpatch(l):
                return index
        return None

    guard_re = re.compile(br'\s?#([-+][^-+# \t\r\n\f][^# \t\r\n\f]*)')

    def parseseries(self):
        self.series = []
        self.seriesguards = []
        for l in self.fullseries:
            h = l.find('#')
            if h == -1:
                patch = l
                comment = ''
            elif h == 0:
                continue
            else:
                patch = l[:h]
                comment = l[h:]
            patch = patch.strip()
            if patch:
                if patch in self.series:
                    raise error.Abort(_('%s appears more than once in %s') %
                                      (patch, self.join(self.seriespath)))
                self.series.append(patch)
                self.seriesguards.append(self.guard_re.findall(comment))

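    # A series file line is "<patchname> [#<+|-><guard> ...]"; a hypothetical
    # example:
    #   fix-crash.patch #+experimental #-stable
    # parseseries() records the patch name in self.series and the guard list
    # (['+experimental', '-stable']) in the matching self.seriesguards slot.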
    def checkguard(self, guard):
        if not guard:
            return _('guard cannot be an empty string')
        bad_chars = '# \t\r\n\f'
        first = guard[0]
        if first in '-+':
            return (_('guard %r starts with invalid character: %r') %
                    (guard, first))
        for c in bad_chars:
            if c in guard:
                return _('invalid character in guard %r: %r') % (guard, c)

    def setactive(self, guards):
        for guard in guards:
            bad = self.checkguard(guard)
            if bad:
                raise error.Abort(bad)
        guards = sorted(set(guards))
        self.ui.debug('active guards: %s\n' % ' '.join(guards))
        self.activeguards = guards
        self.guardsdirty = True

    def active(self):
        if self.activeguards is None:
            self.activeguards = []
            try:
                guards = self.opener.read(self.guardspath).split()
            except IOError as err:
                if err.errno != errno.ENOENT:
                    raise
                guards = []
            for i, guard in enumerate(guards):
                bad = self.checkguard(guard)
                if bad:
                    self.ui.warn('%s:%d: %s\n' %
                                 (self.join(self.guardspath), i + 1, bad))
                else:
                    self.activeguards.append(guard)
        return self.activeguards

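    # The guards file is simply the whitespace-separated names of the
    # currently selected guards (e.g. a hypothetical "experimental stable");
    # invalid entries are warned about and skipped rather than aborting.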
    def setguards(self, idx, guards):
        for g in guards:
            if len(g) < 2:
                raise error.Abort(_('guard %r too short') % g)
            if g[0] not in '-+':
                raise error.Abort(_('guard %r starts with invalid char') % g)
            bad = self.checkguard(g[1:])
            if bad:
                raise error.Abort(bad)
        drop = self.guard_re.sub('', self.fullseries[idx])
        self.fullseries[idx] = drop + ''.join([' #' + g for g in guards])
        self.parseseries()
        self.seriesdirty = True

    def pushable(self, idx):
        if isinstance(idx, bytes):
            idx = self.series.index(idx)
        patchguards = self.seriesguards[idx]
        if not patchguards:
            return True, None
        guards = self.active()
        exactneg = [g for g in patchguards if g[0] == '-' and g[1:] in guards]
        if exactneg:
            return False, repr(exactneg[0])
        pos = [g for g in patchguards if g[0] == '+']
        exactpos = [g for g in pos if g[1:] in guards]
        if pos:
            if exactpos:
                return True, repr(exactpos[0])
            return False, ' '.join(map(repr, pos))
        return True, ''

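    # Guard semantics in one example (hypothetical guard names): with active
    # guards {'stable'}, a patch guarded "#-stable" is skipped (a negative
    # guard matched), one guarded "#+stable" is pushed, and one guarded only
    # "#+experimental" is skipped because no positive guard matches.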
    def explainpushable(self, idx, all_patches=False):
        if all_patches:
            write = self.ui.write
        else:
            write = self.ui.warn

        if all_patches or self.ui.verbose:
            if isinstance(idx, str):
                idx = self.series.index(idx)
            pushable, why = self.pushable(idx)
            if all_patches and pushable:
                if why is None:
                    write(_('allowing %s - no guards in effect\n') %
                          self.series[idx])
                else:
                    if not why:
                        write(_('allowing %s - no matching negative guards\n') %
                              self.series[idx])
                    else:
                        write(_('allowing %s - guarded by %s\n') %
                              (self.series[idx], why))
            if not pushable:
                if why:
                    write(_('skipping %s - guarded by %s\n') %
                          (self.series[idx], why))
                else:
                    write(_('skipping %s - no matching guards\n') %
                          self.series[idx])

    def savedirty(self):
        def writelist(items, path):
            fp = self.opener(path, 'wb')
            for i in items:
                fp.write("%s\n" % i)
            fp.close()
        if self.applieddirty:
            writelist(map(bytes, self.applied), self.statuspath)
            self.applieddirty = False
        if self.seriesdirty:
            writelist(self.fullseries, self.seriespath)
            self.seriesdirty = False
        if self.guardsdirty:
            writelist(self.activeguards, self.guardspath)
            self.guardsdirty = False
        if self.added:
            qrepo = self.qrepo()
            if qrepo:
                qrepo[None].add(f for f in self.added if f not in qrepo[None])
            self.added = []

    def removeundo(self, repo):
        undo = repo.sjoin('undo')
        if not os.path.exists(undo):
            return
        try:
            os.unlink(undo)
        except OSError as inst:
            self.ui.warn(_('error removing undo: %s\n') %
-                         util.forcebytestr(inst))
+                         stringutil.forcebytestr(inst))

    def backup(self, repo, files, copy=False):
        # backup local changes in --force case
        for f in sorted(files):
            absf = repo.wjoin(f)
            if os.path.lexists(absf):
                self.ui.note(_('saving current version of %s as %s\n') %
                             (f, scmutil.origpath(self.ui, repo, f)))

                absorig = scmutil.origpath(self.ui, repo, absf)
                if copy:
                    util.copyfile(absf, absorig)
                else:
                    util.rename(absf, absorig)

    def printdiff(self, repo, diffopts, node1, node2=None, files=None,
                  fp=None, changes=None, opts=None):
        if opts is None:
            opts = {}
        stat = opts.get('stat')
        m = scmutil.match(repo[node1], files, opts)
        logcmdutil.diffordiffstat(self.ui, repo, diffopts, node1, node2, m,
                                  changes, stat, fp)

    def mergeone(self, repo, mergeq, head, patch, rev, diffopts):
        # first try just applying the patch
        (err, n) = self.apply(repo, [patch], update_status=False,
                              strict=True, merge=rev)

        if err == 0:
            return (err, n)

        if n is None:
            raise error.Abort(_("apply failed for patch %s") % patch)

        self.ui.warn(_("patch didn't work out, merging %s\n") % patch)

        # apply failed, strip away that rev and merge.
        hg.clean(repo, head)
        strip(self.ui, repo, [n], update=False, backup=False)

        ctx = repo[rev]
        ret = hg.merge(repo, rev)
        if ret:
            raise error.Abort(_("update returned %d") % ret)
        n = newcommit(repo, None, ctx.description(), ctx.user(), force=True)
        if n is None:
            raise error.Abort(_("repo commit failed"))
        try:
            ph = patchheader(mergeq.join(patch), self.plainmode)
        except Exception:
            raise error.Abort(_("unable to read %s") % patch)

        diffopts = self.patchopts(diffopts, patch)
        patchf = self.opener(patch, "w")
        comments = bytes(ph)
        if comments:
            patchf.write(comments)
        self.printdiff(repo, diffopts, head, n, fp=patchf)
        patchf.close()
        self.removeundo(repo)
        return (0, n)

    def qparents(self, repo, rev=None):
        """return the mq-handled parent or p1

        In some cases where mq ends up being the parent of a merge, the
        appropriate parent may be p2.
        (e.g. an in-progress merge started with mq disabled)

        If no parent is managed by mq, p1 is returned.
        """
        if rev is None:
            (p1, p2) = repo.dirstate.parents()
            if p2 == nullid:
                return p1
            if not self.applied:
                return None
            return self.applied[-1].node
        p1, p2 = repo.changelog.parents(rev)
        if p2 != nullid and p2 in [x.node for x in self.applied]:
            return p2
        return p1

    def mergepatch(self, repo, mergeq, series, diffopts):
        if not self.applied:
            # each of the patches merged in will have two parents. This
            # can confuse the qrefresh, qdiff, and strip code because it
            # needs to know which parent is actually in the patch queue.
            # so, we insert a merge marker with only one parent. This way
            # the first patch in the queue is never a merge patch
            #
            pname = ".hg.patches.merge.marker"
            n = newcommit(repo, None, '[mq]: merge marker', force=True)
            self.removeundo(repo)
            self.applied.append(statusentry(n, pname))
            self.applieddirty = True

        head = self.qparents(repo)

        for patch in series:
            patch = mergeq.lookup(patch, strict=True)
            if not patch:
                self.ui.warn(_("patch %s does not exist\n") % patch)
                return (1, None)
            pushable, reason = self.pushable(patch)
            if not pushable:
                self.explainpushable(patch, all_patches=True)
                continue
            info = mergeq.isapplied(patch)
            if not info:
                self.ui.warn(_("patch %s is not applied\n") % patch)
                return (1, None)
            rev = info[1]
            err, head = self.mergeone(repo, mergeq, head, patch, rev, diffopts)
            if head:
                self.applied.append(statusentry(head, patch))
                self.applieddirty = True
            if err:
                return (err, head)
        self.savedirty()
        return (0, head)

    def patch(self, repo, patchfile):
        '''Apply patchfile to the working directory.
        patchfile: name of patch file'''
        files = set()
        try:
            fuzz = patchmod.patch(self.ui, repo, patchfile, strip=1,
                                  files=files, eolmode=None)
            return (True, list(files), fuzz)
        except Exception as inst:
-            self.ui.note(util.forcebytestr(inst) + '\n')
+            self.ui.note(stringutil.forcebytestr(inst) + '\n')
            if not self.ui.verbose:
                self.ui.warn(_("patch failed, unable to continue (try -v)\n"))
            self.ui.traceback()
            return (False, list(files), False)

    def apply(self, repo, series, list=False, update_status=True,
              strict=False, patchdir=None, merge=None, all_files=None,
              tobackup=None, keepchanges=False):
        wlock = lock = tr = None
        try:
            wlock = repo.wlock()
            lock = repo.lock()
            tr = repo.transaction("qpush")
            try:
                ret = self._apply(repo, series, list, update_status,
                                  strict, patchdir, merge, all_files=all_files,
                                  tobackup=tobackup, keepchanges=keepchanges)
                tr.close()
                self.savedirty()
                return ret
            except AbortNoCleanup:
                tr.close()
                self.savedirty()
                raise
            except: # re-raises
                try:
                    tr.abort()
                finally:
                    self.invalidate()
                raise
        finally:
            release(tr, lock, wlock)
            self.removeundo(repo)

    def _apply(self, repo, series, list=False, update_status=True,
               strict=False, patchdir=None, merge=None, all_files=None,
               tobackup=None, keepchanges=False):
        """returns (error, hash)

        error = 1 for unable to read, 2 for patch failed, 3 for patch
        fuzz. tobackup is None or a set of files to backup before they
        are modified by a patch.
        """
        # TODO unify with commands.py
        if not patchdir:
            patchdir = self.path
        err = 0
        n = None
        for patchname in series:
            pushable, reason = self.pushable(patchname)
            if not pushable:
                self.explainpushable(patchname, all_patches=True)
                continue
            self.ui.status(_("applying %s\n") % patchname)
            pf = os.path.join(patchdir, patchname)

            try:
                ph = patchheader(self.join(patchname), self.plainmode)
            except IOError:
                self.ui.warn(_("unable to read %s\n") % patchname)
                err = 1
                break

            message = ph.message
            if not message:
                # The commit message should not be translated
                message = "imported patch %s\n" % patchname
            else:
                if list:
                    # The commit message should not be translated
                    message.append("\nimported patch %s" % patchname)
                message = '\n'.join(message)

            if ph.haspatch:
                if tobackup:
                    touched = patchmod.changedfiles(self.ui, repo, pf)
                    touched = set(touched) & tobackup
                    if touched and keepchanges:
                        raise AbortNoCleanup(
                            _("conflicting local changes found"),
                            hint=_("did you forget to qrefresh?"))
                    self.backup(repo, touched, copy=True)
                    tobackup = tobackup - touched
                (patcherr, files, fuzz) = self.patch(repo, pf)
                if all_files is not None:
                    all_files.update(files)
                patcherr = not patcherr
            else:
                self.ui.warn(_("patch %s is empty\n") % patchname)
                patcherr, files, fuzz = 0, [], 0

            if merge and files:
                # Mark as removed/merged and update dirstate parent info
                removed = []
                merged = []
                for f in files:
                    if os.path.lexists(repo.wjoin(f)):
                        merged.append(f)
                    else:
                        removed.append(f)
                with repo.dirstate.parentchange():
                    for f in removed:
                        repo.dirstate.remove(f)
                    for f in merged:
                        repo.dirstate.merge(f)
                    p1, p2 = repo.dirstate.parents()
                    repo.setparents(p1, merge)

            if all_files and '.hgsubstate' in all_files:
                wctx = repo[None]
                pctx = repo['.']
                overwrite = False
                mergedsubstate = subrepoutil.submerge(repo, pctx, wctx, wctx,
                                                      overwrite)
                files += mergedsubstate.keys()

            match = scmutil.matchfiles(repo, files or [])
            oldtip = repo['tip']
            n = newcommit(repo, None, message, ph.user, ph.date, match=match,
                          force=True)
            if repo['tip'] == oldtip:
                raise error.Abort(_("qpush exactly duplicates child changeset"))
            if n is None:
                raise error.Abort(_("repository commit failed"))

            if update_status:
                self.applied.append(statusentry(n, patchname))

            if patcherr:
                self.ui.warn(_("patch failed, rejects left in working "
                               "directory\n"))
                err = 2
                break

            if fuzz and strict:
                self.ui.warn(_("fuzz found when applying patch, stopping\n"))
                err = 3
                break
        return (err, n)

    def _cleanup(self, patches, numrevs, keep=False):
        if not keep:
            r = self.qrepo()
            if r:
                r[None].forget(patches)
            for p in patches:
                try:
                    os.unlink(self.join(p))
                except OSError as inst:
                    if inst.errno != errno.ENOENT:
                        raise

        qfinished = []
        if numrevs:
            qfinished = self.applied[:numrevs]
            del self.applied[:numrevs]
            self.applieddirty = True

        unknown = []

        for (i, p) in sorted([(self.findseries(p), p) for p in patches],
                             reverse=True):
            if i is not None:
                del self.fullseries[i]
            else:
                unknown.append(p)

        if unknown:
            if numrevs:
                rev = dict((entry.name, entry.node) for entry in qfinished)
                for p in unknown:
                    msg = _('revision %s refers to unknown patches: %s\n')
                    self.ui.warn(msg % (short(rev[p]), p))
            else:
                msg = _('unknown patches: %s\n')
                raise error.Abort(''.join(msg % p for p in unknown))

        self.parseseries()
        self.seriesdirty = True
        return [entry.node for entry in qfinished]

    def _revpatches(self, repo, revs):
        firstrev = repo[self.applied[0].node].rev()
        patches = []
        for i, rev in enumerate(revs):

            if rev < firstrev:
                raise error.Abort(_('revision %d is not managed') % rev)

            ctx = repo[rev]
            base = self.applied[i].node
            if ctx.node() != base:
                msg = _('cannot delete revision %d above applied patches')
                raise error.Abort(msg % rev)

            patch = self.applied[i].name
            for fmt in ('[mq]: %s', 'imported patch %s'):
                if ctx.description() == fmt % patch:
                    msg = _('patch %s finalized without changeset message\n')
                    repo.ui.status(msg % patch)
                    break

            patches.append(patch)
        return patches

    def finish(self, repo, revs):
        # Manually trigger phase computation to ensure phasedefaults is
        # executed before we remove the patches.
        repo._phasecache
        patches = self._revpatches(repo, sorted(revs))
        qfinished = self._cleanup(patches, len(patches))
        if qfinished and repo.ui.configbool('mq', 'secret'):
            # only use this logic when the secret option is added
            oldqbase = repo[qfinished[0]]
            tphase = phases.newcommitphase(repo.ui)
            if oldqbase.phase() > tphase and oldqbase.p1().phase() <= tphase:
                with repo.transaction('qfinish') as tr:
                    phases.advanceboundary(repo, tr, tphase, qfinished)

    def delete(self, repo, patches, opts):
        if not patches and not opts.get('rev'):
            raise error.Abort(_('qdelete requires at least one revision or '
                                'patch name'))

        realpatches = []
        for patch in patches:
            patch = self.lookup(patch, strict=True)
            info = self.isapplied(patch)
            if info:
                raise error.Abort(_("cannot delete applied patch %s") % patch)
            if patch not in self.series:
                raise error.Abort(_("patch %s not in series file") % patch)
            if patch not in realpatches:
                realpatches.append(patch)

        numrevs = 0
        if opts.get('rev'):
            if not self.applied:
                raise error.Abort(_('no patches applied'))
            revs = scmutil.revrange(repo, opts.get('rev'))
            revs.sort()
            revpatches = self._revpatches(repo, revs)
            realpatches += revpatches
            numrevs = len(revpatches)

        self._cleanup(realpatches, numrevs, opts.get('keep'))

    def checktoppatch(self, repo):
        '''check that working directory is at qtip'''
        if self.applied:
            top = self.applied[-1].node
            patch = self.applied[-1].name
            if repo.dirstate.p1() != top:
                raise error.Abort(_("working directory revision is not qtip"))
            return top, patch
        return None, None

    def putsubstate2changes(self, substatestate, changes):
        for files in changes[:3]:
            if '.hgsubstate' in files:
                return # already listed up
        # not yet listed up
        if substatestate in 'a?':
            changes[1].append('.hgsubstate')
        elif substatestate in 'r':
            changes[2].append('.hgsubstate')
        else: # modified
            changes[0].append('.hgsubstate')

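    # repo.status() ordering is relied on here: changes[0] is the modified
    # list, changes[1] added, changes[2] removed, so an added/unknown
    # ('a'/'?') .hgsubstate lands in "added" and a removed ('r') one in
    # "removed".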
    def checklocalchanges(self, repo, force=False, refresh=True):
        excsuffix = ''
        if refresh:
            excsuffix = ', qrefresh first'
            # plain versions for i18n tool to detect them
            _("local changes found, qrefresh first")
            _("local changed subrepos found, qrefresh first")
        return checklocalchanges(repo, force, excsuffix)

    _reserved = ('series', 'status', 'guards', '.', '..')
    def checkreservedname(self, name):
        if name in self._reserved:
            raise error.Abort(_('"%s" cannot be used as the name of a patch')
                              % name)
        if name != name.strip():
            # whitespace is stripped by parseseries()
            raise error.Abort(_('patch name cannot begin or end with '
                                'whitespace'))
        for prefix in ('.hg', '.mq'):
            if name.startswith(prefix):
                raise error.Abort(_('patch name cannot begin with "%s"')
                                  % prefix)
        for c in ('#', ':', '\r', '\n'):
            if c in name:
                raise error.Abort(_('%r cannot be used in the name of a patch')
                                  % c)

    def checkpatchname(self, name, force=False):
        self.checkreservedname(name)
        if not force and os.path.exists(self.join(name)):
            if os.path.isdir(self.join(name)):
                raise error.Abort(_('"%s" already exists as a directory')
                                  % name)
            else:
                raise error.Abort(_('patch "%s" already exists') % name)

    def makepatchname(self, title, fallbackname):
        """Return a suitable filename for title, adding a suffix to make
        it unique in the existing list"""
        namebase = re.sub('[\s\W_]+', '_', title.lower()).strip('_')
        namebase = namebase[:75] # avoid too long name (issue5117)
        if namebase:
            try:
                self.checkreservedname(namebase)
            except error.Abort:
                namebase = fallbackname
        else:
            namebase = fallbackname
        name = namebase
        i = 0
        while True:
            if name not in self.fullseries:
                try:
                    self.checkpatchname(name)
                    break
                except error.Abort:
                    pass
            i += 1
            name = '%s__%d' % (namebase, i)
        return name

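    # For example, a hypothetical title of "Fix: crash on empty input!"
    # slugifies to "fix_crash_on_empty_input"; if that name is already in
    # the series, the loop above tries "fix_crash_on_empty_input__1",
    # "__2", and so on until a free name is found.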
    def checkkeepchanges(self, keepchanges, force):
        if force and keepchanges:
            raise error.Abort(_('cannot use both --force and --keep-changes'))

    def new(self, repo, patchfn, *pats, **opts):
        """options:
        msg: a string or a no-argument function returning a string
        """
        opts = pycompat.byteskwargs(opts)
        msg = opts.get('msg')
        edit = opts.get('edit')
        editform = opts.get('editform', 'mq.qnew')
        user = opts.get('user')
        date = opts.get('date')
        if date:
            date = dateutil.parsedate(date)
        diffopts = self.diffopts({'git': opts.get('git')}, plain=True)
        if opts.get('checkname', True):
            self.checkpatchname(patchfn)
        inclsubs = checksubstate(repo)
        if inclsubs:
            substatestate = repo.dirstate['.hgsubstate']
        if opts.get('include') or opts.get('exclude') or pats:
            # detect missing files in pats
            def badfn(f, msg):
                if f != '.hgsubstate': # .hgsubstate is auto-created
                    raise error.Abort('%s: %s' % (f, msg))
            match = scmutil.match(repo[None], pats, opts, badfn=badfn)
            changes = repo.status(match=match)
        else:
            changes = self.checklocalchanges(repo, force=True)
        commitfiles = list(inclsubs)
        for files in changes[:3]:
            commitfiles.extend(files)
        match = scmutil.matchfiles(repo, commitfiles)
        if len(repo[None].parents()) > 1:
            raise error.Abort(_('cannot manage merge changesets'))
        self.checktoppatch(repo)
        insert = self.fullseriesend()
        with repo.wlock():
            try:
                # if patch file write fails, abort early
                p = self.opener(patchfn, "w")
            except IOError as e:
                raise error.Abort(_('cannot write patch "%s": %s')
                                  % (patchfn, encoding.strtolocal(e.strerror)))
            try:
                defaultmsg = "[mq]: %s" % patchfn
                editor = cmdutil.getcommiteditor(editform=editform)
                if edit:
                    def finishdesc(desc):
                        if desc.rstrip():
                            return desc
                        else:
                            return defaultmsg
                    # i18n: this message is shown in editor with "HG: " prefix
                    extramsg = _('Leave message empty to use default message.')
                    editor = cmdutil.getcommiteditor(finishdesc=finishdesc,
                                                     extramsg=extramsg,
                                                     editform=editform)
                    commitmsg = msg
                else:
                    commitmsg = msg or defaultmsg

                n = newcommit(repo, None, commitmsg, user, date, match=match,
                              force=True, editor=editor)
                if n is None:
                    raise error.Abort(_("repo commit failed"))
                try:
                    self.fullseries[insert:insert] = [patchfn]
                    self.applied.append(statusentry(n, patchfn))
                    self.parseseries()
                    self.seriesdirty = True
                    self.applieddirty = True
                    nctx = repo[n]
                    ph = patchheader(self.join(patchfn), self.plainmode)
                    if user:
                        ph.setuser(user)
                    if date:
                        ph.setdate('%d %d' % date)
                    ph.setparent(hex(nctx.p1().node()))
                    msg = nctx.description().strip()
                    if msg == defaultmsg.strip():
                        msg = ''
                    ph.setmessage(msg)
                    p.write(bytes(ph))
                    if commitfiles:
                        parent = self.qparents(repo, n)
                        if inclsubs:
                            self.putsubstate2changes(substatestate, changes)
                        chunks = patchmod.diff(repo, node1=parent, node2=n,
                                               changes=changes, opts=diffopts)
                        for chunk in chunks:
                            p.write(chunk)
                    p.close()
                    r = self.qrepo()
                    if r:
                        r[None].add([patchfn])
                except: # re-raises
                    repo.rollback()
                    raise
            except Exception:
                patchpath = self.join(patchfn)
                try:
                    os.unlink(patchpath)
                except OSError:
                    self.ui.warn(_('error unlinking %s\n') % patchpath)
                raise
        self.removeundo(repo)

    def isapplied(self, patch):
        """returns (index, rev, patch)"""
        for i, a in enumerate(self.applied):
            if a.name == patch:
                return (i, a.node, a.name)
        return None

    # if the exact patch name does not exist, we try a few
    # variations. If strict is passed, we try only #1
    #
    # 1) a number (as string) to indicate an offset in the series file
    # 2) a unique substring of the patch name was given
    # 3) patchname[-+]num to indicate an offset in the series file
    def lookup(self, patch, strict=False):
        def partialname(s):
            if s in self.series:
                return s
            matches = [x for x in self.series if s in x]
            if len(matches) > 1:
                self.ui.warn(_('patch name "%s" is ambiguous:\n') % s)
                for m in matches:
                    self.ui.warn(' %s\n' % m)
                return None
            if matches:
                return matches[0]
            if self.series and self.applied:
                if s == 'qtip':
                    return self.series[self.seriesend(True) - 1]
                if s == 'qbase':
                    return self.series[0]
            return None

        if patch in self.series:
            return patch

        if not os.path.isfile(self.join(patch)):
            try:
                sno = int(patch)
            except (ValueError, OverflowError):
                pass
            else:
                if -len(self.series) <= sno < len(self.series):
                    return self.series[sno]

            if not strict:
                res = partialname(patch)
                if res:
                    return res
                minus = patch.rfind('-')
                if minus >= 0:
                    res = partialname(patch[:minus])
                    if res:
                        i = self.series.index(res)
                        try:
                            off = int(patch[minus + 1:] or 1)
                        except (ValueError, OverflowError):
                            pass
                        else:
                            if i - off >= 0:
                                return self.series[i - off]
                plus = patch.rfind('+')
                if plus >= 0:
                    res = partialname(patch[:plus])
                    if res:
                        i = self.series.index(res)
                        try:
                            off = int(patch[plus + 1:] or 1)
                        except (ValueError, OverflowError):
                            pass
                        else:
                            if i + off < len(self.series):
                                return self.series[i + off]
        raise error.Abort(_("patch %s not in series") % patch)

1375 def push(self, repo, patch=None, force=False, list=False, mergeq=None,
1378 def push(self, repo, patch=None, force=False, list=False, mergeq=None,
1376 all=False, move=False, exact=False, nobackup=False,
1379 all=False, move=False, exact=False, nobackup=False,
1377 keepchanges=False):
1380 keepchanges=False):
1378 self.checkkeepchanges(keepchanges, force)
1381 self.checkkeepchanges(keepchanges, force)
1379 diffopts = self.diffopts()
1382 diffopts = self.diffopts()
1380 with repo.wlock():
1383 with repo.wlock():
1381 heads = []
1384 heads = []
1382 for hs in repo.branchmap().itervalues():
1385 for hs in repo.branchmap().itervalues():
1383 heads.extend(hs)
1386 heads.extend(hs)
1384 if not heads:
1387 if not heads:
1385 heads = [nullid]
1388 heads = [nullid]
1386 if repo.dirstate.p1() not in heads and not exact:
1389 if repo.dirstate.p1() not in heads and not exact:
1387 self.ui.status(_("(working directory not at a head)\n"))
1390 self.ui.status(_("(working directory not at a head)\n"))
1388
1391
1389 if not self.series:
1392 if not self.series:
1390 self.ui.warn(_('no patches in series\n'))
1393 self.ui.warn(_('no patches in series\n'))
1391 return 0
1394 return 0
1392
1395
1393 # Suppose our series file is: A B C and the current 'top'
1396 # Suppose our series file is: A B C and the current 'top'
1394 # patch is B. qpush C should be performed (moving forward)
1397 # patch is B. qpush C should be performed (moving forward)
1395 # qpush B is a NOP (no change) qpush A is an error (can't
1398 # qpush B is a NOP (no change) qpush A is an error (can't
1396 # go backwards with qpush)
1399 # go backwards with qpush)
1397 if patch:
1400 if patch:
1398 patch = self.lookup(patch)
1401 patch = self.lookup(patch)
1399 info = self.isapplied(patch)
1402 info = self.isapplied(patch)
1400 if info and info[0] >= len(self.applied) - 1:
1403 if info and info[0] >= len(self.applied) - 1:
1401 self.ui.warn(
1404 self.ui.warn(
1402 _('qpush: %s is already at the top\n') % patch)
1405 _('qpush: %s is already at the top\n') % patch)
1403 return 0
1406 return 0
1404
1407
1405 pushable, reason = self.pushable(patch)
1408 pushable, reason = self.pushable(patch)
1406 if pushable:
1409 if pushable:
1407 if self.series.index(patch) < self.seriesend():
1410 if self.series.index(patch) < self.seriesend():
1408 raise error.Abort(
1411 raise error.Abort(
1409 _("cannot push to a previous patch: %s") % patch)
1412 _("cannot push to a previous patch: %s") % patch)
1410 else:
1413 else:
1411 if reason:
1414 if reason:
1412 reason = _('guarded by %s') % reason
1415 reason = _('guarded by %s') % reason
1413 else:
1416 else:
1414 reason = _('no matching guards')
1417 reason = _('no matching guards')
1415 self.ui.warn(_("cannot push '%s' - %s\n") % (patch, reason))
1418 self.ui.warn(_("cannot push '%s' - %s\n") % (patch, reason))
1416 return 1
1419 return 1
1417 elif all:
1420 elif all:
1418 patch = self.series[-1]
1421 patch = self.series[-1]
1419 if self.isapplied(patch):
1422 if self.isapplied(patch):
1420 self.ui.warn(_('all patches are currently applied\n'))
1423 self.ui.warn(_('all patches are currently applied\n'))
1421 return 0
1424 return 0
1422
1425
1423 # Following the above example, starting at 'top' of B:
1426 # Following the above example, starting at 'top' of B:
1424 # qpush should be performed (pushes C), but a subsequent
1427 # qpush should be performed (pushes C), but a subsequent
1425 # qpush without an argument is an error (nothing to
1428 # qpush without an argument is an error (nothing to
1426 # apply). This allows a loop of "...while hg qpush..." to
1429 # apply). This allows a loop of "...while hg qpush..." to
1427 # work as it detects an error when done
1430 # work as it detects an error when done
1428 start = self.seriesend()
1431 start = self.seriesend()
1429 if start == len(self.series):
1432 if start == len(self.series):
1430 self.ui.warn(_('patch series already fully applied\n'))
1433 self.ui.warn(_('patch series already fully applied\n'))
1431 return 1
1434 return 1
1432 if not force and not keepchanges:
1435 if not force and not keepchanges:
1433 self.checklocalchanges(repo, refresh=self.applied)
1436 self.checklocalchanges(repo, refresh=self.applied)
1434
1437
1435 if exact:
1438 if exact:
1436 if keepchanges:
1439 if keepchanges:
1437 raise error.Abort(
1440 raise error.Abort(
1438 _("cannot use --exact and --keep-changes together"))
1441 _("cannot use --exact and --keep-changes together"))
1439 if move:
1442 if move:
1440 raise error.Abort(_('cannot use --exact and --move '
1443 raise error.Abort(_('cannot use --exact and --move '
1441 'together'))
1444 'together'))
1442 if self.applied:
1445 if self.applied:
1443 raise error.Abort(_('cannot push --exact with applied '
1446 raise error.Abort(_('cannot push --exact with applied '
1444 'patches'))
1447 'patches'))
1445 root = self.series[start]
1448 root = self.series[start]
1446 target = patchheader(self.join(root), self.plainmode).parent
1449 target = patchheader(self.join(root), self.plainmode).parent
1447 if not target:
1450 if not target:
1448 raise error.Abort(
1451 raise error.Abort(
1449 _("%s does not have a parent recorded") % root)
1452 _("%s does not have a parent recorded") % root)
1450 if not repo[target] == repo['.']:
1453 if not repo[target] == repo['.']:
1451 hg.update(repo, target)
1454 hg.update(repo, target)
1452
1455
1453 if move:
1456 if move:
1454 if not patch:
1457 if not patch:
1455 raise error.Abort(_("please specify the patch to move"))
1458 raise error.Abort(_("please specify the patch to move"))
1456 for fullstart, rpn in enumerate(self.fullseries):
1459 for fullstart, rpn in enumerate(self.fullseries):
1457 # strip markers for patch guards
1460 # strip markers for patch guards
1458 if self.guard_re.split(rpn, 1)[0] == self.series[start]:
1461 if self.guard_re.split(rpn, 1)[0] == self.series[start]:
1459 break
1462 break
1460 for i, rpn in enumerate(self.fullseries[fullstart:]):
1463 for i, rpn in enumerate(self.fullseries[fullstart:]):
1461 # strip markers for patch guards
1464 # strip markers for patch guards
1462 if self.guard_re.split(rpn, 1)[0] == patch:
1465 if self.guard_re.split(rpn, 1)[0] == patch:
1463 break
1466 break
1464 index = fullstart + i
1467 index = fullstart + i
1465 assert index < len(self.fullseries)
1468 assert index < len(self.fullseries)
1466 fullpatch = self.fullseries[index]
1469 fullpatch = self.fullseries[index]
1467 del self.fullseries[index]
1470 del self.fullseries[index]
1468 self.fullseries.insert(fullstart, fullpatch)
1471 self.fullseries.insert(fullstart, fullpatch)
1469 self.parseseries()
1472 self.parseseries()
1470 self.seriesdirty = True
1473 self.seriesdirty = True
1471
1474
1472 self.applieddirty = True
1475 self.applieddirty = True
1473 if start > 0:
1476 if start > 0:
1474 self.checktoppatch(repo)
1477 self.checktoppatch(repo)
1475 if not patch:
1478 if not patch:
1476 patch = self.series[start]
1479 patch = self.series[start]
1477 end = start + 1
1480 end = start + 1
1478 else:
1481 else:
1479 end = self.series.index(patch, start) + 1
1482 end = self.series.index(patch, start) + 1
1480
1483
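            # decide which locally-changed files to back up before applying:
            # with --keep-changes every touched file is preserved; with
            # --force (unless --no-backup) only modified and added files,
            # since those may be overwritten by the incoming patches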
            tobackup = set()
            if (not nobackup and force) or keepchanges:
                status = self.checklocalchanges(repo, force=True)
                if keepchanges:
                    tobackup.update(status.modified + status.added +
                                    status.removed + status.deleted)
                else:
                    tobackup.update(status.modified + status.added)

            s = self.series[start:end]
            all_files = set()
            try:
                if mergeq:
                    ret = self.mergepatch(repo, mergeq, s, diffopts)
                else:
                    ret = self.apply(repo, s, list, all_files=all_files,
                                     tobackup=tobackup, keepchanges=keepchanges)
            except AbortNoCleanup:
                raise
            except: # re-raises
                self.ui.warn(_('cleaning up working directory...\n'))
                cmdutil.revert(self.ui, repo, repo['.'],
                               repo.dirstate.parents(), no_backup=True)
                # only remove unknown files that we know we touched or
                # created while patching
                for f in all_files:
                    if f not in repo.dirstate:
                        repo.wvfs.unlinkpath(f, ignoremissing=True)
                self.ui.warn(_('done\n'))
                raise

            if not self.applied:
                return ret[0]
            top = self.applied[-1].name
            if ret[0] and ret[0] > 1:
                msg = _("errors during apply, please fix and qrefresh %s\n")
                self.ui.write(msg % top)
            else:
                self.ui.write(_("now at: %s\n") % top)
            return ret[0]

    def pop(self, repo, patch=None, force=False, update=True, all=False,
            nobackup=False, keepchanges=False):
        self.checkkeepchanges(keepchanges, force)
        with repo.wlock():
            if patch:
                # index, rev, patch
                info = self.isapplied(patch)
                if not info:
                    patch = self.lookup(patch)
                    info = self.isapplied(patch)
                if not info:
                    raise error.Abort(_("patch %s is not applied") % patch)

            if not self.applied:
                # Allow qpop -a to work repeatedly,
                # but not qpop without an argument
                self.ui.warn(_("no patches applied\n"))
                return not all

            if all:
                start = 0
            elif patch:
                start = info[0] + 1
            else:
                start = len(self.applied) - 1

            if start >= len(self.applied):
                self.ui.warn(_("qpop: %s is already at the top\n") % patch)
                return

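            # even with --no-update, the dirstate must be updated if one of
            # its parents is an applied mq patch, because that revision is
            # about to be stripped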
            if not update:
                parents = repo.dirstate.parents()
                rr = [x.node for x in self.applied]
                for p in parents:
                    if p in rr:
                        self.ui.warn(_("qpop: forcing dirstate update\n"))
                        update = True
            else:
                parents = [p.node() for p in repo[None].parents()]
                update = any(entry.node in parents
                             for entry in self.applied[start:])

            tobackup = set()
            if update:
                s = self.checklocalchanges(repo, force=force or keepchanges)
                if force:
                    if not nobackup:
                        tobackup.update(s.modified + s.added)
                elif keepchanges:
                    tobackup.update(s.modified + s.added +
                                    s.removed + s.deleted)

            self.applieddirty = True
            end = len(self.applied)
            rev = self.applied[start].node

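            # sanity check: the applied stack must still be the tip of its
            # branch, otherwise popping would strip revisions that are not
            # managed by this queue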
            try:
                heads = repo.changelog.heads(rev)
            except error.LookupError:
                node = short(rev)
                raise error.Abort(_('trying to pop unknown node %s') % node)

            if heads != [self.applied[-1].node]:
                raise error.Abort(_("popping would remove a revision not "
                                    "managed by this patch queue"))
            if not repo[self.applied[-1].node].mutable():
                raise error.Abort(
                    _("popping would remove a public revision"),
                    hint=_("see 'hg help phases' for details"))

            # we know there are no local changes, so we can make a simplified
            # form of hg.update.
            if update:
                qp = self.qparents(repo, rev)
                ctx = repo[qp]
                m, a, r, d = repo.status(qp, '.')[:4]
                if d:
                    raise error.Abort(_("deletions found between repo revs"))

                tobackup = set(a + m + r) & tobackup
                if keepchanges and tobackup:
                    raise error.Abort(_("local changes found, qrefresh first"))
                self.backup(repo, tobackup)
                with repo.dirstate.parentchange():
                    for f in a:
                        repo.wvfs.unlinkpath(f, ignoremissing=True)
                        repo.dirstate.drop(f)
                    for f in m + r:
                        fctx = ctx[f]
                        repo.wwrite(f, fctx.data(), fctx.flags())
                        repo.dirstate.normal(f)
                    repo.setparents(qp, nullid)
            for patch in reversed(self.applied[start:end]):
                self.ui.status(_("popping %s\n") % patch.name)
            del self.applied[start:end]
            strip(self.ui, repo, [rev], update=False, backup=False)
            for s, state in repo['.'].substate.items():
                repo['.'].sub(s).get(state)
            if self.applied:
                self.ui.write(_("now at: %s\n") % self.applied[-1].name)
            else:
                self.ui.write(_("patch queue now empty\n"))

    def diff(self, repo, pats, opts):
        top, patch = self.checktoppatch(repo)
        if not top:
            self.ui.write(_("no patches applied\n"))
            return
        qp = self.qparents(repo, top)
        if opts.get('reverse'):
            node1, node2 = None, qp
        else:
            node1, node2 = qp, None
        diffopts = self.diffopts(opts, patch)
        self.printdiff(repo, diffopts, node1, node2, files=pats, opts=opts)

    def refresh(self, repo, pats=None, **opts):
        opts = pycompat.byteskwargs(opts)
        if not self.applied:
            self.ui.write(_("no patches applied\n"))
            return 1
        msg = opts.get('msg', '').rstrip()
        edit = opts.get('edit')
        editform = opts.get('editform', 'mq.qrefresh')
        newuser = opts.get('user')
        newdate = opts.get('date')
        if newdate:
            newdate = '%d %d' % dateutil.parsedate(newdate)
        wlock = repo.wlock()

        try:
            self.checktoppatch(repo)
            (top, patchfn) = (self.applied[-1].node, self.applied[-1].name)
            if repo.changelog.heads(top) != [top]:
                raise error.Abort(_("cannot qrefresh a revision with children"))
            if not repo[top].mutable():
                raise error.Abort(_("cannot qrefresh public revision"),
                                  hint=_("see 'hg help phases' for details"))

            cparents = repo.changelog.parents(top)
            patchparent = self.qparents(repo, top)

            inclsubs = checksubstate(repo, hex(patchparent))
            if inclsubs:
                substatestate = repo.dirstate['.hgsubstate']

            ph = patchheader(self.join(patchfn), self.plainmode)
            diffopts = self.diffopts({'git': opts.get('git')}, patchfn,
                                     plain=True)
            if newuser:
                ph.setuser(newuser)
            if newdate:
                ph.setdate(newdate)
            ph.setparent(hex(patchparent))

            # only commit new patch when write is complete
            patchf = self.opener(patchfn, 'w', atomictemp=True)

            # update the dirstate in place, strip off the qtip commit
            # and then commit.
            #
            # this should really read:
            #   mm, dd, aa = repo.status(top, patchparent)[:3]
            # but we do it backwards to take advantage of manifest/changelog
            # caching against the next repo.status call
            mm, aa, dd = repo.status(patchparent, top)[:3]
            changes = repo.changelog.read(top)
            man = repo.manifestlog[changes[0]].read()
            aaa = aa[:]
            match1 = scmutil.match(repo[None], pats, opts)
            # in short mode, we only diff the files included in the
            # patch already plus specified files
            if opts.get('short'):
                # if amending a patch, we start with existing
                # files plus specified files - unfiltered
                match = scmutil.matchfiles(repo, mm + aa + dd + match1.files())
                # filter with include/exclude options
                match1 = scmutil.match(repo[None], opts=opts)
            else:
                match = scmutil.matchall(repo)
            m, a, r, d = repo.status(match=match)[:4]
            mm = set(mm)
            aa = set(aa)
            dd = set(dd)

            # we might end up with files that were added between
            # qtip and the dirstate parent, but then changed in the
            # local dirstate. in this case, we want them to only
            # show up in the added section
            for x in m:
                if x not in aa:
                    mm.add(x)
            # we might end up with files added by the local dirstate that
            # were deleted by the patch. In this case, they should only
            # show up in the changed section.
            for x in a:
                if x in dd:
                    dd.remove(x)
                    mm.add(x)
                else:
                    aa.add(x)
            # make sure any files deleted in the local dirstate
            # are not in the add or change column of the patch
            forget = []
            for x in d + r:
                if x in aa:
                    aa.remove(x)
                    forget.append(x)
                    continue
                else:
                    mm.discard(x)
                dd.add(x)

            m = list(mm)
            r = list(dd)
            a = list(aa)

            # create 'match' that includes the files to be recommitted.
            # apply match1 via repo.status to ensure correct case handling.
            cm, ca, cr, cd = repo.status(patchparent, match=match1)[:4]
            allmatches = set(cm + ca + cr + cd)
            refreshchanges = [x.intersection(allmatches) for x in (mm, aa, dd)]

            files = set(inclsubs)
            for x in refreshchanges:
                files.update(x)
            match = scmutil.matchfiles(repo, files)

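            # remember bookmarks on the refreshed revision so they can be
            # moved onto the recreated commit afterwards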
            bmlist = repo[top].bookmarks()

            dsguard = None
            try:
                dsguard = dirstateguard.dirstateguard(repo, 'mq.refresh')
                if diffopts.git or diffopts.upgrade:
                    copies = {}
                    for dst in a:
                        src = repo.dirstate.copied(dst)
                        # during qfold, the source file for copies may
                        # be removed. Treat this as a simple add.
                        if src is not None and src in repo.dirstate:
                            copies.setdefault(src, []).append(dst)
                        repo.dirstate.add(dst)
                    # remember the copies between patchparent and qtip
                    for dst in aaa:
                        f = repo.file(dst)
                        src = f.renamed(man[dst])
                        if src:
                            copies.setdefault(src[0], []).extend(
                                copies.get(dst, []))
                            if dst in a:
                                copies[src[0]].append(dst)
                        # we can't copy a file created by the patch itself
                        if dst in copies:
                            del copies[dst]
                    for src, dsts in copies.iteritems():
                        for dst in dsts:
                            repo.dirstate.copy(src, dst)
                else:
                    for dst in a:
                        repo.dirstate.add(dst)
                    # Drop useless copy information
                    for f in list(repo.dirstate.copies()):
                        repo.dirstate.copy(None, f)
                for f in r:
                    repo.dirstate.remove(f)
                # if the patch excludes a modified file, mark that
                # file with mtime=0 so status can see it.
                mm = []
                for i in xrange(len(m) - 1, -1, -1):
                    if not match1(m[i]):
                        mm.append(m[i])
                        del m[i]
                for f in m:
                    repo.dirstate.normal(f)
                for f in mm:
                    repo.dirstate.normallookup(f)
                for f in forget:
                    repo.dirstate.drop(f)

                user = ph.user or changes[1]

                oldphase = repo[top].phase()

                # assumes strip can roll itself back if interrupted
                repo.setparents(*cparents)
                self.applied.pop()
                self.applieddirty = True
                strip(self.ui, repo, [top], update=False, backup=False)
                dsguard.close()
            finally:
                release(dsguard)

            try:
                # might be nice to attempt to roll back strip after this

                defaultmsg = "[mq]: %s" % patchfn
                editor = cmdutil.getcommiteditor(editform=editform)
                if edit:
                    def finishdesc(desc):
                        if desc.rstrip():
                            ph.setmessage(desc)
                            return desc
                        return defaultmsg
                    # i18n: this message is shown in editor with "HG: " prefix
                    extramsg = _('Leave message empty to use default message.')
                    editor = cmdutil.getcommiteditor(finishdesc=finishdesc,
                                                     extramsg=extramsg,
                                                     editform=editform)
                    message = msg or "\n".join(ph.message)
                elif not msg:
                    if not ph.message:
                        message = defaultmsg
                    else:
                        message = "\n".join(ph.message)
                else:
                    message = msg
                    ph.setmessage(msg)

                # Ensure we create a new changeset in the same phase as
                # the old one.
                lock = tr = None
                try:
                    lock = repo.lock()
                    tr = repo.transaction('mq')
                    n = newcommit(repo, oldphase, message, user, ph.date,
                                  match=match, force=True, editor=editor)
                    # only write patch after a successful commit
                    c = [list(x) for x in refreshchanges]
                    if inclsubs:
                        self.putsubstate2changes(substatestate, c)
                    chunks = patchmod.diff(repo, patchparent,
                                           changes=c, opts=diffopts)
                    comments = bytes(ph)
                    if comments:
                        patchf.write(comments)
                    for chunk in chunks:
                        patchf.write(chunk)
                    patchf.close()

                    marks = repo._bookmarks
                    marks.applychanges(repo, tr, [(bm, n) for bm in bmlist])
                    tr.close()

                    self.applied.append(statusentry(n, patchfn))
                finally:
                    lockmod.release(tr, lock)
            except: # re-raises
                ctx = repo[cparents[0]]
                repo.dirstate.rebuild(ctx.node(), ctx.manifest())
                self.savedirty()
                self.ui.warn(_('qrefresh interrupted while patch was popped! '
                               '(revert --all, qpush to recover)\n'))
                raise
        finally:
            wlock.release()
            self.removeundo(repo)

    def init(self, repo, create=False):
        if not create and os.path.isdir(self.path):
            raise error.Abort(_("patch queue directory already exists"))
        try:
            os.mkdir(self.path)
        except OSError as inst:
            if inst.errno != errno.EEXIST or not create:
                raise
        if create:
            return self.qrepo(create=True)

    def unapplied(self, repo, patch=None):
        if patch and patch not in self.series:
            raise error.Abort(_("patch %s is not in series file") % patch)
        if not patch:
            start = self.seriesend()
        else:
            start = self.series.index(patch) + 1
        unapplied = []
        for i in xrange(start, len(self.series)):
            pushable, reason = self.pushable(i)
            if pushable:
                unapplied.append((i, self.series[i]))
            self.explainpushable(i)
        return unapplied

    def qseries(self, repo, missing=None, start=0, length=None, status=None,
                summary=False):
        def displayname(pfx, patchname, state):
            if pfx:
                self.ui.write(pfx)
            if summary:
                ph = patchheader(self.join(patchname), self.plainmode)
                if ph.message:
                    msg = ph.message[0]
                else:
                    msg = ''

                if self.ui.formatted():
                    width = self.ui.termwidth() - len(pfx) - len(patchname) - 2
                    if width > 0:
-                        msg = util.ellipsis(msg, width)
+                        msg = stringutil.ellipsis(msg, width)
                    else:
                        msg = ''
                self.ui.write(patchname, label='qseries.' + state)
                self.ui.write(': ')
                self.ui.write(msg, label='qseries.message.' + state)
            else:
                self.ui.write(patchname, label='qseries.' + state)
            self.ui.write('\n')

        applied = set([p.name for p in self.applied])
        if length is None:
            length = len(self.series) - start
        if not missing:
            if self.ui.verbose:
                idxwidth = len("%d" % (start + length - 1))
            for i in xrange(start, start + length):
                patch = self.series[i]
                if patch in applied:
                    char, state = 'A', 'applied'
                elif self.pushable(i)[0]:
                    char, state = 'U', 'unapplied'
                else:
                    char, state = 'G', 'guarded'
                pfx = ''
                if self.ui.verbose:
                    pfx = '%*d %s ' % (idxwidth, i, char)
                elif status and status != char:
                    continue
                displayname(pfx, patch, state)
        else:
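            # scan the patch directory for patch files that are not listed
            # in the series file and report them as 'missing'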
            msng_list = []
            for root, dirs, files in os.walk(self.path):
                d = root[len(self.path) + 1:]
                for f in files:
                    fl = os.path.join(d, f)
                    if (fl not in self.series and
                        fl not in (self.statuspath, self.seriespath,
                                   self.guardspath)
                        and not fl.startswith('.')):
                        msng_list.append(fl)
            for x in sorted(msng_list):
                pfx = self.ui.verbose and ('D ') or ''
                displayname(pfx, x, 'missing')

    def issaveline(self, l):
        if l.name == '.hg.patches.save.line':
            return True

    def qrepo(self, create=False):
        ui = self.baseui.copy()
        # copy back attributes set by ui.pager()
        if self.ui.pageractive and not ui.pageractive:
            ui.pageractive = self.ui.pageractive
            # internal config: ui.formatted
            ui.setconfig('ui', 'formatted',
                         self.ui.config('ui', 'formatted'), 'mqpager')
            ui.setconfig('ui', 'interactive',
                         self.ui.config('ui', 'interactive'), 'mqpager')
        if create or os.path.isdir(self.join(".hg")):
            return hg.repository(ui, path=self.path, create=create)

    def restore(self, repo, rev, delete=None, qupdate=None):
        desc = repo[rev].description().strip()
        lines = desc.splitlines()
        i = 0
        datastart = None
        series = []
        applied = []
        qpp = None
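        # a save entry's description embeds the queue state: an optional
        # 'Dirstate: p1 p2' line naming the queue repository parents, then
        # 'Patch Data:' followed by 'node:name' lines for applied patches
        # and ':name' lines for the remainder of the series (see save())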
        for i, line in enumerate(lines):
            if line == 'Patch Data:':
                datastart = i + 1
            elif line.startswith('Dirstate:'):
                l = line.rstrip()
                l = l[10:].split(' ')
                qpp = [bin(x) for x in l]
            elif datastart is not None:
                l = line.rstrip()
                n, name = l.split(':', 1)
                if n:
                    applied.append(statusentry(bin(n), name))
                else:
                    series.append(l)
        if datastart is None:
            self.ui.warn(_("no saved patch data found\n"))
            return 1
        self.ui.warn(_("restoring status: %s\n") % lines[0])
        self.fullseries = series
        self.applied = applied
        self.parseseries()
        self.seriesdirty = True
        self.applieddirty = True
        heads = repo.changelog.heads()
        if delete:
            if rev not in heads:
                self.ui.warn(_("save entry has children, leaving it alone\n"))
            else:
                self.ui.warn(_("removing save entry %s\n") % short(rev))
                pp = repo.dirstate.parents()
                if rev in pp:
                    update = True
                else:
                    update = False
                strip(self.ui, repo, [rev], update=update, backup=False)
        if qpp:
            self.ui.warn(_("saved queue repository parents: %s %s\n") %
                         (short(qpp[0]), short(qpp[1])))
            if qupdate:
                self.ui.status(_("updating queue directory\n"))
                r = self.qrepo()
                if not r:
                    self.ui.warn(_("unable to load queue repository\n"))
                    return 1
                hg.clean(r, qpp[0])

    def save(self, repo, msg=None):
        if not self.applied:
            self.ui.warn(_("save: no patches applied, exiting\n"))
            return 1
        if self.issaveline(self.applied[-1]):
            self.ui.warn(_("status is already saved\n"))
            return 1

        if not msg:
            msg = _("hg patches saved state")
        else:
            msg = "hg patches: " + msg.rstrip('\r\n')
        r = self.qrepo()
        if r:
            pp = r.dirstate.parents()
            msg += "\nDirstate: %s %s" % (hex(pp[0]), hex(pp[1]))
        msg += "\n\nPatch Data:\n"
        msg += ''.join('%s\n' % x for x in self.applied)
        msg += ''.join(':%s\n' % x for x in self.fullseries)
        n = repo.commit(msg, force=True)
        if not n:
            self.ui.warn(_("repo commit failed\n"))
            return 1
        self.applied.append(statusentry(n, '.hg.patches.save.line'))
        self.applieddirty = True
        self.removeundo(repo)

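    # index into fullseries just past the last applied patch, or 0 when
    # nothing is applied (the fullseries counterpart of seriesend())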
    def fullseriesend(self):
        if self.applied:
            p = self.applied[-1].name
            end = self.findseries(p)
            if end is None:
                return len(self.fullseries)
            return end + 1
        return 0

    def seriesend(self, all_patches=False):
        """If all_patches is False, return the index of the next pushable patch
        in the series, or the series length. If all_patches is True, return the
        index of the first patch past the last applied one.
        """
        end = 0
        def nextpatch(start):
            if all_patches or start >= len(self.series):
                return start
            for i in xrange(start, len(self.series)):
                p, reason = self.pushable(i)
                if p:
                    return i
                self.explainpushable(i)
            return len(self.series)
        if self.applied:
            p = self.applied[-1].name
            try:
                end = self.series.index(p)
            except ValueError:
                return 0
            return nextpatch(end + 1)
        return nextpatch(end)

    def appliedname(self, index):
        pname = self.applied[index].name
        if not self.ui.verbose:
            p = pname
        else:
            p = ("%d" % self.series.index(pname)) + " " + pname
        return p

    def qimport(self, repo, files, patchname=None, rev=None, existing=None,
                force=None, git=False):
        def checkseries(patchname):
            if patchname in self.series:
                raise error.Abort(_('patch %s is already in the series file')
                                  % patchname)

        if rev:
            if files:
                raise error.Abort(_('option "-r" not valid when importing '
                                    'files'))
            rev = scmutil.revrange(repo, rev)
            rev.sort(reverse=True)
        elif not files:
            raise error.Abort(_('no files or revisions specified'))
        if (len(files) > 1 or len(rev) > 1) and patchname:
            raise error.Abort(_('option "-n" not valid when importing multiple '
                                'patches'))
        imported = []
        if rev:
            # If mq patches are applied, we can only import revisions
            # that form a linear path to qbase.
            # Otherwise, they should form a linear path to a head.
            heads = repo.changelog.heads(repo.changelog.node(rev.first()))
            if len(heads) > 1:
                raise error.Abort(_('revision %d is the root of more than one '
                                    'branch') % rev.last())
            if self.applied:
                base = repo.changelog.node(rev.first())
                if base in [n.node for n in self.applied]:
                    raise error.Abort(_('revision %d is already managed')
                                      % rev.first())
                if heads != [self.applied[-1].node]:
                    raise error.Abort(_('revision %d is not the parent of '
                                        'the queue') % rev.first())
                base = repo.changelog.rev(self.applied[0].node)
                lastparent = repo.changelog.parentrevs(base)[0]
            else:
                if heads != [repo.changelog.node(rev.first())]:
                    raise error.Abort(_('revision %d has unmanaged children')
                                      % rev.first())
                lastparent = None

            diffopts = self.diffopts({'git': git})
            with repo.transaction('qimport') as tr:
                for r in rev:
                    if not repo[r].mutable():
                        raise error.Abort(_('revision %d is not mutable') % r,
                                          hint=_("see 'hg help phases' "
                                                 'for details'))
                    p1, p2 = repo.changelog.parentrevs(r)
                    n = repo.changelog.node(r)
                    if p2 != nullrev:
                        raise error.Abort(_('cannot import merge revision %d')
                                          % r)
                    if lastparent and lastparent != r:
                        raise error.Abort(_('revision %d is not the parent of '
                                            '%d')
                                          % (r, lastparent))
                    lastparent = p1

                    if not patchname:
                        patchname = self.makepatchname(
                            repo[r].description().split('\n', 1)[0],
                            '%d.diff' % r)
                    checkseries(patchname)
                    self.checkpatchname(patchname, force)
                    self.fullseries.insert(0, patchname)

                    patchf = self.opener(patchname, "w")
                    cmdutil.export(repo, [n], fp=patchf, opts=diffopts)
                    patchf.close()

                    se = statusentry(n, patchname)
                    self.applied.insert(0, se)

                    self.added.append(patchname)
                    imported.append(patchname)
                    patchname = None
                if rev and repo.ui.configbool('mq', 'secret'):
                    # if we added anything with --rev, move the secret root
                    phases.retractboundary(repo, tr, phases.secret, [n])
                self.parseseries()
                self.applieddirty = True
                self.seriesdirty = True

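        # import patch files named on the command line (or stdin via '-');
        # with --existing the file must already live in the patch directory
        # and may optionally be renamed with --name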
        for i, filename in enumerate(files):
            if existing:
                if filename == '-':
                    raise error.Abort(_('-e is incompatible with import from -')
                                      )
                filename = normname(filename)
                self.checkreservedname(filename)
                if util.url(filename).islocal():
                    originpath = self.join(filename)
                    if not os.path.isfile(originpath):
                        raise error.Abort(
                            _("patch %s does not exist") % filename)

                if patchname:
                    self.checkpatchname(patchname, force)

                    self.ui.write(_('renaming %s to %s\n')
                                  % (filename, patchname))
                    util.rename(originpath, self.join(patchname))
                else:
                    patchname = filename

            else:
                if filename == '-' and not patchname:
                    raise error.Abort(_('need --name to import a patch from -'))
                elif not patchname:
                    patchname = normname(os.path.basename(filename.rstrip('/')))
                self.checkpatchname(patchname, force)
                try:
                    if filename == '-':
                        text = self.ui.fin.read()
                    else:
                        fp = hg.openpath(self.ui, filename)
                        text = fp.read()
                        fp.close()
                except (OSError, IOError):
                    raise error.Abort(_("unable to read file %s") % filename)
                patchf = self.opener(patchname, "w")
                patchf.write(text)
                patchf.close()
            if not force:
                checkseries(patchname)
            if patchname not in self.series:
                index = self.fullseriesend() + i
                self.fullseries[index:index] = [patchname]
                self.parseseries()
                self.seriesdirty = True
            self.ui.warn(_("adding %s to series file\n") % patchname)
            self.added.append(patchname)
            imported.append(patchname)
            patchname = None

        self.removeundo(repo)
        return imported

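# when the mq.keepchanges config option is set, default --keep-changes to
# on for the commands that route their options through here, unless
# --force or --exact was given explicitly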
def fixkeepchangesopts(ui, opts):
    if (not ui.configbool('mq', 'keepchanges') or opts.get('force')
        or opts.get('exact')):
        return opts
    opts = dict(opts)
    opts['keep_changes'] = True
    return opts

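# qdelete only drops unapplied patches from series tracking; to turn an
# applied patch into a permanent changeset, use qfinish instead.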
@command("qdelete|qremove|qrm",
         [('k', 'keep', None, _('keep patch file')),
          ('r', 'rev', [],
           _('stop managing a revision (DEPRECATED)'), _('REV'))],
         _('hg qdelete [-k] [PATCH]...'))
def delete(ui, repo, *patches, **opts):
    """remove patches from queue

    The patches must not be applied, and at least one patch is required. Exact
    patch identifiers must be given. With -k/--keep, the patch files are
    preserved in the patch directory.

    To stop managing a patch and move it into permanent history,
    use the :hg:`qfinish` command."""
    q = repo.mq
    q.delete(repo, patches, pycompat.byteskwargs(opts))
    q.savedirty()
    return 0

@command("qapplied",
         [('1', 'last', None, _('show only the preceding applied patch'))
          ] + seriesopts,
         _('hg qapplied [-1] [-s] [PATCH]'))
def applied(ui, repo, patch=None, **opts):
    """print the patches already applied

    Returns 0 on success."""

    q = repo.mq
    opts = pycompat.byteskwargs(opts)

    if patch:
        if patch not in q.series:
            raise error.Abort(_("patch %s is not in series file") % patch)
        end = q.series.index(patch) + 1
    else:
        end = q.seriesend(True)

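    # -1/--last shows only the patch applied just before the current top:
    # 'end' points one past the top, so start = end - 2 selects it.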
    if opts.get('last') and not end:
        ui.write(_("no patches applied\n"))
        return 1
    elif opts.get('last') and end == 1:
        ui.write(_("only one patch applied\n"))
        return 1
    elif opts.get('last'):
        start = end - 2
        end = 1
    else:
        start = 0

    q.qseries(repo, length=end, start=start, status='A',
              summary=opts.get('summary'))


@command("qunapplied",
         [('1', 'first', None, _('show only the first patch'))] + seriesopts,
         _('hg qunapplied [-1] [-s] [PATCH]'))
def unapplied(ui, repo, patch=None, **opts):
    """print the patches not yet applied

    Returns 0 on success."""

    q = repo.mq
    opts = pycompat.byteskwargs(opts)
    if patch:
        if patch not in q.series:
            raise error.Abort(_("patch %s is not in series file") % patch)
        start = q.series.index(patch) + 1
    else:
        start = q.seriesend(True)

    if start == len(q.series) and opts.get('first'):
        ui.write(_("all patches applied\n"))
        return 1

    if opts.get('first'):
        length = 1
    else:
        length = None
    q.qseries(repo, start=start, length=length, status='U',
              summary=opts.get('summary'))

@command("qimport",
         [('e', 'existing', None, _('import file in patch directory')),
          ('n', 'name', '',
           _('name of patch file'), _('NAME')),
          ('f', 'force', None, _('overwrite existing files')),
          ('r', 'rev', [],
           _('place existing revisions under mq control'), _('REV')),
          ('g', 'git', None, _('use git extended diff format')),
          ('P', 'push', None, _('qpush after importing'))],
         _('hg qimport [-e] [-n NAME] [-f] [-g] [-P] [-r REV]... [FILE]...'))
def qimport(ui, repo, *filename, **opts):
    """import a patch or existing changeset

    The patch is inserted into the series after the last applied
    patch. If no patches have been applied, qimport prepends the patch
    to the series.

    The patch will have the same name as its source file unless you
    give it a new one with -n/--name.

    You can register an existing patch inside the patch directory with
    the -e/--existing flag.

    With -f/--force, an existing patch of the same name will be
    overwritten.

    An existing changeset may be placed under mq control with -r/--rev
    (e.g. qimport --rev . -n patch will place the current revision
    under mq control). With -g/--git, patches imported with --rev will
    use the git diff format. See the diffs help topic for information
    on why this is important for preserving rename/copy information
    and permission changes. Use :hg:`qfinish` to remove changesets
    from mq control.

    To import a patch from standard input, pass - as the patch file.
    When importing from standard input, a patch name must be specified
    using the --name flag.

    To import an existing patch while renaming it::

      hg qimport -e existing-patch -n new-name

    Returns 0 if import succeeded.
    """
    opts = pycompat.byteskwargs(opts)
    with repo.lock(): # because qimport may move the phase boundary
        q = repo.mq
        try:
            imported = q.qimport(
                repo, filename, patchname=opts.get('name'),
                existing=opts.get('existing'), force=opts.get('force'),
                rev=opts.get('rev'), git=opts.get('git'))
        finally:
            q.savedirty()

    if imported and opts.get('push') and not opts.get('rev'):
        return q.push(repo, imported[-1])
    return 0

def qinit(ui, repo, create):
    """initialize a new queue repository

    This command also creates a series file for ordering patches, and
    an mq-specific .hgignore file in the queue repository, to exclude
    the status and guards files (these contain mostly transient state).

    Returns 0 if initialization succeeded."""
    q = repo.mq
    r = q.init(repo, create)
    q.savedirty()
    if r:
        if not os.path.exists(r.wjoin('.hgignore')):
            fp = r.wvfs('.hgignore', 'w')
            fp.write('^\\.hg\n')
            fp.write('^\\.mq\n')
            fp.write('syntax: glob\n')
            fp.write('status\n')
            fp.write('guards\n')
            fp.close()
        if not os.path.exists(r.wjoin('series')):
            r.wvfs('series', 'w').close()
        r[None].add(['.hgignore', 'series'])
        commands.add(ui, r)
    return 0

@command("^qinit",
         [('c', 'create-repo', None, _('create queue repository'))],
         _('hg qinit [-c]'))
def init(ui, repo, **opts):
    """init a new queue repository (DEPRECATED)

    The queue repository is unversioned by default. If
    -c/--create-repo is specified, qinit will create a separate nested
    repository for patches (qinit -c may also be run later to convert
    an unversioned patch repository into a versioned one). You can use
    qcommit to commit changes to this queue repository.

    This command is deprecated. Without -c, it's implied by other relevant
    commands. With -c, use :hg:`init --mq` instead."""
    return qinit(ui, repo, create=opts.get(r'create_repo'))

@command("qclone",
         [('', 'pull', None, _('use pull protocol to copy metadata')),
          ('U', 'noupdate', None,
           _('do not update the new working directories')),
          ('', 'uncompressed', None,
           _('use uncompressed transfer (fast over LAN)')),
          ('p', 'patches', '',
           _('location of source patch repository'), _('REPO')),
          ] + cmdutil.remoteopts,
         _('hg qclone [OPTION]... SOURCE [DEST]'),
         norepo=True)
def clone(ui, source, dest=None, **opts):
    '''clone main and patch repository at same time

    If source is local, destination will have no patches applied. If
    source is remote, this command cannot check whether patches are
    applied in the source, so it cannot guarantee that patches are not
    applied in the destination. If you clone a remote repository,
    make sure beforehand that it has no patches applied.

    The source patch repository is looked for in <src>/.hg/patches by
    default. Use -p <url> to change.

    The patch directory must be a nested Mercurial repository, as
    would be created by :hg:`init --mq`.

    Return 0 on success.
    '''
    opts = pycompat.byteskwargs(opts)
    def patchdir(repo):
        """compute a patch repo url from a repo object"""
        url = repo.url()
        if url.endswith('/'):
            url = url[:-1]
        return url + '/.hg/patches'

    # main repo (destination and sources)
    if dest is None:
        dest = hg.defaultdest(source)
    sr = hg.peer(ui, opts, ui.expandpath(source))

    # patches repo (source only)
    if opts.get('patches'):
        patchespath = ui.expandpath(opts.get('patches'))
    else:
        patchespath = patchdir(sr)
    try:
        hg.peer(ui, opts, patchespath)
    except error.RepoError:
        raise error.Abort(_('versioned patch repository not found'
                           ' (see init --mq)'))
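    # Work out qbase, the revision of the first applied patch in the source,
    # so that applied patches can be left out of (or stripped from) the
    # destination clone.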
    qbase, destrev = None, None
    if sr.local():
        repo = sr.local()
        if repo.mq.applied and repo[qbase].phase() != phases.secret:
            qbase = repo.mq.applied[0].node
            if not hg.islocal(dest):
                heads = set(repo.heads())
                destrev = list(heads.difference(repo.heads(qbase)))
                destrev.append(repo.changelog.parents(qbase)[0])
    elif sr.capable('lookup'):
        try:
            qbase = sr.lookup('qbase')
        except error.RepoError:
            pass

    ui.note(_('cloning main repository\n'))
    sr, dr = hg.clone(ui, opts, sr.url(), dest,
                      pull=opts.get('pull'),
                      rev=destrev,
                      update=False,
                      stream=opts.get('uncompressed'))

    ui.note(_('cloning patch repository\n'))
    hg.clone(ui, opts, opts.get('patches') or patchdir(sr), patchdir(dr),
             pull=opts.get('pull'), update=not opts.get('noupdate'),
             stream=opts.get('uncompressed'))

    if dr.local():
        repo = dr.local()
        if qbase:
            ui.note(_('stripping applied patches from destination '
                      'repository\n'))
            strip(ui, repo, [qbase], update=False, backup=None)
        if not opts.get('noupdate'):
            ui.note(_('updating destination repository\n'))
            hg.update(repo, repo.changelog.tip())

@command("qcommit|qci",
         commands.table["^commit|ci"][1],
         _('hg qcommit [OPTION]... [FILE]...'),
         inferrepo=True)
def commit(ui, repo, *pats, **opts):
    """commit changes in the queue repository (DEPRECATED)

    This command is deprecated; use :hg:`commit --mq` instead."""
    q = repo.mq
    r = q.qrepo()
    if not r:
        raise error.Abort('no queue repository')
    commands.commit(r.ui, r, *pats, **opts)

@command("qseries",
         [('m', 'missing', None, _('print patches not in series')),
          ] + seriesopts,
         _('hg qseries [-ms]'))
def series(ui, repo, **opts):
    """print the entire series file

    Returns 0 on success."""
    repo.mq.qseries(repo, missing=opts.get(r'missing'),
                    summary=opts.get(r'summary'))
    return 0

@command("qtop", seriesopts, _('hg qtop [-s]'))
def top(ui, repo, **opts):
    """print the name of the current patch

    Returns 0 on success."""
    q = repo.mq
    if q.applied:
        t = q.seriesend(True)
    else:
        t = 0

    if t:
        q.qseries(repo, start=t - 1, length=1, status='A',
                  summary=opts.get(r'summary'))
    else:
        ui.write(_("no patches applied\n"))
        return 1

@command("qnext", seriesopts, _('hg qnext [-s]'))
def next(ui, repo, **opts):
    """print the name of the next pushable patch

    Returns 0 on success."""
    q = repo.mq
    end = q.seriesend()
    if end == len(q.series):
        ui.write(_("all patches applied\n"))
        return 1
    q.qseries(repo, start=end, length=1, summary=opts.get(r'summary'))

@command("qprev", seriesopts, _('hg qprev [-s]'))
def prev(ui, repo, **opts):
    """print the name of the preceding applied patch

    Returns 0 on success."""
    q = repo.mq
    l = len(q.applied)
    if l == 1:
        ui.write(_("only one patch applied\n"))
        return 1
    if not l:
        ui.write(_("no patches applied\n"))
        return 1
    idx = q.series.index(q.applied[-2].name)
    q.qseries(repo, start=idx, length=1, status='A',
              summary=opts.get(r'summary'))

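# Shared by qnew and qrefresh: fill in the patch's user and date fields
# from -U/--currentuser and -D/--currentdate when -u/-d were not given.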
def setupheaderopts(ui, opts):
    if not opts.get('user') and opts.get('currentuser'):
        opts['user'] = ui.username()
    if not opts.get('date') and opts.get('currentdate'):
        opts['date'] = "%d %d" % dateutil.makedate()

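# A typical patch-stack session, for orientation (names are illustrative):
#
#   hg qnew fix-parser.patch   # start a new patch on top of the stack
#   ... edit files ...
#   hg qrefresh                # fold working-directory changes into it
#   hg qpop                    # unapply the top patch
#   hg qpush                   # reapply the next patch in the series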
@command("^qnew",
         [('e', 'edit', None, _('invoke editor on commit messages')),
          ('f', 'force', None, _('import uncommitted changes (DEPRECATED)')),
          ('g', 'git', None, _('use git extended diff format')),
          ('U', 'currentuser', None, _('add "From: <current user>" to patch')),
          ('u', 'user', '',
           _('add "From: <USER>" to patch'), _('USER')),
          ('D', 'currentdate', None, _('add "Date: <current date>" to patch')),
          ('d', 'date', '',
           _('add "Date: <DATE>" to patch'), _('DATE'))
          ] + cmdutil.walkopts + cmdutil.commitopts,
         _('hg qnew [-e] [-m TEXT] [-l FILE] PATCH [FILE]...'),
         inferrepo=True)
def new(ui, repo, patch, *args, **opts):
    """create a new patch

    qnew creates a new patch on top of the currently-applied patch (if
    any). The patch will be initialized with any outstanding changes
    in the working directory. You may also use -I/--include,
    -X/--exclude, and/or a list of files after the patch name to add
    only changes to matching files to the new patch, leaving the rest
    as uncommitted modifications.

    -u/--user and -d/--date can be used to set the (given) user and
    date, respectively. -U/--currentuser and -D/--currentdate set user
    to current user and date to current date.

    -e/--edit, -m/--message or -l/--logfile set the patch header as
    well as the commit message. If none is specified, the header is
    empty and the commit message is '[mq]: PATCH'.

    Use the -g/--git option to keep the patch in the git extended diff
    format. Read the diffs help topic for more information on why this
    is important for preserving permission changes and copy/rename
    information.

    Returns 0 on successful creation of a new patch.
    """
    opts = pycompat.byteskwargs(opts)
    msg = cmdutil.logmessage(ui, opts)
    q = repo.mq
    opts['msg'] = msg
    setupheaderopts(ui, opts)
    q.new(repo, patch, *args, **pycompat.strkwargs(opts))
    q.savedirty()
    return 0

@command("^qrefresh",
         [('e', 'edit', None, _('invoke editor on commit messages')),
          ('g', 'git', None, _('use git extended diff format')),
          ('s', 'short', None,
           _('refresh only files already in the patch and specified files')),
          ('U', 'currentuser', None,
           _('add/update author field in patch with current user')),
          ('u', 'user', '',
           _('add/update author field in patch with given user'), _('USER')),
          ('D', 'currentdate', None,
           _('add/update date field in patch with current date')),
          ('d', 'date', '',
           _('add/update date field in patch with given date'), _('DATE'))
          ] + cmdutil.walkopts + cmdutil.commitopts,
         _('hg qrefresh [-I] [-X] [-e] [-m TEXT] [-l FILE] [-s] [FILE]...'),
         inferrepo=True)
def refresh(ui, repo, *pats, **opts):
    """update the current patch

    If any file patterns are provided, the refreshed patch will
    contain only the modifications that match those patterns; the
    remaining modifications will remain in the working directory.

    If -s/--short is specified, files currently included in the patch
    will be refreshed just like matched files and remain in the patch.

    If -e/--edit is specified, Mercurial will start your configured editor for
    you to enter a message. In case qrefresh fails, you will find a backup of
    your message in ``.hg/last-message.txt``.

    hg add/remove/copy/rename work as usual, though you might want to
    use git-style patches (-g/--git or [diff] git=1) to track copies
    and renames. See the diffs help topic for more information on the
    git diff format.

    Returns 0 on success.
    """
    opts = pycompat.byteskwargs(opts)
    q = repo.mq
    message = cmdutil.logmessage(ui, opts)
    setupheaderopts(ui, opts)
    with repo.wlock():
        ret = q.refresh(repo, pats, msg=message, **pycompat.strkwargs(opts))
        q.savedirty()
        return ret

@command("^qdiff",
         cmdutil.diffopts + cmdutil.diffopts2 + cmdutil.walkopts,
         _('hg qdiff [OPTION]... [FILE]...'),
         inferrepo=True)
def diff(ui, repo, *pats, **opts):
    """diff of the current patch and subsequent modifications

    Shows a diff which includes the current patch as well as any
    changes which have been made in the working directory since the
    last refresh (thus showing what the current patch would become
    after a qrefresh).

    Use :hg:`diff` if you only want to see the changes made since the
    last qrefresh, or :hg:`export qtip` if you want to see changes
    made by the current patch without including changes made since the
    qrefresh.

    Returns 0 on success.
    """
    ui.pager('qdiff')
    repo.mq.diff(repo, pats, pycompat.byteskwargs(opts))
    return 0

@command('qfold',
         [('e', 'edit', None, _('invoke editor on commit messages')),
          ('k', 'keep', None, _('keep folded patch files')),
          ] + cmdutil.commitopts,
         _('hg qfold [-e] [-k] [-m TEXT] [-l FILE] PATCH...'))
def fold(ui, repo, *files, **opts):
    """fold the named patches into the current patch

    Patches must not yet be applied. Each patch will be successively
    applied to the current patch in the order given. If all the
    patches apply successfully, the current patch will be refreshed
    with the new cumulative patch, and the folded patches will be
    deleted. With -k/--keep, the folded patch files will not be
    removed afterwards.

    The header for each folded patch will be concatenated with the
    current patch header, separated by a line of ``* * *``.

    Returns 0 on success."""
    opts = pycompat.byteskwargs(opts)
    q = repo.mq
    if not files:
        raise error.Abort(_('qfold requires at least one patch name'))
    if not q.checktoppatch(repo)[0]:
        raise error.Abort(_('no patches applied'))
    q.checklocalchanges(repo)

    message = cmdutil.logmessage(ui, opts)

    parent = q.lookup('qtip')
    patches = []
    messages = []
    for f in files:
        p = q.lookup(f)
        if p in patches or p == parent:
            ui.warn(_('skipping already folded patch %s\n') % p)
        if q.isapplied(p):
            raise error.Abort(_('qfold cannot fold already applied patch %s')
                             % p)
        patches.append(p)

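    # Apply each named patch on top of the current one, collecting each
    # folded patch's message; any failure aborts before the refresh below.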
    for p in patches:
        if not message:
            ph = patchheader(q.join(p), q.plainmode)
            if ph.message:
                messages.append(ph.message)
        pf = q.join(p)
        (patchsuccess, files, fuzz) = q.patch(repo, pf)
        if not patchsuccess:
            raise error.Abort(_('error folding patch %s') % p)

    if not message:
        ph = patchheader(q.join(parent), q.plainmode)
        message = ph.message
        for msg in messages:
            if msg:
                if message:
                    message.append('* * *')
                message.extend(msg)
        message = '\n'.join(message)

    diffopts = q.patchopts(q.diffopts(), *patches)
    with repo.wlock():
        q.refresh(repo, msg=message, git=diffopts.git, edit=opts.get('edit'),
                  editform='mq.qfold')
        q.delete(repo, patches, opts)
        q.savedirty()

@command("qgoto",
         [('', 'keep-changes', None,
           _('tolerate non-conflicting local changes')),
          ('f', 'force', None, _('overwrite any local changes')),
          ('', 'no-backup', None, _('do not save backup copies of files'))],
         _('hg qgoto [OPTION]... PATCH'))
def goto(ui, repo, patch, **opts):
    '''push or pop patches until named patch is at top of stack

    Returns 0 on success.'''
    opts = pycompat.byteskwargs(opts)
    opts = fixkeepchangesopts(ui, opts)
    q = repo.mq
    patch = q.lookup(patch)
    nobackup = opts.get('no_backup')
    keepchanges = opts.get('keep_changes')
    if q.isapplied(patch):
        ret = q.pop(repo, patch, force=opts.get('force'), nobackup=nobackup,
                    keepchanges=keepchanges)
    else:
        ret = q.push(repo, patch, force=opts.get('force'), nobackup=nobackup,
                     keepchanges=keepchanges)
    q.savedirty()
    return ret

@command("qguard",
         [('l', 'list', None, _('list all patches and guards')),
          ('n', 'none', None, _('drop all guards'))],
         _('hg qguard [-l] [-n] [PATCH] [-- [+GUARD]... [-GUARD]...]'))
def guard(ui, repo, *args, **opts):
    '''set or print guards for a patch

    Guards control whether a patch can be pushed. A patch with no
    guards is always pushed. A patch with a positive guard ("+foo") is
    pushed only if the :hg:`qselect` command has activated it. A patch with
    a negative guard ("-foo") is never pushed if the :hg:`qselect` command
    has activated it.

    With no arguments, print the currently active guards.
    With arguments, set guards for the named patch.

    .. note::

       Specifying negative guards now requires '--'.

    To set guards on another patch::

      hg qguard other.patch -- +2.6.17 -stable

    Returns 0 on success.
    '''
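    # Print one series entry with its guards, colorized by whether the
    # patch is currently applied, unapplied, or guarded off.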
    def status(idx):
        guards = q.seriesguards[idx] or ['unguarded']
        if q.series[idx] in applied:
            state = 'applied'
        elif q.pushable(idx)[0]:
            state = 'unapplied'
        else:
            state = 'guarded'
        label = 'qguard.patch qguard.%s qseries.%s' % (state, state)
        ui.write('%s: ' % ui.label(q.series[idx], label))

        for i, guard in enumerate(guards):
            if guard.startswith('+'):
                ui.write(guard, label='qguard.positive')
            elif guard.startswith('-'):
                ui.write(guard, label='qguard.negative')
            else:
                ui.write(guard, label='qguard.unguarded')
            if i != len(guards) - 1:
                ui.write(' ')
        ui.write('\n')
    q = repo.mq
    applied = set(p.name for p in q.applied)
    patch = None
    args = list(args)
    if opts.get(r'list'):
        if args or opts.get('none'):
            raise error.Abort(_('cannot mix -l/--list with options or '
                               'arguments'))
        for i in xrange(len(q.series)):
            status(i)
        return
    if not args or args[0][0:1] in '-+':
        if not q.applied:
            raise error.Abort(_('no patches applied'))
        patch = q.applied[-1].name
    if patch is None and args[0][0:1] not in '-+':
        patch = args.pop(0)
    if patch is None:
        raise error.Abort(_('no patch to work with'))
    if args or opts.get('none'):
        idx = q.findseries(patch)
        if idx is None:
            raise error.Abort(_('no patch named %s') % patch)
        q.setguards(idx, args)
        q.savedirty()
    else:
        status(q.series.index(q.lookup(patch)))

@command("qheader", [], _('hg qheader [PATCH]'))
def header(ui, repo, patch=None):
    """print the header of the topmost or specified patch

    Returns 0 on success."""
    q = repo.mq

    if patch:
        patch = q.lookup(patch)
    else:
        if not q.applied:
            ui.write(_('no patches applied\n'))
            return 1
        patch = q.lookup('qtip')
    ph = patchheader(q.join(patch), q.plainmode)

    ui.write('\n'.join(ph.message) + '\n')

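# qsave copies the patch directory to "<path>.N"; lastsavename finds the
# existing copy with the highest N, and savename returns the next unused
# name in that sequence.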
def lastsavename(path):
    (directory, base) = os.path.split(path)
    names = os.listdir(directory)
    namere = re.compile("%s.([0-9]+)" % base)
    maxindex = None
    maxname = None
    for f in names:
        m = namere.match(f)
        if m:
            index = int(m.group(1))
            if maxindex is None or index > maxindex:
                maxindex = index
                maxname = f
    if maxname:
        return (os.path.join(directory, maxname), maxindex)
    return (None, None)

def savename(path):
    (last, index) = lastsavename(path)
    if last is None:
        index = 0
    newpath = path + ".%d" % (index + 1)
    return newpath

@command("^qpush",
         [('', 'keep-changes', None,
           _('tolerate non-conflicting local changes')),
          ('f', 'force', None, _('apply on top of local changes')),
          ('e', 'exact', None,
           _('apply the target patch to its recorded parent')),
          ('l', 'list', None, _('list patch name in commit text')),
          ('a', 'all', None, _('apply all patches')),
          ('m', 'merge', None, _('merge from another queue (DEPRECATED)')),
          ('n', 'name', '',
           _('merge queue name (DEPRECATED)'), _('NAME')),
          ('', 'move', None,
           _('reorder patch series and apply only the patch')),
          ('', 'no-backup', None, _('do not save backup copies of files'))],
         _('hg qpush [-f] [-l] [-a] [--move] [PATCH | INDEX]'))
def push(ui, repo, patch=None, **opts):
    """push the next patch onto the stack

    By default, abort if the working directory contains uncommitted
    changes. With --keep-changes, abort only if the uncommitted files
    overlap with patched files. With -f/--force, backup and patch over
    uncommitted changes.

    Return 0 on success.
    """
    q = repo.mq
    mergeq = None

    opts = pycompat.byteskwargs(opts)
    opts = fixkeepchangesopts(ui, opts)
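    # Deprecated merge-queue support: locate the saved queue to merge from,
    # either the one named with -n or the most recent qsave copy.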
    if opts.get('merge'):
        if opts.get('name'):
            newpath = repo.vfs.join(opts.get('name'))
        else:
            newpath, i = lastsavename(q.path)
        if not newpath:
            ui.warn(_("no saved queues found, please use -n\n"))
            return 1
        mergeq = queue(ui, repo.baseui, repo.path, newpath)
        ui.warn(_("merging with queue at: %s\n") % mergeq.path)
    ret = q.push(repo, patch, force=opts.get('force'), list=opts.get('list'),
                 mergeq=mergeq, all=opts.get('all'), move=opts.get('move'),
                 exact=opts.get('exact'), nobackup=opts.get('no_backup'),
                 keepchanges=opts.get('keep_changes'))
    return ret

@command("^qpop",
         [('a', 'all', None, _('pop all patches')),
          ('n', 'name', '',
           _('queue name to pop (DEPRECATED)'), _('NAME')),
          ('', 'keep-changes', None,
           _('tolerate non-conflicting local changes')),
          ('f', 'force', None, _('forget any local changes to patched files')),
          ('', 'no-backup', None, _('do not save backup copies of files'))],
         _('hg qpop [-a] [-f] [PATCH | INDEX]'))
def pop(ui, repo, patch=None, **opts):
    """pop the current patch off the stack

    Without argument, pops off the top of the patch stack. If given a
    patch name, keeps popping off patches until the named patch is at
    the top of the stack.

    By default, abort if the working directory contains uncommitted
    changes. With --keep-changes, abort only if the uncommitted files
    overlap with patched files. With -f/--force, backup and discard
    changes made to such files.

    Return 0 on success.
    """
    opts = pycompat.byteskwargs(opts)
    opts = fixkeepchangesopts(ui, opts)
    localupdate = True
    if opts.get('name'):
        q = queue(ui, repo.baseui, repo.path, repo.vfs.join(opts.get('name')))
        ui.warn(_('using patch queue: %s\n') % q.path)
        localupdate = False
    else:
        q = repo.mq
    ret = q.pop(repo, patch, force=opts.get('force'), update=localupdate,
                all=opts.get('all'), nobackup=opts.get('no_backup'),
                keepchanges=opts.get('keep_changes'))
    q.savedirty()
    return ret

@command("qrename|qmv", [], _('hg qrename PATCH1 [PATCH2]'))
def rename(ui, repo, patch, name=None, **opts):
    """rename a patch

    With one argument, renames the current patch to PATCH1.
    With two arguments, renames PATCH1 to PATCH2.

    Returns 0 on success."""
    q = repo.mq
    if not name:
        name = patch
        patch = None

    if patch:
        patch = q.lookup(patch)
    else:
        if not q.applied:
            ui.write(_('no patches applied\n'))
            return
        patch = q.lookup('qtip')
    absdest = q.join(name)
    if os.path.isdir(absdest):
        name = normname(os.path.join(name, os.path.basename(patch)))
        absdest = q.join(name)
    q.checkpatchname(name)

    ui.note(_('renaming %s to %s\n') % (patch, name))
    i = q.findseries(patch)
    guards = q.guard_re.findall(q.fullseries[i])
    q.fullseries[i] = name + ''.join([' #' + g for g in guards])
    q.parseseries()
    q.seriesdirty = True

    info = q.isapplied(patch)
    if info:
        q.applied[info[0]] = statusentry(info[1], name)
        q.applieddirty = True

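    # Move the patch file on disk; if the patch queue is itself a versioned
    # repository, record the rename in its dirstate as well.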
    destdir = os.path.dirname(absdest)
    if not os.path.isdir(destdir):
        os.makedirs(destdir)
    util.rename(q.join(patch), absdest)
    r = q.qrepo()
    if r and patch in r.dirstate:
        wctx = r[None]
        with r.wlock():
            if r.dirstate[patch] == 'a':
                r.dirstate.drop(patch)
                r.dirstate.add(name)
            else:
                wctx.copy(patch, name)
                wctx.forget([patch])

    q.savedirty()

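# qrestore and qsave are an older mechanism for saving and restoring queue
# state; both are deprecated in favor of :hg:`rebase`.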
@command("qrestore",
         [('d', 'delete', None, _('delete save entry')),
          ('u', 'update', None, _('update queue working directory'))],
         _('hg qrestore [-d] [-u] REV'))
def restore(ui, repo, rev, **opts):
    """restore the queue state saved by a revision (DEPRECATED)

    This command is deprecated, use :hg:`rebase` instead."""
    rev = repo.lookup(rev)
    q = repo.mq
    q.restore(repo, rev, delete=opts.get(r'delete'),
              qupdate=opts.get(r'update'))
    q.savedirty()
    return 0

@command("qsave",
         [('c', 'copy', None, _('copy patch directory')),
          ('n', 'name', '',
           _('copy directory name'), _('NAME')),
          ('e', 'empty', None, _('clear queue status file')),
          ('f', 'force', None, _('force copy'))] + cmdutil.commitopts,
         _('hg qsave [-m TEXT] [-l FILE] [-c] [-n NAME] [-e] [-f]'))
def save(ui, repo, **opts):
    """save current queue state (DEPRECATED)

    This command is deprecated, use :hg:`rebase` instead."""
    q = repo.mq
    opts = pycompat.byteskwargs(opts)
    message = cmdutil.logmessage(ui, opts)
    ret = q.save(repo, msg=message)
    if ret:
        return ret
    q.savedirty() # save to .hg/patches before copying
    if opts.get('copy'):
        path = q.path
        if opts.get('name'):
            newpath = os.path.join(q.basepath, opts.get('name'))
            if os.path.exists(newpath):
                if not os.path.isdir(newpath):
                    raise error.Abort(_('destination %s exists and is not '
                                        'a directory') % newpath)
                if not opts.get('force'):
                    raise error.Abort(_('destination %s exists, '
                                        'use -f to force') % newpath)
        else:
            newpath = savename(path)
        ui.warn(_("copy %s to %s\n") % (path, newpath))
        util.copyfiles(path, newpath)
    if opts.get('empty'):
        del q.applied[:]
        q.applieddirty = True
        q.savedirty()
    return 0


@command("qselect",
         [('n', 'none', None, _('disable all guards')),
          ('s', 'series', None, _('list all guards in series file')),
          ('', 'pop', None, _('pop to before first guarded applied patch')),
          ('', 'reapply', None, _('pop, then reapply patches'))],
         _('hg qselect [OPTION]... [GUARD]...'))
def select(ui, repo, *args, **opts):
    '''set or print guarded patches to push

    Use the :hg:`qguard` command to set or print guards on a patch, then use
    qselect to tell mq which guards to use. A patch will be pushed if
    it has no guards or any positive guards match the currently
    selected guard, but will not be pushed if any negative guards
    match the current guard. For example::

        qguard foo.patch -- -stable    (negative guard)
        qguard bar.patch    +stable    (positive guard)
        qselect stable

    This activates the "stable" guard. mq will skip foo.patch (because
    it has a negative match) but push bar.patch (because it has a
    positive match).

    With no arguments, prints the currently active guards.
    With one argument, sets the active guard.

    Use -n/--none to deactivate guards (no other arguments needed).
    When no guards are active, patches with positive guards are
    skipped and patches with negative guards are pushed.

    qselect can change the guards on applied patches. It does not pop
    guarded patches by default. Use --pop to pop back to the last
    applied patch that is not guarded. Use --reapply (which implies
    --pop) to push back to the current patch afterwards, but skip
    guarded patches.

    Use -s/--series to print a list of all guards in the series file
    (no other arguments needed). Use -v for more information.

    Returns 0 on success.'''

    q = repo.mq
    opts = pycompat.byteskwargs(opts)
    guards = q.active()
    pushable = lambda i: q.pushable(q.applied[i].name)[0]
    if args or opts.get('none'):
        old_unapplied = q.unapplied(repo)
        old_guarded = [i for i in xrange(len(q.applied)) if not pushable(i)]
        q.setactive(args)
        q.savedirty()
        if not args:
            ui.status(_('guards deactivated\n'))
        if not opts.get('pop') and not opts.get('reapply'):
            unapplied = q.unapplied(repo)
            guarded = [i for i in xrange(len(q.applied)) if not pushable(i)]
            if len(unapplied) != len(old_unapplied):
                ui.status(_('number of unguarded, unapplied patches has '
                            'changed from %d to %d\n') %
                          (len(old_unapplied), len(unapplied)))
            if len(guarded) != len(old_guarded):
                ui.status(_('number of guarded, applied patches has changed '
                            'from %d to %d\n') %
                          (len(old_guarded), len(guarded)))
    elif opts.get('series'):
        guards = {}
        noguards = 0
        for gs in q.seriesguards:
            if not gs:
                noguards += 1
            for g in gs:
                guards.setdefault(g, 0)
                guards[g] += 1
        if ui.verbose:
            guards['NONE'] = noguards
        guards = list(guards.items())
        guards.sort(key=lambda x: x[0][1:])
        if guards:
            ui.note(_('guards in series file:\n'))
            for guard, count in guards:
                ui.note('%2d ' % count)
                ui.write(guard, '\n')
        else:
            ui.note(_('no guards in series file\n'))
    else:
        if guards:
            ui.note(_('active guards:\n'))
            for g in guards:
                ui.write(g, '\n')
        else:
            ui.write(_('no active guards\n'))
    reapply = opts.get('reapply') and q.applied and q.applied[-1].name
    popped = False
    if opts.get('pop') or opts.get('reapply'):
        for i in xrange(len(q.applied)):
            if not pushable(i):
                ui.status(_('popping guarded patches\n'))
                popped = True
                if i == 0:
                    q.pop(repo, all=True)
                else:
                    q.pop(repo, q.applied[i - 1].name)
                break
    if popped:
        try:
            if reapply:
                ui.status(_('reapplying unguarded patches\n'))
                q.push(repo, reapply)
        finally:
            q.savedirty()

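# A short walk-through of the guard semantics handled above (hypothetical
# patch names, following the example in the qselect docstring):
#
#   $ hg qguard foo.patch -- -stable     # negative guard
#   $ hg qguard bar.patch +stable        # positive guard
#   $ hg qselect stable --reapply        # pop, then push only bar.patch
#
# --reapply implies --pop: guarded patches are popped first, then unguarded
# ones are pushed back, as the loop over q.applied above shows.
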
@command("qfinish",
         [('a', 'applied', None, _('finish all applied changesets'))],
         _('hg qfinish [-a] [REV]...'))
def finish(ui, repo, *revrange, **opts):
    """move applied patches into repository history

    Finishes the specified revisions (corresponding to applied
    patches) by moving them out of mq control into regular repository
    history.

    Accepts a revision range or the -a/--applied option. If --applied
    is specified, all applied mq revisions are removed from mq
    control. Otherwise, the given revisions must be at the base of the
    stack of applied patches.

    This can be especially useful if your changes have been applied to
    an upstream repository, or if you are about to push your changes
    to upstream.

    Returns 0 on success.
    """
    if not opts.get(r'applied') and not revrange:
        raise error.Abort(_('no revisions specified'))
    elif opts.get(r'applied'):
        revrange = ('qbase::qtip',) + revrange

    q = repo.mq
    if not q.applied:
        ui.status(_('no patches applied\n'))
        return 0

    revs = scmutil.revrange(repo, revrange)
    if repo['.'].rev() in revs and repo[None].files():
        ui.warn(_('warning: uncommitted changes in the working directory\n'))
    # queue.finish may change phases but leaves the responsibility of
    # taking the repo lock to the caller, to avoid deadlock with wlock.
    # This command code is responsible for that locking.
    with repo.lock():
        q.finish(repo, revs)
        q.savedirty()
        return 0

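# Sketch of the workflow the qfinish docstring describes, once the applied
# patches have been accepted upstream (arguments shown use the usual mq
# names):
#
#   $ hg qfinish -a          # fold every applied patch into history
#   $ hg qfinish qbase       # fold only the bottom patch of the stack
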
@command("qqueue",
         [('l', 'list', False, _('list all available queues')),
          ('', 'active', False, _('print name of active queue')),
          ('c', 'create', False, _('create new queue')),
          ('', 'rename', False, _('rename active queue')),
          ('', 'delete', False, _('delete reference to queue')),
          ('', 'purge', False, _('delete queue, and remove patch dir')),
         ],
         _('[OPTION] [QUEUE]'))
def qqueue(ui, repo, name=None, **opts):
    '''manage multiple patch queues

    Supports switching between different patch queues, as well as creating
    new patch queues and deleting existing ones.

    Omitting a queue name or specifying -l/--list will show you the registered
    queues - by default the "normal" patches queue is registered. The currently
    active queue will be marked with "(active)". Specifying --active will print
    only the name of the active queue.

    To create a new queue, use -c/--create. The queue is automatically made
    active, except in the case where there are applied patches from the
    currently active queue in the repository. Then the queue will only be
    created and switching will fail.

    To delete an existing queue, use --delete. You cannot delete the currently
    active queue.

    Returns 0 on success.
    '''
    q = repo.mq
    _defaultqueue = 'patches'
    _allqueues = 'patches.queues'
    _activequeue = 'patches.queue'

    def _getcurrent():
        cur = os.path.basename(q.path)
        if cur.startswith('patches-'):
            cur = cur[8:]
        return cur

    def _noqueues():
        try:
            fh = repo.vfs(_allqueues, 'r')
            fh.close()
        except IOError:
            return True

        return False

    def _getqueues():
        current = _getcurrent()

        try:
            fh = repo.vfs(_allqueues, 'r')
            queues = [queue.strip() for queue in fh if queue.strip()]
            fh.close()
            if current not in queues:
                queues.append(current)
        except IOError:
            queues = [_defaultqueue]

        return sorted(queues)

    def _setactive(name):
        if q.applied:
            raise error.Abort(_('new queue created, but cannot make active '
                                'as patches are applied'))
        _setactivenocheck(name)

    def _setactivenocheck(name):
        fh = repo.vfs(_activequeue, 'w')
        if name != 'patches':
            fh.write(name)
        fh.close()

    def _addqueue(name):
        fh = repo.vfs(_allqueues, 'a')
        fh.write('%s\n' % (name,))
        fh.close()

    def _queuedir(name):
        if name == 'patches':
            return repo.vfs.join('patches')
        else:
            return repo.vfs.join('patches-' + name)

    def _validname(name):
        for n in name:
            if n in ':\\/.':
                return False
        return True

    def _delete(name):
        if name not in existing:
            raise error.Abort(_('cannot delete queue that does not exist'))

        current = _getcurrent()

        if name == current:
            raise error.Abort(_('cannot delete currently active queue'))

        fh = repo.vfs('patches.queues.new', 'w')
        for queue in existing:
            if queue == name:
                continue
            fh.write('%s\n' % (queue,))
        fh.close()
        repo.vfs.rename('patches.queues.new', _allqueues)

    opts = pycompat.byteskwargs(opts)
    if not name or opts.get('list') or opts.get('active'):
        current = _getcurrent()
        if opts.get('active'):
            ui.write('%s\n' % (current,))
            return
        for queue in _getqueues():
            ui.write('%s' % (queue,))
            if queue == current and not ui.quiet:
                ui.write(_(' (active)\n'))
            else:
                ui.write('\n')
        return

    if not _validname(name):
        raise error.Abort(
            _('invalid queue name, may not contain the characters ":\\/."'))

    with repo.wlock():
        existing = _getqueues()

        if opts.get('create'):
            if name in existing:
                raise error.Abort(_('queue "%s" already exists') % name)
            if _noqueues():
                _addqueue(_defaultqueue)
            _addqueue(name)
            _setactive(name)
        elif opts.get('rename'):
            current = _getcurrent()
            if name == current:
                raise error.Abort(_('can\'t rename "%s" to its current name')
                                  % name)
            if name in existing:
                raise error.Abort(_('queue "%s" already exists') % name)

            olddir = _queuedir(current)
            newdir = _queuedir(name)

            if os.path.exists(newdir):
                raise error.Abort(_('non-queue directory "%s" already exists') %
                                  newdir)

            fh = repo.vfs('patches.queues.new', 'w')
            for queue in existing:
                if queue == current:
                    fh.write('%s\n' % (name,))
                    if os.path.exists(olddir):
                        util.rename(olddir, newdir)
                else:
                    fh.write('%s\n' % (queue,))
            fh.close()
            repo.vfs.rename('patches.queues.new', _allqueues)
            _setactivenocheck(name)
        elif opts.get('delete'):
            _delete(name)
        elif opts.get('purge'):
            if name in existing:
                _delete(name)
            qdir = _queuedir(name)
            if os.path.exists(qdir):
                shutil.rmtree(qdir)
        else:
            if name not in existing:
                raise error.Abort(_('use --create to create a new queue'))
            _setactive(name)

def mqphasedefaults(repo, roots):
    """callback used to set mq changeset as secret when no phase data exists"""
    if repo.mq.applied:
        if repo.ui.configbool('mq', 'secret'):
            mqphase = phases.secret
        else:
            mqphase = phases.draft
        qbase = repo[repo.mq.applied[0].node]
        roots[mqphase].add(qbase.node())
    return roots

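# The callback above reacts to the mq.secret knob; with the following
# (hypothetical) hgrc snippet, freshly applied mq changesets default to the
# secret phase instead of draft:
#
#   [mq]
#   secret = True
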
def reposetup(ui, repo):
    class mqrepo(repo.__class__):
        @localrepo.unfilteredpropertycache
        def mq(self):
            return queue(self.ui, self.baseui, self.path)

        def invalidateall(self):
            super(mqrepo, self).invalidateall()
            if localrepo.hasunfilteredcache(self, 'mq'):
                # recreate mq in case queue path was changed
                delattr(self.unfiltered(), 'mq')

        def abortifwdirpatched(self, errmsg, force=False):
            if self.mq.applied and self.mq.checkapplied and not force:
                parents = self.dirstate.parents()
                patches = [s.node for s in self.mq.applied]
                if parents[0] in patches or parents[1] in patches:
                    raise error.Abort(errmsg)

        def commit(self, text="", user=None, date=None, match=None,
                   force=False, editor=False, extra=None):
            if extra is None:
                extra = {}
            self.abortifwdirpatched(
                _('cannot commit over an applied mq patch'),
                force)

            return super(mqrepo, self).commit(text, user, date, match, force,
                                              editor, extra)

        def checkpush(self, pushop):
            if self.mq.applied and self.mq.checkapplied and not pushop.force:
                outapplied = [e.node for e in self.mq.applied]
                if pushop.revs:
                    # Assume applied patches have no non-patch descendants
                    # and are not on the remote already. Filter out any
                    # changeset that is not being pushed.
                    heads = set(pushop.revs)
                    for node in reversed(outapplied):
                        if node in heads:
                            break
                        else:
                            outapplied.pop()
                # looking for pushed and shared changeset
                for node in outapplied:
                    if self[node].phase() < phases.secret:
                        raise error.Abort(_('source has mq patches applied'))
                # no non-secret patches pushed
            super(mqrepo, self).checkpush(pushop)

        def _findtags(self):
            '''augment tags from base class with patch tags'''
            result = super(mqrepo, self)._findtags()

            q = self.mq
            if not q.applied:
                return result

            mqtags = [(patch.node, patch.name) for patch in q.applied]

            try:
                # for now ignore filtering business
                self.unfiltered().changelog.rev(mqtags[-1][0])
            except error.LookupError:
                self.ui.warn(_('mq status file refers to unknown node %s\n')
                             % short(mqtags[-1][0]))
                return result

            # do not add fake tags for filtered revisions
            included = self.changelog.hasnode
            mqtags = [mqt for mqt in mqtags if included(mqt[0])]
            if not mqtags:
                return result

            mqtags.append((mqtags[-1][0], 'qtip'))
            mqtags.append((mqtags[0][0], 'qbase'))
            mqtags.append((self.changelog.parents(mqtags[0][0])[0], 'qparent'))
            tags = result[0]
            for patch in mqtags:
                if patch[1] in tags:
                    self.ui.warn(_('tag %s overrides mq patch of the same '
                                   'name\n') % patch[1])
                else:
                    tags[patch[1]] = patch[0]

            return result

    if repo.local():
        repo.__class__ = mqrepo

        repo._phasedefaults.append(mqphasedefaults)

def mqimport(orig, ui, repo, *args, **kwargs):
    if (util.safehasattr(repo, 'abortifwdirpatched')
        and not kwargs.get(r'no_commit', False)):
        repo.abortifwdirpatched(_('cannot import over an applied patch'),
                                kwargs.get(r'force'))
    return orig(ui, repo, *args, **kwargs)

def mqinit(orig, ui, *args, **kwargs):
    mq = kwargs.pop(r'mq', None)

    if not mq:
        return orig(ui, *args, **kwargs)

    if args:
        repopath = args[0]
        if not hg.islocal(repopath):
            raise error.Abort(_('only a local queue repository '
                                'may be initialized'))
    else:
        repopath = cmdutil.findrepo(pycompat.getcwd())
        if not repopath:
            raise error.Abort(_('there is no Mercurial repository here '
                                '(.hg not found)'))
    repo = hg.repository(ui, repopath)
    return qinit(ui, repo, True)

def mqcommand(orig, ui, repo, *args, **kwargs):
    """Add --mq option to operate on patch repository instead of main"""

    # some commands do not like getting unknown options
    mq = kwargs.pop(r'mq', None)

    if not mq:
        return orig(ui, repo, *args, **kwargs)

    q = repo.mq
    r = q.qrepo()
    if not r:
        raise error.Abort(_('no queue repository'))
    return orig(r.ui, r, *args, **kwargs)

def summaryhook(ui, repo):
    q = repo.mq
    m = []
    a, u = len(q.applied), len(q.unapplied(repo))
    if a:
        m.append(ui.label(_("%d applied"), 'qseries.applied') % a)
    if u:
        m.append(ui.label(_("%d unapplied"), 'qseries.unapplied') % u)
    if m:
        # i18n: column positioning for "hg summary"
        ui.write(_("mq: %s\n") % ', '.join(m))
    else:
        # i18n: column positioning for "hg summary"
        ui.note(_("mq: (empty queue)\n"))

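# Example of the line this hook contributes to `hg summary`, assuming a
# queue with three applied and two unapplied patches (output sketch):
#
#   mq: 3 applied, 2 unapplied
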
revsetpredicate = registrar.revsetpredicate()

@revsetpredicate('mq()')
def revsetmq(repo, subset, x):
    """Changesets managed by MQ.
    """
    revsetlang.getargs(x, 0, 0, _("mq takes no arguments"))
    applied = set([repo[r.node].rev() for r in repo.mq.applied])
    return smartset.baseset([r for r in subset if r in applied])

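# The mq() predicate registered above composes with ordinary revset syntax;
# for example (illustrative shell session, not part of this module):
#
#   $ hg log -r 'mq()'              # all changesets managed by MQ
#   $ hg log -r 'mq() and draft()'  # only the draft ones
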
# tell hggettext to extract docstrings from these functions:
i18nfunctions = [revsetmq]

def extsetup(ui):
    # Ensure mq wrappers are called first, regardless of extension load order by
    # NOT wrapping in uisetup() and instead deferring to init stage two here.
    mqopt = [('', 'mq', None, _("operate on patch repository"))]

    extensions.wrapcommand(commands.table, 'import', mqimport)
    cmdutil.summaryhooks.add('mq', summaryhook)

    entry = extensions.wrapcommand(commands.table, 'init', mqinit)
    entry[1].extend(mqopt)

    def dotable(cmdtable):
        for cmd, entry in cmdtable.iteritems():
            cmd = cmdutil.parsealiases(cmd)[0]
            func = entry[0]
            if func.norepo:
                continue
            entry = extensions.wrapcommand(cmdtable, cmd, mqcommand)
            entry[1].extend(mqopt)

    dotable(commands.table)

    for extname, extmodule in extensions.extensions():
        if extmodule.__file__ != __file__:
            dotable(getattr(extmodule, 'cmdtable', {}))

colortable = {'qguard.negative': 'red',
              'qguard.positive': 'yellow',
              'qguard.unguarded': 'green',
              'qseries.applied': 'blue bold underline',
              'qseries.guarded': 'black bold',
              'qseries.missing': 'red bold',
              'qseries.unapplied': 'black bold'}
@@ -1,502 +1,505
# narrowbundle2.py - bundle2 extensions for narrow repository support
#
# Copyright 2017 Google, Inc.
#
# This software may be used and distributed according to the terms of the
# GNU General Public License version 2 or any later version.

from __future__ import absolute_import

import collections
import errno
import struct

from mercurial.i18n import _
from mercurial.node import (
    bin,
    nullid,
    nullrev,
)
from mercurial import (
    bundle2,
    changegroup,
    dagutil,
    error,
    exchange,
    extensions,
    narrowspec,
    repair,
    util,
    wireproto,
)
from mercurial.utils import (
    stringutil,
)

NARROWCAP = 'narrow'
_NARROWACL_SECTION = 'narrowhgacl'
_CHANGESPECPART = NARROWCAP + ':changespec'
_SPECPART = NARROWCAP + ':spec'
_SPECPART_INCLUDE = 'include'
_SPECPART_EXCLUDE = 'exclude'
_KILLNODESIGNAL = 'KILL'
_DONESIGNAL = 'DONE'
_ELIDEDCSHEADER = '>20s20s20sl' # cset id, p1, p2, len(text)
_ELIDEDMFHEADER = '>20s20s20s20sl' # manifest id, p1, p2, link id, len(text)
_CSHEADERSIZE = struct.calcsize(_ELIDEDCSHEADER)
_MFHEADERSIZE = struct.calcsize(_ELIDEDMFHEADER)

# When advertising capabilities, always include narrow clone support.
def getrepocaps_narrow(orig, repo, **kwargs):
    caps = orig(repo, **kwargs)
    caps[NARROWCAP] = ['v0']
    return caps

def _computeellipsis(repo, common, heads, known, match, depth=None):
    """Compute the shape of a narrowed DAG.

    Args:
      repo: The repository we're transferring.
      common: The roots of the DAG range we're transferring.
        May be just [nullid], which means all ancestors of heads.
      heads: The heads of the DAG range we're transferring.
      known: Revs the client already has; like heads, they are treated as
        required when shaping the narrowed DAG.
      match: The narrowmatcher that allows us to identify relevant changes.
      depth: If not None, only consider nodes to be full nodes if they are at
        most depth changesets away from one of heads.

    Returns:
      A tuple of (visitnodes, relevant_nodes, ellipsisroots) where:

      visitnodes: The list of nodes (either full or ellipsis) which
        need to be sent to the client.
      relevant_nodes: The set of changelog nodes which change a file inside
        the narrowspec. The client needs these as non-ellipsis nodes.
      ellipsisroots: A dict of {rev: parents} that is used in
        narrowchangegroup to produce ellipsis nodes with the
        correct parents.
    """
    cl = repo.changelog
    mfl = repo.manifestlog

    cldag = dagutil.revlogdag(cl)
    # dagutil does not like nullid/nullrev
    commonrevs = cldag.internalizeall(common - set([nullid])) | set([nullrev])
    headsrevs = cldag.internalizeall(heads)
    if depth:
        revdepth = {h: 0 for h in headsrevs}

    ellipsisheads = collections.defaultdict(set)
    ellipsisroots = collections.defaultdict(set)

    def addroot(head, curchange):
        """Add a root to an ellipsis head, splitting heads with 3 roots."""
        ellipsisroots[head].add(curchange)
        # Recursively split ellipsis heads with 3 roots by finding the
        # roots' youngest common descendant which is an elided merge commit.
        # That descendant takes 2 of the 3 roots as its own, and becomes a
        # root of the head.
        while len(ellipsisroots[head]) > 2:
            child, roots = splithead(head)
            splitroots(head, child, roots)
            head = child # Recurse in case we just added a 3rd root

    def splitroots(head, child, roots):
        ellipsisroots[head].difference_update(roots)
        ellipsisroots[head].add(child)
        ellipsisroots[child].update(roots)
        ellipsisroots[child].discard(child)

    def splithead(head):
        r1, r2, r3 = sorted(ellipsisroots[head])
        for nr1, nr2 in ((r2, r3), (r1, r3), (r1, r2)):
            mid = repo.revs('sort(merge() & %d::%d & %d::%d, -rev)',
                            nr1, head, nr2, head)
            for j in mid:
                if j == nr2:
                    return nr2, (nr1, nr2)
                if j not in ellipsisroots or len(ellipsisroots[j]) < 2:
                    return j, (nr1, nr2)
        raise error.Abort('Failed to split up ellipsis node! head: %d, '
                          'roots: %d %d %d' % (head, r1, r2, r3))

    missing = list(cl.findmissingrevs(common=commonrevs, heads=headsrevs))
    visit = reversed(missing)
    relevant_nodes = set()
    visitnodes = [cl.node(m) for m in missing]
    required = set(headsrevs) | known
    for rev in visit:
        clrev = cl.changelogrevision(rev)
        ps = cldag.parents(rev)
        if depth is not None:
            curdepth = revdepth[rev]
            for p in ps:
                revdepth[p] = min(curdepth + 1, revdepth.get(p, depth + 1))
        needed = False
        shallow_enough = depth is None or revdepth[rev] <= depth
        if shallow_enough:
            curmf = mfl[clrev.manifest].read()
            if ps:
                # We choose to not trust the changed files list in
                # changesets because it's not always correct. TODO: could
                # we trust it for the non-merge case?
                p1mf = mfl[cl.changelogrevision(ps[0]).manifest].read()
                needed = bool(curmf.diff(p1mf, match))
                if not needed and len(ps) > 1:
                    # For merge changes, the list of changed files is not
                    # helpful, since we need to emit the merge if a file
                    # in the narrow spec has changed on either side of the
                    # merge. As a result, we do a manifest diff to check.
                    p2mf = mfl[cl.changelogrevision(ps[1]).manifest].read()
                    needed = bool(curmf.diff(p2mf, match))
            else:
                # For a root node, we need to include the node if any
                # files in the node match the narrowspec.
                needed = any(curmf.walk(match))

        if needed:
            for head in ellipsisheads[rev]:
                addroot(head, rev)
            for p in ps:
                required.add(p)
            relevant_nodes.add(cl.node(rev))
        else:
            if not ps:
                ps = [nullrev]
            if rev in required:
                for head in ellipsisheads[rev]:
                    addroot(head, rev)
                for p in ps:
                    ellipsisheads[p].add(rev)
            else:
                for p in ps:
                    ellipsisheads[p] |= ellipsisheads[rev]

    # add common changesets as roots of their reachable ellipsis heads
    for c in commonrevs:
        for head in ellipsisheads[c]:
            addroot(head, c)
    return visitnodes, relevant_nodes, ellipsisroots

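# Shape of the result above, as a hedged sketch: ellipsisroots maps an
# ellipsis head rev to the set of its root revs, and addroot() maintains the
# invariant len(ellipsisroots[head]) <= 2 by repeatedly splitting any head
# that accumulates a third root at an elided merge commit.
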
def _packellipsischangegroup(repo, common, match, relevant_nodes,
                             ellipsisroots, visitnodes, depth, source, version):
    if version in ('01', '02'):
        raise error.Abort(
            'ellipsis nodes require at least cg3 on client and server, '
            'but negotiated version %s' % version)
    # We wrap cg1packer.revchunk, using a side channel to pass
    # relevant_nodes into that area. Then if linknode isn't in the
    # set, we know we have an ellipsis node and we should defer
    # sending that node's data. We override close() to detect
    # pending ellipsis nodes and flush them.
    packer = changegroup.getbundler(version, repo)
    # Let the packer have access to the narrow matcher so it can
    # omit filelogs and dirlogs as needed
    packer._narrow_matcher = lambda : match
    # Give the packer the list of nodes which should not be
    # ellipsis nodes. We store this rather than the set of nodes
    # that should be an ellipsis because for very large histories
    # we expect this to be significantly smaller.
    packer.full_nodes = relevant_nodes
    # Maps ellipsis revs to their roots at the changelog level.
    packer.precomputed_ellipsis = ellipsisroots
    # Maps CL revs to per-revlog revisions. Cleared in close() at
    # the end of each group.
    packer.clrev_to_localrev = {}
    packer.next_clrev_to_localrev = {}
    # Maps changelog nodes to changelog revs. Filled in once
    # during changelog stage and then left unmodified.
    packer.clnode_to_rev = {}
    packer.changelog_done = False
    # If true, informs the packer that it is serving shallow content and might
    # need to pack file contents not introduced by the changes being packed.
    packer.is_shallow = depth is not None

    return packer.generate(common, visitnodes, False, source)

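# Hedged usage sketch for the helper above; the argument values mirror what
# _computeellipsis() returns, and the variable names are assumptions:
#
#   visitnodes, relevant, ellipsisroots = _computeellipsis(
#       repo, common, heads, known, newmatch, depth=depth)
#   chunks = _packellipsischangegroup(repo, common, newmatch, relevant,
#                                     ellipsisroots, visitnodes, depth,
#                                     'serve', '03')
#
# '03' satisfies the cg3 floor enforced at the top of the function; '01'
# and '02' are rejected because they cannot represent ellipsis nodes.
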
# Serve a changegroup for a client with a narrow clone.
def getbundlechangegrouppart_narrow(bundler, repo, source,
                                    bundlecaps=None, b2caps=None, heads=None,
                                    common=None, **kwargs):
    cgversions = b2caps.get('changegroup')
    if cgversions: # 3.1 and 3.2 ship with an empty value
        cgversions = [v for v in cgversions
                      if v in changegroup.supportedoutgoingversions(repo)]
        if not cgversions:
            raise ValueError(_('no common changegroup version'))
        version = max(cgversions)
    else:
        raise ValueError(_("server does not advertise changegroup version,"
                           " can't negotiate support for ellipsis nodes"))

    include = sorted(filter(bool, kwargs.get(r'includepats', [])))
    exclude = sorted(filter(bool, kwargs.get(r'excludepats', [])))
    newmatch = narrowspec.match(repo.root, include=include, exclude=exclude)
    if not repo.ui.configbool("experimental", "narrowservebrokenellipses"):
        outgoing = exchange._computeoutgoing(repo, heads, common)
        if not outgoing.missing:
            return
        def wrappedgetbundler(orig, *args, **kwargs):
            bundler = orig(*args, **kwargs)
            bundler._narrow_matcher = lambda : newmatch
            return bundler
        with extensions.wrappedfunction(changegroup, 'getbundler',
                                        wrappedgetbundler):
            cg = changegroup.makestream(repo, outgoing, version, source)
        part = bundler.newpart('changegroup', data=cg)
        part.addparam('version', version)
        if 'treemanifest' in repo.requirements:
            part.addparam('treemanifest', '1')

        if include or exclude:
            narrowspecpart = bundler.newpart(_SPECPART)
            if include:
                narrowspecpart.addparam(
                    _SPECPART_INCLUDE, '\n'.join(include), mandatory=True)
            if exclude:
                narrowspecpart.addparam(
                    _SPECPART_EXCLUDE, '\n'.join(exclude), mandatory=True)

        return

    depth = kwargs.get(r'depth', None)
    if depth is not None:
        depth = int(depth)
        if depth < 1:
            raise error.Abort(_('depth must be positive, got %d') % depth)

    heads = set(heads or repo.heads())
    common = set(common or [nullid])
    oldinclude = sorted(filter(bool, kwargs.get(r'oldincludepats', [])))
    oldexclude = sorted(filter(bool, kwargs.get(r'oldexcludepats', [])))
    known = {bin(n) for n in kwargs.get(r'known', [])}
    if known and (oldinclude != include or oldexclude != exclude):
        # Steps:
        # 1. Send kill for "$known & ::common"
        #
        # 2. Send changegroup for ::common
        #
        # 3. Proceed.
        #
        # In the future, we can send kills for only the specific
        # nodes we know should go away or change shape, and then
        # send a data stream that tells the client something like this:
        #
        # a) apply this changegroup
        # b) apply nodes XXX, YYY, ZZZ that you already have
        # c) goto a
        #
        # until they've built up the full new state.
        # Convert to revnums and intersect with "common". The client should
        # have made it a subset of "common" already, but let's be safe.
        known = set(repo.revs("%ln & ::%ln", known, common))
        # TODO: we could send only roots() of this set, and the
        # list of nodes in common, and the client could work out
        # what to strip, instead of us explicitly sending every
        # single node.
        deadrevs = known
        def genkills():
            for r in deadrevs:
                yield _KILLNODESIGNAL
                yield repo.changelog.node(r)
300 yield repo.changelog.node(r)
298 yield _DONESIGNAL
301 yield _DONESIGNAL
299 bundler.newpart(_CHANGESPECPART, data=genkills())
302 bundler.newpart(_CHANGESPECPART, data=genkills())
300 newvisit, newfull, newellipsis = _computeellipsis(
303 newvisit, newfull, newellipsis = _computeellipsis(
301 repo, set(), common, known, newmatch)
304 repo, set(), common, known, newmatch)
302 if newvisit:
305 if newvisit:
303 cg = _packellipsischangegroup(
306 cg = _packellipsischangegroup(
304 repo, common, newmatch, newfull, newellipsis,
307 repo, common, newmatch, newfull, newellipsis,
305 newvisit, depth, source, version)
308 newvisit, depth, source, version)
306 part = bundler.newpart('changegroup', data=cg)
309 part = bundler.newpart('changegroup', data=cg)
307 part.addparam('version', version)
310 part.addparam('version', version)
308 if 'treemanifest' in repo.requirements:
311 if 'treemanifest' in repo.requirements:
309 part.addparam('treemanifest', '1')
312 part.addparam('treemanifest', '1')
310
313
311 visitnodes, relevant_nodes, ellipsisroots = _computeellipsis(
314 visitnodes, relevant_nodes, ellipsisroots = _computeellipsis(
312 repo, common, heads, set(), newmatch, depth=depth)
315 repo, common, heads, set(), newmatch, depth=depth)
313
316
314 repo.ui.debug('Found %d relevant revs\n' % len(relevant_nodes))
317 repo.ui.debug('Found %d relevant revs\n' % len(relevant_nodes))
315 if visitnodes:
318 if visitnodes:
316 cg = _packellipsischangegroup(
319 cg = _packellipsischangegroup(
317 repo, common, newmatch, relevant_nodes, ellipsisroots,
320 repo, common, newmatch, relevant_nodes, ellipsisroots,
318 visitnodes, depth, source, version)
321 visitnodes, depth, source, version)
319 part = bundler.newpart('changegroup', data=cg)
322 part = bundler.newpart('changegroup', data=cg)
320 part.addparam('version', version)
323 part.addparam('version', version)
321 if 'treemanifest' in repo.requirements:
324 if 'treemanifest' in repo.requirements:
322 part.addparam('treemanifest', '1')
325 part.addparam('treemanifest', '1')
323
326
324 def applyacl_narrow(repo, kwargs):
327 def applyacl_narrow(repo, kwargs):
325 ui = repo.ui
328 ui = repo.ui
326 username = ui.shortuser(ui.environ.get('REMOTE_USER') or ui.username())
329 username = ui.shortuser(ui.environ.get('REMOTE_USER') or ui.username())
327 user_includes = ui.configlist(
330 user_includes = ui.configlist(
328 _NARROWACL_SECTION, username + '.includes',
331 _NARROWACL_SECTION, username + '.includes',
329 ui.configlist(_NARROWACL_SECTION, 'default.includes'))
332 ui.configlist(_NARROWACL_SECTION, 'default.includes'))
330 user_excludes = ui.configlist(
333 user_excludes = ui.configlist(
331 _NARROWACL_SECTION, username + '.excludes',
334 _NARROWACL_SECTION, username + '.excludes',
332 ui.configlist(_NARROWACL_SECTION, 'default.excludes'))
335 ui.configlist(_NARROWACL_SECTION, 'default.excludes'))
333 if not user_includes:
336 if not user_includes:
334 raise error.Abort(_("{} configuration for user {} is empty")
337 raise error.Abort(_("{} configuration for user {} is empty")
335 .format(_NARROWACL_SECTION, username))
338 .format(_NARROWACL_SECTION, username))
336
339
337 user_includes = [
340 user_includes = [
338 'path:.' if p == '*' else 'path:' + p for p in user_includes]
341 'path:.' if p == '*' else 'path:' + p for p in user_includes]
339 user_excludes = [
342 user_excludes = [
340 'path:.' if p == '*' else 'path:' + p for p in user_excludes]
343 'path:.' if p == '*' else 'path:' + p for p in user_excludes]
341
344
342 req_includes = set(kwargs.get(r'includepats', []))
345 req_includes = set(kwargs.get(r'includepats', []))
343 req_excludes = set(kwargs.get(r'excludepats', []))
346 req_excludes = set(kwargs.get(r'excludepats', []))
344
347
345 req_includes, req_excludes, invalid_includes = narrowspec.restrictpatterns(
348 req_includes, req_excludes, invalid_includes = narrowspec.restrictpatterns(
346 req_includes, req_excludes, user_includes, user_excludes)
349 req_includes, req_excludes, user_includes, user_excludes)
347
350
348 if invalid_includes:
351 if invalid_includes:
349 raise error.Abort(
352 raise error.Abort(
350 _("The following includes are not accessible for {}: {}")
353 _("The following includes are not accessible for {}: {}")
351 .format(username, invalid_includes))
354 .format(username, invalid_includes))
352
355
353 new_args = {}
356 new_args = {}
354 new_args.update(kwargs)
357 new_args.update(kwargs)
355 new_args['includepats'] = req_includes
358 new_args['includepats'] = req_includes
356 if req_excludes:
359 if req_excludes:
357 new_args['excludepats'] = req_excludes
360 new_args['excludepats'] = req_excludes
358 return new_args
361 return new_args
359
362
360 @bundle2.parthandler(_SPECPART, (_SPECPART_INCLUDE, _SPECPART_EXCLUDE))
363 @bundle2.parthandler(_SPECPART, (_SPECPART_INCLUDE, _SPECPART_EXCLUDE))
361 def _handlechangespec_2(op, inpart):
364 def _handlechangespec_2(op, inpart):
362 includepats = set(inpart.params.get(_SPECPART_INCLUDE, '').splitlines())
365 includepats = set(inpart.params.get(_SPECPART_INCLUDE, '').splitlines())
363 excludepats = set(inpart.params.get(_SPECPART_EXCLUDE, '').splitlines())
366 excludepats = set(inpart.params.get(_SPECPART_EXCLUDE, '').splitlines())
364 if not changegroup.NARROW_REQUIREMENT in op.repo.requirements:
367 if not changegroup.NARROW_REQUIREMENT in op.repo.requirements:
365 op.repo.requirements.add(changegroup.NARROW_REQUIREMENT)
368 op.repo.requirements.add(changegroup.NARROW_REQUIREMENT)
366 op.repo._writerequirements()
369 op.repo._writerequirements()
367 op.repo.setnarrowpats(includepats, excludepats)
370 op.repo.setnarrowpats(includepats, excludepats)
368
371
369 @bundle2.parthandler(_CHANGESPECPART)
372 @bundle2.parthandler(_CHANGESPECPART)
370 def _handlechangespec(op, inpart):
373 def _handlechangespec(op, inpart):
371 repo = op.repo
374 repo = op.repo
372 cl = repo.changelog
375 cl = repo.changelog
373
376
374 # changesets which need to be stripped entirely. either they're no longer
377 # changesets which need to be stripped entirely. either they're no longer
375 # needed in the new narrow spec, or the server is sending a replacement
378 # needed in the new narrow spec, or the server is sending a replacement
376 # in the changegroup part.
379 # in the changegroup part.
377 clkills = set()
380 clkills = set()
378
381
379 # A changespec part contains all the updates to ellipsis nodes
382 # A changespec part contains all the updates to ellipsis nodes
380 # that will happen as a result of widening or narrowing a
383 # that will happen as a result of widening or narrowing a
381 # repo. All the changes that this block encounters are ellipsis
384 # repo. All the changes that this block encounters are ellipsis
382 # nodes or flags to kill an existing ellipsis.
385 # nodes or flags to kill an existing ellipsis.
383 chunksignal = changegroup.readexactly(inpart, 4)
386 chunksignal = changegroup.readexactly(inpart, 4)
384 while chunksignal != _DONESIGNAL:
387 while chunksignal != _DONESIGNAL:
385 if chunksignal == _KILLNODESIGNAL:
388 if chunksignal == _KILLNODESIGNAL:
386 # a node used to be an ellipsis but isn't anymore
389 # a node used to be an ellipsis but isn't anymore
387 ck = changegroup.readexactly(inpart, 20)
390 ck = changegroup.readexactly(inpart, 20)
388 if cl.hasnode(ck):
391 if cl.hasnode(ck):
389 clkills.add(ck)
392 clkills.add(ck)
390 else:
393 else:
391 raise error.Abort(
394 raise error.Abort(
392 _('unexpected changespec node chunk type: %s') % chunksignal)
395 _('unexpected changespec node chunk type: %s') % chunksignal)
393 chunksignal = changegroup.readexactly(inpart, 4)
396 chunksignal = changegroup.readexactly(inpart, 4)
394
397
395 if clkills:
398 if clkills:
396 # preserve bookmarks that repair.strip() would otherwise strip
399 # preserve bookmarks that repair.strip() would otherwise strip
397 bmstore = repo._bookmarks
400 bmstore = repo._bookmarks
398 class dummybmstore(dict):
401 class dummybmstore(dict):
399 def applychanges(self, repo, tr, changes):
402 def applychanges(self, repo, tr, changes):
400 pass
403 pass
401 def recordchange(self, tr): # legacy version
404 def recordchange(self, tr): # legacy version
402 pass
405 pass
403 repo._bookmarks = dummybmstore()
406 repo._bookmarks = dummybmstore()
404 chgrpfile = repair.strip(op.ui, repo, list(clkills), backup=True,
407 chgrpfile = repair.strip(op.ui, repo, list(clkills), backup=True,
405 topic='widen')
408 topic='widen')
406 repo._bookmarks = bmstore
409 repo._bookmarks = bmstore
407 if chgrpfile:
410 if chgrpfile:
408 # presence of _widen_bundle attribute activates widen handler later
411 # presence of _widen_bundle attribute activates widen handler later
409 op._widen_bundle = chgrpfile
412 op._widen_bundle = chgrpfile
410 # Set the new narrowspec if we're widening. The setnewnarrowpats() method
413 # Set the new narrowspec if we're widening. The setnewnarrowpats() method
411 # will currently always be there when using the core+narrowhg server, but
414 # will currently always be there when using the core+narrowhg server, but
412 # other servers may include a changespec part even when not widening (e.g.
415 # other servers may include a changespec part even when not widening (e.g.
413 # because we're deepening a shallow repo).
416 # because we're deepening a shallow repo).
414 if util.safehasattr(repo, 'setnewnarrowpats'):
417 if util.safehasattr(repo, 'setnewnarrowpats'):
415 repo.setnewnarrowpats()
418 repo.setnewnarrowpats()
416
419
417 def handlechangegroup_widen(op, inpart):
420 def handlechangegroup_widen(op, inpart):
418 """Changegroup exchange handler which restores temporarily-stripped nodes"""
421 """Changegroup exchange handler which restores temporarily-stripped nodes"""
419 # We saved a bundle with stripped node data we must now restore.
422 # We saved a bundle with stripped node data we must now restore.
420 # This approach is based on mercurial/repair.py@6ee26a53c111.
423 # This approach is based on mercurial/repair.py@6ee26a53c111.
421 repo = op.repo
424 repo = op.repo
422 ui = op.ui
425 ui = op.ui
423
426
424 chgrpfile = op._widen_bundle
427 chgrpfile = op._widen_bundle
425 del op._widen_bundle
428 del op._widen_bundle
426 vfs = repo.vfs
429 vfs = repo.vfs
427
430
428 ui.note(_("adding branch\n"))
431 ui.note(_("adding branch\n"))
429 f = vfs.open(chgrpfile, "rb")
432 f = vfs.open(chgrpfile, "rb")
430 try:
433 try:
431 gen = exchange.readbundle(ui, f, chgrpfile, vfs)
434 gen = exchange.readbundle(ui, f, chgrpfile, vfs)
432 if not ui.verbose:
435 if not ui.verbose:
433 # silence internal shuffling chatter
436 # silence internal shuffling chatter
434 ui.pushbuffer()
437 ui.pushbuffer()
435 if isinstance(gen, bundle2.unbundle20):
438 if isinstance(gen, bundle2.unbundle20):
436 with repo.transaction('strip') as tr:
439 with repo.transaction('strip') as tr:
437 bundle2.processbundle(repo, gen, lambda: tr)
440 bundle2.processbundle(repo, gen, lambda: tr)
438 else:
441 else:
439 gen.apply(repo, 'strip', 'bundle:' + vfs.join(chgrpfile), True)
442 gen.apply(repo, 'strip', 'bundle:' + vfs.join(chgrpfile), True)
440 if not ui.verbose:
443 if not ui.verbose:
441 ui.popbuffer()
444 ui.popbuffer()
442 finally:
445 finally:
443 f.close()
446 f.close()
444
447
445 # remove undo files
448 # remove undo files
446 for undovfs, undofile in repo.undofiles():
449 for undovfs, undofile in repo.undofiles():
447 try:
450 try:
448 undovfs.unlink(undofile)
451 undovfs.unlink(undofile)
449 except OSError as e:
452 except OSError as e:
450 if e.errno != errno.ENOENT:
453 if e.errno != errno.ENOENT:
451 ui.warn(_('error removing %s: %s\n') %
454 ui.warn(_('error removing %s: %s\n') %
452 (undovfs.join(undofile), util.forcebytestr(e)))
455 (undovfs.join(undofile), stringutil.forcebytestr(e)))
453
456
454 # Remove partial backup only if there were no exceptions
457 # Remove partial backup only if there were no exceptions
455 vfs.unlink(chgrpfile)
458 vfs.unlink(chgrpfile)
456
459
457 def setup():
460 def setup():
458 """Enable narrow repo support in bundle2-related extension points."""
461 """Enable narrow repo support in bundle2-related extension points."""
459 extensions.wrapfunction(bundle2, 'getrepocaps', getrepocaps_narrow)
462 extensions.wrapfunction(bundle2, 'getrepocaps', getrepocaps_narrow)
460
463
461 wireproto.gboptsmap['narrow'] = 'boolean'
464 wireproto.gboptsmap['narrow'] = 'boolean'
462 wireproto.gboptsmap['depth'] = 'plain'
465 wireproto.gboptsmap['depth'] = 'plain'
463 wireproto.gboptsmap['oldincludepats'] = 'csv'
466 wireproto.gboptsmap['oldincludepats'] = 'csv'
464 wireproto.gboptsmap['oldexcludepats'] = 'csv'
467 wireproto.gboptsmap['oldexcludepats'] = 'csv'
465 wireproto.gboptsmap['includepats'] = 'csv'
468 wireproto.gboptsmap['includepats'] = 'csv'
466 wireproto.gboptsmap['excludepats'] = 'csv'
469 wireproto.gboptsmap['excludepats'] = 'csv'
467 wireproto.gboptsmap['known'] = 'csv'
470 wireproto.gboptsmap['known'] = 'csv'
468
471
469 # Extend changegroup serving to handle requests from narrow clients.
472 # Extend changegroup serving to handle requests from narrow clients.
470 origcgfn = exchange.getbundle2partsmapping['changegroup']
473 origcgfn = exchange.getbundle2partsmapping['changegroup']
471 def wrappedcgfn(*args, **kwargs):
474 def wrappedcgfn(*args, **kwargs):
472 repo = args[1]
475 repo = args[1]
473 if repo.ui.has_section(_NARROWACL_SECTION):
476 if repo.ui.has_section(_NARROWACL_SECTION):
474 getbundlechangegrouppart_narrow(
477 getbundlechangegrouppart_narrow(
475 *args, **applyacl_narrow(repo, kwargs))
478 *args, **applyacl_narrow(repo, kwargs))
476 elif kwargs.get(r'narrow', False):
479 elif kwargs.get(r'narrow', False):
477 getbundlechangegrouppart_narrow(*args, **kwargs)
480 getbundlechangegrouppart_narrow(*args, **kwargs)
478 else:
481 else:
479 origcgfn(*args, **kwargs)
482 origcgfn(*args, **kwargs)
480 exchange.getbundle2partsmapping['changegroup'] = wrappedcgfn
483 exchange.getbundle2partsmapping['changegroup'] = wrappedcgfn
481
484
482 # disable rev branch cache exchange when serving a narrow bundle
485 # disable rev branch cache exchange when serving a narrow bundle
483 # (currently incompatible with that part)
486 # (currently incompatible with that part)
484 origrbcfn = exchange.getbundle2partsmapping['cache:rev-branch-cache']
487 origrbcfn = exchange.getbundle2partsmapping['cache:rev-branch-cache']
485 def wrappedcgfn(*args, **kwargs):
488 def wrappedcgfn(*args, **kwargs):
486 repo = args[1]
489 repo = args[1]
487 if repo.ui.has_section(_NARROWACL_SECTION):
490 if repo.ui.has_section(_NARROWACL_SECTION):
488 return
491 return
489 elif kwargs.get(r'narrow', False):
492 elif kwargs.get(r'narrow', False):
490 return
493 return
491 else:
494 else:
492 origrbcfn(*args, **kwargs)
495 origrbcfn(*args, **kwargs)
493 exchange.getbundle2partsmapping['cache:rev-branch-cache'] = wrappedcgfn
496 exchange.getbundle2partsmapping['cache:rev-branch-cache'] = wrappedcgfn
494
497
495 # Extend changegroup receiver so client can fixup after widen requests.
498 # Extend changegroup receiver so client can fixup after widen requests.
496 origcghandler = bundle2.parthandlermapping['changegroup']
499 origcghandler = bundle2.parthandlermapping['changegroup']
497 def wrappedcghandler(op, inpart):
500 def wrappedcghandler(op, inpart):
498 origcghandler(op, inpart)
501 origcghandler(op, inpart)
499 if util.safehasattr(op, '_widen_bundle'):
502 if util.safehasattr(op, '_widen_bundle'):
500 handlechangegroup_widen(op, inpart)
503 handlechangegroup_widen(op, inpart)
501 wrappedcghandler.params = origcghandler.params
504 wrappedcghandler.params = origcghandler.params
502 bundle2.parthandlermapping['changegroup'] = wrappedcghandler
505 bundle2.parthandlermapping['changegroup'] = wrappedcghandler
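
The hunk above shows the shape of the whole changeset: call sites that used
to reach string helpers through `util` now import them from the new
`mercurial.utils.stringutil` module. A minimal sketch of the pattern,
assuming only that `stringutil` hosts the helpers formerly re-exported by
`util` (the `formaterror` wrapper is illustrative, not part of the
changeset):

# Illustrative sketch of the call-site migration applied in this changeset.
# before: from mercurial import util; util.forcebytestr(exc)
# after:  the helper is imported from the dedicated string-utility module
from mercurial.utils import stringutil

def formaterror(exc):
    # forcebytestr() coerces an arbitrary exception to bytes suitable
    # for ui.warn()/ui.status() output
    return stringutil.forcebytestr(exc)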
@@ -1,485 +1,488
# notify.py - email notifications for mercurial
#
# Copyright 2006 Vadim Gelfer <vadim.gelfer@gmail.com>
#
# This software may be used and distributed according to the terms of the
# GNU General Public License version 2 or any later version.

'''hooks for sending email push notifications

This extension implements hooks to send email notifications when
changesets are sent from or received by the local repository.

First, enable the extension as explained in :hg:`help extensions`, and
register the hook you want to run. ``incoming`` and ``changegroup`` hooks
are run when changesets are received, while ``outgoing`` hooks are for
changesets sent to another repository::

  [hooks]
  # one email for each incoming changeset
  incoming.notify = python:hgext.notify.hook
  # one email for all incoming changesets
  changegroup.notify = python:hgext.notify.hook

  # one email for all outgoing changesets
  outgoing.notify = python:hgext.notify.hook

This registers the hooks. To enable notification, subscribers must
be assigned to repositories. The ``[usersubs]`` section maps multiple
repositories to a given recipient. The ``[reposubs]`` section maps
multiple recipients to a single repository::

  [usersubs]
  # key is subscriber email, value is a comma-separated list of repo patterns
  user@host = pattern

  [reposubs]
  # key is repo pattern, value is a comma-separated list of subscriber emails
  pattern = user@host

A ``pattern`` is a ``glob`` matching the absolute path to a repository,
optionally combined with a revset expression. A revset expression, if
present, is separated from the glob by a hash. Example::

  [reposubs]
  */widgets#branch(release) = qa-team@example.com

This sends to ``qa-team@example.com`` whenever a changeset on the ``release``
branch triggers a notification in any repository ending in ``widgets``.

In order to place them under direct user management, ``[usersubs]`` and
``[reposubs]`` sections may be placed in a separate ``hgrc`` file and
incorporated by reference::

  [notify]
  config = /path/to/subscriptionsfile

Notifications will not be sent until the ``notify.test`` value is set
to ``False``; see below.

Notifications content can be tweaked with the following configuration entries:

notify.test
  If ``True``, print messages to stdout instead of sending them. Default: True.

notify.sources
  Space-separated list of change sources. Notifications are activated only
  when a changeset's source is in this list. Sources may be:

  :``serve``: changesets received via http or ssh
  :``pull``: changesets received via ``hg pull``
  :``unbundle``: changesets received via ``hg unbundle``
  :``push``: changesets sent or received via ``hg push``
  :``bundle``: changesets sent via ``hg unbundle``

  Default: serve.

notify.strip
  Number of leading slashes to strip from url paths. By default, notifications
  reference repositories with their absolute path. ``notify.strip`` lets you
  turn them into relative paths. For example, ``notify.strip=3`` will change
  ``/long/path/repository`` into ``repository``. Default: 0.

notify.domain
  Default email domain for sender or recipients with no explicit domain.

notify.style
  Style file to use when formatting emails.

notify.template
  Template to use when formatting emails.

notify.incoming
  Template to use when run as an incoming hook, overriding ``notify.template``.

notify.outgoing
  Template to use when run as an outgoing hook, overriding ``notify.template``.

notify.changegroup
  Template to use when running as a changegroup hook, overriding
  ``notify.template``.

notify.maxdiff
  Maximum number of diff lines to include in notification email. Set to 0
  to disable the diff, or -1 to include all of it. Default: 300.

notify.maxsubject
  Maximum number of characters in email's subject line. Default: 67.

notify.diffstat
  Set to True to include a diffstat before diff content. Default: True.

notify.merge
  If True, send notifications for merge changesets. Default: True.

notify.mbox
  If set, append mails to this mbox file instead of sending. Default: None.

notify.fromauthor
  If set, use the committer of the first changeset in a changegroup for
  the "From" field of the notification mail. If not set, take the user
  from the pushing repo. Default: False.

If set, the following entries will also be used to customize the
notifications:

email.from
  Email ``From`` address to use if none can be found in the generated
  email content.

web.baseurl
  Root repository URL to combine with repository paths when making
  references. See also ``notify.strip``.

'''
from __future__ import absolute_import

import email
import email.parser as emailparser
import fnmatch
import socket
import time

from mercurial.i18n import _
from mercurial import (
    error,
    logcmdutil,
    mail,
    patch,
    registrar,
    util,
)
-from mercurial.utils import dateutil
+from mercurial.utils import (
+    dateutil,
+    stringutil,
+)

# Note for extension authors: ONLY specify testedwith = 'ships-with-hg-core' for
# extensions which SHIP WITH MERCURIAL. Non-mainline extensions should
# be specifying the version(s) of Mercurial they are tested with, or
# leave the attribute unspecified.
testedwith = 'ships-with-hg-core'

configtable = {}
configitem = registrar.configitem(configtable)

configitem('notify', 'changegroup',
    default=None,
)
configitem('notify', 'config',
    default=None,
)
configitem('notify', 'diffstat',
    default=True,
)
configitem('notify', 'domain',
    default=None,
)
configitem('notify', 'fromauthor',
    default=None,
)
configitem('notify', 'incoming',
    default=None,
)
configitem('notify', 'maxdiff',
    default=300,
)
configitem('notify', 'maxsubject',
    default=67,
)
configitem('notify', 'mbox',
    default=None,
)
configitem('notify', 'merge',
    default=True,
)
configitem('notify', 'outgoing',
    default=None,
)
configitem('notify', 'sources',
    default='serve',
)
configitem('notify', 'strip',
    default=0,
)
configitem('notify', 'style',
    default=None,
)
configitem('notify', 'template',
    default=None,
)
configitem('notify', 'test',
    default=True,
)

# template for single changeset can include email headers.
single_template = '''
Subject: changeset in {webroot}: {desc|firstline|strip}
From: {author}

changeset {node|short} in {root}
details: {baseurl}{webroot}?cmd=changeset;node={node|short}
description:
\t{desc|tabindent|strip}
'''.lstrip()

# template for multiple changesets should not contain email headers,
# because only first set of headers will be used and result will look
# strange.
multiple_template = '''
changeset {node|short} in {root}
details: {baseurl}{webroot}?cmd=changeset;node={node|short}
summary: {desc|firstline}
'''

deftemplates = {
    'changegroup': multiple_template,
}

class notifier(object):
    '''email notification class.'''

    def __init__(self, ui, repo, hooktype):
        self.ui = ui
        cfg = self.ui.config('notify', 'config')
        if cfg:
            self.ui.readconfig(cfg, sections=['usersubs', 'reposubs'])
        self.repo = repo
        self.stripcount = int(self.ui.config('notify', 'strip'))
        self.root = self.strip(self.repo.root)
        self.domain = self.ui.config('notify', 'domain')
        self.mbox = self.ui.config('notify', 'mbox')
        self.test = self.ui.configbool('notify', 'test')
        self.charsets = mail._charsets(self.ui)
        self.subs = self.subscribers()
        self.merge = self.ui.configbool('notify', 'merge')

        mapfile = None
        template = (self.ui.config('notify', hooktype) or
                    self.ui.config('notify', 'template'))
        if not template:
            mapfile = self.ui.config('notify', 'style')
        if not mapfile and not template:
            template = deftemplates.get(hooktype) or single_template
        spec = logcmdutil.templatespec(template, mapfile)
        self.t = logcmdutil.changesettemplater(self.ui, self.repo, spec)

    def strip(self, path):
        '''strip leading slashes from local path, turn into web-safe path.'''

        path = util.pconvert(path)
        count = self.stripcount
        while count > 0:
            c = path.find('/')
            if c == -1:
                break
            path = path[c + 1:]
            count -= 1
        return path

    def fixmail(self, addr):
        '''try to clean up email addresses.'''

-        addr = util.email(addr.strip())
+        addr = stringutil.email(addr.strip())
        if self.domain:
            a = addr.find('@localhost')
            if a != -1:
                addr = addr[:a]
            if '@' not in addr:
                return addr + '@' + self.domain
        return addr

    def subscribers(self):
        '''return list of email addresses of subscribers to this repo.'''
        subs = set()
        for user, pats in self.ui.configitems('usersubs'):
            for pat in pats.split(','):
                if '#' in pat:
                    pat, revs = pat.split('#', 1)
                else:
                    revs = None
                if fnmatch.fnmatch(self.repo.root, pat.strip()):
                    subs.add((self.fixmail(user), revs))
        for pat, users in self.ui.configitems('reposubs'):
            if '#' in pat:
                pat, revs = pat.split('#', 1)
            else:
                revs = None
            if fnmatch.fnmatch(self.repo.root, pat):
                for user in users.split(','):
                    subs.add((self.fixmail(user), revs))
        return [(mail.addressencode(self.ui, s, self.charsets, self.test), r)
                for s, r in sorted(subs)]

    def node(self, ctx, **props):
        '''format one changeset, unless it is a suppressed merge.'''
        if not self.merge and len(ctx.parents()) > 1:
            return False
        self.t.show(ctx, changes=ctx.changeset(),
                    baseurl=self.ui.config('web', 'baseurl'),
                    root=self.repo.root, webroot=self.root, **props)
        return True

    def skipsource(self, source):
        '''true if incoming changes from this source should be skipped.'''
        ok_sources = self.ui.config('notify', 'sources').split()
        return source not in ok_sources

    def send(self, ctx, count, data):
        '''send message.'''

        # Select subscribers by revset
        subs = set()
        for sub, spec in self.subs:
            if spec is None:
                subs.add(sub)
                continue
            revs = self.repo.revs('%r and %d:', spec, ctx.rev())
            if len(revs):
                subs.add(sub)
                continue
        if len(subs) == 0:
            self.ui.debug('notify: no subscribers to selected repo '
                          'and revset\n')
            return

        p = emailparser.Parser()
        try:
            msg = p.parsestr(data)
        except email.Errors.MessageParseError as inst:
            raise error.Abort(inst)

        # store sender and subject
        sender, subject = msg['From'], msg['Subject']
        del msg['From'], msg['Subject']

        if not msg.is_multipart():
            # create fresh mime message from scratch
            # (multipart templates must take care of this themselves)
            headers = msg.items()
            payload = msg.get_payload()
            # for notification prefer readability over data precision
            msg = mail.mimeencode(self.ui, payload, self.charsets, self.test)
            # reinstate custom headers
            for k, v in headers:
                msg[k] = v

        msg['Date'] = dateutil.datestr(format="%a, %d %b %Y %H:%M:%S %1%2")

        # try to make subject line exist and be useful
        if not subject:
            if count > 1:
                subject = _('%s: %d new changesets') % (self.root, count)
            else:
                s = ctx.description().lstrip().split('\n', 1)[0].rstrip()
                subject = '%s: %s' % (self.root, s)
        maxsubject = int(self.ui.config('notify', 'maxsubject'))
        if maxsubject:
-            subject = util.ellipsis(subject, maxsubject)
+            subject = stringutil.ellipsis(subject, maxsubject)
        msg['Subject'] = mail.headencode(self.ui, subject,
                                         self.charsets, self.test)

        # try to make message have proper sender
        if not sender:
            sender = self.ui.config('email', 'from') or self.ui.username()
        if '@' not in sender or '@localhost' in sender:
            sender = self.fixmail(sender)
        msg['From'] = mail.addressencode(self.ui, sender,
                                         self.charsets, self.test)

        msg['X-Hg-Notification'] = 'changeset %s' % ctx
        if not msg['Message-Id']:
            msg['Message-Id'] = ('<hg.%s.%s.%s@%s>' %
                                 (ctx, int(time.time()),
                                  hash(self.repo.root), socket.getfqdn()))
        msg['To'] = ', '.join(sorted(subs))

        msgtext = msg.as_string()
        if self.test:
            self.ui.write(msgtext)
            if not msgtext.endswith('\n'):
                self.ui.write('\n')
        else:
            self.ui.status(_('notify: sending %d subscribers %d changes\n') %
                           (len(subs), count))
-            mail.sendmail(self.ui, util.email(msg['From']),
+            mail.sendmail(self.ui, stringutil.email(msg['From']),
                          subs, msgtext, mbox=self.mbox)

    def diff(self, ctx, ref=None):

        maxdiff = int(self.ui.config('notify', 'maxdiff'))
        prev = ctx.p1().node()
        if ref:
            ref = ref.node()
        else:
            ref = ctx.node()
        chunks = patch.diff(self.repo, prev, ref,
                            opts=patch.diffallopts(self.ui))
        difflines = ''.join(chunks).splitlines()

        if self.ui.configbool('notify', 'diffstat'):
            s = patch.diffstat(difflines)
            # s may be nil, don't include the header if it is
            if s:
                self.ui.write(_('\ndiffstat:\n\n%s') % s)

        if maxdiff == 0:
            return
        elif maxdiff > 0 and len(difflines) > maxdiff:
            msg = _('\ndiffs (truncated from %d to %d lines):\n\n')
            self.ui.write(msg % (len(difflines), maxdiff))
            difflines = difflines[:maxdiff]
        elif difflines:
            self.ui.write(_('\ndiffs (%d lines):\n\n') % len(difflines))

        self.ui.write("\n".join(difflines))

def hook(ui, repo, hooktype, node=None, source=None, **kwargs):
    '''send email notifications to interested subscribers.

    if used as changegroup hook, send one email for all changesets in
    changegroup. else send one email per changeset.'''

    n = notifier(ui, repo, hooktype)
    ctx = repo[node]

    if not n.subs:
        ui.debug('notify: no subscribers to repository %s\n' % n.root)
        return
    if n.skipsource(source):
        ui.debug('notify: changes have source "%s" - skipping\n' % source)
        return

    ui.pushbuffer()
    data = ''
    count = 0
    author = ''
    if hooktype == 'changegroup' or hooktype == 'outgoing':
        start, end = ctx.rev(), len(repo)
        for rev in xrange(start, end):
            if n.node(repo[rev]):
                count += 1
                if not author:
                    author = repo[rev].user()
            else:
                data += ui.popbuffer()
                ui.note(_('notify: suppressing notification for merge %d:%s\n')
                        % (rev, repo[rev].hex()[:12]))
                ui.pushbuffer()
        if count:
            n.diff(ctx, repo['tip'])
    else:
        if not n.node(ctx):
            ui.popbuffer()
            ui.note(_('notify: suppressing notification for merge %d:%s\n') %
                    (ctx.rev(), ctx.hex()[:12]))
            return
        count += 1
        n.diff(ctx)
        if not author:
            author = ctx.user()

    data += ui.popbuffer()
    fromauthor = ui.config('notify', 'fromauthor')
    if author and fromauthor:
        data = '\n'.join(['From: %s' % author, data])

    if count:
        n.send(ctx, count, data)
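
For notify.py the migration touches two helpers, `email()` and
`ellipsis()`. A short sketch of how they behave at the call sites above
(values shown are illustrative, assuming the helpers keep the semantics of
their `util` predecessors):

# Illustrative sketch, not part of the changeset.
from mercurial.utils import stringutil

# email() strips an RFC 2822 display name down to the bare address,
# which is what fixmail() and mail.sendmail() need:
addr = stringutil.email(b'A. Committer <alice@example.com>')
# expected: b'alice@example.com'

# ellipsis() truncates a subject line to notify.maxsubject characters,
# marking the cut with a trailing "...":
subj = stringutil.ellipsis(b'repo: a very long first line of a description', 20)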
@@ -1,195 +1,198
1 # Mercurial extension to provide 'hg relink' command
1 # Mercurial extension to provide 'hg relink' command
2 #
2 #
3 # Copyright (C) 2007 Brendan Cully <brendan@kublai.com>
3 # Copyright (C) 2007 Brendan Cully <brendan@kublai.com>
4 #
4 #
5 # This software may be used and distributed according to the terms of the
5 # This software may be used and distributed according to the terms of the
6 # GNU General Public License version 2 or any later version.
6 # GNU General Public License version 2 or any later version.
7
7
8 """recreates hardlinks between repository clones"""
8 """recreates hardlinks between repository clones"""
9 from __future__ import absolute_import
9 from __future__ import absolute_import
10
10
11 import os
11 import os
12 import stat
12 import stat
13
13
14 from mercurial.i18n import _
14 from mercurial.i18n import _
15 from mercurial import (
15 from mercurial import (
16 error,
16 error,
17 hg,
17 hg,
18 registrar,
18 registrar,
19 util,
19 util,
20 )
20 )
21 from mercurial.utils import (
22 stringutil,
23 )
21
24
22 cmdtable = {}
25 cmdtable = {}
23 command = registrar.command(cmdtable)
26 command = registrar.command(cmdtable)
24 # Note for extension authors: ONLY specify testedwith = 'ships-with-hg-core' for
27 # Note for extension authors: ONLY specify testedwith = 'ships-with-hg-core' for
25 # extensions which SHIP WITH MERCURIAL. Non-mainline extensions should
28 # extensions which SHIP WITH MERCURIAL. Non-mainline extensions should
26 # be specifying the version(s) of Mercurial they are tested with, or
29 # be specifying the version(s) of Mercurial they are tested with, or
27 # leave the attribute unspecified.
30 # leave the attribute unspecified.
28 testedwith = 'ships-with-hg-core'
31 testedwith = 'ships-with-hg-core'
29
32
30 @command('relink', [], _('[ORIGIN]'))
33 @command('relink', [], _('[ORIGIN]'))
31 def relink(ui, repo, origin=None, **opts):
34 def relink(ui, repo, origin=None, **opts):
32 """recreate hardlinks between two repositories
35 """recreate hardlinks between two repositories
33
36
34 When repositories are cloned locally, their data files will be
37 When repositories are cloned locally, their data files will be
35 hardlinked so that they only use the space of a single repository.
38 hardlinked so that they only use the space of a single repository.
36
39
37 Unfortunately, subsequent pulls into either repository will break
40 Unfortunately, subsequent pulls into either repository will break
38 hardlinks for any files touched by the new changesets, even if
41 hardlinks for any files touched by the new changesets, even if
39 both repositories end up pulling the same changes.
42 both repositories end up pulling the same changes.
40
43
41 Similarly, passing --rev to "hg clone" will fail to use any
44 Similarly, passing --rev to "hg clone" will fail to use any
42 hardlinks, falling back to a complete copy of the source
45 hardlinks, falling back to a complete copy of the source
43 repository.
46 repository.
44
47
45 This command lets you recreate those hardlinks and reclaim that
48 This command lets you recreate those hardlinks and reclaim that
46 wasted space.
49 wasted space.
47
50
48 This repository will be relinked to share space with ORIGIN, which
51 This repository will be relinked to share space with ORIGIN, which
49 must be on the same local disk. If ORIGIN is omitted, looks for
52 must be on the same local disk. If ORIGIN is omitted, looks for
50 "default-relink", then "default", in [paths].
53 "default-relink", then "default", in [paths].
51
54
52 Do not attempt any read operations on this repository while the
55 Do not attempt any read operations on this repository while the
53 command is running. (Both repositories will be locked against
56 command is running. (Both repositories will be locked against
54 writes.)
57 writes.)
55 """
58 """
56 if (not util.safehasattr(util, 'samefile') or
59 if (not util.safehasattr(util, 'samefile') or
57 not util.safehasattr(util, 'samedevice')):
60 not util.safehasattr(util, 'samedevice')):
58 raise error.Abort(_('hardlinks are not supported on this system'))
61 raise error.Abort(_('hardlinks are not supported on this system'))
59 src = hg.repository(repo.baseui, ui.expandpath(origin or 'default-relink',
62 src = hg.repository(repo.baseui, ui.expandpath(origin or 'default-relink',
60 origin or 'default'))
63 origin or 'default'))
61 ui.status(_('relinking %s to %s\n') % (src.store.path, repo.store.path))
64 ui.status(_('relinking %s to %s\n') % (src.store.path, repo.store.path))
62 if repo.root == src.root:
65 if repo.root == src.root:
63 ui.status(_('there is nothing to relink\n'))
66 ui.status(_('there is nothing to relink\n'))
64 return
67 return
65
68
66 if not util.samedevice(src.store.path, repo.store.path):
69 if not util.samedevice(src.store.path, repo.store.path):
67 # No point in continuing
70 # No point in continuing
68 raise error.Abort(_('source and destination are on different devices'))
71 raise error.Abort(_('source and destination are on different devices'))
69
72
70 locallock = repo.lock()
73 locallock = repo.lock()
71 try:
74 try:
72 remotelock = src.lock()
75 remotelock = src.lock()
73 try:
76 try:
74 candidates = sorted(collect(src, ui))
77 candidates = sorted(collect(src, ui))
75 targets = prune(candidates, src.store.path, repo.store.path, ui)
78 targets = prune(candidates, src.store.path, repo.store.path, ui)
76 do_relink(src.store.path, repo.store.path, targets, ui)
79 do_relink(src.store.path, repo.store.path, targets, ui)
77 finally:
80 finally:
78 remotelock.release()
81 remotelock.release()
79 finally:
82 finally:
80 locallock.release()
83 locallock.release()
81
84
def collect(src, ui):
    seplen = len(os.path.sep)
    candidates = []
    live = len(src['tip'].manifest())
    # Your average repository has some files which were deleted before
    # the tip revision. We account for that by assuming that there are
    # 3 tracked files for every 2 live files as of the tip version of
    # the repository.
    #
    # mozilla-central as of 2010-06-10 had a ratio of just over 7:5.
    total = live * 3 // 2
    src = src.store.path
    pos = 0
    ui.status(_("tip has %d files, estimated total number of files: %d\n")
              % (live, total))
    for dirpath, dirnames, filenames in os.walk(src):
        dirnames.sort()
        relpath = dirpath[len(src) + seplen:]
        for filename in sorted(filenames):
            if filename[-2:] not in ('.d', '.i'):
                continue
            st = os.stat(os.path.join(dirpath, filename))
            if not stat.S_ISREG(st.st_mode):
                continue
            pos += 1
            candidates.append((os.path.join(relpath, filename), st))
            ui.progress(_('collecting'), pos, filename, _('files'), total)

    ui.progress(_('collecting'), None)
    ui.status(_('collected %d candidate storage files\n') % len(candidates))
    return candidates

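# Worked example of the estimate above (illustrative, not part of the
# extension): with 40000 files live in tip's manifest, the walk is
# scaled to total = 40000 * 3 // 2 = 60000 expected store files. Note
# that only revlog index ('.i') and data ('.d') files are collected as
# candidates; anything else found in the store is skipped.
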
def prune(candidates, src, dst, ui):
    def linkfilter(src, dst, st):
        try:
            ts = os.stat(dst)
        except OSError:
            # Destination doesn't have this file?
            return False
        if util.samefile(src, dst):
            return False
        if not util.samedevice(src, dst):
            # No point in continuing
            raise error.Abort(
                _('source and destination are on different devices'))
        if st.st_size != ts.st_size:
            return False
        return st

    targets = []
    total = len(candidates)
    pos = 0
    for fn, st in candidates:
        pos += 1
        srcpath = os.path.join(src, fn)
        tgt = os.path.join(dst, fn)
        ts = linkfilter(srcpath, tgt, st)
        if not ts:
            ui.debug('not linkable: %s\n' % fn)
            continue
        targets.append((fn, ts.st_size))
        ui.progress(_('pruning'), pos, fn, _('files'), total)

    ui.progress(_('pruning'), None)
    ui.status(_('pruned down to %d probably relinkable files\n') % len(targets))
    return targets

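# Illustrative sketch (not part of relink.py): the eligibility test
# linkfilter applies, restated with plain os.stat. On POSIX systems,
# util.samefile and util.samedevice amount to comparing the inode and
# device numbers of the two paths.
def _could_hardlink_sketch(src, dst):
    try:
        sst = os.stat(src)
        dstat = os.stat(dst)
    except OSError:
        return False                        # destination has no such file
    if (sst.st_dev, sst.st_ino) == (dstat.st_dev, dstat.st_ino):
        return False                        # already the same inode
    if sst.st_dev != dstat.st_dev:
        raise error.Abort(
            _('source and destination are on different devices'))
    return sst.st_size == dstat.st_size     # prune also insists sizes match
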
def do_relink(src, dst, files, ui):
    def relinkfile(src, dst):
        bak = dst + '.bak'
        os.rename(dst, bak)
        try:
            util.oslink(src, dst)
        except OSError:
            os.rename(bak, dst)
            raise
        os.remove(bak)

    CHUNKLEN = 65536
    relinked = 0
    savedbytes = 0

    pos = 0
    total = len(files)
    for f, sz in files:
        pos += 1
        source = os.path.join(src, f)
        tgt = os.path.join(dst, f)
        # Binary mode, so that read() works correctly, especially on Windows
        sfp = open(source, 'rb')
        dfp = open(tgt, 'rb')
        sin = sfp.read(CHUNKLEN)
        while sin:
            din = dfp.read(CHUNKLEN)
            if sin != din:
                break
            sin = sfp.read(CHUNKLEN)
        sfp.close()
        dfp.close()
        if sin:
            ui.debug('not linkable: %s\n' % f)
            continue
        try:
            relinkfile(source, tgt)
            ui.progress(_('relinking'), pos, f, _('files'), total)
            relinked += 1
            savedbytes += sz
        except OSError as inst:
            ui.warn('%s: %s\n' % (tgt, stringutil.forcebytestr(inst)))

    ui.progress(_('relinking'), None)

    ui.status(_('relinked %d files (%s reclaimed)\n') %
              (relinked, util.bytecount(savedbytes)))
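
# Illustrative sketch (not part of relink.py): the two mechanisms
# do_relink combines, shown in isolation. Files are compared CHUNKLEN
# bytes at a time in binary mode, and the target is only replaced via
# a rename-to-.bak dance so it can be restored if linking fails. The
# sketch uses plain os.link where the extension goes through
# util.oslink.
def _files_identical_sketch(a, b, chunklen=65536):
    fa = open(a, 'rb')
    fb = open(b, 'rb')
    try:
        while True:
            ca = fa.read(chunklen)
            if ca != fb.read(chunklen):
                return False                # contents diverge: not linkable
            if not ca:
                return True                 # both exhausted: identical
    finally:
        fa.close()
        fb.close()

def _replace_with_hardlink_sketch(src, dst):
    bak = dst + '.bak'
    os.rename(dst, bak)                     # keep a restorable copy
    try:
        os.link(src, dst)                   # the actual hardlink
    except OSError:
        os.rename(bak, dst)                 # roll back on failure
        raise
    os.remove(bak)                          # success: drop the backup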