py3: conditionalize xmlrpclib import...
Pulkit Goyal - r29432:34b914ac default
# bugzilla.py - bugzilla integration for mercurial
#
# Copyright 2006 Vadim Gelfer <vadim.gelfer@gmail.com>
# Copyright 2011-4 Jim Hague <jim.hague@acm.org>
#
# This software may be used and distributed according to the terms of the
# GNU General Public License version 2 or any later version.

'''hooks for integrating with the Bugzilla bug tracker

This hook extension adds comments on bugs in Bugzilla when changesets
that refer to bugs by Bugzilla ID are seen. The comment is formatted using
the Mercurial template mechanism.

The bug references can optionally include an update for Bugzilla of the
hours spent working on the bug. Bugs can also be marked fixed.

Three basic modes of access to Bugzilla are provided:

1. Access via the Bugzilla XMLRPC interface. Requires Bugzilla 3.4 or later.

2. Check data via the Bugzilla XMLRPC interface and submit bug change
   via email to Bugzilla email interface. Requires Bugzilla 3.4 or later.

3. Writing directly to the Bugzilla database. Only Bugzilla installations
   using MySQL are supported. Requires Python MySQLdb.

Writing directly to the database is susceptible to schema changes, and
relies on a Bugzilla contrib script to send out bug change
notification emails. This script runs as the user running Mercurial,
must be run on the host with the Bugzilla install, and requires
permission to read Bugzilla configuration details and the necessary
MySQL user and password to have full access rights to the Bugzilla
database. For these reasons this access mode is now considered
deprecated, and will not be updated for new Bugzilla versions going
forward. Only adding comments is supported in this access mode.

Access via XMLRPC needs a Bugzilla username and password to be specified
in the configuration. Comments are added under that username. Since the
configuration must be readable by all Mercurial users, it is recommended
that the rights of that user are restricted in Bugzilla to the minimum
necessary to add comments. Marking bugs fixed requires Bugzilla 4.0 and later.

Access via XMLRPC/email uses XMLRPC to query Bugzilla, but sends
email to the Bugzilla email interface to submit comments to bugs.
The From: address in the email is set to the email address of the Mercurial
user, so the comment appears to come from the Mercurial user. In the event
that the Mercurial user email is not recognized by Bugzilla as a Bugzilla
user, the email associated with the Bugzilla username used to log into
Bugzilla is used instead as the source of the comment. Marking bugs fixed
works on all supported Bugzilla versions.

Configuration items common to all access modes:

bugzilla.version
  The access type to use. Values recognized are:

  :``xmlrpc``:       Bugzilla XMLRPC interface.
  :``xmlrpc+email``: Bugzilla XMLRPC and email interfaces.
  :``3.0``:          MySQL access, Bugzilla 3.0 and later.
  :``2.18``:         MySQL access, Bugzilla 2.18 and up to but not
                     including 3.0.
  :``2.16``:         MySQL access, Bugzilla 2.16 and up to but not
                     including 2.18.

bugzilla.regexp
  Regular expression to match bug IDs for update in changeset commit message.
  It must contain one "()" named group ``<ids>`` containing the bug
  IDs separated by non-digit characters. It may also contain
  a named group ``<hours>`` with a floating-point number giving the
  hours worked on the bug. If no named groups are present, the first
  "()" group is assumed to contain the bug IDs, and work time is not
  updated. The default expression matches ``Bug 1234``, ``Bug no. 1234``,
  ``Bug number 1234``, ``Bugs 1234,5678``, ``Bug 1234 and 5678`` and
  variations thereof, followed by an hours number prefixed by ``h`` or
  ``hours``, e.g. ``hours 1.5``. Matching is case insensitive.
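As a sketch of how such an expression can be used with the ``re`` module
(this is an illustrative pattern, not the extension's actual default
regexp), the ``<ids>`` and ``<hours>`` named groups could be extracted
like this:

```python
import re

# Illustrative pattern: one named group <ids> holding the bug numbers
# separated by non-digit characters, and an optional <hours> group.
bug_re = re.compile(
    r'bugs?\s*(?:no\.?\s*|number\s*)?(?P<ids>(?:\d+[\s,and]*)+)'
    r'(?:\s*(?:h|hours)\s*(?P<hours>\d*\.?\d+))?',
    re.IGNORECASE)

m = bug_re.search('Fix crash on startup. Bug 1234 and 5678 hours 1.5')
ids = [int(n) for n in re.findall(r'\d+', m.group('ids'))]
hours = float(m.group('hours'))
# ids is [1234, 5678], hours is 1.5
```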

bugzilla.fixregexp
  Regular expression to match bug IDs for marking fixed in changeset
  commit message. This must contain a "()" named group ``<ids>`` containing
  the bug IDs separated by non-digit characters. It may also contain
  a named group ``<hours>`` with a floating-point number giving the
  hours worked on the bug. If no named groups are present, the first
  "()" group is assumed to contain the bug IDs, and work time is not
  updated. The default expression matches ``Fixes 1234``, ``Fixes bug 1234``,
  ``Fixes bugs 1234,5678``, ``Fixes 1234 and 5678`` and
  variations thereof, followed by an hours number prefixed by ``h`` or
  ``hours``, e.g. ``hours 1.5``. Matching is case insensitive.

bugzilla.fixstatus
  The status to set a bug to when marking fixed. Default ``RESOLVED``.

bugzilla.fixresolution
  The resolution to set a bug to when marking fixed. Default ``FIXED``.

bugzilla.style
  The style file to use when formatting comments.

bugzilla.template
  Template to use when formatting comments. Overrides style if
  specified. In addition to the usual Mercurial keywords, the
  extension specifies:

  :``{bug}``:     The Bugzilla bug ID.
  :``{root}``:    The full pathname of the Mercurial repository.
  :``{webroot}``: Stripped pathname of the Mercurial repository.
  :``{hgweb}``:   Base URL for browsing Mercurial repositories.

  Default ``changeset {node|short} in repo {root} refers to bug
  {bug}.\\ndetails:\\n\\t{desc|tabindent}``

bugzilla.strip
  The number of path separator characters to strip from the front of
  the Mercurial repository path (``{root}`` in templates) to produce
  ``{webroot}``. For example, a repository with ``{root}``
  ``/var/local/my-project`` with a strip of 2 gives a value for
  ``{webroot}`` of ``my-project``. Default 0.
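The stripping described above behaves roughly like the following
hypothetical helper (a sketch matching the documented example, not the
extension's actual implementation):

```python
def webroot(root, strip):
    """Drop leading path separators from root, then remove `strip`
    leading path components to produce the {webroot} value."""
    root = root.replace('\\', '/').lstrip('/')
    count = strip
    while count > 0:
        c = root.find('/')
        if c == -1:
            break
        root = root[c + 1:]
        count -= 1
    return root

result = webroot('/var/local/my-project', 2)
# result is 'my-project', as in the example above
```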

web.baseurl
  Base URL for browsing Mercurial repositories. Referenced from
  templates as ``{hgweb}``.

Configuration items common to XMLRPC+email and MySQL access modes:

bugzilla.usermap
  Path of file containing Mercurial committer email to Bugzilla user email
  mappings. If specified, the file should contain one mapping per
  line::

    committer = Bugzilla user

  See also the ``[usermap]`` section.

The ``[usermap]`` section is used to specify mappings of Mercurial
committer email to Bugzilla user email. See also ``bugzilla.usermap``.
Contains entries of the form ``committer = Bugzilla user``.
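A reader for that file format could look like this hypothetical sketch
(``parse_usermap`` is an illustrative name, not part of the extension):

```python
def parse_usermap(text):
    """Parse 'committer = Bugzilla user' lines into a dict
    (hypothetical reader for the usermap format shown above)."""
    mapping = {}
    for line in text.splitlines():
        line = line.strip()
        if not line or line.startswith('#'):
            continue  # skip blanks and comments
        committer, sep, bzuser = line.partition('=')
        if sep:
            mapping[committer.strip()] = bzuser.strip()
    return mapping

usermap = parse_usermap('user@emaildomain.com = user.name@bugzilladomain.com')
# usermap maps the committer address to the Bugzilla user address
```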

XMLRPC access mode configuration:

bugzilla.bzurl
  The base URL for the Bugzilla installation.
  Default ``http://localhost/bugzilla``.

bugzilla.user
  The username to use to log into Bugzilla via XMLRPC. Default
  ``bugs``.

bugzilla.password
  The password for Bugzilla login.

XMLRPC+email access mode uses the XMLRPC access mode configuration items,
and also:

bugzilla.bzemail
  The Bugzilla email address.

In addition, the Mercurial email settings must be configured. See the
documentation in hgrc(5), sections ``[email]`` and ``[smtp]``.

MySQL access mode configuration:

bugzilla.host
  Hostname of the MySQL server holding the Bugzilla database.
  Default ``localhost``.

bugzilla.db
  Name of the Bugzilla database in MySQL. Default ``bugs``.

bugzilla.user
  Username to use to access MySQL server. Default ``bugs``.

bugzilla.password
  Password to use to access MySQL server.

bugzilla.timeout
  Database connection timeout (seconds). Default 5.

bugzilla.bzuser
  Fallback Bugzilla user name to record comments with, if changeset
  committer cannot be found as a Bugzilla user.

bugzilla.bzdir
  Bugzilla install directory. Used by default notify. Default
  ``/var/www/html/bugzilla``.

bugzilla.notify
  The command to run to get Bugzilla to send bug change notification
  emails. Substitutes from a map with 3 keys, ``bzdir``, ``id`` (bug
  id) and ``user`` (committer bugzilla email). Default depends on
  version; from 2.18 it is "cd %(bzdir)s && perl -T
  contrib/sendbugmail.pl %(id)s %(user)s".
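That substitution is ordinary Python %-formatting over the 3-key map.
With illustrative values for the keys:

```python
# Default notify command from Bugzilla 2.18 on, filled in with
# illustrative values for the three substitution keys.
cmdfmt = 'cd %(bzdir)s && perl -T contrib/sendbugmail.pl %(id)s %(user)s'
cmd = cmdfmt % {'bzdir': '/var/www/html/bugzilla',
                'id': 1234,
                'user': 'committer@example.com'}
# cmd: cd /var/www/html/bugzilla && perl -T contrib/sendbugmail.pl 1234 committer@example.com
```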

Activating the extension::

    [extensions]
    bugzilla =

    [hooks]
    # run bugzilla hook on every change pulled or pushed in here
    incoming.bugzilla = python:hgext.bugzilla.hook

Example configurations:

XMLRPC example configuration. This uses the Bugzilla at
``http://my-project.org/bugzilla``, logging in as user
``bugmail@my-project.org`` with password ``plugh``. It is used with a
collection of Mercurial repositories in ``/var/local/hg/repos/``,
with a web interface at ``http://my-project.org/hg``. ::

    [bugzilla]
    bzurl=http://my-project.org/bugzilla
    user=bugmail@my-project.org
    password=plugh
    version=xmlrpc
    template=Changeset {node|short} in {root|basename}.
             {hgweb}/{webroot}/rev/{node|short}\\n
             {desc}\\n
    strip=5

    [web]
    baseurl=http://my-project.org/hg

XMLRPC+email example configuration. This uses the Bugzilla at
``http://my-project.org/bugzilla``, logging in as user
``bugmail@my-project.org`` with password ``plugh``. It is used with a
collection of Mercurial repositories in ``/var/local/hg/repos/``,
with a web interface at ``http://my-project.org/hg``. Bug comments
are sent to the Bugzilla email address
``bugzilla@my-project.org``. ::

    [bugzilla]
    bzurl=http://my-project.org/bugzilla
    user=bugmail@my-project.org
    password=plugh
    version=xmlrpc+email
    bzemail=bugzilla@my-project.org
    template=Changeset {node|short} in {root|basename}.
             {hgweb}/{webroot}/rev/{node|short}\\n
             {desc}\\n
    strip=5

    [web]
    baseurl=http://my-project.org/hg

    [usermap]
    user@emaildomain.com=user.name@bugzilladomain.com

MySQL example configuration. This has a local Bugzilla 3.2 installation
in ``/opt/bugzilla-3.2``. The MySQL database is on ``localhost``,
the Bugzilla database name is ``bugs`` and MySQL is
accessed with MySQL username ``bugs`` password ``XYZZY``. It is used
with a collection of Mercurial repositories in ``/var/local/hg/repos/``,
with a web interface at ``http://my-project.org/hg``. ::

    [bugzilla]
    host=localhost
    password=XYZZY
    version=3.0
    bzuser=unknown@domain.com
    bzdir=/opt/bugzilla-3.2
    template=Changeset {node|short} in {root|basename}.
             {hgweb}/{webroot}/rev/{node|short}\\n
             {desc}\\n
    strip=5

    [web]
    baseurl=http://my-project.org/hg

    [usermap]
    user@emaildomain.com=user.name@bugzilladomain.com

All the above add a comment to the Bugzilla bug record of the form::

    Changeset 3b16791d6642 in repository-name.
    http://my-project.org/hg/repository-name/rev/3b16791d6642

    Changeset commit comment. Bug 1234.
'''

from __future__ import absolute_import

import re
import time

from mercurial.i18n import _
from mercurial.node import short
from mercurial import (
    cmdutil,
    error,
    mail,
    util,
)

urlparse = util.urlparse
xmlrpclib = util.xmlrpclib

# Note for extension authors: ONLY specify testedwith = 'internal' for
# extensions which SHIP WITH MERCURIAL. Non-mainline extensions should
# be specifying the version(s) of Mercurial they are tested with, or
# leave the attribute unspecified.
testedwith = 'internal'

class bzaccess(object):
    '''Base class for access to Bugzilla.'''

    def __init__(self, ui):
        self.ui = ui
        usermap = self.ui.config('bugzilla', 'usermap')
        if usermap:
            self.ui.readconfig(usermap, sections=['usermap'])

    def map_committer(self, user):
        '''map name of committer to Bugzilla user name.'''
        for committer, bzuser in self.ui.configitems('usermap'):
            if committer.lower() == user.lower():
                return bzuser
        return user

    # Methods to be implemented by access classes.
    #
    # 'bugs' is a dict keyed on bug id, where values are a dict holding
    # updates to bug state. Recognized dict keys are:
    #
    # 'hours': Value, float containing work hours to be updated.
    # 'fix':   If key present, bug is to be marked fixed. Value ignored.

    def filter_real_bug_ids(self, bugs):
        '''remove bug IDs that do not exist in Bugzilla from bugs.'''
        pass

    def filter_cset_known_bug_ids(self, node, bugs):
        '''remove bug IDs where node occurs in comment text from bugs.'''
        pass

    def updatebug(self, bugid, newstate, text, committer):
        '''update the specified bug. Add comment text and set new states.

        If possible add the comment as being from the committer of
        the changeset. Otherwise use the default Bugzilla user.
        '''
        pass

    def notify(self, bugs, committer):
        '''Force sending of Bugzilla notification emails.

        Only required if the access method does not trigger notification
        emails automatically.
        '''
        pass

# Bugzilla via direct access to MySQL database.
class bzmysql(bzaccess):
    '''Support for direct MySQL access to Bugzilla.

    The earliest Bugzilla version this is tested with is version 2.16.

    If your Bugzilla is version 3.4 or above, you are strongly
    recommended to use the XMLRPC access method instead.
    '''

    @staticmethod
    def sql_buglist(ids):
        '''return SQL-friendly list of bug ids'''
        return '(' + ','.join(map(str, ids)) + ')'

    _MySQLdb = None

    def __init__(self, ui):
        try:
            import MySQLdb as mysql
            bzmysql._MySQLdb = mysql
        except ImportError as err:
            raise error.Abort(_('python mysql support not available: %s') % err)

        bzaccess.__init__(self, ui)

        host = self.ui.config('bugzilla', 'host', 'localhost')
        user = self.ui.config('bugzilla', 'user', 'bugs')
        passwd = self.ui.config('bugzilla', 'password')
        db = self.ui.config('bugzilla', 'db', 'bugs')
        timeout = int(self.ui.config('bugzilla', 'timeout', 5))
        self.ui.note(_('connecting to %s:%s as %s, password %s\n') %
                     (host, db, user, '*' * len(passwd)))
        self.conn = bzmysql._MySQLdb.connect(host=host,
                                             user=user, passwd=passwd,
                                             db=db,
                                             connect_timeout=timeout)
        self.cursor = self.conn.cursor()
        self.longdesc_id = self.get_longdesc_id()
        self.user_ids = {}
        self.default_notify = "cd %(bzdir)s && ./processmail %(id)s %(user)s"

    def run(self, *args, **kwargs):
        '''run a query.'''
        self.ui.note(_('query: %s %s\n') % (args, kwargs))
        try:
            self.cursor.execute(*args, **kwargs)
        except bzmysql._MySQLdb.MySQLError:
            self.ui.note(_('failed query: %s %s\n') % (args, kwargs))
            raise

    def get_longdesc_id(self):
        '''get identity of longdesc field'''
        self.run('select fieldid from fielddefs where name = "longdesc"')
        ids = self.cursor.fetchall()
        if len(ids) != 1:
            raise error.Abort(_('unknown database schema'))
        return ids[0][0]

    def filter_real_bug_ids(self, bugs):
        '''filter not-existing bugs from set.'''
        self.run('select bug_id from bugs where bug_id in %s' %
                 bzmysql.sql_buglist(bugs.keys()))
        existing = [id for (id,) in self.cursor.fetchall()]
        for id in bugs.keys():
            if id not in existing:
                self.ui.status(_('bug %d does not exist\n') % id)
                del bugs[id]

    def filter_cset_known_bug_ids(self, node, bugs):
        '''filter bug ids that already refer to this changeset from set.'''
        self.run('''select bug_id from longdescs where
                    bug_id in %s and thetext like "%%%s%%"''' %
                 (bzmysql.sql_buglist(bugs.keys()), short(node)))
        for (id,) in self.cursor.fetchall():
            self.ui.status(_('bug %d already knows about changeset %s\n') %
                           (id, short(node)))
            del bugs[id]

    def notify(self, bugs, committer):
        '''tell bugzilla to send mail.'''
        self.ui.status(_('telling bugzilla to send mail:\n'))
        (user, userid) = self.get_bugzilla_user(committer)
        for id in bugs.keys():
            self.ui.status(_('  bug %s\n') % id)
            cmdfmt = self.ui.config('bugzilla', 'notify', self.default_notify)
            bzdir = self.ui.config('bugzilla', 'bzdir',
                                   '/var/www/html/bugzilla')
            try:
                # Backwards-compatible with old notify string, which
                # took one string. This will throw with a new format
                # string.
                cmd = cmdfmt % id
            except TypeError:
                cmd = cmdfmt % {'bzdir': bzdir, 'id': id, 'user': user}
            self.ui.note(_('running notify command %s\n') % cmd)
            fp = util.popen('(%s) 2>&1' % cmd)
            out = fp.read()
            ret = fp.close()
            if ret:
                self.ui.warn(out)
                raise error.Abort(_('bugzilla notify command %s') %
                                  util.explainexit(ret)[0])
        self.ui.status(_('done\n'))

456 def get_user_id(self, user):
456 def get_user_id(self, user):
457 '''look up numeric bugzilla user id.'''
457 '''look up numeric bugzilla user id.'''
458 try:
458 try:
459 return self.user_ids[user]
459 return self.user_ids[user]
460 except KeyError:
460 except KeyError:
461 try:
461 try:
462 userid = int(user)
462 userid = int(user)
463 except ValueError:
463 except ValueError:
464 self.ui.note(_('looking up user %s\n') % user)
464 self.ui.note(_('looking up user %s\n') % user)
465 self.run('''select userid from profiles
465 self.run('''select userid from profiles
466 where login_name like %s''', user)
466 where login_name like %s''', user)
467 all = self.cursor.fetchall()
467 all = self.cursor.fetchall()
468 if len(all) != 1:
468 if len(all) != 1:
469 raise KeyError(user)
469 raise KeyError(user)
470 userid = int(all[0][0])
470 userid = int(all[0][0])
471 self.user_ids[user] = userid
471 self.user_ids[user] = userid
472 return userid
472 return userid
473
473
474 def get_bugzilla_user(self, committer):
474 def get_bugzilla_user(self, committer):
475 '''See if committer is a registered bugzilla user. Return
475 '''See if committer is a registered bugzilla user. Return
476 bugzilla username and userid if so. If not, return default
476 bugzilla username and userid if so. If not, return default
477 bugzilla username and userid.'''
477 bugzilla username and userid.'''
478 user = self.map_committer(committer)
478 user = self.map_committer(committer)
479 try:
479 try:
480 userid = self.get_user_id(user)
480 userid = self.get_user_id(user)
481 except KeyError:
481 except KeyError:
482 try:
482 try:
483 defaultuser = self.ui.config('bugzilla', 'bzuser')
483 defaultuser = self.ui.config('bugzilla', 'bzuser')
484 if not defaultuser:
484 if not defaultuser:
485 raise error.Abort(_('cannot find bugzilla user id for %s') %
485 raise error.Abort(_('cannot find bugzilla user id for %s') %
486 user)
486 user)
487 userid = self.get_user_id(defaultuser)
487 userid = self.get_user_id(defaultuser)
488 user = defaultuser
488 user = defaultuser
489 except KeyError:
489 except KeyError:
490 raise error.Abort(_('cannot find bugzilla user id for %s or %s')
490 raise error.Abort(_('cannot find bugzilla user id for %s or %s')
491 % (user, defaultuser))
491 % (user, defaultuser))
492 return (user, userid)
492 return (user, userid)
493
493
494 def updatebug(self, bugid, newstate, text, committer):
494 def updatebug(self, bugid, newstate, text, committer):
495 '''update bug state with comment text.
495 '''update bug state with comment text.
496
496
497 Try adding comment as committer of changeset, otherwise as
497 Try adding comment as committer of changeset, otherwise as
498 default bugzilla user.'''
498 default bugzilla user.'''
499 if len(newstate) > 0:
499 if len(newstate) > 0:
500 self.ui.warn(_("Bugzilla/MySQL cannot update bug state\n"))
500 self.ui.warn(_("Bugzilla/MySQL cannot update bug state\n"))
501
501
502 (user, userid) = self.get_bugzilla_user(committer)
502 (user, userid) = self.get_bugzilla_user(committer)
503 now = time.strftime('%Y-%m-%d %H:%M:%S')
503 now = time.strftime('%Y-%m-%d %H:%M:%S')
504 self.run('''insert into longdescs
504 self.run('''insert into longdescs
505 (bug_id, who, bug_when, thetext)
505 (bug_id, who, bug_when, thetext)
506 values (%s, %s, %s, %s)''',
506 values (%s, %s, %s, %s)''',
507 (bugid, userid, now, text))
507 (bugid, userid, now, text))
508 self.run('''insert into bugs_activity (bug_id, who, bug_when, fieldid)
508 self.run('''insert into bugs_activity (bug_id, who, bug_when, fieldid)
509 values (%s, %s, %s, %s)''',
509 values (%s, %s, %s, %s)''',
510 (bugid, userid, now, self.longdesc_id))
510 (bugid, userid, now, self.longdesc_id))
511 self.conn.commit()
511 self.conn.commit()
512
512
513 class bzmysql_2_18(bzmysql):
513 class bzmysql_2_18(bzmysql):
514 '''support for bugzilla 2.18 series.'''
514 '''support for bugzilla 2.18 series.'''
515
515
516 def __init__(self, ui):
516 def __init__(self, ui):
517 bzmysql.__init__(self, ui)
517 bzmysql.__init__(self, ui)
518 self.default_notify = \
518 self.default_notify = \
519 "cd %(bzdir)s && perl -T contrib/sendbugmail.pl %(id)s %(user)s"
519 "cd %(bzdir)s && perl -T contrib/sendbugmail.pl %(id)s %(user)s"
520
520
521 class bzmysql_3_0(bzmysql_2_18):
521 class bzmysql_3_0(bzmysql_2_18):
522 '''support for bugzilla 3.0 series.'''
522 '''support for bugzilla 3.0 series.'''
523
523
524 def __init__(self, ui):
524 def __init__(self, ui):
525 bzmysql_2_18.__init__(self, ui)
525 bzmysql_2_18.__init__(self, ui)
526
526
527 def get_longdesc_id(self):
527 def get_longdesc_id(self):
528 '''get identity of longdesc field'''
528 '''get identity of longdesc field'''
529 self.run('select id from fielddefs where name = "longdesc"')
529 self.run('select id from fielddefs where name = "longdesc"')
530 ids = self.cursor.fetchall()
530 ids = self.cursor.fetchall()
531 if len(ids) != 1:
531 if len(ids) != 1:
532 raise error.Abort(_('unknown database schema'))
532 raise error.Abort(_('unknown database schema'))
533 return ids[0][0]
533 return ids[0][0]
534
534
535 # Bugzilla via XMLRPC interface.
535 # Bugzilla via XMLRPC interface.
536
536
537 class cookietransportrequest(object):
537 class cookietransportrequest(object):
538 """A Transport request method that retains cookies over its lifetime.
538 """A Transport request method that retains cookies over its lifetime.
539
539
540 The regular xmlrpclib transports ignore cookies. Which causes
540 The regular xmlrpclib transports ignore cookies. Which causes
541 a bit of a problem when you need a cookie-based login, as with
541 a bit of a problem when you need a cookie-based login, as with
542 the Bugzilla XMLRPC interface prior to 4.4.3.
542 the Bugzilla XMLRPC interface prior to 4.4.3.
543
543
544 So this is a helper for defining a Transport which looks for
544 So this is a helper for defining a Transport which looks for
545 cookies being set in responses and saves them to add to all future
545 cookies being set in responses and saves them to add to all future
546 requests.
546 requests.
547 """
547 """
548
548
549 # Inspiration drawn from
549 # Inspiration drawn from
550 # http://blog.godson.in/2010/09/how-to-make-python-xmlrpclib-client.html
550 # http://blog.godson.in/2010/09/how-to-make-python-xmlrpclib-client.html
551 # http://www.itkovian.net/base/transport-class-for-pythons-xml-rpc-lib/
551 # http://www.itkovian.net/base/transport-class-for-pythons-xml-rpc-lib/
552
552
553 cookies = []
553 cookies = []
554 def send_cookies(self, connection):
554 def send_cookies(self, connection):
555 if self.cookies:
555 if self.cookies:
556 for cookie in self.cookies:
556 for cookie in self.cookies:
557 connection.putheader("Cookie", cookie)
557 connection.putheader("Cookie", cookie)
558
558
559 def request(self, host, handler, request_body, verbose=0):
559 def request(self, host, handler, request_body, verbose=0):
560 self.verbose = verbose
560 self.verbose = verbose
561 self.accept_gzip_encoding = False
561 self.accept_gzip_encoding = False
562
562
563 # issue XML-RPC request
563 # issue XML-RPC request
564 h = self.make_connection(host)
564 h = self.make_connection(host)
565 if verbose:
565 if verbose:
566 h.set_debuglevel(1)
566 h.set_debuglevel(1)
567
567
568 self.send_request(h, handler, request_body)
568 self.send_request(h, handler, request_body)
569 self.send_host(h, host)
569 self.send_host(h, host)
570 self.send_cookies(h)
570 self.send_cookies(h)
571 self.send_user_agent(h)
571 self.send_user_agent(h)
572 self.send_content(h, request_body)
572 self.send_content(h, request_body)
573
573
574 # Deal with differences between Python 2.4-2.6 and 2.7.
574 # Deal with differences between Python 2.4-2.6 and 2.7.
575 # In the former h is a HTTP(S). In the latter it's a
575 # In the former h is a HTTP(S). In the latter it's a
576 # HTTP(S)Connection. Luckily, the 2.4-2.6 implementation of
576 # HTTP(S)Connection. Luckily, the 2.4-2.6 implementation of
577 # HTTP(S) has an underlying HTTP(S)Connection, so extract
577 # HTTP(S) has an underlying HTTP(S)Connection, so extract
578 # that and use it.
578 # that and use it.
579 try:
579 try:
580 response = h.getresponse()
580 response = h.getresponse()
581 except AttributeError:
581 except AttributeError:
582 response = h._conn.getresponse()
582 response = h._conn.getresponse()
583
583
584 # Add any cookie definitions to our list.
584 # Add any cookie definitions to our list.
585 for header in response.msg.getallmatchingheaders("Set-Cookie"):
585 for header in response.msg.getallmatchingheaders("Set-Cookie"):
586 val = header.split(": ", 1)[1]
586 val = header.split(": ", 1)[1]
587 cookie = val.split(";", 1)[0]
587 cookie = val.split(";", 1)[0]
588 self.cookies.append(cookie)
588 self.cookies.append(cookie)
589
589
590 if response.status != 200:
590 if response.status != 200:
591 raise xmlrpclib.ProtocolError(host + handler, response.status,
591 raise xmlrpclib.ProtocolError(host + handler, response.status,
592 response.reason, response.msg.headers)
592 response.reason, response.msg.headers)
593
593
594 payload = response.read()
594 payload = response.read()
595 parser, unmarshaller = self.getparser()
595 parser, unmarshaller = self.getparser()
596 parser.feed(payload)
596 parser.feed(payload)
597 parser.close()
597 parser.close()
598
598
599 return unmarshaller.close()
599 return unmarshaller.close()
600
600
601 # The explicit calls to the underlying xmlrpclib __init__() methods are
601 # The explicit calls to the underlying xmlrpclib __init__() methods are
602 # necessary. The xmlrpclib.Transport classes are old-style classes, and
602 # necessary. The xmlrpclib.Transport classes are old-style classes, and
603 # it turns out their __init__() doesn't get called when doing multiple
603 # it turns out their __init__() doesn't get called when doing multiple
604 # inheritance with a new-style class.
604 # inheritance with a new-style class.
605 class cookietransport(cookietransportrequest, xmlrpclib.Transport):
605 class cookietransport(cookietransportrequest, xmlrpclib.Transport):
606 def __init__(self, use_datetime=0):
606 def __init__(self, use_datetime=0):
607 if util.safehasattr(xmlrpclib.Transport, "__init__"):
607 if util.safehasattr(xmlrpclib.Transport, "__init__"):
608 xmlrpclib.Transport.__init__(self, use_datetime)
608 xmlrpclib.Transport.__init__(self, use_datetime)
609
609
610 class cookiesafetransport(cookietransportrequest, xmlrpclib.SafeTransport):
610 class cookiesafetransport(cookietransportrequest, xmlrpclib.SafeTransport):
611 def __init__(self, use_datetime=0):
611 def __init__(self, use_datetime=0):
612 if util.safehasattr(xmlrpclib.Transport, "__init__"):
612 if util.safehasattr(xmlrpclib.Transport, "__init__"):
613 xmlrpclib.SafeTransport.__init__(self, use_datetime)
613 xmlrpclib.SafeTransport.__init__(self, use_datetime)
614
614
615 class bzxmlrpc(bzaccess):
615 class bzxmlrpc(bzaccess):
616 """Support for access to Bugzilla via the Bugzilla XMLRPC API.
616 """Support for access to Bugzilla via the Bugzilla XMLRPC API.
617
617
618 Requires a minimum Bugzilla version 3.4.
618 Requires a minimum Bugzilla version 3.4.
619 """
619 """
620
620
621 def __init__(self, ui):
621 def __init__(self, ui):
622 bzaccess.__init__(self, ui)
622 bzaccess.__init__(self, ui)
623
623
624 bzweb = self.ui.config('bugzilla', 'bzurl',
624 bzweb = self.ui.config('bugzilla', 'bzurl',
625 'http://localhost/bugzilla/')
625 'http://localhost/bugzilla/')
626 bzweb = bzweb.rstrip("/") + "/xmlrpc.cgi"
626 bzweb = bzweb.rstrip("/") + "/xmlrpc.cgi"
627
627
628 user = self.ui.config('bugzilla', 'user', 'bugs')
628 user = self.ui.config('bugzilla', 'user', 'bugs')
629 passwd = self.ui.config('bugzilla', 'password')
629 passwd = self.ui.config('bugzilla', 'password')
630
630
631 self.fixstatus = self.ui.config('bugzilla', 'fixstatus', 'RESOLVED')
631 self.fixstatus = self.ui.config('bugzilla', 'fixstatus', 'RESOLVED')
632 self.fixresolution = self.ui.config('bugzilla', 'fixresolution',
632 self.fixresolution = self.ui.config('bugzilla', 'fixresolution',
633 'FIXED')
633 'FIXED')
634
634
635 self.bzproxy = xmlrpclib.ServerProxy(bzweb, self.transport(bzweb))
635 self.bzproxy = xmlrpclib.ServerProxy(bzweb, self.transport(bzweb))
636 ver = self.bzproxy.Bugzilla.version()['version'].split('.')
636 ver = self.bzproxy.Bugzilla.version()['version'].split('.')
637 self.bzvermajor = int(ver[0])
637 self.bzvermajor = int(ver[0])
638 self.bzverminor = int(ver[1])
638 self.bzverminor = int(ver[1])
639 login = self.bzproxy.User.login({'login': user, 'password': passwd,
639 login = self.bzproxy.User.login({'login': user, 'password': passwd,
640 'restrict_login': True})
640 'restrict_login': True})
641 self.bztoken = login.get('token', '')
641 self.bztoken = login.get('token', '')
642
642
643 def transport(self, uri):
643 def transport(self, uri):
644 if urlparse.urlparse(uri, "http")[0] == "https":
644 if urlparse.urlparse(uri, "http")[0] == "https":
645 return cookiesafetransport()
645 return cookiesafetransport()
646 else:
646 else:
647 return cookietransport()
647 return cookietransport()
648
648
649 def get_bug_comments(self, id):
649 def get_bug_comments(self, id):
650 """Return a string with all comment text for a bug."""
650 """Return a string with all comment text for a bug."""
651 c = self.bzproxy.Bug.comments({'ids': [id],
651 c = self.bzproxy.Bug.comments({'ids': [id],
652 'include_fields': ['text'],
652 'include_fields': ['text'],
653 'token': self.bztoken})
653 'token': self.bztoken})
654 return ''.join([t['text'] for t in c['bugs'][str(id)]['comments']])
654 return ''.join([t['text'] for t in c['bugs'][str(id)]['comments']])
655
655
656 def filter_real_bug_ids(self, bugs):
656 def filter_real_bug_ids(self, bugs):
657 probe = self.bzproxy.Bug.get({'ids': sorted(bugs.keys()),
657 probe = self.bzproxy.Bug.get({'ids': sorted(bugs.keys()),
658 'include_fields': [],
658 'include_fields': [],
659 'permissive': True,
659 'permissive': True,
660 'token': self.bztoken,
660 'token': self.bztoken,
661 })
661 })
662 for badbug in probe['faults']:
662 for badbug in probe['faults']:
663 id = badbug['id']
663 id = badbug['id']
664 self.ui.status(_('bug %d does not exist\n') % id)
664 self.ui.status(_('bug %d does not exist\n') % id)
665 del bugs[id]
665 del bugs[id]
666
666
667 def filter_cset_known_bug_ids(self, node, bugs):
667 def filter_cset_known_bug_ids(self, node, bugs):
668 for id in sorted(bugs.keys()):
668 for id in sorted(bugs.keys()):
669 if self.get_bug_comments(id).find(short(node)) != -1:
669 if self.get_bug_comments(id).find(short(node)) != -1:
670 self.ui.status(_('bug %d already knows about changeset %s\n') %
670 self.ui.status(_('bug %d already knows about changeset %s\n') %
671 (id, short(node)))
671 (id, short(node)))
672 del bugs[id]
672 del bugs[id]
673
673
674 def updatebug(self, bugid, newstate, text, committer):
674 def updatebug(self, bugid, newstate, text, committer):
675 args = {}
675 args = {}
676 if 'hours' in newstate:
676 if 'hours' in newstate:
677 args['work_time'] = newstate['hours']
677 args['work_time'] = newstate['hours']
678
678
679 if self.bzvermajor >= 4:
679 if self.bzvermajor >= 4:
680 args['ids'] = [bugid]
680 args['ids'] = [bugid]
681 args['comment'] = {'body' : text}
681 args['comment'] = {'body' : text}
682 if 'fix' in newstate:
682 if 'fix' in newstate:
683 args['status'] = self.fixstatus
683 args['status'] = self.fixstatus
684 args['resolution'] = self.fixresolution
684 args['resolution'] = self.fixresolution
685 args['token'] = self.bztoken
685 args['token'] = self.bztoken
686 self.bzproxy.Bug.update(args)
686 self.bzproxy.Bug.update(args)
687 else:
687 else:
688 if 'fix' in newstate:
688 if 'fix' in newstate:
689 self.ui.warn(_("Bugzilla/XMLRPC needs Bugzilla 4.0 or later "
689 self.ui.warn(_("Bugzilla/XMLRPC needs Bugzilla 4.0 or later "
690 "to mark bugs fixed\n"))
690 "to mark bugs fixed\n"))
691 args['id'] = bugid
691 args['id'] = bugid
692 args['comment'] = text
692 args['comment'] = text
693 self.bzproxy.Bug.add_comment(args)
693 self.bzproxy.Bug.add_comment(args)
694
694
695 class bzxmlrpcemail(bzxmlrpc):
695 class bzxmlrpcemail(bzxmlrpc):
696 """Read data from Bugzilla via XMLRPC, send updates via email.
696 """Read data from Bugzilla via XMLRPC, send updates via email.
697
697
698 Advantages of sending updates via email:
698 Advantages of sending updates via email:
699 1. Comments can be added as any user, not just logged in user.
699 1. Comments can be added as any user, not just logged in user.
700 2. Bug statuses or other fields not accessible via XMLRPC can
700 2. Bug statuses or other fields not accessible via XMLRPC can
701 potentially be updated.
701 potentially be updated.
702
702
703 There is no XMLRPC function to change bug status before Bugzilla
703 There is no XMLRPC function to change bug status before Bugzilla
704 4.0, so bugs cannot be marked fixed via XMLRPC before Bugzilla 4.0.
704 4.0, so bugs cannot be marked fixed via XMLRPC before Bugzilla 4.0.
705 But bugs can be marked fixed via email from 3.4 onwards.
705 But bugs can be marked fixed via email from 3.4 onwards.
706 """
706 """
707
707
708 # The email interface changes subtly between 3.4 and 3.6. In 3.4,
708 # The email interface changes subtly between 3.4 and 3.6. In 3.4,
709 # in-email fields are specified as '@<fieldname> = <value>'. In
709 # in-email fields are specified as '@<fieldname> = <value>'. In
710 # 3.6 this becomes '@<fieldname> <value>'. And fieldname @bug_id
710 # 3.6 this becomes '@<fieldname> <value>'. And fieldname @bug_id
711 # in 3.4 becomes @id in 3.6. 3.6 and 4.0 both maintain backwards
711 # in 3.4 becomes @id in 3.6. 3.6 and 4.0 both maintain backwards
712 # compatibility, but rather than rely on this use the new format for
712 # compatibility, but rather than rely on this use the new format for
713 # 4.0 onwards.
713 # 4.0 onwards.
714
714
715 def __init__(self, ui):
715 def __init__(self, ui):
716 bzxmlrpc.__init__(self, ui)
716 bzxmlrpc.__init__(self, ui)
717
717
718 self.bzemail = self.ui.config('bugzilla', 'bzemail')
718 self.bzemail = self.ui.config('bugzilla', 'bzemail')
719 if not self.bzemail:
719 if not self.bzemail:
720 raise error.Abort(_("configuration 'bzemail' missing"))
720 raise error.Abort(_("configuration 'bzemail' missing"))
721 mail.validateconfig(self.ui)
721 mail.validateconfig(self.ui)
722
722
723 def makecommandline(self, fieldname, value):
723 def makecommandline(self, fieldname, value):
724 if self.bzvermajor >= 4:
724 if self.bzvermajor >= 4:
725 return "@%s %s" % (fieldname, str(value))
725 return "@%s %s" % (fieldname, str(value))
726 else:
726 else:
727 if fieldname == "id":
727 if fieldname == "id":
728 fieldname = "bug_id"
728 fieldname = "bug_id"
729 return "@%s = %s" % (fieldname, str(value))
729 return "@%s = %s" % (fieldname, str(value))
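# As a quick illustration of the version split in makecommandline() above,
# here is a standalone sketch of the same logic (a copy for demonstration,
# not part of the extension; the sample field names and values are made up):
#
# ```python
# def makecommandline(fieldname, value, bzvermajor):
#     # mirrors bzxmlrpcemail.makecommandline: Bugzilla 4.0+ email_in.pl
#     # takes '@field value'; 3.x takes '@field = value' and calls the
#     # bug id field 'bug_id' rather than 'id'
#     if bzvermajor >= 4:
#         return "@%s %s" % (fieldname, str(value))
#     if fieldname == "id":
#         fieldname = "bug_id"
#     return "@%s = %s" % (fieldname, str(value))
#
# print(makecommandline("id", 1234, 4))   # @id 1234
# print(makecommandline("id", 1234, 3))   # @bug_id = 1234
# ```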

    def send_bug_modify_email(self, bugid, commands, comment, committer):
        '''send modification message to Bugzilla bug via email.

        The message format is documented in the Bugzilla email_in.pl
        specification. commands is a list of command lines, comment is the
        comment text.

        To stop users from crafting commit comments with
        Bugzilla commands, specify the bug ID via the message body, rather
        than the subject line, and leave a blank line after it.
        '''
        user = self.map_committer(committer)
        matches = self.bzproxy.User.get({'match': [user],
                                         'token': self.bztoken})
        if not matches['users']:
            user = self.ui.config('bugzilla', 'user', 'bugs')
            matches = self.bzproxy.User.get({'match': [user],
                                             'token': self.bztoken})
            if not matches['users']:
                raise error.Abort(_("default bugzilla user %s email not found")
                                  % user)
        user = matches['users'][0]['email']
        commands.append(self.makecommandline("id", bugid))

        text = "\n".join(commands) + "\n\n" + comment

        _charsets = mail._charsets(self.ui)
        user = mail.addressencode(self.ui, user, _charsets)
        bzemail = mail.addressencode(self.ui, self.bzemail, _charsets)
        msg = mail.mimeencode(self.ui, text, _charsets)
        msg['From'] = user
        msg['To'] = bzemail
        msg['Subject'] = mail.headencode(self.ui, "Bug modification", _charsets)
        sendmail = mail.connect(self.ui)
        sendmail(user, bzemail, msg.as_string())

    def updatebug(self, bugid, newstate, text, committer):
        cmds = []
        if 'hours' in newstate:
            cmds.append(self.makecommandline("work_time", newstate['hours']))
        if 'fix' in newstate:
            cmds.append(self.makecommandline("bug_status", self.fixstatus))
            cmds.append(self.makecommandline("resolution", self.fixresolution))
        self.send_bug_modify_email(bugid, cmds, text, committer)

class bugzilla(object):
    # supported versions of bugzilla. different versions have
    # different schemas.
    _versions = {
        '2.16': bzmysql,
        '2.18': bzmysql_2_18,
        '3.0': bzmysql_3_0,
        'xmlrpc': bzxmlrpc,
        'xmlrpc+email': bzxmlrpcemail
        }

    _default_bug_re = (r'bugs?\s*,?\s*(?:#|nos?\.?|num(?:ber)?s?)?\s*'
                       r'(?P<ids>(?:\d+\s*(?:,?\s*(?:and)?)?\s*)+)'
                       r'\.?\s*(?:h(?:ours?)?\s*(?P<hours>\d*(?:\.\d+)?))?')

    _default_fix_re = (r'fix(?:es)?\s*(?:bugs?\s*)?,?\s*'
                       r'(?:nos?\.?|num(?:ber)?s?)?\s*'
                       r'(?P<ids>(?:#?\d+\s*(?:,?\s*(?:and)?)?\s*)+)'
                       r'\.?\s*(?:h(?:ours?)?\s*(?P<hours>\d*(?:\.\d+)?))?')
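# The default bug regexp above can be exercised directly. A small sketch
# (the commit message is made up for illustration; the id splitting mirrors
# the split_re = r'\D+' used by find_bugs below):
#
# ```python
# import re
#
# # default bug regexp from above, compiled the way the hook compiles it
# bug_re = re.compile(
#     r'bugs?\s*,?\s*(?:#|nos?\.?|num(?:ber)?s?)?\s*'
#     r'(?P<ids>(?:\d+\s*(?:,?\s*(?:and)?)?\s*)+)'
#     r'\.?\s*(?:h(?:ours?)?\s*(?P<hours>\d*(?:\.\d+)?))?', re.IGNORECASE)
#
# m = bug_re.search("Fix crash on startup. Bug 1234 and 5678 h 2.5")
# ids = [i for i in re.split(r'\D+', m.group('ids')) if i]
# print(ids, m.group('hours'))  # ['1234', '5678'] 2.5
# ```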

    def __init__(self, ui, repo):
        self.ui = ui
        self.repo = repo

        bzversion = self.ui.config('bugzilla', 'version')
        try:
            bzclass = bugzilla._versions[bzversion]
        except KeyError:
            raise error.Abort(_('bugzilla version %s not supported') %
                              bzversion)
        self.bzdriver = bzclass(self.ui)

        self.bug_re = re.compile(
            self.ui.config('bugzilla', 'regexp',
                           bugzilla._default_bug_re), re.IGNORECASE)
        self.fix_re = re.compile(
            self.ui.config('bugzilla', 'fixregexp',
                           bugzilla._default_fix_re), re.IGNORECASE)
        self.split_re = re.compile(r'\D+')

    def find_bugs(self, ctx):
        '''return bugs dictionary created from commit comment.

        Extract bug info from changeset comments. Filter out any that are
        not known to Bugzilla, and any that already have a reference to
        the given changeset in their comments.
        '''
        start = 0
        hours = 0.0
        bugs = {}
        bugmatch = self.bug_re.search(ctx.description(), start)
        fixmatch = self.fix_re.search(ctx.description(), start)
        while True:
            bugattribs = {}
            if not bugmatch and not fixmatch:
                break
            if not bugmatch:
                m = fixmatch
            elif not fixmatch:
                m = bugmatch
            else:
                if bugmatch.start() < fixmatch.start():
                    m = bugmatch
                else:
                    m = fixmatch
            start = m.end()
            if m is bugmatch:
                bugmatch = self.bug_re.search(ctx.description(), start)
                if 'fix' in bugattribs:
                    del bugattribs['fix']
            else:
                fixmatch = self.fix_re.search(ctx.description(), start)
                bugattribs['fix'] = None

            try:
                ids = m.group('ids')
            except IndexError:
                ids = m.group(1)
            try:
                hours = float(m.group('hours'))
                bugattribs['hours'] = hours
            except IndexError:
                pass
            except TypeError:
                pass
            except ValueError:
                self.ui.status(_("%s: invalid hours\n") % m.group('hours'))

            for id in self.split_re.split(ids):
                if not id:
                    continue
                bugs[int(id)] = bugattribs
        if bugs:
            self.bzdriver.filter_real_bug_ids(bugs)
        if bugs:
            self.bzdriver.filter_cset_known_bug_ids(ctx.node(), bugs)
        return bugs

    def update(self, bugid, newstate, ctx):
        '''update bugzilla bug with reference to changeset.'''

        def webroot(root):
            '''strip leading prefix of repo root and turn into
            url-safe path.'''
            count = int(self.ui.config('bugzilla', 'strip', 0))
            root = util.pconvert(root)
            while count > 0:
                c = root.find('/')
                if c == -1:
                    break
                root = root[c + 1:]
                count -= 1
            return root
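# The inner webroot helper just drops a configured number of leading path
# components. A standalone sketch, with the 'bugzilla.strip' config value
# passed in explicitly and an illustrative path:
#
# ```python
# def webroot(root, count):
#     # mirrors the helper above: drop 'count' leading path components,
#     # stopping early if the path runs out of '/' separators
#     while count > 0:
#         c = root.find('/')
#         if c == -1:
#             break
#         root = root[c + 1:]
#         count -= 1
#     return root
#
# print(webroot("var/www/hg/myrepo", 3))  # myrepo
# ```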
889
889
890 mapfile = None
890 mapfile = None
891 tmpl = self.ui.config('bugzilla', 'template')
891 tmpl = self.ui.config('bugzilla', 'template')
892 if not tmpl:
892 if not tmpl:
893 mapfile = self.ui.config('bugzilla', 'style')
893 mapfile = self.ui.config('bugzilla', 'style')
894 if not mapfile and not tmpl:
894 if not mapfile and not tmpl:
895 tmpl = _('changeset {node|short} in repo {root} refers '
895 tmpl = _('changeset {node|short} in repo {root} refers '
896 'to bug {bug}.\ndetails:\n\t{desc|tabindent}')
896 'to bug {bug}.\ndetails:\n\t{desc|tabindent}')
897 t = cmdutil.changeset_templater(self.ui, self.repo,
897 t = cmdutil.changeset_templater(self.ui, self.repo,
898 False, None, tmpl, mapfile, False)
898 False, None, tmpl, mapfile, False)
899 self.ui.pushbuffer()
899 self.ui.pushbuffer()
900 t.show(ctx, changes=ctx.changeset(),
900 t.show(ctx, changes=ctx.changeset(),
901 bug=str(bugid),
901 bug=str(bugid),
902 hgweb=self.ui.config('web', 'baseurl'),
902 hgweb=self.ui.config('web', 'baseurl'),
903 root=self.repo.root,
903 root=self.repo.root,
904 webroot=webroot(self.repo.root))
904 webroot=webroot(self.repo.root))
905 data = self.ui.popbuffer()
905 data = self.ui.popbuffer()
906 self.bzdriver.updatebug(bugid, newstate, data, util.email(ctx.user()))
906 self.bzdriver.updatebug(bugid, newstate, data, util.email(ctx.user()))
907
907
908 def notify(self, bugs, committer):
908 def notify(self, bugs, committer):
909 '''ensure Bugzilla users are notified of bug change.'''
909 '''ensure Bugzilla users are notified of bug change.'''
910 self.bzdriver.notify(bugs, committer)
910 self.bzdriver.notify(bugs, committer)
911
911
912 def hook(ui, repo, hooktype, node=None, **kwargs):
912 def hook(ui, repo, hooktype, node=None, **kwargs):
913 '''add comment to bugzilla for each changeset that refers to a
913 '''add comment to bugzilla for each changeset that refers to a
914 bugzilla bug id. only add a comment once per bug, so same change
914 bugzilla bug id. only add a comment once per bug, so same change
915 seen multiple times does not fill bug with duplicate data.'''
915 seen multiple times does not fill bug with duplicate data.'''
916 if node is None:
916 if node is None:
917 raise error.Abort(_('hook type %s does not pass a changeset id') %
917 raise error.Abort(_('hook type %s does not pass a changeset id') %
918 hooktype)
918 hooktype)
919 try:
919 try:
920 bz = bugzilla(ui, repo)
920 bz = bugzilla(ui, repo)
921 ctx = repo[node]
921 ctx = repo[node]
922 bugs = bz.find_bugs(ctx)
922 bugs = bz.find_bugs(ctx)
923 if bugs:
923 if bugs:
924 for bug in bugs:
924 for bug in bugs:
925 bz.update(bug, bugs[bug], ctx)
925 bz.update(bug, bugs[bug], ctx)
926 bz.notify(bugs, util.email(ctx.user()))
926 bz.notify(bugs, util.email(ctx.user()))
927 except Exception as e:
927 except Exception as e:
928 raise error.Abort(_('Bugzilla error: %s') % e)
928 raise error.Abort(_('Bugzilla error: %s') % e)
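For context, the hook above is wired up through hgrc. A minimal sketch based on the extension's documented options, with hypothetical server URL and credentials:

```ini
[extensions]
bugzilla =

[hooks]
# run the Bugzilla hook on every incoming changeset
incoming.bugzilla = python:hgext.bugzilla.hook

[bugzilla]
# XMLRPC access mode (Bugzilla 3.4 or later); values are placeholders
version = xmlrpc
bzurl = http://bugzilla.example.org/
user = bugmail@example.org
password = XXXXX
# strip this many leading path components when building {webroot}
strip = 5
```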
@@ -1,138 +1,145
# pycompat.py - portability shim for python 3
#
# This software may be used and distributed according to the terms of the
# GNU General Public License version 2 or any later version.

"""Mercurial portability shim for python 3.

This contains aliases to hide python version-specific details from the core.
"""

from __future__ import absolute_import

try:
    import cPickle as pickle
    pickle.dumps
except ImportError:
    import pickle
    pickle.dumps # silence pyflakes

try:
    import xmlrpclib
    xmlrpclib.Transport
except ImportError:
    import xmlrpc.client as xmlrpclib
    xmlrpclib.Transport

try:
    import urlparse
    urlparse.urlparse
except ImportError:
    import urllib.parse as urlparse
    urlparse.urlparse

try:
    import cStringIO as io
    stringio = io.StringIO
except ImportError:
    import io
    stringio = io.StringIO

try:
    import Queue as _queue
    _queue.Queue
except ImportError:
    import queue as _queue
empty = _queue.Empty
queue = _queue.Queue

class _pycompatstub(object):
    pass

def _alias(alias, origin, items):
    """ populate a _pycompatstub

    copies items from origin to alias
    """
    def hgcase(item):
        return item.replace('_', '').lower()
    for item in items:
        try:
            setattr(alias, hgcase(item), getattr(origin, item))
        except AttributeError:
            pass

urlreq = _pycompatstub()
urlerr = _pycompatstub()
try:
    import urllib2
    import urllib
    _alias(urlreq, urllib, (
        "addclosehook",
        "addinfourl",
        "ftpwrapper",
        "pathname2url",
        "quote",
        "splitattr",
        "splitpasswd",
        "splitport",
        "splituser",
        "unquote",
        "url2pathname",
        "urlencode",
        "urlencode",
    ))
    _alias(urlreq, urllib2, (
        "AbstractHTTPHandler",
        "BaseHandler",
        "build_opener",
        "FileHandler",
        "FTPHandler",
        "HTTPBasicAuthHandler",
        "HTTPDigestAuthHandler",
        "HTTPHandler",
        "HTTPPasswordMgrWithDefaultRealm",
        "HTTPSHandler",
        "install_opener",
        "ProxyHandler",
        "Request",
        "urlopen",
    ))
    _alias(urlerr, urllib2, (
        "HTTPError",
        "URLError",
    ))

except ImportError:
    import urllib.request
    _alias(urlreq, urllib.request, (
        "AbstractHTTPHandler",
        "addclosehook",
        "addinfourl",
        "BaseHandler",
        "build_opener",
        "FileHandler",
        "FTPHandler",
        "ftpwrapper",
        "HTTPHandler",
        "HTTPSHandler",
        "install_opener",
        "pathname2url",
        "HTTPBasicAuthHandler",
        "HTTPDigestAuthHandler",
        "HTTPPasswordMgrWithDefaultRealm",
        "ProxyHandler",
        "quote",
        "Request",
        "splitattr",
        "splitpasswd",
        "splitport",
        "splituser",
        "unquote",
        "url2pathname",
        "urlopen",
    ))
    import urllib.error
    _alias(urlerr, urllib.error, (
        "HTTPError",
        "URLError",
    ))

try:
    xrange
except NameError:
    import builtins
    builtins.xrange = range
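The xmlrpclib block added by this changeset follows the same probe-then-fallback shape as the other shims: try the Python 2 name, touch an attribute to fail fast (and quiet pyflakes), and fall back to the Python 3 location under the old name. A standalone sketch of the pattern:

```python
try:
    import xmlrpclib                       # Python 2 module name
    xmlrpclib.Transport                    # attribute probe; silences pyflakes
except ImportError:
    import xmlrpc.client as xmlrpclib      # Python 3 location, same alias
    xmlrpclib.Transport

# callers can now use one name regardless of interpreter version
transport = xmlrpclib.Transport()
print(type(transport).__name__)
```

Probing `Transport` (rather than just importing) also guards against a stray top-level module shadowing the real one.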
@@ -1,2854 +1,2855
# util.py - Mercurial utility functions and platform specific implementations
#
# Copyright 2005 K. Thananchayan <thananck@yahoo.com>
# Copyright 2005-2007 Matt Mackall <mpm@selenic.com>
# Copyright 2006 Vadim Gelfer <vadim.gelfer@gmail.com>
#
# This software may be used and distributed according to the terms of the
# GNU General Public License version 2 or any later version.

"""Mercurial utility functions and platform specific implementations.

This contains helper routines that are independent of the SCM core and
hide platform-specific details from the core.
"""

from __future__ import absolute_import

import bz2
import calendar
import collections
import datetime
import errno
import gc
import hashlib
import imp
import os
import re as remod
import shutil
import signal
import socket
import subprocess
import sys
import tempfile
import textwrap
import time
import traceback
import zlib

from . import (
    encoding,
    error,
    i18n,
    osutil,
    parsers,
    pycompat,
)

for attr in (
    'empty',
    'pickle',
    'queue',
    'urlerr',
    'urlparse',
    # we do import urlreq, but we do it outside the loop
    #'urlreq',
    'stringio',
    'xmlrpclib',
):
    globals()[attr] = getattr(pycompat, attr)

# This line is to make pyflakes happy:
urlreq = pycompat.urlreq

if os.name == 'nt':
    from . import windows as platform
else:
    from . import posix as platform

_ = i18n._
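The `_alias`/`globals()` plumbing that util.py relies on can be sketched independently. This hypothetical stub mirrors how pycompat's `hgcase` lowercases and strips underscores while copying attributes, tolerating names absent on the running Python version:

```python
import urllib.request

class _stub(object):
    pass

def _alias(alias, origin, items):
    # copy each attribute onto the stub, renaming e.g. "build_opener" -> "buildopener"
    def hgcase(item):
        return item.replace('_', '').lower()
    for item in items:
        try:
            setattr(alias, hgcase(item), getattr(origin, item))
        except AttributeError:
            pass  # tolerate names that don't exist on this Python version

urlreq = _stub()
_alias(urlreq, urllib.request, ("Request", "build_opener", "urlopen"))
print(urlreq.request is urllib.request.Request)   # aliased under the hg-style name
```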
cachestat = platform.cachestat
checkexec = platform.checkexec
checklink = platform.checklink
copymode = platform.copymode
executablepath = platform.executablepath
expandglobs = platform.expandglobs
explainexit = platform.explainexit
findexe = platform.findexe
gethgcmd = platform.gethgcmd
getuser = platform.getuser
getpid = os.getpid
groupmembers = platform.groupmembers
groupname = platform.groupname
hidewindow = platform.hidewindow
isexec = platform.isexec
isowner = platform.isowner
localpath = platform.localpath
lookupreg = platform.lookupreg
makedir = platform.makedir
nlinks = platform.nlinks
normpath = platform.normpath
normcase = platform.normcase
normcasespec = platform.normcasespec
normcasefallback = platform.normcasefallback
openhardlinks = platform.openhardlinks
oslink = platform.oslink
parsepatchoutput = platform.parsepatchoutput
pconvert = platform.pconvert
poll = platform.poll
popen = platform.popen
posixfile = platform.posixfile
quotecommand = platform.quotecommand
readpipe = platform.readpipe
rename = platform.rename
removedirs = platform.removedirs
samedevice = platform.samedevice
samefile = platform.samefile
samestat = platform.samestat
setbinary = platform.setbinary
setflags = platform.setflags
setsignalhandler = platform.setsignalhandler
shellquote = platform.shellquote
spawndetached = platform.spawndetached
split = platform.split
sshargs = platform.sshargs
statfiles = getattr(osutil, 'statfiles', platform.statfiles)
statisexec = platform.statisexec
statislink = platform.statislink
termwidth = platform.termwidth
testpid = platform.testpid
umask = platform.umask
unlink = platform.unlink
unlinkpath = platform.unlinkpath
username = platform.username
# Python compatibility

_notset = object()

# disable Python's problematic floating point timestamps (issue4836)
# (Python hypocritically says you shouldn't change this behavior in
# libraries, and sure enough Mercurial is not a library.)
os.stat_float_times(False)

def safehasattr(thing, attr):
    return getattr(thing, attr, _notset) is not _notset

DIGESTS = {
    'md5': hashlib.md5,
    'sha1': hashlib.sha1,
    'sha512': hashlib.sha512,
}
# List of digest types from strongest to weakest
DIGESTS_BY_STRENGTH = ['sha512', 'sha1', 'md5']

for k in DIGESTS_BY_STRENGTH:
    assert k in DIGESTS
class digester(object):
    """helper to compute digests.

    This helper can be used to compute one or more digests given their name.

    >>> d = digester(['md5', 'sha1'])
    >>> d.update('foo')
    >>> [k for k in sorted(d)]
    ['md5', 'sha1']
    >>> d['md5']
    'acbd18db4cc2f85cedef654fccc4a4d8'
    >>> d['sha1']
    '0beec7b5ea3f0fdbc95d0dd47f3c5bc275da8a33'
    >>> digester.preferred(['md5', 'sha1'])
    'sha1'
    """

    def __init__(self, digests, s=''):
        self._hashes = {}
        for k in digests:
            if k not in DIGESTS:
                raise Abort(_('unknown digest type: %s') % k)
            self._hashes[k] = DIGESTS[k]()
        if s:
            self.update(s)

    def update(self, data):
        for h in self._hashes.values():
            h.update(data)

    def __getitem__(self, key):
        if key not in DIGESTS:
            raise Abort(_('unknown digest type: %s') % key)
        return self._hashes[key].hexdigest()

    def __iter__(self):
        return iter(self._hashes)

    @staticmethod
    def preferred(supported):
        """returns the strongest digest type in both supported and DIGESTS."""

        for k in DIGESTS_BY_STRENGTH:
            if k in supported:
                return k
        return None
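The digester's fan-out behavior can be reproduced with plain hashlib. A minimal sketch feeding one payload to several named digests, using the same 'foo' input as the doctest above:

```python
import hashlib

names = ['md5', 'sha1']
hashes = {k: hashlib.new(k) for k in names}   # one hasher per digest type
for h in hashes.values():
    h.update(b'foo')                          # a single update fans out to all

print(hashes['md5'].hexdigest())    # acbd18db4cc2f85cedef654fccc4a4d8
print(hashes['sha1'].hexdigest())   # 0beec7b5ea3f0fdbc95d0dd47f3c5bc275da8a33
```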
class digestchecker(object):
    """file handle wrapper that additionally checks content against a given
    size and digests.

        d = digestchecker(fh, size, {'md5': '...'})

    When multiple digests are given, all of them are validated.
    """

    def __init__(self, fh, size, digests):
        self._fh = fh
        self._size = size
        self._got = 0
        self._digests = dict(digests)
        self._digester = digester(self._digests.keys())

    def read(self, length=-1):
        content = self._fh.read(length)
        self._digester.update(content)
        self._got += len(content)
        return content

    def validate(self):
        if self._size != self._got:
            raise Abort(_('size mismatch: expected %d, got %d') %
                        (self._size, self._got))
        for k, v in self._digests.items():
            if v != self._digester[k]:
                # i18n: first parameter is a digest name
                raise Abort(_('%s mismatch: expected %s, got %s') %
                            (k, v, self._digester[k]))
try:
    buffer = buffer
except NameError:
    if sys.version_info[0] < 3:
        def buffer(sliceable, offset=0):
            return sliceable[offset:]
    else:
        def buffer(sliceable, offset=0):
            return memoryview(sliceable)[offset:]

closefds = os.name == 'posix'

_chunksize = 4096
class bufferedinputpipe(object):
    """a manually buffered input pipe

    Python will not let us use buffered IO and lazy reading with 'polling' at
    the same time. We cannot probe the buffer state and select will not detect
    that data are ready to read if they are already buffered.

    This class lets us work around that by implementing its own buffering
    (allowing efficient readline) while offering a way to know if the buffer is
    empty from the output (allowing collaboration of the buffer with polling).

    This class lives in the 'util' module because it makes use of the 'os'
    module from the python stdlib.
    """

    def __init__(self, input):
        self._input = input
        self._buffer = []
        self._eof = False
        self._lenbuf = 0

    @property
    def hasbuffer(self):
        """True if any data is currently buffered

        This will be used externally as a pre-step for polling IO. If there is
        already data then no polling should be set in place."""
        return bool(self._buffer)

    @property
    def closed(self):
        return self._input.closed

    def fileno(self):
        return self._input.fileno()

    def close(self):
        return self._input.close()

    def read(self, size):
        while (not self._eof) and (self._lenbuf < size):
            self._fillbuffer()
        return self._frombuffer(size)

    def readline(self, *args, **kwargs):
        if 1 < len(self._buffer):
            # this should not happen because both read and readline end with a
            # _frombuffer call that collapses it.
            self._buffer = [''.join(self._buffer)]
            self._lenbuf = len(self._buffer[0])
        lfi = -1
        if self._buffer:
            lfi = self._buffer[-1].find('\n')
        while (not self._eof) and lfi < 0:
            self._fillbuffer()
            if self._buffer:
                lfi = self._buffer[-1].find('\n')
        size = lfi + 1
        if lfi < 0: # end of file
            size = self._lenbuf
        elif 1 < len(self._buffer):
            # we need to take previous chunks into account
            size += self._lenbuf - len(self._buffer[-1])
        return self._frombuffer(size)

    def _frombuffer(self, size):
        """return at most 'size' data from the buffer

        The data are removed from the buffer."""
        if size == 0 or not self._buffer:
            return ''
        buf = self._buffer[0]
        if 1 < len(self._buffer):
            buf = ''.join(self._buffer)

        data = buf[:size]
        buf = buf[len(data):]
        if buf:
            self._buffer = [buf]
            self._lenbuf = len(buf)
        else:
            self._buffer = []
            self._lenbuf = 0
        return data

    def _fillbuffer(self):
        """read data to the buffer"""
        data = os.read(self._input.fileno(), _chunksize)
        if not data:
            self._eof = True
        else:
            self._lenbuf += len(data)
            self._buffer.append(data)
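A small sketch of the chunked `os.read` loop that `_fillbuffer` is built on, showing how a newline can then be located across the accumulated chunks (pipe contents are made up for illustration):

```python
import os

r, w = os.pipe()
os.write(w, b'first line\nsecond')
os.close(w)

chunks = []                      # analogue of self._buffer
while True:
    data = os.read(r, 4096)      # analogue of _fillbuffer's _chunksize read
    if not data:                 # an empty read marks EOF
        break
    chunks.append(data)
os.close(r)

buf = b''.join(chunks)
line = buf[:buf.find(b'\n') + 1]   # readline-style slice up to the newline
print(line)
```

The real class keeps the chunk list and a running length so it can answer `hasbuffer` without joining on every call.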
def popen2(cmd, env=None, newlines=False):
    # Setting bufsize to -1 lets the system decide the buffer size.
    # The default for bufsize is 0, meaning unbuffered. This leads to
    # poor performance on Mac OS X: http://bugs.python.org/issue4194
    p = subprocess.Popen(cmd, shell=True, bufsize=-1,
                         close_fds=closefds,
                         stdin=subprocess.PIPE, stdout=subprocess.PIPE,
                         universal_newlines=newlines,
                         env=env)
    return p.stdin, p.stdout

def popen3(cmd, env=None, newlines=False):
    stdin, stdout, stderr, p = popen4(cmd, env, newlines)
    return stdin, stdout, stderr

def popen4(cmd, env=None, newlines=False, bufsize=-1):
    p = subprocess.Popen(cmd, shell=True, bufsize=bufsize,
                         close_fds=closefds,
                         stdin=subprocess.PIPE, stdout=subprocess.PIPE,
                         stderr=subprocess.PIPE,
                         universal_newlines=newlines,
                         env=env)
    return p.stdin, p.stdout, p.stderr, p
def version():
    """Return version information if available."""
    try:
        from . import __version__
        return __version__.version
    except ImportError:
        return 'unknown'

def versiontuple(v=None, n=4):
    """Parses a Mercurial version string into an N-tuple.

    The version string to be parsed is specified with the ``v`` argument.
    If it isn't defined, the current Mercurial version string will be parsed.

    ``n`` can be 2, 3, or 4. Here is how some version strings map to
    returned values:

    >>> v = '3.6.1+190-df9b73d2d444'
    >>> versiontuple(v, 2)
    (3, 6)
    >>> versiontuple(v, 3)
    (3, 6, 1)
    >>> versiontuple(v, 4)
    (3, 6, 1, '190-df9b73d2d444')

    >>> versiontuple('3.6.1+190-df9b73d2d444+20151118')
    (3, 6, 1, '190-df9b73d2d444+20151118')

    >>> v = '3.6'
    >>> versiontuple(v, 2)
    (3, 6)
    >>> versiontuple(v, 3)
    (3, 6, None)
    >>> versiontuple(v, 4)
    (3, 6, None, None)
    """
    if not v:
        v = version()
    parts = v.split('+', 1)
    if len(parts) == 1:
        vparts, extra = parts[0], None
    else:
        vparts, extra = parts

    vints = []
    for i in vparts.split('.'):
        try:
            vints.append(int(i))
        except ValueError:
            break
    # (3, 6) -> (3, 6, None)
    while len(vints) < 3:
        vints.append(None)

    if n == 2:
        return (vints[0], vints[1])
    if n == 3:
        return (vints[0], vints[1], vints[2])
    if n == 4:
        return (vints[0], vints[1], vints[2], extra)
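The split-and-convert core of `versiontuple` can be exercised standalone. A sketch using the same sample string as the doctests, keeping the original's `split('+', 1)` handling of the build suffix:

```python
v = '3.6.1+190-df9b73d2d444'
parts = v.split('+', 1)          # separate the local-build suffix, at most once
if len(parts) == 1:
    vparts, extra = parts[0], None
else:
    vparts, extra = parts

vints = []
for part in vparts.split('.'):
    try:
        vints.append(int(part))
    except ValueError:
        break                    # stop at the first non-numeric piece
while len(vints) < 3:
    vints.append(None)           # pad '3.6' -> (3, 6, None)

print(tuple(vints) + (extra,))   # (3, 6, 1, '190-df9b73d2d444')
```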
# used by parsedate
defaultdateformats = (
    '%Y-%m-%d %H:%M:%S',
    '%Y-%m-%d %I:%M:%S%p',
    '%Y-%m-%d %H:%M',
    '%Y-%m-%d %I:%M%p',
    '%Y-%m-%d',
    '%m-%d',
    '%m/%d',
    '%m/%d/%y',
    '%m/%d/%Y',
    '%a %b %d %H:%M:%S %Y',
    '%a %b %d %I:%M:%S%p %Y',
    '%a, %d %b %Y %H:%M:%S',  # GNU coreutils "/bin/date --rfc-2822"
    '%b %d %H:%M:%S %Y',
    '%b %d %I:%M:%S%p %Y',
    '%b %d %H:%M:%S',
    '%b %d %I:%M:%S%p',
    '%b %d %H:%M',
    '%b %d %I:%M%p',
    '%b %d %Y',
    '%b %d',
    '%H:%M:%S',
    '%I:%M:%S%p',
    '%H:%M',
    '%I:%M%p',
)

extendeddateformats = defaultdateformats + (
    "%Y",
    "%Y-%m",
    "%b",
    "%b %Y",
)
def cachefunc(func):
    '''cache the result of function calls'''
    # XXX doesn't handle keywords args
    if func.__code__.co_argcount == 0:
        cache = []
        def f():
            if len(cache) == 0:
                cache.append(func())
            return cache[0]
        return f
    cache = {}
    if func.__code__.co_argcount == 1:
        # we gain a small amount of time because
        # we don't need to pack/unpack the list
        def f(arg):
            if arg not in cache:
                cache[arg] = func(arg)
            return cache[arg]
    else:
        def f(*args):
            if args not in cache:
                cache[args] = func(*args)
            return cache[args]

    return f

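As a standalone illustration of the closure-based memoization `cachefunc` implements, the sketch below reproduces its cache paths and counts underlying calls (the `square` helper is invented for the demo):

```python
def cachefunc(func):
    """Cache results keyed on positional arguments (no keyword support)."""
    if func.__code__.co_argcount == 0:
        cache = []
        def f():
            # a one-element list doubles as an "is it computed yet?" flag
            if not cache:
                cache.append(func())
            return cache[0]
        return f
    cache = {}
    if func.__code__.co_argcount == 1:
        def f(arg):
            if arg not in cache:
                cache[arg] = func(arg)
            return cache[arg]
    else:
        def f(*args):
            if args not in cache:
                cache[args] = func(*args)
            return cache[args]
    return f

calls = []

@cachefunc
def square(x):
    calls.append(x)
    return x * x

assert square(3) == 9
assert square(3) == 9
assert calls == [3]  # the second call was served from the cache
```

Note the cache is unbounded; `lrucachefunc` below is the variant with an eviction policy.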
class sortdict(dict):
    '''a simple sorted dictionary'''
    def __init__(self, data=None):
        self._list = []
        if data:
            self.update(data)
    def copy(self):
        return sortdict(self)
    def __setitem__(self, key, val):
        if key in self:
            self._list.remove(key)
        self._list.append(key)
        dict.__setitem__(self, key, val)
    def __iter__(self):
        return self._list.__iter__()
    def update(self, src):
        if isinstance(src, dict):
            src = src.iteritems()
        for k, v in src:
            self[k] = v
    def clear(self):
        dict.clear(self)
        self._list = []
    def items(self):
        return [(k, self[k]) for k in self._list]
    def __delitem__(self, key):
        dict.__delitem__(self, key)
        self._list.remove(key)
    def pop(self, key, *args, **kwargs):
        dict.pop(self, key, *args, **kwargs)
        try:
            self._list.remove(key)
        except ValueError:
            pass
    def keys(self):
        return self._list
    def iterkeys(self):
        return self._list.__iter__()
    def iteritems(self):
        for k in self._list:
            yield k, self[k]
    def insert(self, index, key, val):
        self._list.insert(index, key)
        dict.__setitem__(self, key, val)

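The behavior that distinguishes `sortdict` from a plain insertion-ordered dict — re-assigning an existing key moves it to the end of the iteration order — can be seen with a minimal Python 3 port (a subset of the methods above, with `items()` in place of py2's `iteritems()`):

```python
class sortdict(dict):
    """Minimal Python 3 port of sortdict (subset of methods)."""
    def __init__(self, data=None):
        self._list = []
        if data:
            self.update(data)
    def __setitem__(self, key, val):
        if key in self:
            # re-assignment moves the key to the end of the order
            self._list.remove(key)
        self._list.append(key)
        dict.__setitem__(self, key, val)
    def update(self, src):
        if isinstance(src, dict):
            src = src.items()
        for k, v in src:
            self[k] = v
    def __iter__(self):
        return iter(self._list)
    def items(self):
        return [(k, self[k]) for k in self._list]

d = sortdict([('a', 1), ('b', 2)])
d['a'] = 3  # moves 'a' behind 'b', unlike a plain Python 3.7+ dict
assert list(d) == ['b', 'a']
assert d.items() == [('b', 2), ('a', 3)]
```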
class _lrucachenode(object):
    """A node in a doubly linked list.

    Holds a reference to nodes on either side as well as a key-value
    pair for the dictionary entry.
    """
    __slots__ = ('next', 'prev', 'key', 'value')

    def __init__(self):
        self.next = None
        self.prev = None

        self.key = _notset
        self.value = None

    def markempty(self):
        """Mark the node as emptied."""
        self.key = _notset

class lrucachedict(object):
    """Dict that caches most recent accesses and sets.

    The dict consists of an actual backing dict - indexed by original
    key - and a doubly linked circular list defining the order of entries in
    the cache.

    The head node is the newest entry in the cache. If the cache is full,
    we recycle head.prev and make it the new head. Cache accesses result in
    the node being moved to before the existing head and being marked as the
    new head node.
    """
    def __init__(self, max):
        self._cache = {}

        self._head = head = _lrucachenode()
        head.prev = head
        head.next = head
        self._size = 1
        self._capacity = max

    def __len__(self):
        return len(self._cache)

    def __contains__(self, k):
        return k in self._cache

    def __iter__(self):
        # We don't have to iterate in cache order, but why not.
        n = self._head
        for i in range(len(self._cache)):
            yield n.key
            n = n.next

    def __getitem__(self, k):
        node = self._cache[k]
        self._movetohead(node)
        return node.value

    def __setitem__(self, k, v):
        node = self._cache.get(k)
        # Replace existing value and mark as newest.
        if node is not None:
            node.value = v
            self._movetohead(node)
            return

        if self._size < self._capacity:
            node = self._addcapacity()
        else:
            # Grab the last/oldest item.
            node = self._head.prev

        # At capacity. Kill the old entry.
        if node.key is not _notset:
            del self._cache[node.key]

        node.key = k
        node.value = v
        self._cache[k] = node
        # And mark it as newest entry. No need to adjust order since it
        # is already self._head.prev.
        self._head = node

    def __delitem__(self, k):
        node = self._cache.pop(k)
        node.markempty()

        # Temporarily mark as newest item before re-adjusting head to make
        # this node the oldest item.
        self._movetohead(node)
        self._head = node.next

    # Additional dict methods.

    def get(self, k, default=None):
        try:
            # return the cached value, not the internal linked-list node
            return self._cache[k].value
        except KeyError:
            return default

    def clear(self):
        n = self._head
        while n.key is not _notset:
            n.markempty()
            n = n.next

        self._cache.clear()

    def copy(self):
        result = lrucachedict(self._capacity)
        n = self._head.prev
        # Iterate in oldest-to-newest order, so the copy has the right ordering
        for i in range(len(self._cache)):
            result[n.key] = n.value
            n = n.prev
        return result

    def _movetohead(self, node):
        """Mark a node as the newest, making it the new head.

        When a node is accessed, it becomes the freshest entry in the LRU
        list, which is denoted by self._head.

        Visually, let's make ``N`` the new head node (* denotes head):

            previous/oldest <-> head <-> next/next newest

            ----<->--- A* ---<->-----
            |                       |
            E <-> D <-> N <-> C <-> B

        To:

            ----<->--- N* ---<->-----
            |                       |
            E <-> D <-> C <-> B <-> A

        This requires the following moves:

           C.next = D  (node.prev.next = node.next)
           D.prev = C  (node.next.prev = node.prev)
           E.next = N  (head.prev.next = node)
           N.prev = E  (node.prev = head.prev)
           N.next = A  (node.next = head)
           A.prev = N  (head.prev = node)
        """
        head = self._head
        # C.next = D
        node.prev.next = node.next
        # D.prev = C
        node.next.prev = node.prev
        # N.prev = E
        node.prev = head.prev
        # N.next = A
        # It is tempting to do just "head" here, however if node is
        # adjacent to head, this will do bad things.
        node.next = head.prev.next
        # E.next = N
        node.next.prev = node
        # A.prev = N
        node.prev.next = node

        self._head = node

    def _addcapacity(self):
        """Add a node to the circular linked list.

        The new node is inserted before the head node.
        """
        head = self._head
        node = _lrucachenode()
        head.prev.next = node
        node.prev = head.prev
        node.next = head
        head.prev = node
        self._size += 1
        return node

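The recycle-the-oldest policy described in the docstring can be modeled compactly with `collections.OrderedDict`; this sketch mirrors only the observable behavior of `lrucachedict`, not its linked-list internals:

```python
from collections import OrderedDict

class tinylru:
    """OrderedDict-based model of lrucachedict's observable behavior."""
    def __init__(self, capacity):
        self._d = OrderedDict()
        self._capacity = capacity
    def __setitem__(self, k, v):
        if k in self._d:
            self._d.move_to_end(k)       # re-set marks the entry newest
        elif len(self._d) >= self._capacity:
            self._d.popitem(last=False)  # recycle the oldest entry
        self._d[k] = v
    def __getitem__(self, k):
        self._d.move_to_end(k)           # an access refreshes the entry
        return self._d[k]
    def __contains__(self, k):
        return k in self._d

c = tinylru(2)
c['a'] = 1
c['b'] = 2
c['a']       # touch 'a', so 'b' becomes the oldest entry
c['c'] = 3   # at capacity: 'b' is evicted
assert 'b' not in c
assert 'a' in c and 'c' in c
```

The real class avoids `OrderedDict` and pre-allocates its ring of nodes so that hits and evictions never allocate.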
def lrucachefunc(func):
    '''cache most recent results of function calls'''
    cache = {}
    order = collections.deque()
    if func.__code__.co_argcount == 1:
        def f(arg):
            if arg not in cache:
                if len(cache) > 20:
                    del cache[order.popleft()]
                cache[arg] = func(arg)
            else:
                order.remove(arg)
            order.append(arg)
            return cache[arg]
    else:
        def f(*args):
            if args not in cache:
                if len(cache) > 20:
                    del cache[order.popleft()]
                cache[args] = func(*args)
            else:
                order.remove(args)
            order.append(args)
            return cache[args]

    return f

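The bounded window of `lrucachefunc` (about 20 entries, evicting the least recently used) can be exercised standalone; this copy hardcodes the single-argument path and counts how often the wrapped function really runs:

```python
import collections

def lrucachefunc(func):
    """Standalone copy of lrucachefunc's single-argument path."""
    cache = {}
    order = collections.deque()
    def f(arg):
        if arg not in cache:
            if len(cache) > 20:
                # evict the least recently used entry
                del cache[order.popleft()]
            cache[arg] = func(arg)
        else:
            order.remove(arg)
        order.append(arg)
        return cache[arg]
    return f

calls = []

@lrucachefunc
def ident(x):
    calls.append(x)
    return x

for i in range(22):  # 22 distinct keys overflow the ~20-entry window
    ident(i)
ident(0)             # 0 was evicted above, so this recomputes it
assert calls.count(0) == 2
assert calls.count(21) == 1
```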
class propertycache(object):
    def __init__(self, func):
        self.func = func
        self.name = func.__name__
    def __get__(self, obj, type=None):
        result = self.func(obj)
        self.cachevalue(obj, result)
        return result

    def cachevalue(self, obj, value):
        # __dict__ assignment required to bypass __setattr__ (eg: repoview)
        obj.__dict__[self.name] = value

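`propertycache` works because it is a non-data descriptor: writing the computed value into the instance `__dict__` shadows the descriptor, so `__get__` runs at most once per object. A standalone copy demonstrates this (the `repo`/`expensive` names are invented for the demo):

```python
class propertycache(object):
    """Standalone copy of the descriptor above."""
    def __init__(self, func):
        self.func = func
        self.name = func.__name__
    def __get__(self, obj, type=None):
        result = self.func(obj)
        self.cachevalue(obj, result)
        return result
    def cachevalue(self, obj, value):
        # Writing into __dict__ shadows this non-data descriptor, so
        # later attribute reads never reach __get__ again.
        obj.__dict__[self.name] = value

computed = []

class repo(object):
    @propertycache
    def expensive(self):
        computed.append(1)
        return 42

r = repo()
assert r.expensive == 42
assert r.expensive == 42
assert computed == [1]  # the function body ran only once
```

Deleting `r.__dict__['expensive']` would expose the descriptor again and force a recomputation on the next access.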
def pipefilter(s, cmd):
    '''filter string S through command CMD, returning its output'''
    p = subprocess.Popen(cmd, shell=True, close_fds=closefds,
                         stdin=subprocess.PIPE, stdout=subprocess.PIPE)
    pout, perr = p.communicate(s)
    return pout

def tempfilter(s, cmd):
    '''filter string S through a pair of temporary files with CMD.
    CMD is used as a template to create the real command to be run,
    with the strings INFILE and OUTFILE replaced by the real names of
    the temporary files generated.'''
    inname, outname = None, None
    try:
        infd, inname = tempfile.mkstemp(prefix='hg-filter-in-')
        fp = os.fdopen(infd, 'wb')
        fp.write(s)
        fp.close()
        outfd, outname = tempfile.mkstemp(prefix='hg-filter-out-')
        os.close(outfd)
        cmd = cmd.replace('INFILE', inname)
        cmd = cmd.replace('OUTFILE', outname)
        code = os.system(cmd)
        if sys.platform == 'OpenVMS' and code & 1:
            code = 0
        if code:
            raise Abort(_("command '%s' failed: %s") %
                        (cmd, explainexit(code)))
        return readfile(outname)
    finally:
        try:
            if inname:
                os.unlink(inname)
        except OSError:
            pass
        try:
            if outname:
                os.unlink(outname)
        except OSError:
            pass

filtertable = {
    'tempfile:': tempfilter,
    'pipe:': pipefilter,
}

def filter(s, cmd):
    "filter a string through a command that transforms its input to its output"
    for name, fn in filtertable.iteritems():
        if cmd.startswith(name):
            return fn(s, cmd[len(name):].lstrip())
    return pipefilter(s, cmd)

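The prefix dispatch in `filter` can be illustrated with stubbed filter functions standing in for `tempfilter` and `pipefilter` (the stubs just report which branch ran and what command text remained after the prefix was stripped):

```python
# Stub filters: each returns (branch name, remaining command text).
filtertable = {
    'tempfile:': lambda s, cmd: ('tempfile', cmd),
    'pipe:': lambda s, cmd: ('pipe', cmd),
}

def filter(s, cmd):
    # dispatch on a known prefix; anything unprefixed goes to the pipe filter
    for name, fn in filtertable.items():
        if cmd.startswith(name):
            return fn(s, cmd[len(name):].lstrip())
    return filtertable['pipe:'](s, cmd)

assert filter('x', 'tempfile: sed -f SCRIPT INFILE > OUTFILE') == \
    ('tempfile', 'sed -f SCRIPT INFILE > OUTFILE')
assert filter('x', 'tr a b') == ('pipe', 'tr a b')
```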
def binary(s):
    """return true if a string is binary data"""
    return bool(s and '\0' in s)

def increasingchunks(source, min=1024, max=65536):
    '''return no less than min bytes per chunk while data remains,
    doubling min after each chunk until it reaches max'''
    def log2(x):
        if not x:
            return 0
        i = 0
        while x:
            x >>= 1
            i += 1
        return i - 1

    buf = []
    blen = 0
    for chunk in source:
        buf.append(chunk)
        blen += len(chunk)
        if blen >= min:
            if min < max:
                min = min << 1
                nmin = 1 << log2(blen)
                if nmin > min:
                    min = nmin
                if min > max:
                    min = max
            yield ''.join(buf)
            blen = 0
            buf = []
    if buf:
        yield ''.join(buf)

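The doubling behavior of `increasingchunks` is easiest to see with small `min`/`max` values; this standalone copy regroups forty 1-byte chunks into progressively larger ones:

```python
def increasingchunks(source, min=1024, max=65536):
    """Standalone copy: yield >= min bytes per chunk, doubling min up to max."""
    def log2(x):
        if not x:
            return 0
        i = 0
        while x:
            x >>= 1
            i += 1
        return i - 1

    buf = []
    blen = 0
    for chunk in source:
        buf.append(chunk)
        blen += len(chunk)
        if blen >= min:
            if min < max:
                # double the threshold, or jump to the power of two
                # below the accumulated length if that is larger
                min = min << 1
                nmin = 1 << log2(blen)
                if nmin > min:
                    min = nmin
                if min > max:
                    min = max
            yield ''.join(buf)
            blen = 0
            buf = []
    if buf:
        yield ''.join(buf)

# Forty 1-byte chunks regroup as 4, 8, 16, then the 12-byte remainder.
chunks = list(increasingchunks(('x' for _ in range(40)), min=4, max=16))
assert [len(c) for c in chunks] == [4, 8, 16, 12]
```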
Abort = error.Abort

def always(fn):
    return True

def never(fn):
    return False

def nogc(func):
    """disable garbage collector

    Python's garbage collector triggers a GC each time a certain number of
    container objects (the number being defined by gc.get_threshold()) are
    allocated even when marked not to be tracked by the collector. Tracking has
    no effect on when GCs are triggered, only on what objects the GC looks
    into. As a workaround, disable GC while building complex (huge)
    containers.

    This garbage collector issue has been fixed in 2.7.
    """
    def wrapper(*args, **kwargs):
        gcenabled = gc.isenabled()
        gc.disable()
        try:
            return func(*args, **kwargs)
        finally:
            if gcenabled:
                gc.enable()
    return wrapper

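The save/disable/restore dance in `nogc` can be checked directly; the wrapper restores whatever collector state it found, so it composes safely with callers that have already disabled GC:

```python
import gc

def nogc(func):
    """Standalone copy of the decorator above."""
    def wrapper(*args, **kwargs):
        gcenabled = gc.isenabled()
        gc.disable()
        try:
            return func(*args, **kwargs)
        finally:
            if gcenabled:
                gc.enable()
    return wrapper

@nogc
def build():
    assert not gc.isenabled()  # GC is off for the duration of the call
    return [1, 2, 3]

was = gc.isenabled()
assert build() == [1, 2, 3]
assert gc.isenabled() == was   # original collector state restored
```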
def pathto(root, n1, n2):
    '''return the relative path from one place to another.
    root should use os.sep to separate directories
    n1 should use os.sep to separate directories
    n2 should use "/" to separate directories
    returns an os.sep-separated path.

    If n1 is a relative path, it's assumed it's
    relative to root.
    n2 should always be relative to root.
    '''
    if not n1:
        return localpath(n2)
    if os.path.isabs(n1):
        if os.path.splitdrive(root)[0] != os.path.splitdrive(n1)[0]:
            return os.path.join(root, localpath(n2))
        n2 = '/'.join((pconvert(root), n2))
    a, b = splitpath(n1), n2.split('/')
    a.reverse()
    b.reverse()
    while a and b and a[-1] == b[-1]:
        a.pop()
        b.pop()
    b.reverse()
    return os.sep.join((['..'] * len(a)) + b) or '.'

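The core of `pathto` — popping the common prefix, then replacing what remains of `n1` with `..` segments — can be sketched without the drive-letter and `os.sep` handling (POSIX separators only, both paths relative to the same root; this is a simplification, not the full function):

```python
def pathto(n1, n2):
    """Relative-path walk from directory n1 to path n2, '/' separators only."""
    a, b = n1.split('/'), n2.split('/')
    a.reverse()
    b.reverse()
    # discard the components the two paths share
    while a and b and a[-1] == b[-1]:
        a.pop()
        b.pop()
    b.reverse()
    # climb out of what remains of n1, then descend into n2
    return '/'.join(['..'] * len(a) + b) or '.'

assert pathto('dir/subdir', 'dir/file') == '../file'
assert pathto('a/b', 'a/b') == '.'
```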
def mainfrozen():
    """return True if we are a frozen executable.

    The code supports py2exe (most common, Windows only) and tools/freeze
    (portable, not much used).
    """
    return (safehasattr(sys, "frozen") or # new py2exe
            safehasattr(sys, "importers") or # old py2exe
            imp.is_frozen("__main__")) # tools/freeze

# the location of data files matching the source code
if mainfrozen() and getattr(sys, 'frozen', None) != 'macosx_app':
    # executable version (py2exe) doesn't support __file__
    datapath = os.path.dirname(sys.executable)
else:
    datapath = os.path.dirname(__file__)

i18n.setdatapath(datapath)

_hgexecutable = None

def hgexecutable():
    """return location of the 'hg' executable.

    Defaults to $HG or 'hg' in the search path.
    """
    if _hgexecutable is None:
        hg = os.environ.get('HG')
        mainmod = sys.modules['__main__']
        if hg:
            _sethgexecutable(hg)
        elif mainfrozen():
            if getattr(sys, 'frozen', None) == 'macosx_app':
                # Env variable set by py2app
                _sethgexecutable(os.environ['EXECUTABLEPATH'])
            else:
                _sethgexecutable(sys.executable)
        elif os.path.basename(getattr(mainmod, '__file__', '')) == 'hg':
            _sethgexecutable(mainmod.__file__)
        else:
            exe = findexe('hg') or os.path.basename(sys.argv[0])
            _sethgexecutable(exe)
    return _hgexecutable

def _sethgexecutable(path):
    """set location of the 'hg' executable"""
    global _hgexecutable
    _hgexecutable = path

def _isstdout(f):
    fileno = getattr(f, 'fileno', None)
    return fileno and fileno() == sys.__stdout__.fileno()

def system(cmd, environ=None, cwd=None, onerr=None, errprefix=None, out=None):
    '''enhanced shell command execution.
    run with environment maybe modified, maybe in different dir.

    if command fails and onerr is None, return status, else raise onerr
    object as exception.

    if out is specified, it is assumed to be a file-like object that has a
    write() method. stdout and stderr will be redirected to out.'''
    if environ is None:
        environ = {}
    try:
        sys.stdout.flush()
    except Exception:
        pass
    def py2shell(val):
        'convert python object into string that is useful to shell'
        if val is None or val is False:
            return '0'
        if val is True:
            return '1'
        return str(val)
    origcmd = cmd
    cmd = quotecommand(cmd)
    if sys.platform == 'plan9' and (sys.version_info[0] == 2
                                    and sys.version_info[1] < 7):
        # subprocess kludge to work around issues in half-baked Python
        # ports, notably bichued/python:
        if not cwd is None:
            os.chdir(cwd)
        rc = os.system(cmd)
    else:
        env = dict(os.environ)
        env.update((k, py2shell(v)) for k, v in environ.iteritems())
        env['HG'] = hgexecutable()
        if out is None or _isstdout(out):
            rc = subprocess.call(cmd, shell=True, close_fds=closefds,
                                 env=env, cwd=cwd)
        else:
            proc = subprocess.Popen(cmd, shell=True, close_fds=closefds,
                                    env=env, cwd=cwd, stdout=subprocess.PIPE,
                                    stderr=subprocess.STDOUT)
            while True:
                line = proc.stdout.readline()
                if not line:
                    break
                out.write(line)
            proc.wait()
            rc = proc.returncode
    if sys.platform == 'OpenVMS' and rc & 1:
        rc = 0
    if rc and onerr:
        errmsg = '%s %s' % (os.path.basename(origcmd.split(None, 1)[0]),
                            explainexit(rc)[0])
        if errprefix:
            errmsg = '%s: %s' % (errprefix, errmsg)
        raise onerr(errmsg)
    return rc

def checksignature(func):
    '''wrap a function with code to check for calling errors'''
    def check(*args, **kwargs):
        try:
            return func(*args, **kwargs)
        except TypeError:
            if len(traceback.extract_tb(sys.exc_info()[2])) == 1:
                raise error.SignatureError
            raise

    return check

def copyfile(src, dest, hardlink=False, copystat=False, checkambig=False):
    '''copy a file, preserving mode and optionally other stat info like
    atime/mtime

    checkambig argument is used with filestat, and is useful only if
    destination file is guarded by any lock (e.g. repo.lock or
    repo.wlock).

    copystat and checkambig should be exclusive.
    '''
    assert not (copystat and checkambig)
    oldstat = None
    if os.path.lexists(dest):
        if checkambig:
            oldstat = checkambig and filestat(dest)
        unlink(dest)
    # hardlinks are problematic on CIFS, quietly ignore this flag
    # until we find a way to work around it cleanly (issue4546)
    if False and hardlink:
        try:
            oslink(src, dest)
            return
        except (IOError, OSError):
            pass # fall back to normal copy
    if os.path.islink(src):
        os.symlink(os.readlink(src), dest)
        # copytime is ignored for symlinks, but in general copytime isn't
        # needed for them anyway
    else:
        try:
            shutil.copyfile(src, dest)
            if copystat:
                # copystat also copies mode
                shutil.copystat(src, dest)
            else:
                shutil.copymode(src, dest)
                if oldstat and oldstat.stat:
                    newstat = filestat(dest)
                    if newstat.isambig(oldstat):
                        # stat of copied file is ambiguous to original one
                        advanced = (oldstat.stat.st_mtime + 1) & 0x7fffffff
                        os.utime(dest, (advanced, advanced))
        except shutil.Error as inst:
            raise Abort(str(inst))

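The checkambig branch above relies on a simple trick: when the copied file's stat is ambiguous against the replaced file's (same ctime second), the mtime is advanced by one second, masked to 31 bits to stay in signed-int range. A minimal standalone sketch of that workaround, using a hypothetical helper name (`copy_with_mtime_advance` is not part of Mercurial) and plain `os.stat` instead of `filestat`:

```python
import os
import shutil


def copy_with_mtime_advance(src, dest):
    # Hypothetical illustration of copyfile(checkambig=True): if the new
    # file's ctime lands in the same second as the replaced file's, a
    # cache comparing (size, ctime, mtime) could miss the change, so we
    # bump mtime by one second (wrapped to 31 bits).
    oldstat = os.stat(dest) if os.path.lexists(dest) else None
    shutil.copyfile(src, dest)
    if oldstat is not None:
        newstat = os.stat(dest)
        if int(newstat.st_ctime) == int(oldstat.st_ctime):
            advanced = (int(oldstat.st_mtime) + 1) & 0x7fffffff
            os.utime(dest, (advanced, advanced))
    return dest
```

This only matters on filesystems with one-second timestamp granularity; with nanosecond stats the collision is far less likely, but the masked advance is still harmless.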
def copyfiles(src, dst, hardlink=None, progress=lambda t, pos: None):
    """Copy a directory tree using hardlinks if possible."""
    num = 0

    if hardlink is None:
        hardlink = (os.stat(src).st_dev ==
                    os.stat(os.path.dirname(dst)).st_dev)
    if hardlink:
        topic = _('linking')
    else:
        topic = _('copying')

    if os.path.isdir(src):
        os.mkdir(dst)
        for name, kind in osutil.listdir(src):
            srcname = os.path.join(src, name)
            dstname = os.path.join(dst, name)
            def nprog(t, pos):
                if pos is not None:
                    return progress(t, pos + num)
            hardlink, n = copyfiles(srcname, dstname, hardlink, progress=nprog)
            num += n
    else:
        if hardlink:
            try:
                oslink(src, dst)
            except (IOError, OSError):
                hardlink = False
                shutil.copy(src, dst)
        else:
            shutil.copy(src, dst)
        num += 1
        progress(topic, num)

    progress(topic, None)

    return hardlink, num

_winreservednames = '''con prn aux nul
com1 com2 com3 com4 com5 com6 com7 com8 com9
lpt1 lpt2 lpt3 lpt4 lpt5 lpt6 lpt7 lpt8 lpt9'''.split()
_winreservedchars = ':*?"<>|'
def checkwinfilename(path):
    r'''Check that the base-relative path is a valid filename on Windows.
    Returns None if the path is ok, or a UI string describing the problem.

    >>> checkwinfilename("just/a/normal/path")
    >>> checkwinfilename("foo/bar/con.xml")
    "filename contains 'con', which is reserved on Windows"
    >>> checkwinfilename("foo/con.xml/bar")
    "filename contains 'con', which is reserved on Windows"
    >>> checkwinfilename("foo/bar/xml.con")
    >>> checkwinfilename("foo/bar/AUX/bla.txt")
    "filename contains 'AUX', which is reserved on Windows"
    >>> checkwinfilename("foo/bar/bla:.txt")
    "filename contains ':', which is reserved on Windows"
    >>> checkwinfilename("foo/bar/b\07la.txt")
    "filename contains '\\x07', which is invalid on Windows"
    >>> checkwinfilename("foo/bar/bla ")
    "filename ends with ' ', which is not allowed on Windows"
    >>> checkwinfilename("../bar")
    >>> checkwinfilename("foo\\")
    "filename ends with '\\', which is invalid on Windows"
    >>> checkwinfilename("foo\\/bar")
    "directory name ends with '\\', which is invalid on Windows"
    '''
    if path.endswith('\\'):
        return _("filename ends with '\\', which is invalid on Windows")
    if '\\/' in path:
        return _("directory name ends with '\\', which is invalid on Windows")
    for n in path.replace('\\', '/').split('/'):
        if not n:
            continue
        for c in n:
            if c in _winreservedchars:
                return _("filename contains '%s', which is reserved "
                         "on Windows") % c
            if ord(c) <= 31:
                return _("filename contains %r, which is invalid "
                         "on Windows") % c
        base = n.split('.')[0]
        if base and base.lower() in _winreservednames:
            return _("filename contains '%s', which is reserved "
                     "on Windows") % base
        t = n[-1]
        if t in '. ' and n not in '..':
            return _("filename ends with '%s', which is not allowed "
                     "on Windows") % t

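The reserved-name rule in checkwinfilename is subtle: Windows reserves the device names case-insensitively and regardless of extension, so only the part of each component before the first dot matters (`CON.xml` is just as invalid as `con`). A self-contained sketch of that one rule, with a hypothetical helper name (`is_windows_reserved` is not part of Mercurial):

```python
# Hypothetical standalone version of the reserved-name check only;
# the real checkwinfilename also rejects ':*?"<>|', control characters,
# and trailing '. ' in each path component.
_reserved = (set('con prn aux nul'.split())
             | set('com%d' % i for i in range(1, 10))
             | set('lpt%d' % i for i in range(1, 10)))


def is_windows_reserved(name):
    # Only the text before the first dot is compared, case-insensitively,
    # matching n.split('.')[0].lower() in checkwinfilename.
    base = name.split('.')[0]
    return bool(base) and base.lower() in _reserved
```

Note that a leading dot makes the base empty, so dotfiles like `.hgignore` pass, which the `if base` guard handles.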
if os.name == 'nt':
    checkosfilename = checkwinfilename
else:
    checkosfilename = platform.checkosfilename

def makelock(info, pathname):
    try:
        return os.symlink(info, pathname)
    except OSError as why:
        if why.errno == errno.EEXIST:
            raise
    except AttributeError: # no symlink in os
        pass

    ld = os.open(pathname, os.O_CREAT | os.O_WRONLY | os.O_EXCL)
    os.write(ld, info)
    os.close(ld)

def readlock(pathname):
    try:
        return os.readlink(pathname)
    except OSError as why:
        if why.errno not in (errno.EINVAL, errno.ENOSYS):
            raise
    except AttributeError: # no symlink in os
        pass
    fp = posixfile(pathname)
    r = fp.read()
    fp.close()
    return r

def fstat(fp):
    '''stat file object that may not have fileno method.'''
    try:
        return os.fstat(fp.fileno())
    except AttributeError:
        return os.stat(fp.name)

# File system features

def checkcase(path):
    """
    Return true if the given path is on a case-sensitive filesystem

    Requires a path (like /foo/.hg) ending with a foldable final
    directory component.
    """
    s1 = os.lstat(path)
    d, b = os.path.split(path)
    b2 = b.upper()
    if b == b2:
        b2 = b.lower()
        if b == b2:
            return True # no evidence against case sensitivity
    p2 = os.path.join(d, b2)
    try:
        s2 = os.lstat(p2)
        if s2 == s1:
            return False
        return True
    except OSError:
        return True

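The probe in checkcase works by lstat-ing the same path with its case flipped: if the flipped spelling resolves to the identical stat result, the filesystem folded the case and is case-insensitive; if it fails or resolves elsewhere, no evidence against sensitivity was found. A filesystem-touching sketch under a hypothetical name (`is_case_sensitive` is not part of Mercurial):

```python
import os


def is_case_sensitive(path):
    # Sketch of the checkcase() probe. 'path' should end in a component
    # with at least one letter; the result depends on the filesystem the
    # path lives on (e.g. False on default macOS/Windows volumes).
    s1 = os.lstat(path)
    d, b = os.path.split(path)
    b2 = b.upper()
    if b == b2:
        b2 = b.lower()
        if b == b2:
            return True  # nothing foldable; no evidence against sensitivity
    try:
        s2 = os.lstat(os.path.join(d, b2))
    except OSError:
        return True  # case-flipped name doesn't exist
    return s2 != s1
```

Because the answer is a property of the volume, not the OS, callers probe a real path (Mercurial uses the repository's `.hg` directory) rather than hardcoding per-platform behavior.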
try:
    import re2
    _re2 = None
except ImportError:
    _re2 = False

class _re(object):
    def _checkre2(self):
        global _re2
        try:
            # check if match works, see issue3964
            _re2 = bool(re2.match(r'\[([^\[]+)\]', '[ui]'))
        except ImportError:
            _re2 = False

    def compile(self, pat, flags=0):
        '''Compile a regular expression, using re2 if possible

        For best performance, use only re2-compatible regexp features. The
        only flags from the re module that are re2-compatible are
        IGNORECASE and MULTILINE.'''
        if _re2 is None:
            self._checkre2()
        if _re2 and (flags & ~(remod.IGNORECASE | remod.MULTILINE)) == 0:
            if flags & remod.IGNORECASE:
                pat = '(?i)' + pat
            if flags & remod.MULTILINE:
                pat = '(?m)' + pat
            try:
                return re2.compile(pat)
            except re2.error:
                pass
        return remod.compile(pat, flags)

    @propertycache
    def escape(self):
        '''Return the version of escape corresponding to self.compile.

        This is imperfect because whether re2 or re is used for a particular
        function depends on the flags, etc, but it's the best we can do.
        '''
        global _re2
        if _re2 is None:
            self._checkre2()
        if _re2:
            return re2.escape
        else:
            return remod.escape

re = _re()

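The `_re` wrapper above is an optional-accelerator pattern: prefer the faster re2 engine when it is importable and the flags are compatible, and silently fall back to the stdlib `re` module otherwise. A compressed sketch of the same idea (the helper name `compile_pattern` is hypothetical; unlike `_re.compile`, it probes re2 on every call instead of caching the result):

```python
import re as remod


def compile_pattern(pat, flags=0):
    # Try an optional accelerated regex engine first; any import failure
    # (re2 is usually not installed) or compile error falls through to
    # the stdlib engine, so callers never see the difference.
    try:
        import re2  # optional dependency
        return re2.compile(pat)
    except Exception:
        return remod.compile(pat, flags)
```

The production version caches the availability probe in the module-level `_re2` flag so the `import re2` cost is paid once, and it rewrites IGNORECASE/MULTILINE into inline `(?i)`/`(?m)` prefixes because re2 takes no flags argument.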
_fspathcache = {}
def fspath(name, root):
    '''Get name in the case stored in the filesystem

    The name should be relative to root, and be normcase-ed for efficiency.

    Note that this function is unnecessary, and should not be
    called, for case-sensitive filesystems (simply because it's expensive).

    The root should be normcase-ed, too.
    '''
    def _makefspathcacheentry(dir):
        return dict((normcase(n), n) for n in os.listdir(dir))

    seps = os.sep
    if os.altsep:
        seps = seps + os.altsep
    # Protect backslashes. This gets silly very quickly.
    seps.replace('\\','\\\\')
    pattern = remod.compile(r'([^%s]+)|([%s]+)' % (seps, seps))
    dir = os.path.normpath(root)
    result = []
    for part, sep in pattern.findall(name):
        if sep:
            result.append(sep)
            continue

        if dir not in _fspathcache:
            _fspathcache[dir] = _makefspathcacheentry(dir)
        contents = _fspathcache[dir]

        found = contents.get(part)
        if not found:
            # retry "once per directory" per "dirstate.walk" which
            # may take place for each patch of "hg qpush", for example
            _fspathcache[dir] = contents = _makefspathcacheentry(dir)
            found = contents.get(part)

        result.append(found or part)
        dir = os.path.join(dir, part)

    return ''.join(result)

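The core step of fspath is a per-directory lookup table mapping case-normalized names to their stored spellings, so a normcase-ed component can be mapped back to what the filesystem actually holds. A filesystem-free sketch of that single step (the name `restore_case` is hypothetical, and lowercasing stands in for `normcase`):

```python
def restore_case(component, entries):
    # Hypothetical one-directory version of fspath()'s cache entry:
    # build {normalized: stored} from a directory listing, then map the
    # normalized component back to its on-disk spelling. Unknown names
    # fall through unchanged, like the 'found or part' in fspath().
    table = dict((n.lower(), n) for n in entries)
    return table.get(component.lower(), component)
```

The real function repeats this per path component, descending one directory at a time and memoizing each listing in `_fspathcache`, which is why it is only worth calling on case-insensitive filesystems.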
def checknlink(testfile):
    '''check whether hardlink count reporting works properly'''

    # testfile may be open, so we need a separate file for checking to
    # work around issue2543 (or testfile may get lost on Samba shares)
    f1 = testfile + ".hgtmp1"
    if os.path.lexists(f1):
        return False
    try:
        posixfile(f1, 'w').close()
    except IOError:
        return False

    f2 = testfile + ".hgtmp2"
    fd = None
    try:
        oslink(f1, f2)
        # nlinks() may behave differently for files on Windows shares if
        # the file is open.
        fd = posixfile(f2)
        return nlinks(f2) > 1
    except OSError:
        return False
    finally:
        if fd is not None:
            fd.close()
        for f in (f1, f2):
            try:
                os.unlink(f)
            except OSError:
                pass

def endswithsep(path):
    '''Check path ends with os.sep or os.altsep.'''
    return path.endswith(os.sep) or os.altsep and path.endswith(os.altsep)

def splitpath(path):
    '''Split path by os.sep.
    Note that this function does not use os.altsep because this is
    an alternative to a simple "xxx.split(os.sep)".
    It is recommended to use os.path.normpath() before using this
    function if needed.'''
    return path.split(os.sep)

def gui():
    '''Are we running in a GUI?'''
    if sys.platform == 'darwin':
        if 'SSH_CONNECTION' in os.environ:
            # handle SSH access to a box where the user is logged in
            return False
        elif getattr(osutil, 'isgui', None):
            # check if a CoreGraphics session is available
            return osutil.isgui()
        else:
            # pure build; use a safe default
            return True
    else:
        return os.name == "nt" or os.environ.get("DISPLAY")

def mktempcopy(name, emptyok=False, createmode=None):
    """Create a temporary file with the same contents from name

    The permission bits are copied from the original file.

    If the temporary file is going to be truncated immediately, you
    can use emptyok=True as an optimization.

    Returns the name of the temporary file.
    """
    d, fn = os.path.split(name)
    fd, temp = tempfile.mkstemp(prefix='.%s-' % fn, dir=d)
    os.close(fd)
    # Temporary files are created with mode 0600, which is usually not
    # what we want. If the original file already exists, just copy
    # its mode. Otherwise, manually obey umask.
    copymode(name, temp, createmode)
    if emptyok:
        return temp
    try:
        try:
            ifp = posixfile(name, "rb")
        except IOError as inst:
            if inst.errno == errno.ENOENT:
                return temp
            if not getattr(inst, 'filename', None):
                inst.filename = name
            raise
        ofp = posixfile(temp, "wb")
        for chunk in filechunkiter(ifp):
            ofp.write(chunk)
        ifp.close()
        ofp.close()
    except: # re-raises
        try: os.unlink(temp)
        except OSError: pass
        raise
    return temp

class filestat(object):
    """help to exactly detect change of a file

    'stat' attribute is result of 'os.stat()' if specified 'path'
    exists. Otherwise, it is None. This can avoid preparative
    'exists()' examination on client side of this class.
    """
    def __init__(self, path):
        try:
            self.stat = os.stat(path)
        except OSError as err:
            if err.errno != errno.ENOENT:
                raise
            self.stat = None

    __hash__ = object.__hash__

    def __eq__(self, old):
        try:
            # if ambiguity between stat of new and old file is
            # avoided, comparison of size, ctime and mtime is enough
            # to exactly detect change of a file regardless of platform
            return (self.stat.st_size == old.stat.st_size and
                    self.stat.st_ctime == old.stat.st_ctime and
                    self.stat.st_mtime == old.stat.st_mtime)
        except AttributeError:
            return False

    def isambig(self, old):
        """Examine whether new (= self) stat is ambiguous against old one

        "S[N]" below means stat of a file at N-th change:

        - S[n-1].ctime  < S[n].ctime: can detect change of a file
        - S[n-1].ctime == S[n].ctime
          - S[n-1].ctime  < S[n].mtime: means natural advancing (*1)
          - S[n-1].ctime == S[n].mtime: is ambiguous (*2)
          - S[n-1].ctime  > S[n].mtime: never occurs naturally (don't care)
        - S[n-1].ctime  > S[n].ctime: never occurs naturally (don't care)

        Case (*2) above means that a file was changed twice or more at
        same time in sec (= S[n-1].ctime), and comparison of timestamp
        is ambiguous.

        Base idea to avoid such ambiguity is "advance mtime 1 sec, if
        timestamp is ambiguous".

        But advancing mtime only in case (*2) doesn't work as
        expected, because naturally advanced S[n].mtime in case (*1)
        might be equal to manually advanced S[n-1 or earlier].mtime.

        Therefore, all "S[n-1].ctime == S[n].ctime" cases should be
        treated as ambiguous regardless of mtime, to avoid overlooking
        collisions between such mtimes.

        Advancing mtime "if isambig(oldstat)" ensures "S[n-1].mtime !=
        S[n].mtime", even if size of a file isn't changed.
        """
        try:
            return (self.stat.st_ctime == old.stat.st_ctime)
        except AttributeError:
            return False

    def __ne__(self, other):
        return not self == other

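For all the case analysis in the isambig docstring, the check itself collapses to a single comparison: equal ctimes mean a reader comparing (size, ctime, mtime) could miss the change. A minimal filesystem-free demonstration, using a hypothetical stand-in class (`FakeStat`/`is_ambiguous` are illustrative names, not Mercurial API):

```python
class FakeStat(object):
    # Minimal stand-in for an os.stat() result, to exercise the
    # ambiguity rule without touching the filesystem.
    def __init__(self, size, ctime, mtime):
        self.st_size = size
        self.st_ctime = ctime
        self.st_mtime = mtime


def is_ambiguous(new, old):
    # Mirrors filestat.isambig(): equal ctimes are treated as ambiguous
    # regardless of size or mtime, per the docstring's case analysis.
    return new.st_ctime == old.st_ctime
```

Note that even a size change within the same ctime second counts as ambiguous here; that is deliberate, since the fix (advancing mtime) must guarantee `S[n-1].mtime != S[n].mtime` in every same-ctime case.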
class atomictempfile(object):
    '''writable file object that atomically updates a file

    All writes will go to a temporary copy of the original file. Call
    close() when you are done writing, and atomictempfile will rename
    the temporary copy to the original name, making the changes
    visible. If the object is destroyed without being closed, all your
    writes are discarded.

    checkambig argument of constructor is used with filestat, and is
    useful only if target file is guarded by any lock (e.g. repo.lock
    or repo.wlock).
    '''
    def __init__(self, name, mode='w+b', createmode=None, checkambig=False):
        self.__name = name # permanent name
        self._tempname = mktempcopy(name, emptyok=('w' in mode),
                                    createmode=createmode)
        self._fp = posixfile(self._tempname, mode)
        self._checkambig = checkambig

        # delegated methods
        self.read = self._fp.read
        self.write = self._fp.write
        self.seek = self._fp.seek
        self.tell = self._fp.tell
        self.fileno = self._fp.fileno

    def close(self):
        if not self._fp.closed:
            self._fp.close()
            filename = localpath(self.__name)
            oldstat = self._checkambig and filestat(filename)
            if oldstat and oldstat.stat:
                rename(self._tempname, filename)
                newstat = filestat(filename)
                if newstat.isambig(oldstat):
                    # stat of changed file is ambiguous to original one
                    advanced = (oldstat.stat.st_mtime + 1) & 0x7fffffff
                    os.utime(filename, (advanced, advanced))
            else:
                rename(self._tempname, filename)

    def discard(self):
        if not self._fp.closed:
            try:
                os.unlink(self._tempname)
            except OSError:
                pass
            self._fp.close()

    def __del__(self):
        if safehasattr(self, '_fp'): # constructor actually did something
            self.discard()

    def __enter__(self):
        return self

    def __exit__(self, exctype, excvalue, traceback):
        if exctype is not None:
            self.discard()
        else:
            self.close()

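Stripped of the stat-ambiguity handling, atomictempfile is the classic write-to-temp-then-rename pattern: all writes go to a temp file in the same directory, and a final rename publishes the result so readers only ever see the old or the new contents, never a partial write. A self-contained sketch under a hypothetical name (`atomic_write` is not part of Mercurial; `os.replace` stands in for Mercurial's platform `rename`):

```python
import os
import tempfile


def atomic_write(path, data):
    # Write a sibling temp file, then rename it over the target.
    # The temp file must live in the same directory so the rename stays
    # on one filesystem and therefore atomic.
    d = os.path.dirname(path) or '.'
    fd, tmp = tempfile.mkstemp(prefix='.%s-' % os.path.basename(path), dir=d)
    try:
        with os.fdopen(fd, 'wb') as fp:
            fp.write(data)
        os.replace(tmp, path)  # overwrites atomically, also on Windows
    except BaseException:
        # mirror atomictempfile.discard(): never leave the temp file behind
        try:
            os.unlink(tmp)
        except OSError:
            pass
        raise
```

Like atomictempfile's `__exit__`, an error discards the temp file instead of publishing it, so a crashed writer cannot corrupt the target.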
def makedirs(name, mode=None, notindexed=False):
    """recursive directory creation with parent mode inheritance

    Newly created directories are marked as "not to be indexed by
    the content indexing service", if ``notindexed`` is specified
    for "write" mode access.
    """
    try:
        makedir(name, notindexed)
    except OSError as err:
        if err.errno == errno.EEXIST:
            return
        if err.errno != errno.ENOENT or not name:
            raise
        parent = os.path.dirname(os.path.abspath(name))
        if parent == name:
            raise
        makedirs(parent, mode, notindexed)
        try:
            makedir(name, notindexed)
        except OSError as err:
            # Catch EEXIST to handle races
            if err.errno == errno.EEXIST:
                return
            raise
    if mode is not None:
        os.chmod(name, mode)

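The parent-then-retry recursion above can be exercised with plain `os.mkdir`. This is a minimal standalone sketch (the name `makedirs_sketch` is ours, and it omits the `mode`/`notindexed` handling); it shows why the second `except` clause tolerates `EEXIST`: another process may create the directory between the recursion and the retry.

```python
import errno
import os
import tempfile

def makedirs_sketch(name):
    # Try the leaf first; on ENOENT, recurse to create the parents.
    try:
        os.mkdir(name)
    except OSError as err:
        if err.errno == errno.EEXIST:
            return
        if err.errno != errno.ENOENT or not name:
            raise
        parent = os.path.dirname(os.path.abspath(name))
        if parent == name:
            raise
        makedirs_sketch(parent)
        try:
            os.mkdir(name)
        except OSError as err:
            # a concurrent process may have won the race to create `name`
            if err.errno == errno.EEXIST:
                return
            raise

root = tempfile.mkdtemp()
target = os.path.join(root, 'a', 'b', 'c')
makedirs_sketch(target)
makedirs_sketch(target)  # second call hits EEXIST and is a no-op
print(os.path.isdir(target))
```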
def readfile(path):
    with open(path, 'rb') as fp:
        return fp.read()

def writefile(path, text):
    with open(path, 'wb') as fp:
        fp.write(text)

def appendfile(path, text):
    with open(path, 'ab') as fp:
        fp.write(text)

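Together the three helpers form a tiny whole-file I/O API. A quick roundtrip (the helpers are duplicated verbatim so the snippet runs standalone; note they deal in bytes):

```python
import os
import tempfile

def writefile(path, text):
    with open(path, 'wb') as fp:
        fp.write(text)

def appendfile(path, text):
    with open(path, 'ab') as fp:
        fp.write(text)

def readfile(path):
    with open(path, 'rb') as fp:
        return fp.read()

fd, path = tempfile.mkstemp()
os.close(fd)
writefile(path, b'hello ')
appendfile(path, b'world')
data = readfile(path)
os.unlink(path)
print(data)  # b'hello world'
```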
class chunkbuffer(object):
    """Allow arbitrary sized chunks of data to be efficiently read from an
    iterator over chunks of arbitrary size."""

    def __init__(self, in_iter):
        """in_iter is the iterator that's iterating over the input chunks.
        targetsize is how big a buffer to try to maintain."""
        def splitbig(chunks):
            for chunk in chunks:
                if len(chunk) > 2**20:
                    pos = 0
                    while pos < len(chunk):
                        end = pos + 2 ** 18
                        yield chunk[pos:end]
                        pos = end
                else:
                    yield chunk
        self.iter = splitbig(in_iter)
        self._queue = collections.deque()
        self._chunkoffset = 0

    def read(self, l=None):
        """Read L bytes of data from the iterator of chunks of data.
        Returns less than L bytes if the iterator runs dry.

        If size parameter is omitted, read everything"""
        if l is None:
            return ''.join(self.iter)

        left = l
        buf = []
        queue = self._queue
        while left > 0:
            # refill the queue
            if not queue:
                target = 2**18
                for chunk in self.iter:
                    queue.append(chunk)
                    target -= len(chunk)
                    if target <= 0:
                        break
                if not queue:
                    break

            # The easy way to do this would be to queue.popleft(), modify the
            # chunk (if necessary), then queue.appendleft(). However, for cases
            # where we read partial chunk content, this incurs 2 dequeue
            # mutations and creates a new str for the remaining chunk in the
            # queue. Our code below avoids this overhead.

            chunk = queue[0]
            chunkl = len(chunk)
            offset = self._chunkoffset

            # Use full chunk.
            if offset == 0 and left >= chunkl:
                left -= chunkl
                queue.popleft()
                buf.append(chunk)
                # self._chunkoffset remains at 0.
                continue

            chunkremaining = chunkl - offset

            # Use all of unconsumed part of chunk.
            if left >= chunkremaining:
                left -= chunkremaining
                queue.popleft()
                # offset == 0 is enabled by block above, so this won't merely
                # copy via ``chunk[0:]``.
                buf.append(chunk[offset:])
                self._chunkoffset = 0

            # Partial chunk needed.
            else:
                buf.append(chunk[offset:offset + left])
                self._chunkoffset += left
                left -= chunkremaining

        return ''.join(buf)

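The queue-and-offset bookkeeping in `read()` can be seen in isolation with a simplified standalone sketch (our own `chunkbuffer_sketch`: bytes instead of str, and the refill pulls one chunk at a time instead of targeting 2**18 bytes):

```python
import collections

class chunkbuffer_sketch(object):
    """Re-chunk an iterator of variable-size chunks into exact-size reads."""
    def __init__(self, in_iter):
        self.iter = iter(in_iter)
        self._queue = collections.deque()
        self._chunkoffset = 0  # how far into queue[0] we have consumed

    def read(self, l):
        left = l
        buf = []
        queue = self._queue
        while left > 0:
            if not queue:
                try:
                    queue.append(next(self.iter))
                except StopIteration:
                    break  # iterator ran dry; return a short read
            chunk = queue[0]
            offset = self._chunkoffset
            chunkremaining = len(chunk) - offset
            if left >= chunkremaining:
                # consume the rest of the current chunk
                left -= chunkremaining
                queue.popleft()
                buf.append(chunk[offset:])
                self._chunkoffset = 0
            else:
                # take a slice and remember where we stopped
                buf.append(chunk[offset:offset + left])
                self._chunkoffset += left
                left = 0
        return b''.join(buf)

buf = chunkbuffer_sketch([b'ab', b'cdef', b'g'])
first = buf.read(3)    # spans the first two chunks -> b'abc'
second = buf.read(10)  # short read: only b'defg' remains
print(first, second)
```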
def filechunkiter(f, size=65536, limit=None):
    """Create a generator that produces the data in the file size
    (default 65536) bytes at a time, up to optional limit (default is
    to read all data).  Chunks may be less than size bytes if the
    chunk is the last chunk in the file, or the file is a socket or
    some other type of file that sometimes reads less data than is
    requested."""
    assert size >= 0
    assert limit is None or limit >= 0
    while True:
        if limit is None:
            nbytes = size
        else:
            nbytes = min(limit, size)
        s = nbytes and f.read(nbytes)
        if not s:
            break
        if limit:
            limit -= len(s)
        yield s

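Driving the generator with an in-memory file shows both the plain and the `limit` behavior (the generator is copied so the snippet runs standalone; note the `s = nbytes and f.read(nbytes)` idiom also terminates the loop once `limit` reaches 0, since `nbytes` becomes falsy):

```python
import io

def filechunkiter(f, size=65536, limit=None):
    # standalone copy of the generator above
    assert size >= 0
    assert limit is None or limit >= 0
    while True:
        nbytes = size if limit is None else min(limit, size)
        s = nbytes and f.read(nbytes)  # nbytes == 0 short-circuits to 0
        if not s:
            break
        if limit:
            limit -= len(s)
        yield s

f = io.BytesIO(b'abcdefghij')
chunks = list(filechunkiter(f, size=4))
print(chunks)    # [b'abcd', b'efgh', b'ij']

f.seek(0)
limited = list(filechunkiter(f, size=4, limit=6))
print(limited)   # [b'abcd', b'ef']
```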
def makedate(timestamp=None):
    '''Return a unix timestamp (or the current time) as a (unixtime,
    offset) tuple based off the local timezone.'''
    if timestamp is None:
        timestamp = time.time()
    if timestamp < 0:
        hint = _("check your clock")
        raise Abort(_("negative timestamp: %d") % timestamp, hint=hint)
    delta = (datetime.datetime.utcfromtimestamp(timestamp) -
             datetime.datetime.fromtimestamp(timestamp))
    tz = delta.days * 86400 + delta.seconds
    return timestamp, tz

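The offset computation is worth noting: it is the UTC wall clock minus the local wall clock for the same instant, i.e. seconds *west* of UTC (negative for zones east of UTC). A standalone sketch with the Mercurial-specific error handling dropped:

```python
import datetime
import time

def makedate_sketch(timestamp=None):
    if timestamp is None:
        timestamp = time.time()
    # UTC wall clock minus local wall clock for the same instant
    delta = (datetime.datetime.utcfromtimestamp(timestamp) -
             datetime.datetime.fromtimestamp(timestamp))
    tz = delta.days * 86400 + delta.seconds
    return timestamp, tz

when, tz = makedate_sketch(0)
# real-world zones span UTC-12 .. UTC+14, i.e. tz in [-50400, 43200]
print(when, tz)
```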
def datestr(date=None, format='%a %b %d %H:%M:%S %Y %1%2'):
    """represent a (unixtime, offset) tuple as a localized time.
    unixtime is seconds since the epoch, and offset is the time zone's
    number of seconds away from UTC.

    >>> datestr((0, 0))
    'Thu Jan 01 00:00:00 1970 +0000'
    >>> datestr((42, 0))
    'Thu Jan 01 00:00:42 1970 +0000'
    >>> datestr((-42, 0))
    'Wed Dec 31 23:59:18 1969 +0000'
    >>> datestr((0x7fffffff, 0))
    'Tue Jan 19 03:14:07 2038 +0000'
    >>> datestr((-0x80000000, 0))
    'Fri Dec 13 20:45:52 1901 +0000'
    """
    t, tz = date or makedate()
    if "%1" in format or "%2" in format or "%z" in format:
        sign = (tz > 0) and "-" or "+"
        minutes = abs(tz) // 60
        q, r = divmod(minutes, 60)
        format = format.replace("%z", "%1%2")
        format = format.replace("%1", "%c%02d" % (sign, q))
        format = format.replace("%2", "%02d" % r)
    d = t - tz
    if d > 0x7fffffff:
        d = 0x7fffffff
    elif d < -0x80000000:
        d = -0x80000000
    # Never use time.gmtime() and datetime.datetime.fromtimestamp()
    # because they use the gmtime() system call which is buggy on Windows
    # for negative values.
    t = datetime.datetime(1970, 1, 1) + datetime.timedelta(seconds=d)
    s = t.strftime(format)
    return s

def shortdate(date=None):
    """turn (timestamp, tzoff) tuple into ISO 8601 date."""
    return datestr(date, format='%Y-%m-%d')

def parsetimezone(tz):
    """parse a timezone string and return an offset integer"""
    if tz[0] in "+-" and len(tz) == 5 and tz[1:].isdigit():
        sign = (tz[0] == "+") and 1 or -1
        hours = int(tz[1:3])
        minutes = int(tz[3:5])
        return -sign * (hours * 60 + minutes) * 60
    if tz == "GMT" or tz == "UTC":
        return 0
    return None

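A standalone copy of the parser makes the sign convention explicit: the returned offset is in seconds west of UTC, so east-of-UTC zones come out negative:

```python
def parsetimezone(tz):
    # standalone copy of the parser above
    if tz[0] in "+-" and len(tz) == 5 and tz[1:].isdigit():
        sign = 1 if tz[0] == "+" else -1
        hours = int(tz[1:3])
        minutes = int(tz[3:5])
        # offsets are seconds *west* of UTC, hence the negation
        return -sign * (hours * 60 + minutes) * 60
    if tz == "GMT" or tz == "UTC":
        return 0
    return None

print(parsetimezone("+0530"))  # -19800
print(parsetimezone("-0800"))  # 28800
print(parsetimezone("UTC"))    # 0
print(parsetimezone("junk"))   # None
```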
def strdate(string, format, defaults=[]):
    """parse a localized time string and return a (unixtime, offset) tuple.
    if the string cannot be parsed, ValueError is raised."""
    # NOTE: unixtime = localunixtime + offset
    offset, date = parsetimezone(string.split()[-1]), string
    if offset is not None:
        date = " ".join(string.split()[:-1])

    # add missing elements from defaults
    usenow = False # default to using biased defaults
    for part in ("S", "M", "HI", "d", "mb", "yY"): # decreasing specificity
        found = [True for p in part if ("%"+p) in format]
        if not found:
            date += "@" + defaults[part][usenow]
            format += "@%" + part[0]
        else:
            # We've found a specific time element, less specific time
            # elements are relative to today
            usenow = True

    timetuple = time.strptime(date, format)
    localunixtime = int(calendar.timegm(timetuple))
    if offset is None:
        # local timezone
        unixtime = int(time.mktime(timetuple))
        offset = unixtime - localunixtime
    else:
        unixtime = localunixtime + offset
    return unixtime, offset

def parsedate(date, formats=None, bias=None):
    """parse a localized date/time and return a (unixtime, offset) tuple.

    The date may be a "unixtime offset" string or in one of the specified
    formats. If the date already is a (unixtime, offset) tuple, it is returned.

    >>> parsedate(' today ') == parsedate(\
                                  datetime.date.today().strftime('%b %d'))
    True
    >>> parsedate('yesterday ') == parsedate((datetime.date.today() -\
                                               datetime.timedelta(days=1)\
                                              ).strftime('%b %d'))
    True
    >>> now, tz = makedate()
    >>> strnow, strtz = parsedate('now')
    >>> (strnow - now) < 1
    True
    >>> tz == strtz
    True
    """
    if bias is None:
        bias = {}
    if not date:
        return 0, 0
    if isinstance(date, tuple) and len(date) == 2:
        return date
    if not formats:
        formats = defaultdateformats
    date = date.strip()

    if date == 'now' or date == _('now'):
        return makedate()
    if date == 'today' or date == _('today'):
        date = datetime.date.today().strftime('%b %d')
    elif date == 'yesterday' or date == _('yesterday'):
        date = (datetime.date.today() -
                datetime.timedelta(days=1)).strftime('%b %d')

    try:
        when, offset = map(int, date.split(' '))
    except ValueError:
        # fill out defaults
        now = makedate()
        defaults = {}
        for part in ("d", "mb", "yY", "HI", "M", "S"):
            # this piece is for rounding the specific end of unknowns
            b = bias.get(part)
            if b is None:
                if part[0] in "HMS":
                    b = "00"
                else:
                    b = "0"

            # this piece is for matching the generic end to today's date
            n = datestr(now, "%" + part[0])

            defaults[part] = (b, n)

        for format in formats:
            try:
                when, offset = strdate(date, format, defaults)
            except (ValueError, OverflowError):
                pass
            else:
                break
        else:
            raise Abort(_('invalid date: %r') % date)
    # validate explicit (probably user-specified) date and
    # time zone offset. values must fit in signed 32 bits for
    # current 32-bit linux runtimes. timezones go from UTC-12
    # to UTC+14
    if when < -0x80000000 or when > 0x7fffffff:
        raise Abort(_('date exceeds 32 bits: %d') % when)
    if offset < -50400 or offset > 43200:
        raise Abort(_('impossible time zone offset: %d') % offset)
    return when, offset

def matchdate(date):
    """Return a function that matches a given date match specifier

    Formats include:

    '{date}' match a given date to the accuracy provided

    '<{date}' on or before a given date

    '>{date}' on or after a given date

    >>> p1 = parsedate("10:29:59")
    >>> p2 = parsedate("10:30:00")
    >>> p3 = parsedate("10:30:59")
    >>> p4 = parsedate("10:31:00")
    >>> p5 = parsedate("Sep 15 10:30:00 1999")
    >>> f = matchdate("10:30")
    >>> f(p1[0])
    False
    >>> f(p2[0])
    True
    >>> f(p3[0])
    True
    >>> f(p4[0])
    False
    >>> f(p5[0])
    False
    """

    def lower(date):
        d = {'mb': "1", 'd': "1"}
        return parsedate(date, extendeddateformats, d)[0]

    def upper(date):
        d = {'mb': "12", 'HI': "23", 'M': "59", 'S': "59"}
        for days in ("31", "30", "29"):
            try:
                d["d"] = days
                return parsedate(date, extendeddateformats, d)[0]
            except Abort:
                pass
        d["d"] = "28"
        return parsedate(date, extendeddateformats, d)[0]

    date = date.strip()

    if not date:
        raise Abort(_("dates cannot consist entirely of whitespace"))
    elif date[0] == "<":
        if not date[1:]:
            raise Abort(_("invalid day spec, use '<DATE'"))
        when = upper(date[1:])
        return lambda x: x <= when
    elif date[0] == ">":
        if not date[1:]:
            raise Abort(_("invalid day spec, use '>DATE'"))
        when = lower(date[1:])
        return lambda x: x >= when
    elif date[0] == "-":
        try:
            days = int(date[1:])
        except ValueError:
            raise Abort(_("invalid day spec: %s") % date[1:])
        if days < 0:
            raise Abort(_('%s must be nonnegative (see "hg help dates")')
                        % date[1:])
        when = makedate()[0] - days * 3600 * 24
        return lambda x: x >= when
    elif " to " in date:
        a, b = date.split(" to ")
        start, stop = lower(a), upper(b)
        return lambda x: x >= start and x <= stop
    else:
        start, stop = lower(date), upper(date)
        return lambda x: x >= start and x <= stop

def stringmatcher(pattern):
    """
    accepts a string, possibly starting with 're:' or 'literal:' prefix.
    returns the matcher name, pattern, and matcher function.
    missing or unknown prefixes are treated as literal matches.

    helper for tests:
    >>> def test(pattern, *tests):
    ...     kind, pattern, matcher = stringmatcher(pattern)
    ...     return (kind, pattern, [bool(matcher(t)) for t in tests])

    exact matching (no prefix):
    >>> test('abcdefg', 'abc', 'def', 'abcdefg')
    ('literal', 'abcdefg', [False, False, True])

    regex matching ('re:' prefix)
    >>> test('re:a.+b', 'nomatch', 'fooadef', 'fooadefbar')
    ('re', 'a.+b', [False, False, True])

    force exact matches ('literal:' prefix)
    >>> test('literal:re:foobar', 'foobar', 're:foobar')
    ('literal', 're:foobar', [False, True])

    unknown prefixes are ignored and treated as literals
    >>> test('foo:bar', 'foo', 'bar', 'foo:bar')
    ('literal', 'foo:bar', [False, False, True])
    """
    if pattern.startswith('re:'):
        pattern = pattern[3:]
        try:
            regex = remod.compile(pattern)
        except remod.error as e:
            raise error.ParseError(_('invalid regular expression: %s')
                                   % e)
        return 're', pattern, regex.search
    elif pattern.startswith('literal:'):
        pattern = pattern[8:]
    return 'literal', pattern, pattern.__eq__

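The prefix dispatch can be exercised standalone (our `stringmatcher_sketch`, using the stdlib `re` module directly instead of Mercurial's `remod` alias and `error.ParseError` wrapping):

```python
import re

def stringmatcher_sketch(pattern):
    # standalone copy of the prefix dispatch above
    if pattern.startswith('re:'):
        pattern = pattern[3:]
        regex = re.compile(pattern)
        return 're', pattern, regex.search
    elif pattern.startswith('literal:'):
        pattern = pattern[8:]
    return 'literal', pattern, pattern.__eq__

kind1, pat1, matcher1 = stringmatcher_sketch('re:a.+b')
print(kind1, bool(matcher1('fooadefbar')))   # re True

kind2, pat2, matcher2 = stringmatcher_sketch('literal:re:foobar')
print(kind2, matcher2('re:foobar'))          # literal True

kind3, pat3, matcher3 = stringmatcher_sketch('foo:bar')
print(kind3, matcher3('foo:bar'))            # literal True (unknown prefix)
```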
def shortuser(user):
    """Return a short representation of a user name or email address."""
    f = user.find('@')
    if f >= 0:
        user = user[:f]
    f = user.find('<')
    if f >= 0:
        user = user[f + 1:]
    f = user.find(' ')
    if f >= 0:
        user = user[:f]
    f = user.find('.')
    if f >= 0:
        user = user[:f]
    return user

def emailuser(user):
    """Return the user portion of an email address."""
    f = user.find('@')
    if f >= 0:
        user = user[:f]
    f = user.find('<')
    if f >= 0:
        user = user[f + 1:]
    return user

def email(author):
    '''get email of author.'''
    r = author.find('>')
    if r == -1:
        r = None
    return author[author.find('<') + 1:r]

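The `email` helper leans on a small trick: when there is no `<`, `find` returns -1 and `-1 + 1 == 0`, so the slice degrades to the whole string; likewise a missing `>` turns `r` into `None`, making the slice open-ended. A standalone copy:

```python
def email(author):
    # slice between '<' and '>', falling back to the whole string
    r = author.find('>')
    if r == -1:
        r = None          # open-ended slice when '>' is absent
    return author[author.find('<') + 1:r]

print(email('John Doe <jdoe@example.com>'))  # jdoe@example.com
print(email('jdoe@example.com'))             # jdoe@example.com
```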
def ellipsis(text, maxlength=400):
    """Trim string to at most maxlength (default: 400) columns in display."""
    return encoding.trim(text, maxlength, ellipsis='...')

def unitcountfn(*unittable):
    '''return a function that renders a readable count of some quantity'''

    def go(count):
        for multiplier, divisor, format in unittable:
            if count >= divisor * multiplier:
                return format % (count / float(divisor))
        return unittable[-1][2] % count

    return go

bytecount = unitcountfn(
    (100, 1 << 30, _('%.0f GB')),
    (10, 1 << 30, _('%.1f GB')),
    (1, 1 << 30, _('%.2f GB')),
    (100, 1 << 20, _('%.0f MB')),
    (10, 1 << 20, _('%.1f MB')),
    (1, 1 << 20, _('%.2f MB')),
    (100, 1 << 10, _('%.0f KB')),
    (10, 1 << 10, _('%.1f KB')),
    (1, 1 << 10, _('%.2f KB')),
    (1, 1, _('%.0f bytes')),
    )

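`unitcountfn` builds a closure that scans the table top-down (largest unit and multiplier first) and falls through to the last format string for tiny counts. A standalone copy of `bytecount` (plain strings stand in for the gettext-wrapped `_()` ones):

```python
def unitcountfn(*unittable):
    # standalone copy of the closure factory above
    def go(count):
        for multiplier, divisor, format in unittable:
            if count >= divisor * multiplier:
                return format % (count / float(divisor))
        return unittable[-1][2] % count
    return go

bytecount = unitcountfn(
    (100, 1 << 30, '%.0f GB'),
    (10, 1 << 30, '%.1f GB'),
    (1, 1 << 30, '%.2f GB'),
    (100, 1 << 20, '%.0f MB'),
    (10, 1 << 20, '%.1f MB'),
    (1, 1 << 20, '%.2f MB'),
    (100, 1 << 10, '%.0f KB'),
    (10, 1 << 10, '%.1f KB'),
    (1, 1 << 10, '%.2f KB'),
    (1, 1, '%.0f bytes'),
    )

print(bytecount(0))          # 0 bytes
print(bytecount(1 << 20))    # 1.00 MB
print(bytecount(150 << 20))  # 150 MB
```

The multiplier column controls precision: counts at least 100x a unit get no decimals, at least 10x get one, and anything else in that unit gets two.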
def uirepr(s):
    # Avoid double backslash in Windows path repr()
    return repr(s).replace('\\\\', '\\')

2022 # delay import of textwrap
2023 # delay import of textwrap
2023 def MBTextWrapper(**kwargs):
2024 def MBTextWrapper(**kwargs):
2024 class tw(textwrap.TextWrapper):
2025 class tw(textwrap.TextWrapper):
2025 """
2026 """
2026 Extend TextWrapper for width-awareness.
2027 Extend TextWrapper for width-awareness.
2027
2028
2028 Neither number of 'bytes' in any encoding nor 'characters' is
2029 Neither number of 'bytes' in any encoding nor 'characters' is
2029 appropriate to calculate terminal columns for specified string.
2030 appropriate to calculate terminal columns for specified string.
2030
2031
2031 Original TextWrapper implementation uses built-in 'len()' directly,
2032 Original TextWrapper implementation uses built-in 'len()' directly,
2032 so overriding is needed to use width information of each characters.
2033 so overriding is needed to use width information of each characters.
2033
2034
2034 In addition, characters classified into 'ambiguous' width are
2035 In addition, characters classified into 'ambiguous' width are
2035 treated as wide in East Asian area, but as narrow in other.
2036 treated as wide in East Asian area, but as narrow in other.
2036
2037
2037 This requires use decision to determine width of such characters.
2038 This requires use decision to determine width of such characters.
2038 """
2039 """
2039 def _cutdown(self, ucstr, space_left):
2040 def _cutdown(self, ucstr, space_left):
2040 l = 0
2041 l = 0
2041 colwidth = encoding.ucolwidth
2042 colwidth = encoding.ucolwidth
2042 for i in xrange(len(ucstr)):
2043 for i in xrange(len(ucstr)):
2043 l += colwidth(ucstr[i])
2044 l += colwidth(ucstr[i])
2044 if space_left < l:
2045 if space_left < l:
2045 return (ucstr[:i], ucstr[i:])
2046 return (ucstr[:i], ucstr[i:])
2046 return ucstr, ''
2047 return ucstr, ''
2047
2048
2048 # overriding of base class
2049 # overriding of base class
2049 def _handle_long_word(self, reversed_chunks, cur_line, cur_len, width):
2050 def _handle_long_word(self, reversed_chunks, cur_line, cur_len, width):
2050 space_left = max(width - cur_len, 1)
2051 space_left = max(width - cur_len, 1)
2051
2052
2052 if self.break_long_words:
2053 if self.break_long_words:
2053 cut, res = self._cutdown(reversed_chunks[-1], space_left)
2054 cut, res = self._cutdown(reversed_chunks[-1], space_left)
2054 cur_line.append(cut)
2055 cur_line.append(cut)
2055 reversed_chunks[-1] = res
2056 reversed_chunks[-1] = res
2056 elif not cur_line:
2057 elif not cur_line:
2057 cur_line.append(reversed_chunks.pop())
2058 cur_line.append(reversed_chunks.pop())
2058
2059
        # this overriding code is imported from TextWrapper of Python 2.6
        # to calculate columns of string by 'encoding.ucolwidth()'
        def _wrap_chunks(self, chunks):
            colwidth = encoding.ucolwidth

            lines = []
            if self.width <= 0:
                raise ValueError("invalid width %r (must be > 0)" % self.width)

            # Arrange in reverse order so items can be efficiently popped
            # from a stack of chunks.
            chunks.reverse()

            while chunks:

                # Start the list of chunks that will make up the current line.
                # cur_len is just the length of all the chunks in cur_line.
                cur_line = []
                cur_len = 0

                # Figure out which static string will prefix this line.
                if lines:
                    indent = self.subsequent_indent
                else:
                    indent = self.initial_indent

                # Maximum width for this line.
                width = self.width - len(indent)

                # First chunk on line is whitespace -- drop it, unless this
                # is the very beginning of the text (i.e. no lines started yet).
                if self.drop_whitespace and chunks[-1].strip() == '' and lines:
                    del chunks[-1]

                while chunks:
                    l = colwidth(chunks[-1])

                    # Can at least squeeze this chunk onto the current line.
                    if cur_len + l <= width:
                        cur_line.append(chunks.pop())
                        cur_len += l

                    # Nope, this line is full.
                    else:
                        break

                # The current line is full, and the next chunk is too big to
                # fit on *any* line (not just this one).
                if chunks and colwidth(chunks[-1]) > width:
                    self._handle_long_word(chunks, cur_line, cur_len, width)

                # If the last chunk on this line is all whitespace, drop it.
                if (self.drop_whitespace and
                    cur_line and cur_line[-1].strip() == ''):
                    del cur_line[-1]

                # Convert current line back to a string and store it in list
                # of all lines (return value).
                if cur_line:
                    lines.append(indent + ''.join(cur_line))

            return lines

    global MBTextWrapper
    MBTextWrapper = tw
    return tw(**kwargs)

def wrap(line, width, initindent='', hangindent=''):
    maxindent = max(len(hangindent), len(initindent))
    if width <= maxindent:
        # adjust for weird terminal size
        width = max(78, maxindent + 1)
    line = line.decode(encoding.encoding, encoding.encodingmode)
    initindent = initindent.decode(encoding.encoding, encoding.encodingmode)
    hangindent = hangindent.decode(encoding.encoding, encoding.encodingmode)
    wrapper = MBTextWrapper(width=width,
                            initial_indent=initindent,
                            subsequent_indent=hangindent)
    return wrapper.fill(line).encode(encoding.encoding)

def iterlines(iterator):
    for chunk in iterator:
        for line in chunk.splitlines():
            yield line

def expandpath(path):
    return os.path.expanduser(os.path.expandvars(path))

def hgcmd():
    """Return the command used to execute the current hg

    This is different from hgexecutable() because on Windows we want
    to avoid things opening new shell windows like batch files, so we
    get either the python call or the current executable.
    """
    if mainfrozen():
        if getattr(sys, 'frozen', None) == 'macosx_app':
            # Env variable set by py2app
            return [os.environ['EXECUTABLEPATH']]
        else:
            return [sys.executable]
    return gethgcmd()

def rundetached(args, condfn):
    """Execute the argument list in a detached process.

    condfn is a callable which is called repeatedly and should return
    True once the child process is known to have started successfully.
    At this point, the child process PID is returned. If the child
    process fails to start or finishes before condfn() evaluates to
    True, return -1.
    """
    # The Windows case is easier because the child process either starts
    # successfully and validates the condition, or exits on failure. We
    # just poll on its PID. On Unix, if the child process fails to
    # start, it will be left in a zombie state until the parent waits
    # on it, which we cannot do since we expect a long-running process
    # on success. Instead we listen for SIGCHLD telling
    # us our child process terminated.
    terminated = set()
    def handler(signum, frame):
        terminated.add(os.wait())
    prevhandler = None
    SIGCHLD = getattr(signal, 'SIGCHLD', None)
    if SIGCHLD is not None:
        prevhandler = signal.signal(SIGCHLD, handler)
    try:
        pid = spawndetached(args)
        while not condfn():
            if ((pid in terminated or not testpid(pid))
                and not condfn()):
                return -1
            time.sleep(0.1)
        return pid
    finally:
        if prevhandler is not None:
            signal.signal(signal.SIGCHLD, prevhandler)

def interpolate(prefix, mapping, s, fn=None, escape_prefix=False):
    """Return the result of interpolating items in the mapping into string s.

    prefix is a single character string, or a two character string with
    a backslash as the first character if the prefix needs to be escaped in
    a regular expression.

    fn is an optional function that will be applied to the replacement text
    just before replacement.

    escape_prefix is an optional flag that allows using a doubled prefix
    to escape the prefix character itself.
    """
    fn = fn or (lambda s: s)
    patterns = '|'.join(mapping.keys())
    if escape_prefix:
        patterns += '|' + prefix
        if len(prefix) > 1:
            prefix_char = prefix[1:]
        else:
            prefix_char = prefix
        mapping[prefix_char] = prefix_char
    r = remod.compile(r'%s(%s)' % (prefix, patterns))
    return r.sub(lambda x: fn(mapping[x.group()[1:]]), s)

def getport(port):
    """Return the port for a given network service.

    If port is an integer, it's returned as is. If it's a string, it's
    looked up using socket.getservbyname(). If there's no matching
    service, error.Abort is raised.
    """
    try:
        return int(port)
    except ValueError:
        pass

    try:
        return socket.getservbyname(port)
    except socket.error:
        raise Abort(_("no port number associated with service '%s'") % port)

_booleans = {'1': True, 'yes': True, 'true': True, 'on': True, 'always': True,
             '0': False, 'no': False, 'false': False, 'off': False,
             'never': False}

def parsebool(s):
    """Parse s into a boolean.

    If s is not a valid boolean, returns None.

    >>> parsebool('on')
    True
    >>> parsebool('never')
    False
    >>> parsebool('not-a-boolean')
    """
    return _booleans.get(s.lower(), None)

_hexdig = '0123456789ABCDEFabcdef'
_hextochr = dict((a + b, chr(int(a + b, 16)))
                 for a in _hexdig for b in _hexdig)

def _urlunquote(s):
    """Decode HTTP/HTML % encoding.

    >>> _urlunquote('abc%20def')
    'abc def'
    """
    res = s.split('%')
    # fastpath
    if len(res) == 1:
        return s
    s = res[0]
    for item in res[1:]:
        try:
            s += _hextochr[item[:2]] + item[2:]
        except KeyError:
            s += '%' + item
        except UnicodeDecodeError:
            s += unichr(int(item[:2], 16)) + item[2:]
    return s

class url(object):
    r"""Reliable URL parser.

    This parses URLs and provides attributes for the following
    components:

    <scheme>://<user>:<passwd>@<host>:<port>/<path>?<query>#<fragment>

    Missing components are set to None. The only exception is
    fragment, which is set to '' if present but empty.

    If parsefragment is False, fragment is included in query. If
    parsequery is False, query is included in path. If both are
    False, both fragment and query are included in path.

    See http://www.ietf.org/rfc/rfc2396.txt for more information.

    Note that for backward compatibility reasons, bundle URLs do not
    take host names. That means 'bundle://../' has a path of '../'.

    Examples:

    >>> url('http://www.ietf.org/rfc/rfc2396.txt')
    <url scheme: 'http', host: 'www.ietf.org', path: 'rfc/rfc2396.txt'>
    >>> url('ssh://[::1]:2200//home/joe/repo')
    <url scheme: 'ssh', host: '[::1]', port: '2200', path: '/home/joe/repo'>
    >>> url('file:///home/joe/repo')
    <url scheme: 'file', path: '/home/joe/repo'>
    >>> url('file:///c:/temp/foo/')
    <url scheme: 'file', path: 'c:/temp/foo/'>
    >>> url('bundle:foo')
    <url scheme: 'bundle', path: 'foo'>
    >>> url('bundle://../foo')
    <url scheme: 'bundle', path: '../foo'>
    >>> url(r'c:\foo\bar')
    <url path: 'c:\\foo\\bar'>
    >>> url(r'\\blah\blah\blah')
    <url path: '\\\\blah\\blah\\blah'>
    >>> url(r'\\blah\blah\blah#baz')
    <url path: '\\\\blah\\blah\\blah', fragment: 'baz'>
    >>> url(r'file:///C:\users\me')
    <url scheme: 'file', path: 'C:\\users\\me'>

    Authentication credentials:

    >>> url('ssh://joe:xyz@x/repo')
    <url scheme: 'ssh', user: 'joe', passwd: 'xyz', host: 'x', path: 'repo'>
    >>> url('ssh://joe@x/repo')
    <url scheme: 'ssh', user: 'joe', host: 'x', path: 'repo'>

    Query strings and fragments:

    >>> url('http://host/a?b#c')
    <url scheme: 'http', host: 'host', path: 'a', query: 'b', fragment: 'c'>
    >>> url('http://host/a?b#c', parsequery=False, parsefragment=False)
    <url scheme: 'http', host: 'host', path: 'a?b#c'>
    """

    _safechars = "!~*'()+"
    _safepchars = "/!~*'()+:\\"
    _matchscheme = remod.compile(r'^[a-zA-Z0-9+.\-]+:').match

    def __init__(self, path, parsequery=True, parsefragment=True):
        # We slowly chomp away at path until we have only the path left
        self.scheme = self.user = self.passwd = self.host = None
        self.port = self.path = self.query = self.fragment = None
        self._localpath = True
        self._hostport = ''
        self._origpath = path

        if parsefragment and '#' in path:
            path, self.fragment = path.split('#', 1)
            if not path:
                path = None

        # special case for Windows drive letters and UNC paths
        if hasdriveletter(path) or path.startswith(r'\\'):
            self.path = path
            return

        # For compatibility reasons, we can't handle bundle paths as
        # normal URLs
        if path.startswith('bundle:'):
            self.scheme = 'bundle'
            path = path[7:]
            if path.startswith('//'):
                path = path[2:]
            self.path = path
            return

        if self._matchscheme(path):
            parts = path.split(':', 1)
            if parts[0]:
                self.scheme, path = parts
                self._localpath = False

        if not path:
            path = None
            if self._localpath:
                self.path = ''
                return
        else:
            if self._localpath:
                self.path = path
                return

        if parsequery and '?' in path:
            path, self.query = path.split('?', 1)
            if not path:
                path = None
            if not self.query:
                self.query = None

        # // is required to specify a host/authority
        if path and path.startswith('//'):
            parts = path[2:].split('/', 1)
            if len(parts) > 1:
                self.host, path = parts
            else:
                self.host = parts[0]
                path = None
            if not self.host:
                self.host = None
                # path of file:///d is /d
                # path of file:///d:/ is d:/, not /d:/
                if path and not hasdriveletter(path):
                    path = '/' + path

        if self.host and '@' in self.host:
            self.user, self.host = self.host.rsplit('@', 1)
            if ':' in self.user:
                self.user, self.passwd = self.user.split(':', 1)
            if not self.host:
                self.host = None

        # Don't split on colons in IPv6 addresses without ports
        if (self.host and ':' in self.host and
            not (self.host.startswith('[') and self.host.endswith(']'))):
            self._hostport = self.host
            self.host, self.port = self.host.rsplit(':', 1)
            if not self.host:
                self.host = None

        if (self.host and self.scheme == 'file' and
            self.host not in ('localhost', '127.0.0.1', '[::1]')):
            raise Abort(_('file:// URLs can only refer to localhost'))

        self.path = path

        # leave the query string escaped
        for a in ('user', 'passwd', 'host', 'port',
                  'path', 'fragment'):
            v = getattr(self, a)
            if v is not None:
                setattr(self, a, _urlunquote(v))

    def __repr__(self):
        attrs = []
        for a in ('scheme', 'user', 'passwd', 'host', 'port', 'path',
                  'query', 'fragment'):
            v = getattr(self, a)
            if v is not None:
                attrs.append('%s: %r' % (a, v))
        return '<url %s>' % ', '.join(attrs)

    def __str__(self):
        r"""Join the URL's components back into a URL string.

        Examples:

        >>> str(url('http://user:pw@host:80/c:/bob?fo:oo#ba:ar'))
        'http://user:pw@host:80/c:/bob?fo:oo#ba:ar'
        >>> str(url('http://user:pw@host:80/?foo=bar&baz=42'))
        'http://user:pw@host:80/?foo=bar&baz=42'
        >>> str(url('http://user:pw@host:80/?foo=bar%3dbaz'))
        'http://user:pw@host:80/?foo=bar%3dbaz'
        >>> str(url('ssh://user:pw@[::1]:2200//home/joe#'))
        'ssh://user:pw@[::1]:2200//home/joe#'
        >>> str(url('http://localhost:80//'))
        'http://localhost:80//'
        >>> str(url('http://localhost:80/'))
        'http://localhost:80/'
        >>> str(url('http://localhost:80'))
        'http://localhost:80/'
        >>> str(url('bundle:foo'))
        'bundle:foo'
        >>> str(url('bundle://../foo'))
        'bundle:../foo'
        >>> str(url('path'))
        'path'
        >>> str(url('file:///tmp/foo/bar'))
        'file:///tmp/foo/bar'
        >>> str(url('file:///c:/tmp/foo/bar'))
        'file:///c:/tmp/foo/bar'
        >>> print url(r'bundle:foo\bar')
        bundle:foo\bar
        >>> print url(r'file:///D:\data\hg')
        file:///D:\data\hg
        """
        if self._localpath:
            s = self.path
            if self.scheme == 'bundle':
                s = 'bundle:' + s
            if self.fragment:
                s += '#' + self.fragment
            return s

        s = self.scheme + ':'
        if self.user or self.passwd or self.host:
            s += '//'
        elif self.scheme and (not self.path or self.path.startswith('/')
                              or hasdriveletter(self.path)):
            s += '//'
            if hasdriveletter(self.path):
                s += '/'
        if self.user:
            s += urlreq.quote(self.user, safe=self._safechars)
        if self.passwd:
            s += ':' + urlreq.quote(self.passwd, safe=self._safechars)
        if self.user or self.passwd:
            s += '@'
        if self.host:
            if not (self.host.startswith('[') and self.host.endswith(']')):
                s += urlreq.quote(self.host)
            else:
                s += self.host
        if self.port:
            s += ':' + urlreq.quote(self.port)
        if self.host:
            s += '/'
        if self.path:
            # TODO: similar to the query string, we should not unescape the
            # path when we store it, the path might contain '%2f' = '/',
            # which we should *not* escape.
            s += urlreq.quote(self.path, safe=self._safepchars)
        if self.query:
            # we store the query in escaped form.
            s += '?' + self.query
        if self.fragment is not None:
            s += '#' + urlreq.quote(self.fragment, safe=self._safepchars)
        return s

    def authinfo(self):
        user, passwd = self.user, self.passwd
        try:
            self.user, self.passwd = None, None
            s = str(self)
        finally:
            self.user, self.passwd = user, passwd
        if not self.user:
            return (s, None)
        # authinfo[1] is passed to urllib2 password manager, and its
        # URIs must not contain credentials. The host is passed in the
        # URIs list because Python < 2.4.3 uses only that to search for
        # a password.
        return (s, (None, (s, self.host),
                    self.user, self.passwd or ''))

    def isabs(self):
        if self.scheme and self.scheme != 'file':
            return True # remote URL
        if hasdriveletter(self.path):
            return True # absolute for our purposes - can't be joined()
        if self.path.startswith(r'\\'):
            return True # Windows UNC path
        if self.path.startswith('/'):
            return True # POSIX-style
        return False

    def localpath(self):
        if self.scheme == 'file' or self.scheme == 'bundle':
            path = self.path or '/'
            # For Windows, we need to promote hosts containing drive
            # letters to paths with drive letters.
            if hasdriveletter(self._hostport):
                path = self._hostport + '/' + self.path
            elif (self.host is not None and self.path
                  and not hasdriveletter(path)):
                path = '/' + path
            return path
        return self._origpath

2556 def islocal(self):
2557 def islocal(self):
2557 '''whether localpath will return something that posixfile can open'''
2558 '''whether localpath will return something that posixfile can open'''
2558 return (not self.scheme or self.scheme == 'file'
2559 return (not self.scheme or self.scheme == 'file'
2559 or self.scheme == 'bundle')
2560 or self.scheme == 'bundle')
2560
2561
2561 def hasscheme(path):
2562 def hasscheme(path):
2562 return bool(url(path).scheme)
2563 return bool(url(path).scheme)
2563
2564
2564 def hasdriveletter(path):
2565 def hasdriveletter(path):
2565 return path and path[1:2] == ':' and path[0:1].isalpha()
2566 return path and path[1:2] == ':' and path[0:1].isalpha()
2566
2567
2567 def urllocalpath(path):
2568 def urllocalpath(path):
2568 return url(path, parsequery=False, parsefragment=False).localpath()
2569 return url(path, parsequery=False, parsefragment=False).localpath()
2569
2570
2570 def hidepassword(u):
2571 def hidepassword(u):
2571 '''hide user credential in a url string'''
2572 '''hide user credential in a url string'''
2572 u = url(u)
2573 u = url(u)
2573 if u.passwd:
2574 if u.passwd:
2574 u.passwd = '***'
2575 u.passwd = '***'
2575 return str(u)
2576 return str(u)
2576
2577
2577 def removeauth(u):
2578 def removeauth(u):
2578 '''remove all authentication information from a url string'''
2579 '''remove all authentication information from a url string'''
2579 u = url(u)
2580 u = url(u)
2580 u.user = u.passwd = None
2581 u.user = u.passwd = None
2581 return str(u)
2582 return str(u)
2582
2583
def isatty(fp):
    try:
        return fp.isatty()
    except AttributeError:
        return False

timecount = unitcountfn(
    (1, 1e3, _('%.0f s')),
    (100, 1, _('%.1f s')),
    (10, 1, _('%.2f s')),
    (1, 1, _('%.3f s')),
    (100, 0.001, _('%.1f ms')),
    (10, 0.001, _('%.2f ms')),
    (1, 0.001, _('%.3f ms')),
    (100, 0.000001, _('%.1f us')),
    (10, 0.000001, _('%.2f us')),
    (1, 0.000001, _('%.3f us')),
    (100, 0.000000001, _('%.1f ns')),
    (10, 0.000000001, _('%.2f ns')),
    (1, 0.000000001, _('%.3f ns')),
    )

_timenesting = [0]

def timed(func):
    '''Report the execution time of a function call to stderr.

    During development, use as a decorator when you need to measure
    the cost of a function, e.g. as follows:

    @util.timed
    def foo(a, b, c):
        pass
    '''

    def wrapper(*args, **kwargs):
        start = time.time()
        indent = 2
        _timenesting[0] += indent
        try:
            return func(*args, **kwargs)
        finally:
            elapsed = time.time() - start
            _timenesting[0] -= indent
            sys.stderr.write('%s%s: %s\n' %
                             (' ' * _timenesting[0], func.__name__,
                              timecount(elapsed)))
    return wrapper

_sizeunits = (('m', 2**20), ('k', 2**10), ('g', 2**30),
              ('kb', 2**10), ('mb', 2**20), ('gb', 2**30), ('b', 1))

def sizetoint(s):
    '''Convert a space specifier to a byte count.

    >>> sizetoint('30')
    30
    >>> sizetoint('2.2kb')
    2252
    >>> sizetoint('6M')
    6291456
    '''
    t = s.strip().lower()
    try:
        for k, u in _sizeunits:
            if t.endswith(k):
                return int(float(t[:-len(k)]) * u)
        return int(t)
    except ValueError:
        raise error.ParseError(_("couldn't parse size: %s") % s)

class hooks(object):
    '''A collection of hook functions that can be used to extend a
    function's behavior. Hooks are called in lexicographic order,
    based on the names of their sources.'''

    def __init__(self):
        self._hooks = []

    def add(self, source, hook):
        self._hooks.append((source, hook))

    def __call__(self, *args):
        self._hooks.sort(key=lambda x: x[0])
        results = []
        for source, hook in self._hooks:
            results.append(hook(*args))
        return results

def getstackframes(skip=0, line=' %-*s in %s\n', fileline='%s:%s'):
    '''Yields lines for a nicely formatted stacktrace.
    Skips the 'skip' last entries.
    Each file+linenumber is formatted according to fileline.
    Each line is formatted according to line.
    If line is None, it yields:
      length of longest filepath+line number,
      filepath+linenumber,
      function

    Not be used in production code but very convenient while developing.
    '''
    entries = [(fileline % (fn, ln), func)
        for fn, ln, func, _text in traceback.extract_stack()[:-skip - 1]]
    if entries:
        fnmax = max(len(entry[0]) for entry in entries)
        for fnln, func in entries:
            if line is None:
                yield (fnmax, fnln, func)
            else:
                yield line % (fnmax, fnln, func)

def debugstacktrace(msg='stacktrace', skip=0, f=sys.stderr, otherf=sys.stdout):
    '''Writes a message to f (stderr) with a nicely formatted stacktrace.
    Skips the 'skip' last entries. By default it will flush stdout first.
    It can be used everywhere and intentionally does not require an ui object.
    Not be used in production code but very convenient while developing.
    '''
    if otherf:
        otherf.flush()
    f.write('%s at:\n' % msg)
    for line in getstackframes(skip + 1):
        f.write(line)
    f.flush()

class dirs(object):
    '''a multiset of directory names from a dirstate or manifest'''

    def __init__(self, map, skip=None):
        self._dirs = {}
        addpath = self.addpath
        if safehasattr(map, 'iteritems') and skip is not None:
            for f, s in map.iteritems():
                if s[0] != skip:
                    addpath(f)
        else:
            for f in map:
                addpath(f)

    def addpath(self, path):
        dirs = self._dirs
        for base in finddirs(path):
            if base in dirs:
                dirs[base] += 1
                return
            dirs[base] = 1

    def delpath(self, path):
        dirs = self._dirs
        for base in finddirs(path):
            if dirs[base] > 1:
                dirs[base] -= 1
                return
            del dirs[base]

    def __iter__(self):
        return self._dirs.iterkeys()

    def __contains__(self, d):
        return d in self._dirs

if safehasattr(parsers, 'dirs'):
    dirs = parsers.dirs

def finddirs(path):
    pos = path.rfind('/')
    while pos != -1:
        yield path[:pos]
        pos = path.rfind('/', 0, pos)

# compression utility

class nocompress(object):
    def compress(self, x):
        return x
    def flush(self):
        return ""

compressors = {
    None: nocompress,
    # lambda to prevent early import
    'BZ': lambda: bz2.BZ2Compressor(),
    'GZ': lambda: zlib.compressobj(),
    }
# also support the old form by courtesies
compressors['UN'] = compressors[None]

def _makedecompressor(decompcls):
    def generator(f):
        d = decompcls()
        for chunk in filechunkiter(f):
            yield d.decompress(chunk)
    def func(fh):
        return chunkbuffer(generator(fh))
    return func

class ctxmanager(object):
    '''A context manager for use in 'with' blocks to allow multiple
    contexts to be entered at once. This is both safer and more
    flexible than contextlib.nested.

    Once Mercurial supports Python 2.7+, this will become mostly
    unnecessary.
    '''

    def __init__(self, *args):
        '''Accepts a list of no-argument functions that return context
        managers. These will be invoked at __call__ time.'''
        self._pending = args
        self._atexit = []

    def __enter__(self):
        return self

    def enter(self):
        '''Create and enter context managers in the order in which they were
        passed to the constructor.'''
        values = []
        for func in self._pending:
            obj = func()
            values.append(obj.__enter__())
            self._atexit.append(obj.__exit__)
        del self._pending
        return values

    def atexit(self, func, *args, **kwargs):
        '''Add a function to call when this context manager exits. The
        ordering of multiple atexit calls is unspecified, save that
        they will happen before any __exit__ functions.'''
        def wrapper(exc_type, exc_val, exc_tb):
            func(*args, **kwargs)
        self._atexit.append(wrapper)
        return func

    def __exit__(self, exc_type, exc_val, exc_tb):
        '''Context managers are exited in the reverse order from which
        they were created.'''
        received = exc_type is not None
        suppressed = False
        pending = None
        self._atexit.reverse()
        for exitfunc in self._atexit:
            try:
                if exitfunc(exc_type, exc_val, exc_tb):
                    suppressed = True
                    exc_type = None
                    exc_val = None
                    exc_tb = None
            except BaseException:
                pending = sys.exc_info()
                exc_type, exc_val, exc_tb = pending = sys.exc_info()
        del self._atexit
        if pending:
            raise exc_val
        return received and suppressed

def _bz2():
    d = bz2.BZ2Decompressor()
    # Bzip2 stream start with BZ, but we stripped it.
    # we put it back for good measure.
    d.decompress('BZ')
    return d

decompressors = {None: lambda fh: fh,
                 '_truncatedBZ': _makedecompressor(_bz2),
                 'BZ': _makedecompressor(lambda: bz2.BZ2Decompressor()),
                 'GZ': _makedecompressor(lambda: zlib.decompressobj()),
                 }
# also support the old form by courtesies
decompressors['UN'] = decompressors[None]

# convenient shortcut
dst = debugstacktrace
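The `sizetoint()` helper in the hunk above is self-contained enough to try out on its own. A minimal sketch follows, with Mercurial's `error.ParseError` and `_()` swapped for plain `ValueError` and no translation so the snippet runs standalone (those substitutions are mine, not part of the commit):

```python
# Sketch of util.sizetoint(): suffix tuple ordering matters, because
# 'kb' must be tested (via endswith) before the bare 'b' entry.
_sizeunits = (('m', 2**20), ('k', 2**10), ('g', 2**30),
              ('kb', 2**10), ('mb', 2**20), ('gb', 2**30), ('b', 1))

def sizetoint(s):
    """Convert a size specifier such as '2.2kb' or '6M' to a byte count."""
    t = s.strip().lower()
    try:
        for k, u in _sizeunits:
            if t.endswith(k):
                return int(float(t[:-len(k)]) * u)
        return int(t)
    except ValueError:
        raise ValueError("couldn't parse size: %s" % s)

print(sizetoint('30'))      # 30
print(sizetoint('2.2kb'))   # 2252
print(sizetoint('6M'))      # 6291456
```

These three calls mirror the doctests embedded in the function's docstring above.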
@@ -1,151 +1,151
 #require test-repo

   $ . "$TESTDIR/helpers-testrepo.sh"
   $ cd "$TESTDIR"/..

   $ hg files 'set:(**.py)' | sed 's|\\|/|g' | xargs python contrib/check-py3-compat.py
   hgext/fsmonitor/pywatchman/__init__.py not using absolute_import
   hgext/fsmonitor/pywatchman/__init__.py requires print_function
   hgext/fsmonitor/pywatchman/capabilities.py not using absolute_import
   hgext/fsmonitor/pywatchman/pybser.py not using absolute_import
   hgext/highlight/__init__.py not using absolute_import
   hgext/highlight/highlight.py not using absolute_import
   hgext/share.py not using absolute_import
   hgext/win32text.py not using absolute_import
   i18n/check-translation.py not using absolute_import
   i18n/polib.py not using absolute_import
   setup.py not using absolute_import
   tests/heredoctest.py requires print_function
   tests/md5sum.py not using absolute_import
   tests/readlink.py not using absolute_import
   tests/run-tests.py not using absolute_import
   tests/test-demandimport.py not using absolute_import

 #if py3exe
   $ hg files 'set:(**.py)' | sed 's|\\|/|g' | xargs $PYTHON3 contrib/check-py3-compat.py
   doc/hgmanpage.py: invalid syntax: invalid syntax (<unknown>, line *) (glob)
   hgext/automv.py: error importing module: <SyntaxError> invalid syntax (commands.py, line *) (line *) (glob)
   hgext/blackbox.py: error importing: <AttributeError> 'dict' object has no attribute 'iteritems' (error at revset.py:*) (glob)
-  hgext/bugzilla.py: error importing module: <ImportError> No module named 'xmlrpclib' (line *) (glob)
+  hgext/bugzilla.py: error importing: <AttributeError> 'dict' object has no attribute 'iteritems' (error at revset.py:*) (glob)
   hgext/censor.py: error importing: <AttributeError> 'dict' object has no attribute 'iteritems' (error at revset.py:*) (glob)
   hgext/chgserver.py: error importing module: <ImportError> No module named 'SocketServer' (line *) (glob)
   hgext/children.py: error importing: <AttributeError> 'dict' object has no attribute 'iteritems' (error at revset.py:*) (glob)
   hgext/churn.py: error importing: <AttributeError> 'dict' object has no attribute 'iteritems' (error at revset.py:*) (glob)
   hgext/clonebundles.py: error importing: <AttributeError> 'dict' object has no attribute 'iteritems' (error at revset.py:*) (glob)
   hgext/color.py: invalid syntax: invalid syntax (<unknown>, line *) (glob)
   hgext/convert/bzr.py: error importing module: <SystemError> Parent module 'hgext.convert' not loaded, cannot perform relative import (line *) (glob)
   hgext/convert/convcmd.py: error importing: <SyntaxError> invalid syntax (bundle*.py, line *) (error at bundlerepo.py:*) (glob)
   hgext/convert/cvs.py: error importing module: <SystemError> Parent module 'hgext.convert' not loaded, cannot perform relative import (line *) (glob)
   hgext/convert/cvsps.py: error importing: <AttributeError> 'dict' object has no attribute 'iteritems' (error at revset.py:*) (glob)
   hgext/convert/darcs.py: error importing module: <SystemError> Parent module 'hgext.convert' not loaded, cannot perform relative import (line *) (glob)
   hgext/convert/filemap.py: error importing module: <SystemError> Parent module 'hgext.convert' not loaded, cannot perform relative import (line *) (glob)
   hgext/convert/git.py: error importing module: <SystemError> Parent module 'hgext.convert' not loaded, cannot perform relative import (line *) (glob)
   hgext/convert/gnuarch.py: error importing module: <SystemError> Parent module 'hgext.convert' not loaded, cannot perform relative import (line *) (glob)
   hgext/convert/hg.py: error importing: <AttributeError> 'dict' object has no attribute 'iteritems' (error at revset.py:*) (glob)
   hgext/convert/monotone.py: error importing module: <SystemError> Parent module 'hgext.convert' not loaded, cannot perform relative import (line *) (glob)
   hgext/convert/p*.py: error importing module: <SystemError> Parent module 'hgext.convert' not loaded, cannot perform relative import (line *) (glob)
   hgext/convert/subversion.py: error importing: <AttributeError> 'dict' object has no attribute 'iteritems' (error at revset.py:*) (glob)
   hgext/convert/transport.py: error importing module: <ImportError> No module named 'svn.client' (line *) (glob)
   hgext/eol.py: error importing: <AttributeError> 'dict' object has no attribute 'iteritems' (error at revset.py:*) (glob)
   hgext/extdiff.py: error importing module: <SyntaxError> invalid syntax (archival.py, line *) (line *) (glob)
   hgext/factotum.py: error importing: <ImportError> No module named 'rfc822' (error at __init__.py:*) (glob)
   hgext/fetch.py: error importing: <AttributeError> 'dict' object has no attribute 'iteritems' (error at revset.py:*) (glob)
   hgext/fsmonitor/watchmanclient.py: error importing module: <SystemError> Parent module 'hgext.fsmonitor' not loaded, cannot perform relative import (line *) (glob)
   hgext/gpg.py: error importing: <AttributeError> 'dict' object has no attribute 'iteritems' (error at revset.py:*) (glob)
   hgext/graphlog.py: error importing: <AttributeError> 'dict' object has no attribute 'iteritems' (error at revset.py:*) (glob)
   hgext/hgk.py: error importing: <AttributeError> 'dict' object has no attribute 'iteritems' (error at revset.py:*) (glob)
   hgext/histedit.py: error importing module: <SyntaxError> invalid syntax (bundle*.py, line *) (line *) (glob)
   hgext/keyword.py: error importing: <ImportError> No module named 'BaseHTTPServer' (error at common.py:*) (glob)
   hgext/largefiles/basestore.py: error importing module: <SystemError> Parent module 'hgext.largefiles' not loaded, cannot perform relative import (line *) (glob)
   hgext/largefiles/lfcommands.py: error importing: <AttributeError> 'dict' object has no attribute 'iteritems' (error at revset.py:*) (glob)
   hgext/largefiles/lfutil.py: error importing: <AttributeError> 'dict' object has no attribute 'iteritems' (error at revset.py:*) (glob)
   hgext/largefiles/localstore.py: error importing module: <SystemError> Parent module 'hgext.largefiles' not loaded, cannot perform relative import (line *) (glob)
   hgext/largefiles/overrides.py: error importing module: <SyntaxError> invalid syntax (archival.py, line *) (line *) (glob)
   hgext/largefiles/proto.py: error importing: <ImportError> No module named 'httplib' (error at httppeer.py:*) (glob)
   hgext/largefiles/remotestore.py: error importing: <SyntaxError> invalid syntax (bundle*.py, line *) (error at wireproto.py:*) (glob)
   hgext/largefiles/reposetup.py: error importing: <AttributeError> 'dict' object has no attribute 'iteritems' (error at revset.py:*) (glob)
   hgext/largefiles/storefactory.py: error importing: <SyntaxError> invalid syntax (bundle2.py, line *) (error at bundlerepo.py:*) (glob)
   hgext/largefiles/uisetup.py: error importing: <ImportError> No module named 'BaseHTTPServer' (error at common.py:*) (glob)
   hgext/largefiles/wirestore.py: error importing module: <SystemError> Parent module 'hgext.largefiles' not loaded, cannot perform relative import (line *) (glob)
   hgext/mq.py: error importing: <AttributeError> 'dict' object has no attribute 'iteritems' (error at revset.py:*) (glob)
   hgext/notify.py: error importing: <AttributeError> 'dict' object has no attribute 'iteritems' (error at revset.py:*) (glob)
   hgext/pager.py: error importing: <AttributeError> 'dict' object has no attribute 'iteritems' (error at revset.py:*) (glob)
   hgext/patchbomb.py: error importing: <AttributeError> 'dict' object has no attribute 'iteritems' (error at revset.py:*) (glob)
   hgext/purge.py: error importing: <AttributeError> 'dict' object has no attribute 'iteritems' (error at revset.py:*) (glob)
   hgext/rebase.py: error importing: <AttributeError> 'dict' object has no attribute 'iteritems' (error at revset.py:*) (glob)
   hgext/record.py: error importing: <AttributeError> 'dict' object has no attribute 'iteritems' (error at revset.py:*) (glob)
   hgext/relink.py: error importing: <AttributeError> 'dict' object has no attribute 'iteritems' (error at revset.py:*) (glob)
   hgext/schemes.py: error importing: <AttributeError> 'dict' object has no attribute 'iteritems' (error at revset.py:*) (glob)
   hgext/share.py: error importing: <AttributeError> 'dict' object has no attribute 'iteritems' (error at revset.py:*) (glob)
80 hgext/shelve.py: error importing module: <SyntaxError> invalid syntax (bundle*.py, line *) (line *) (glob)
80 hgext/shelve.py: error importing module: <SyntaxError> invalid syntax (bundle*.py, line *) (line *) (glob)
81 hgext/strip.py: error importing: <AttributeError> 'dict' object has no attribute 'iteritems' (error at revset.py:*) (glob)
81 hgext/strip.py: error importing: <AttributeError> 'dict' object has no attribute 'iteritems' (error at revset.py:*) (glob)
82 hgext/transplant.py: error importing: <SyntaxError> invalid syntax (bundle*.py, line *) (error at bundlerepo.py:*) (glob)
82 hgext/transplant.py: error importing: <SyntaxError> invalid syntax (bundle*.py, line *) (error at bundlerepo.py:*) (glob)
83 mercurial/archival.py: invalid syntax: invalid syntax (<unknown>, line *) (glob)
83 mercurial/archival.py: invalid syntax: invalid syntax (<unknown>, line *) (glob)
84 mercurial/branchmap.py: error importing: <AttributeError> 'dict' object has no attribute 'iteritems' (error at revset.py:*) (glob)
84 mercurial/branchmap.py: error importing: <AttributeError> 'dict' object has no attribute 'iteritems' (error at revset.py:*) (glob)
85 mercurial/bundle*.py: invalid syntax: invalid syntax (<unknown>, line *) (glob)
85 mercurial/bundle*.py: invalid syntax: invalid syntax (<unknown>, line *) (glob)
86 mercurial/bundlerepo.py: error importing module: <SyntaxError> invalid syntax (bundle*.py, line *) (line *) (glob)
86 mercurial/bundlerepo.py: error importing module: <SyntaxError> invalid syntax (bundle*.py, line *) (line *) (glob)
87 mercurial/changegroup.py: error importing: <AttributeError> 'dict' object has no attribute 'iteritems' (error at revset.py:*) (glob)
87 mercurial/changegroup.py: error importing: <AttributeError> 'dict' object has no attribute 'iteritems' (error at revset.py:*) (glob)
88 mercurial/changelog.py: error importing: <AttributeError> 'dict' object has no attribute 'iteritems' (error at revset.py:*) (glob)
88 mercurial/changelog.py: error importing: <AttributeError> 'dict' object has no attribute 'iteritems' (error at revset.py:*) (glob)
89 mercurial/cmdutil.py: error importing: <AttributeError> 'dict' object has no attribute 'iteritems' (error at revset.py:*) (glob)
89 mercurial/cmdutil.py: error importing: <AttributeError> 'dict' object has no attribute 'iteritems' (error at revset.py:*) (glob)
90 mercurial/commands.py: invalid syntax: invalid syntax (<unknown>, line *) (glob)
90 mercurial/commands.py: invalid syntax: invalid syntax (<unknown>, line *) (glob)
91 mercurial/commandserver.py: error importing module: <ImportError> No module named 'SocketServer' (line *) (glob)
91 mercurial/commandserver.py: error importing module: <ImportError> No module named 'SocketServer' (line *) (glob)
92 mercurial/context.py: error importing: <AttributeError> 'dict' object has no attribute 'iteritems' (error at revset.py:*) (glob)
92 mercurial/context.py: error importing: <AttributeError> 'dict' object has no attribute 'iteritems' (error at revset.py:*) (glob)
93 mercurial/copies.py: error importing: <AttributeError> 'dict' object has no attribute 'iteritems' (error at revset.py:*) (glob)
93 mercurial/copies.py: error importing: <AttributeError> 'dict' object has no attribute 'iteritems' (error at revset.py:*) (glob)
94 mercurial/crecord.py: error importing: <AttributeError> 'dict' object has no attribute 'iteritems' (error at revset.py:*) (glob)
94 mercurial/crecord.py: error importing: <AttributeError> 'dict' object has no attribute 'iteritems' (error at revset.py:*) (glob)
95 mercurial/dirstate.py: error importing: <AttributeError> 'dict' object has no attribute 'iteritems' (error at revset.py:*) (glob)
95 mercurial/dirstate.py: error importing: <AttributeError> 'dict' object has no attribute 'iteritems' (error at revset.py:*) (glob)
96 mercurial/discovery.py: error importing: <AttributeError> 'dict' object has no attribute 'iteritems' (error at revset.py:*) (glob)
96 mercurial/discovery.py: error importing: <AttributeError> 'dict' object has no attribute 'iteritems' (error at revset.py:*) (glob)
97 mercurial/dispatch.py: error importing: <AttributeError> 'dict' object has no attribute 'iteritems' (error at revset.py:*) (glob)
97 mercurial/dispatch.py: error importing: <AttributeError> 'dict' object has no attribute 'iteritems' (error at revset.py:*) (glob)
98 mercurial/exchange.py: error importing module: <SyntaxError> invalid syntax (bundle*.py, line *) (line *) (glob)
98 mercurial/exchange.py: error importing module: <SyntaxError> invalid syntax (bundle*.py, line *) (line *) (glob)
99 mercurial/extensions.py: error importing: <AttributeError> 'dict' object has no attribute 'iteritems' (error at revset.py:*) (glob)
99 mercurial/extensions.py: error importing: <AttributeError> 'dict' object has no attribute 'iteritems' (error at revset.py:*) (glob)
100 mercurial/filelog.py: error importing: <AttributeError> 'dict' object has no attribute 'iteritems' (error at revset.py:*) (glob)
100 mercurial/filelog.py: error importing: <AttributeError> 'dict' object has no attribute 'iteritems' (error at revset.py:*) (glob)
101 mercurial/filemerge.py: error importing: <AttributeError> 'dict' object has no attribute 'iteritems' (error at revset.py:*) (glob)
101 mercurial/filemerge.py: error importing: <AttributeError> 'dict' object has no attribute 'iteritems' (error at revset.py:*) (glob)
102 mercurial/fileset.py: error importing: <AttributeError> 'dict' object has no attribute 'iteritems' (error at revset.py:*) (glob)
102 mercurial/fileset.py: error importing: <AttributeError> 'dict' object has no attribute 'iteritems' (error at revset.py:*) (glob)
103 mercurial/formatter.py: error importing: <AttributeError> 'dict' object has no attribute 'iteritems' (error at revset.py:*) (glob)
103 mercurial/formatter.py: error importing: <AttributeError> 'dict' object has no attribute 'iteritems' (error at revset.py:*) (glob)
104 mercurial/graphmod.py: error importing: <AttributeError> 'dict' object has no attribute 'iteritems' (error at revset.py:*) (glob)
104 mercurial/graphmod.py: error importing: <AttributeError> 'dict' object has no attribute 'iteritems' (error at revset.py:*) (glob)
105 mercurial/help.py: error importing: <AttributeError> 'dict' object has no attribute 'iteritems' (error at revset.py:*) (glob)
105 mercurial/help.py: error importing: <AttributeError> 'dict' object has no attribute 'iteritems' (error at revset.py:*) (glob)
106 mercurial/hg.py: error importing: <SyntaxError> invalid syntax (bundle*.py, line *) (error at bundlerepo.py:*) (glob)
106 mercurial/hg.py: error importing: <SyntaxError> invalid syntax (bundle*.py, line *) (error at bundlerepo.py:*) (glob)
107 mercurial/hgweb/common.py: error importing module: <ImportError> No module named 'BaseHTTPServer' (line *) (glob)
107 mercurial/hgweb/common.py: error importing module: <ImportError> No module named 'BaseHTTPServer' (line *) (glob)
108 mercurial/hgweb/hgweb_mod.py: error importing module: <SystemError> Parent module 'mercurial.hgweb' not loaded, cannot perform relative import (line *) (glob)
108 mercurial/hgweb/hgweb_mod.py: error importing module: <SystemError> Parent module 'mercurial.hgweb' not loaded, cannot perform relative import (line *) (glob)
109 mercurial/hgweb/hgwebdir_mod.py: error importing module: <SystemError> Parent module 'mercurial.hgweb' not loaded, cannot perform relative import (line *) (glob)
109 mercurial/hgweb/hgwebdir_mod.py: error importing module: <SystemError> Parent module 'mercurial.hgweb' not loaded, cannot perform relative import (line *) (glob)
110 mercurial/hgweb/protocol.py: error importing module: <SystemError> Parent module 'mercurial.hgweb' not loaded, cannot perform relative import (line *) (glob)
110 mercurial/hgweb/protocol.py: error importing module: <SystemError> Parent module 'mercurial.hgweb' not loaded, cannot perform relative import (line *) (glob)
111 mercurial/hgweb/request.py: error importing module: <SystemError> Parent module 'mercurial.hgweb' not loaded, cannot perform relative import (line *) (glob)
111 mercurial/hgweb/request.py: error importing module: <SystemError> Parent module 'mercurial.hgweb' not loaded, cannot perform relative import (line *) (glob)
112 mercurial/hgweb/server.py: error importing module: <ImportError> No module named 'BaseHTTPServer' (line *) (glob)
112 mercurial/hgweb/server.py: error importing module: <ImportError> No module named 'BaseHTTPServer' (line *) (glob)
113 mercurial/hgweb/webcommands.py: error importing module: <SystemError> Parent module 'mercurial.hgweb' not loaded, cannot perform relative import (line *) (glob)
113 mercurial/hgweb/webcommands.py: error importing module: <SystemError> Parent module 'mercurial.hgweb' not loaded, cannot perform relative import (line *) (glob)
114 mercurial/hgweb/webutil.py: error importing module: <SystemError> Parent module 'mercurial.hgweb' not loaded, cannot perform relative import (line *) (glob)
114 mercurial/hgweb/webutil.py: error importing module: <SystemError> Parent module 'mercurial.hgweb' not loaded, cannot perform relative import (line *) (glob)
115 mercurial/hgweb/wsgicgi.py: error importing module: <SystemError> Parent module 'mercurial.hgweb' not loaded, cannot perform relative import (line *) (glob)
115 mercurial/hgweb/wsgicgi.py: error importing module: <SystemError> Parent module 'mercurial.hgweb' not loaded, cannot perform relative import (line *) (glob)
116 mercurial/hook.py: error importing: <AttributeError> 'dict' object has no attribute 'iteritems' (error at revset.py:*) (glob)
116 mercurial/hook.py: error importing: <AttributeError> 'dict' object has no attribute 'iteritems' (error at revset.py:*) (glob)
117 mercurial/httpconnection.py: error importing: <ImportError> No module named 'rfc822' (error at __init__.py:*) (glob)
117 mercurial/httpconnection.py: error importing: <ImportError> No module named 'rfc822' (error at __init__.py:*) (glob)
118 mercurial/httppeer.py: error importing module: <ImportError> No module named 'httplib' (line *) (glob)
118 mercurial/httppeer.py: error importing module: <ImportError> No module named 'httplib' (line *) (glob)
119 mercurial/keepalive.py: error importing module: <ImportError> No module named 'httplib' (line *) (glob)
119 mercurial/keepalive.py: error importing module: <ImportError> No module named 'httplib' (line *) (glob)
120 mercurial/localrepo.py: error importing: <AttributeError> 'dict' object has no attribute 'iteritems' (error at revset.py:*) (glob)
120 mercurial/localrepo.py: error importing: <AttributeError> 'dict' object has no attribute 'iteritems' (error at revset.py:*) (glob)
121 mercurial/mail.py: error importing module: <AttributeError> module 'email' has no attribute 'Header' (line *) (glob)
121 mercurial/mail.py: error importing module: <AttributeError> module 'email' has no attribute 'Header' (line *) (glob)
122 mercurial/manifest.py: error importing: <AttributeError> 'dict' object has no attribute 'iteritems' (error at revset.py:*) (glob)
122 mercurial/manifest.py: error importing: <AttributeError> 'dict' object has no attribute 'iteritems' (error at revset.py:*) (glob)
123 mercurial/merge.py: error importing: <AttributeError> 'dict' object has no attribute 'iteritems' (error at revset.py:*) (glob)
123 mercurial/merge.py: error importing: <AttributeError> 'dict' object has no attribute 'iteritems' (error at revset.py:*) (glob)
124 mercurial/namespaces.py: error importing: <AttributeError> 'dict' object has no attribute 'iteritems' (error at revset.py:*) (glob)
124 mercurial/namespaces.py: error importing: <AttributeError> 'dict' object has no attribute 'iteritems' (error at revset.py:*) (glob)
125 mercurial/patch.py: error importing: <AttributeError> 'dict' object has no attribute 'iteritems' (error at revset.py:*) (glob)
125 mercurial/patch.py: error importing: <AttributeError> 'dict' object has no attribute 'iteritems' (error at revset.py:*) (glob)
126 mercurial/pure/mpatch.py: error importing module: <ImportError> cannot import name 'pycompat' (line *) (glob)
126 mercurial/pure/mpatch.py: error importing module: <ImportError> cannot import name 'pycompat' (line *) (glob)
127 mercurial/pure/parsers.py: error importing module: <ImportError> No module named 'mercurial.pure.node' (line *) (glob)
127 mercurial/pure/parsers.py: error importing module: <ImportError> No module named 'mercurial.pure.node' (line *) (glob)
128 mercurial/repair.py: error importing module: <SyntaxError> invalid syntax (bundle*.py, line *) (line *) (glob)
128 mercurial/repair.py: error importing module: <SyntaxError> invalid syntax (bundle*.py, line *) (line *) (glob)
129 mercurial/revlog.py: error importing: <AttributeError> 'dict' object has no attribute 'iteritems' (error at revset.py:*) (glob)
129 mercurial/revlog.py: error importing: <AttributeError> 'dict' object has no attribute 'iteritems' (error at revset.py:*) (glob)
130 mercurial/revset.py: error importing module: <AttributeError> 'dict' object has no attribute 'iteritems' (line *) (glob)
130 mercurial/revset.py: error importing module: <AttributeError> 'dict' object has no attribute 'iteritems' (line *) (glob)
131 mercurial/scmutil.py: error importing: <AttributeError> 'dict' object has no attribute 'iteritems' (error at revset.py:*) (glob)
131 mercurial/scmutil.py: error importing: <AttributeError> 'dict' object has no attribute 'iteritems' (error at revset.py:*) (glob)
132 mercurial/scmwindows.py: error importing module: <ImportError> No module named '_winreg' (line *) (glob)
132 mercurial/scmwindows.py: error importing module: <ImportError> No module named '_winreg' (line *) (glob)
133 mercurial/simplemerge.py: error importing: <AttributeError> 'dict' object has no attribute 'iteritems' (error at revset.py:*) (glob)
133 mercurial/simplemerge.py: error importing: <AttributeError> 'dict' object has no attribute 'iteritems' (error at revset.py:*) (glob)
134 mercurial/sshpeer.py: error importing: <SyntaxError> invalid syntax (bundle*.py, line *) (error at wireproto.py:*) (glob)
134 mercurial/sshpeer.py: error importing: <SyntaxError> invalid syntax (bundle*.py, line *) (error at wireproto.py:*) (glob)
135 mercurial/sshserver.py: error importing: <AttributeError> 'dict' object has no attribute 'iteritems' (error at revset.py:*) (glob)
135 mercurial/sshserver.py: error importing: <AttributeError> 'dict' object has no attribute 'iteritems' (error at revset.py:*) (glob)
136 mercurial/statichttprepo.py: error importing: <AttributeError> 'dict' object has no attribute 'iteritems' (error at revset.py:*) (glob)
136 mercurial/statichttprepo.py: error importing: <AttributeError> 'dict' object has no attribute 'iteritems' (error at revset.py:*) (glob)
137 mercurial/store.py: error importing: <AttributeError> 'dict' object has no attribute 'iteritems' (error at revset.py:*) (glob)
137 mercurial/store.py: error importing: <AttributeError> 'dict' object has no attribute 'iteritems' (error at revset.py:*) (glob)
138 mercurial/streamclone.py: error importing: <AttributeError> 'dict' object has no attribute 'iteritems' (error at revset.py:*) (glob)
138 mercurial/streamclone.py: error importing: <AttributeError> 'dict' object has no attribute 'iteritems' (error at revset.py:*) (glob)
139 mercurial/subrepo.py: error importing: <AttributeError> 'dict' object has no attribute 'iteritems' (error at revset.py:*) (glob)
139 mercurial/subrepo.py: error importing: <AttributeError> 'dict' object has no attribute 'iteritems' (error at revset.py:*) (glob)
140 mercurial/templatefilters.py: error importing: <AttributeError> 'dict' object has no attribute 'iteritems' (error at revset.py:*) (glob)
140 mercurial/templatefilters.py: error importing: <AttributeError> 'dict' object has no attribute 'iteritems' (error at revset.py:*) (glob)
141 mercurial/templatekw.py: error importing: <AttributeError> 'dict' object has no attribute 'iteritems' (error at revset.py:*) (glob)
141 mercurial/templatekw.py: error importing: <AttributeError> 'dict' object has no attribute 'iteritems' (error at revset.py:*) (glob)
142 mercurial/templater.py: error importing: <AttributeError> 'dict' object has no attribute 'iteritems' (error at revset.py:*) (glob)
142 mercurial/templater.py: error importing: <AttributeError> 'dict' object has no attribute 'iteritems' (error at revset.py:*) (glob)
143 mercurial/ui.py: error importing: <AttributeError> 'dict' object has no attribute 'iteritems' (error at revset.py:*) (glob)
143 mercurial/ui.py: error importing: <AttributeError> 'dict' object has no attribute 'iteritems' (error at revset.py:*) (glob)
144 mercurial/unionrepo.py: error importing: <AttributeError> 'dict' object has no attribute 'iteritems' (error at revset.py:*) (glob)
144 mercurial/unionrepo.py: error importing: <AttributeError> 'dict' object has no attribute 'iteritems' (error at revset.py:*) (glob)
145 mercurial/url.py: error importing module: <ImportError> No module named 'httplib' (line *) (glob)
145 mercurial/url.py: error importing module: <ImportError> No module named 'httplib' (line *) (glob)
146 mercurial/verify.py: error importing: <AttributeError> 'dict' object has no attribute 'iteritems' (error at revset.py:*) (glob)
146 mercurial/verify.py: error importing: <AttributeError> 'dict' object has no attribute 'iteritems' (error at revset.py:*) (glob)
147 mercurial/win*.py: error importing module: <ImportError> No module named 'msvcrt' (line *) (glob)
147 mercurial/win*.py: error importing module: <ImportError> No module named 'msvcrt' (line *) (glob)
148 mercurial/windows.py: error importing module: <ImportError> No module named '_winreg' (line *) (glob)
148 mercurial/windows.py: error importing module: <ImportError> No module named '_winreg' (line *) (glob)
149 mercurial/wireproto.py: error importing module: <SyntaxError> invalid syntax (bundle*.py, line *) (line *) (glob)
149 mercurial/wireproto.py: error importing module: <SyntaxError> invalid syntax (bundle*.py, line *) (line *) (glob)
150
150
151 #endif
151 #endif