py3: conditionalize xmlrpclib import...
Pulkit Goyal
r29432:34b914ac default
@@ -1,928 +1,928 @@
1 1 # bugzilla.py - bugzilla integration for mercurial
2 2 #
3 3 # Copyright 2006 Vadim Gelfer <vadim.gelfer@gmail.com>
4 4 # Copyright 2011-4 Jim Hague <jim.hague@acm.org>
5 5 #
6 6 # This software may be used and distributed according to the terms of the
7 7 # GNU General Public License version 2 or any later version.
8 8
9 9 '''hooks for integrating with the Bugzilla bug tracker
10 10
11 11 This hook extension adds comments on bugs in Bugzilla when changesets
12 12 that refer to bugs by Bugzilla ID are seen. The comment is formatted using
13 13 the Mercurial template mechanism.
14 14
15 15 The bug references can optionally include an update for Bugzilla of the
16 16 hours spent working on the bug. Bugs can also be marked fixed.
17 17
18 18 Three basic modes of access to Bugzilla are provided:
19 19
20 20 1. Access via the Bugzilla XMLRPC interface. Requires Bugzilla 3.4 or later.
21 21
22 22 2. Check data via the Bugzilla XMLRPC interface and submit bug change
23 23 via email to Bugzilla email interface. Requires Bugzilla 3.4 or later.
24 24
25 25 3. Writing directly to the Bugzilla database. Only Bugzilla installations
26 26 using MySQL are supported. Requires Python MySQLdb.
27 27
28 28 Writing directly to the database is susceptible to schema changes, and
29 29 relies on a Bugzilla contrib script to send out bug change
30 30 notification emails. This script runs as the user running Mercurial,
31 31 must be run on the host with the Bugzilla install, and requires
32 32 permission to read Bugzilla configuration details and the necessary
33 33 MySQL user and password to have full access rights to the Bugzilla
34 34 database. For these reasons this access mode is now considered
35 35 deprecated, and will not be updated for new Bugzilla versions going
36 36 forward. Only adding comments is supported in this access mode.
37 37
38 38 Access via XMLRPC needs a Bugzilla username and password to be specified
39 39 in the configuration. Comments are added under that username. Since the
40 40 configuration must be readable by all Mercurial users, it is recommended
41 41 that the rights of that user are restricted in Bugzilla to the minimum
42 42 necessary to add comments. Marking bugs fixed requires Bugzilla 4.0 or later.
43 43
44 44 Access via XMLRPC/email uses XMLRPC to query Bugzilla, but sends
45 45 email to the Bugzilla email interface to submit comments to bugs.
46 46 The From: address in the email is set to the email address of the Mercurial
47 47 user, so the comment appears to come from the Mercurial user. In the event
48 48 that the Mercurial user email is not recognized by Bugzilla as a Bugzilla
49 49 user, the email associated with the Bugzilla username used to log into
50 50 Bugzilla is used instead as the source of the comment. Marking bugs fixed
51 51 works on all supported Bugzilla versions.
52 52
53 53 Configuration items common to all access modes:
54 54
55 55 bugzilla.version
56 56 The access type to use. Values recognized are:
57 57
58 58 :``xmlrpc``: Bugzilla XMLRPC interface.
59 59 :``xmlrpc+email``: Bugzilla XMLRPC and email interfaces.
60 60 :``3.0``: MySQL access, Bugzilla 3.0 and later.
61 61 :``2.18``: MySQL access, Bugzilla 2.18 and up to but not
62 62 including 3.0.
63 63 :``2.16``: MySQL access, Bugzilla 2.16 and up to but not
64 64 including 2.18.
65 65
66 66 bugzilla.regexp
67 67 Regular expression to match bug IDs for update in changeset commit message.
68 68 It must contain one "()" named group ``<ids>`` containing the bug
69 69 IDs separated by non-digit characters. It may also contain
70 70 a named group ``<hours>`` with a floating-point number giving the
71 71 hours worked on the bug. If no named groups are present, the first
72 72 "()" group is assumed to contain the bug IDs, and work time is not
73 73 updated. The default expression matches ``Bug 1234``, ``Bug no. 1234``,
74 74 ``Bug number 1234``, ``Bugs 1234,5678``, ``Bug 1234 and 5678`` and
75 75 variations thereof, followed by an hours number prefixed by ``h`` or
76 76 ``hours``, e.g. ``hours 1.5``. Matching is case insensitive.
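A quick way to see what the default expression matches is to compile it directly. This is only a sketch, using the same pattern text as the ``_default_bug_re`` definition later in this file; the example commit message is made up:

```python
import re

# Default bug-reference pattern, copied from _default_bug_re below.
bug_re = re.compile(
    r'bugs?\s*,?\s*(?:#|nos?\.?|num(?:ber)?s?)?\s*'
    r'(?P<ids>(?:\d+\s*(?:,?\s*(?:and)?)?\s*)+)'
    r'\.?\s*(?:h(?:ours?)?\s*(?P<hours>\d*(?:\.\d+)?))?',
    re.IGNORECASE)

m = bug_re.search('Tidy up locking. Bugs 1234 and 5678, hours 1.5')
# The <ids> group holds bug IDs separated by non-digit characters,
# so split on non-digits to recover the individual IDs.
ids = [i for i in re.split(r'\D+', m.group('ids')) if i]
print(ids, m.group('hours'))
```

Here ``ids`` comes out as ``['1234', '5678']`` and the ``<hours>`` group as ``1.5``, matching the behaviour described above.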
77 77
78 78 bugzilla.fixregexp
79 79 Regular expression to match bug IDs for marking fixed in changeset
80 80 commit message. This must contain a "()" named group ``<ids>`` containing
81 81 the bug IDs separated by non-digit characters. It may also contain
82 82 a named group ``<hours>`` with a floating-point number giving the
83 83 hours worked on the bug. If no named groups are present, the first
84 84 "()" group is assumed to contain the bug IDs, and work time is not
85 85 updated. The default expression matches ``Fixes 1234``, ``Fixes bug 1234``,
86 86 ``Fixes bugs 1234,5678``, ``Fixes 1234 and 5678`` and
87 87 variations thereof, followed by an hours number prefixed by ``h`` or
88 88 ``hours``, e.g. ``hours 1.5``. Matching is case insensitive.
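The fix expression can be exercised the same way. Again a sketch, using the pattern text from the ``_default_fix_re`` definition later in this file with an invented commit message:

```python
import re

# Default fix-reference pattern, copied from _default_fix_re below.
fix_re = re.compile(
    r'fix(?:es)?\s*(?:bugs?\s*)?,?\s*'
    r'(?:nos?\.?|num(?:ber)?s?)?\s*'
    r'(?P<ids>(?:#?\d+\s*(?:,?\s*(?:and)?)?\s*)+)'
    r'\.?\s*(?:h(?:ours?)?\s*(?P<hours>\d*(?:\.\d+)?))?',
    re.IGNORECASE)

m = fix_re.search('Fixes bugs 1234 and 5678, hours 2')
ids = [i for i in re.split(r'\D+', m.group('ids')) if i]
```

A match against this pattern is what additionally marks the listed bugs fixed; here ``ids`` is ``['1234', '5678']`` with two hours of work time recorded.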
89 89
90 90 bugzilla.fixstatus
91 91 The status to set a bug to when marking fixed. Default ``RESOLVED``.
92 92
93 93 bugzilla.fixresolution
94 94 The resolution to set a bug to when marking fixed. Default ``FIXED``.
95 95
96 96 bugzilla.style
97 97 The style file to use when formatting comments.
98 98
99 99 bugzilla.template
100 100 Template to use when formatting comments. Overrides style if
101 101 specified. In addition to the usual Mercurial keywords, the
102 102 extension specifies:
103 103
104 104 :``{bug}``: The Bugzilla bug ID.
105 105 :``{root}``: The full pathname of the Mercurial repository.
106 106 :``{webroot}``: Stripped pathname of the Mercurial repository.
107 107 :``{hgweb}``: Base URL for browsing Mercurial repositories.
108 108
109 109 Default ``changeset {node|short} in repo {root} refers to bug
110 110 {bug}.\\ndetails:\\n\\t{desc|tabindent}``
111 111
112 112 bugzilla.strip
113 113 The number of path separator characters to strip from the front of
114 114 the Mercurial repository path (``{root}`` in templates) to produce
115 115 ``{webroot}``. For example, a repository with ``{root}``
116 116 ``/var/local/my-project`` with a strip of 3 gives a value for
117 117 ``{webroot}`` of ``my-project``. Default 0.
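The stripping rule is easy to sketch in Python. This hypothetical ``webroot`` helper is not part of the extension; it just illustrates the rule, using the repository layout from the example configurations below (a repository under ``/var/local/hg/repos/`` with ``strip=5``):

```python
def webroot(root, strip):
    # Drop 'strip' leading path-separator components from the
    # repository path to produce the {webroot} template value.
    path = root
    while strip > 0:
        c = path.find('/')
        if c == -1:
            break
        path = path[c + 1:]
        strip -= 1
    return path

print(webroot('/var/local/hg/repos/my-project', 5))
```

This prints ``my-project``: each of the five leading separators (and the component before it) is removed in turn.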
118 118
119 119 web.baseurl
120 120 Base URL for browsing Mercurial repositories. Referenced from
121 121 templates as ``{hgweb}``.
122 122
123 123 Configuration items common to XMLRPC+email and MySQL access modes:
124 124
125 125 bugzilla.usermap
126 126 Path of file containing Mercurial committer email to Bugzilla user email
127 127 mappings. If specified, the file should contain one mapping per
128 128 line::
129 129
130 130 committer = Bugzilla user
131 131
132 132 See also the ``[usermap]`` section.
133 133
134 134 The ``[usermap]`` section is used to specify mappings of Mercurial
135 135 committer email to Bugzilla user email. See also ``bugzilla.usermap``.
136 136 Contains entries of the form ``committer = Bugzilla user``.
137 137
138 138 XMLRPC access mode configuration:
139 139
140 140 bugzilla.bzurl
141 141 The base URL for the Bugzilla installation.
142 142 Default ``http://localhost/bugzilla``.
143 143
144 144 bugzilla.user
145 145 The username to use to log into Bugzilla via XMLRPC. Default
146 146 ``bugs``.
147 147
148 148 bugzilla.password
149 149 The password for Bugzilla login.
150 150
151 151 XMLRPC+email access mode uses the XMLRPC access mode configuration items,
152 152 and also:
153 153
154 154 bugzilla.bzemail
155 155 The Bugzilla email address.
156 156
157 157 In addition, the Mercurial email settings must be configured. See the
158 158 documentation in hgrc(5), sections ``[email]`` and ``[smtp]``.
159 159
160 160 MySQL access mode configuration:
161 161
162 162 bugzilla.host
163 163 Hostname of the MySQL server holding the Bugzilla database.
164 164 Default ``localhost``.
165 165
166 166 bugzilla.db
167 167 Name of the Bugzilla database in MySQL. Default ``bugs``.
168 168
169 169 bugzilla.user
170 170 Username to use to access MySQL server. Default ``bugs``.
171 171
172 172 bugzilla.password
173 173 Password to use to access MySQL server.
174 174
175 175 bugzilla.timeout
176 176 Database connection timeout (seconds). Default 5.
177 177
178 178 bugzilla.bzuser
179 179 Fallback Bugzilla user name to record comments with, if changeset
180 180 committer cannot be found as a Bugzilla user.
181 181
182 182 bugzilla.bzdir
183 183 Bugzilla install directory. Used by default notify. Default
184 184 ``/var/www/html/bugzilla``.
185 185
186 186 bugzilla.notify
187 187 The command to run to get Bugzilla to send bug change notification
188 188 emails. Substitutes from a map with 3 keys, ``bzdir``, ``id`` (bug
189 189 id) and ``user`` (committer bugzilla email). Default depends on
190 190 version; from 2.18 it is "cd %(bzdir)s && perl -T
191 191 contrib/sendbugmail.pl %(id)s %(user)s".
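The substitution is ordinary Python %-formatting over a dict with those three keys. A sketch with placeholder values (the ``bzdir``, bug id, and email below are made up):

```python
# Default notify command for Bugzilla 2.18 and later, expanded with
# the three documented substitution keys.
cmdfmt = "cd %(bzdir)s && perl -T contrib/sendbugmail.pl %(id)s %(user)s"
cmd = cmdfmt % {'bzdir': '/var/www/html/bugzilla',
                'id': 1234,
                'user': 'committer@example.com'}
print(cmd)
```

This yields ``cd /var/www/html/bugzilla && perl -T contrib/sendbugmail.pl 1234 committer@example.com``, which the hook then runs in a shell.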
192 192
193 193 Activating the extension::
194 194
195 195 [extensions]
196 196 bugzilla =
197 197
198 198 [hooks]
199 199 # run bugzilla hook on every change pulled or pushed in here
200 200 incoming.bugzilla = python:hgext.bugzilla.hook
201 201
202 202 Example configurations:
203 203
204 204 XMLRPC example configuration. This uses the Bugzilla at
205 205 ``http://my-project.org/bugzilla``, logging in as user
206 206 ``bugmail@my-project.org`` with password ``plugh``. It is used with a
207 207 collection of Mercurial repositories in ``/var/local/hg/repos/``,
208 208 with a web interface at ``http://my-project.org/hg``. ::
209 209
210 210 [bugzilla]
211 211 bzurl=http://my-project.org/bugzilla
212 212 user=bugmail@my-project.org
213 213 password=plugh
214 214 version=xmlrpc
215 215 template=Changeset {node|short} in {root|basename}.
216 216 {hgweb}/{webroot}/rev/{node|short}\\n
217 217 {desc}\\n
218 218 strip=5
219 219
220 220 [web]
221 221 baseurl=http://my-project.org/hg
222 222
223 223 XMLRPC+email example configuration. This uses the Bugzilla at
224 224 ``http://my-project.org/bugzilla``, logging in as user
225 225 ``bugmail@my-project.org`` with password ``plugh``. It is used with a
226 226 collection of Mercurial repositories in ``/var/local/hg/repos/``,
227 227 with a web interface at ``http://my-project.org/hg``. Bug comments
228 228 are sent to the Bugzilla email address
229 229 ``bugzilla@my-project.org``. ::
230 230
231 231 [bugzilla]
232 232 bzurl=http://my-project.org/bugzilla
233 233 user=bugmail@my-project.org
234 234 password=plugh
235 235 version=xmlrpc+email
236 236 bzemail=bugzilla@my-project.org
237 237 template=Changeset {node|short} in {root|basename}.
238 238 {hgweb}/{webroot}/rev/{node|short}\\n
239 239 {desc}\\n
240 240 strip=5
241 241
242 242 [web]
243 243 baseurl=http://my-project.org/hg
244 244
245 245 [usermap]
246 246 user@emaildomain.com=user.name@bugzilladomain.com
247 247
248 248 MySQL example configuration. This has a local Bugzilla 3.2 installation
249 249 in ``/opt/bugzilla-3.2``. The MySQL database is on ``localhost``,
250 250 the Bugzilla database name is ``bugs`` and MySQL is
251 251 accessed with MySQL username ``bugs`` password ``XYZZY``. It is used
252 252 with a collection of Mercurial repositories in ``/var/local/hg/repos/``,
253 253 with a web interface at ``http://my-project.org/hg``. ::
254 254
255 255 [bugzilla]
256 256 host=localhost
257 257 password=XYZZY
258 258 version=3.0
259 259 bzuser=unknown@domain.com
260 260 bzdir=/opt/bugzilla-3.2
261 261 template=Changeset {node|short} in {root|basename}.
262 262 {hgweb}/{webroot}/rev/{node|short}\\n
263 263 {desc}\\n
264 264 strip=5
265 265
266 266 [web]
267 267 baseurl=http://my-project.org/hg
268 268
269 269 [usermap]
270 270 user@emaildomain.com=user.name@bugzilladomain.com
271 271
272 272 All the above add a comment to the Bugzilla bug record of the form::
273 273
274 274 Changeset 3b16791d6642 in repository-name.
275 275 http://my-project.org/hg/repository-name/rev/3b16791d6642
276 276
277 277 Changeset commit comment. Bug 1234.
278 278 '''
279 279
280 280 from __future__ import absolute_import
281 281
282 282 import re
283 283 import time
284 import xmlrpclib
285 284
286 285 from mercurial.i18n import _
287 286 from mercurial.node import short
288 287 from mercurial import (
289 288 cmdutil,
290 289 error,
291 290 mail,
292 291 util,
293 292 )
294 293
295 294 urlparse = util.urlparse
295 xmlrpclib = util.xmlrpclib
296 296
297 297 # Note for extension authors: ONLY specify testedwith = 'internal' for
298 298 # extensions which SHIP WITH MERCURIAL. Non-mainline extensions should
299 299 # be specifying the version(s) of Mercurial they are tested with, or
300 300 # leave the attribute unspecified.
301 301 testedwith = 'internal'
302 302
303 303 class bzaccess(object):
304 304 '''Base class for access to Bugzilla.'''
305 305
306 306 def __init__(self, ui):
307 307 self.ui = ui
308 308 usermap = self.ui.config('bugzilla', 'usermap')
309 309 if usermap:
310 310 self.ui.readconfig(usermap, sections=['usermap'])
311 311
312 312 def map_committer(self, user):
313 313 '''map name of committer to Bugzilla user name.'''
314 314 for committer, bzuser in self.ui.configitems('usermap'):
315 315 if committer.lower() == user.lower():
316 316 return bzuser
317 317 return user
318 318
319 319 # Methods to be implemented by access classes.
320 320 #
321 321 # 'bugs' is a dict keyed on bug id, where values are a dict holding
322 322 # updates to bug state. Recognized dict keys are:
323 323 #
324 324 # 'hours': Value, float containing work hours to be updated.
325 325 # 'fix': If key present, bug is to be marked fixed. Value ignored.
326 326
327 327 def filter_real_bug_ids(self, bugs):
328 328 '''remove bug IDs that do not exist in Bugzilla from bugs.'''
329 329 pass
330 330
331 331 def filter_cset_known_bug_ids(self, node, bugs):
332 332 '''remove bug IDs where node occurs in comment text from bugs.'''
333 333 pass
334 334
335 335 def updatebug(self, bugid, newstate, text, committer):
336 336 '''update the specified bug. Add comment text and set new states.
337 337
338 338 If possible add the comment as being from the committer of
339 339 the changeset. Otherwise use the default Bugzilla user.
340 340 '''
341 341 pass
342 342
343 343 def notify(self, bugs, committer):
344 344 '''Force sending of Bugzilla notification emails.
345 345
346 346 Only required if the access method does not trigger notification
347 347 emails automatically.
348 348 '''
349 349 pass
350 350
351 351 # Bugzilla via direct access to MySQL database.
352 352 class bzmysql(bzaccess):
353 353 '''Support for direct MySQL access to Bugzilla.
354 354
355 355 The earliest Bugzilla version this is tested with is version 2.16.
356 356
357 357 If your Bugzilla is version 3.4 or above, you are strongly
358 358 recommended to use the XMLRPC access method instead.
359 359 '''
360 360
361 361 @staticmethod
362 362 def sql_buglist(ids):
363 363 '''return SQL-friendly list of bug ids'''
364 364 return '(' + ','.join(map(str, ids)) + ')'
365 365
366 366 _MySQLdb = None
367 367
368 368 def __init__(self, ui):
369 369 try:
370 370 import MySQLdb as mysql
371 371 bzmysql._MySQLdb = mysql
372 372 except ImportError as err:
373 373 raise error.Abort(_('python mysql support not available: %s') % err)
374 374
375 375 bzaccess.__init__(self, ui)
376 376
377 377 host = self.ui.config('bugzilla', 'host', 'localhost')
378 378 user = self.ui.config('bugzilla', 'user', 'bugs')
379 379 passwd = self.ui.config('bugzilla', 'password')
380 380 db = self.ui.config('bugzilla', 'db', 'bugs')
381 381 timeout = int(self.ui.config('bugzilla', 'timeout', 5))
382 382 self.ui.note(_('connecting to %s:%s as %s, password %s\n') %
383 383 (host, db, user, '*' * len(passwd)))
384 384 self.conn = bzmysql._MySQLdb.connect(host=host,
385 385 user=user, passwd=passwd,
386 386 db=db,
387 387 connect_timeout=timeout)
388 388 self.cursor = self.conn.cursor()
389 389 self.longdesc_id = self.get_longdesc_id()
390 390 self.user_ids = {}
391 391 self.default_notify = "cd %(bzdir)s && ./processmail %(id)s %(user)s"
392 392
393 393 def run(self, *args, **kwargs):
394 394 '''run a query.'''
395 395 self.ui.note(_('query: %s %s\n') % (args, kwargs))
396 396 try:
397 397 self.cursor.execute(*args, **kwargs)
398 398 except bzmysql._MySQLdb.MySQLError:
399 399 self.ui.note(_('failed query: %s %s\n') % (args, kwargs))
400 400 raise
401 401
402 402 def get_longdesc_id(self):
403 403 '''get identity of longdesc field'''
404 404 self.run('select fieldid from fielddefs where name = "longdesc"')
405 405 ids = self.cursor.fetchall()
406 406 if len(ids) != 1:
407 407 raise error.Abort(_('unknown database schema'))
408 408 return ids[0][0]
409 409
410 410 def filter_real_bug_ids(self, bugs):
411 411 '''filter not-existing bugs from set.'''
412 412 self.run('select bug_id from bugs where bug_id in %s' %
413 413 bzmysql.sql_buglist(bugs.keys()))
414 414 existing = [id for (id,) in self.cursor.fetchall()]
415 415 for id in bugs.keys():
416 416 if id not in existing:
417 417 self.ui.status(_('bug %d does not exist\n') % id)
418 418 del bugs[id]
419 419
420 420 def filter_cset_known_bug_ids(self, node, bugs):
421 421 '''filter bug ids that already refer to this changeset from set.'''
422 422 self.run('''select bug_id from longdescs where
423 423 bug_id in %s and thetext like "%%%s%%"''' %
424 424 (bzmysql.sql_buglist(bugs.keys()), short(node)))
425 425 for (id,) in self.cursor.fetchall():
426 426 self.ui.status(_('bug %d already knows about changeset %s\n') %
427 427 (id, short(node)))
428 428 del bugs[id]
429 429
430 430 def notify(self, bugs, committer):
431 431 '''tell bugzilla to send mail.'''
432 432 self.ui.status(_('telling bugzilla to send mail:\n'))
433 433 (user, userid) = self.get_bugzilla_user(committer)
434 434 for id in bugs.keys():
435 435 self.ui.status(_(' bug %s\n') % id)
436 436 cmdfmt = self.ui.config('bugzilla', 'notify', self.default_notify)
437 437 bzdir = self.ui.config('bugzilla', 'bzdir',
438 438 '/var/www/html/bugzilla')
439 439 try:
440 440 # Backwards-compatible with old notify string, which
441 441 # took one string. This will throw with a new format
442 442 # string.
443 443 cmd = cmdfmt % id
444 444 except TypeError:
445 445 cmd = cmdfmt % {'bzdir': bzdir, 'id': id, 'user': user}
446 446 self.ui.note(_('running notify command %s\n') % cmd)
447 447 fp = util.popen('(%s) 2>&1' % cmd)
448 448 out = fp.read()
449 449 ret = fp.close()
450 450 if ret:
451 451 self.ui.warn(out)
452 452 raise error.Abort(_('bugzilla notify command %s') %
453 453 util.explainexit(ret)[0])
454 454 self.ui.status(_('done\n'))
455 455
456 456 def get_user_id(self, user):
457 457 '''look up numeric bugzilla user id.'''
458 458 try:
459 459 return self.user_ids[user]
460 460 except KeyError:
461 461 try:
462 462 userid = int(user)
463 463 except ValueError:
464 464 self.ui.note(_('looking up user %s\n') % user)
465 465 self.run('''select userid from profiles
466 466 where login_name like %s''', user)
467 467 all = self.cursor.fetchall()
468 468 if len(all) != 1:
469 469 raise KeyError(user)
470 470 userid = int(all[0][0])
471 471 self.user_ids[user] = userid
472 472 return userid
473 473
474 474 def get_bugzilla_user(self, committer):
475 475 '''See if committer is a registered bugzilla user. Return
476 476 bugzilla username and userid if so. If not, return default
477 477 bugzilla username and userid.'''
478 478 user = self.map_committer(committer)
479 479 try:
480 480 userid = self.get_user_id(user)
481 481 except KeyError:
482 482 try:
483 483 defaultuser = self.ui.config('bugzilla', 'bzuser')
484 484 if not defaultuser:
485 485 raise error.Abort(_('cannot find bugzilla user id for %s') %
486 486 user)
487 487 userid = self.get_user_id(defaultuser)
488 488 user = defaultuser
489 489 except KeyError:
490 490 raise error.Abort(_('cannot find bugzilla user id for %s or %s')
491 491 % (user, defaultuser))
492 492 return (user, userid)
493 493
494 494 def updatebug(self, bugid, newstate, text, committer):
495 495 '''update bug state with comment text.
496 496
497 497 Try adding comment as committer of changeset, otherwise as
498 498 default bugzilla user.'''
499 499 if len(newstate) > 0:
500 500 self.ui.warn(_("Bugzilla/MySQL cannot update bug state\n"))
501 501
502 502 (user, userid) = self.get_bugzilla_user(committer)
503 503 now = time.strftime('%Y-%m-%d %H:%M:%S')
504 504 self.run('''insert into longdescs
505 505 (bug_id, who, bug_when, thetext)
506 506 values (%s, %s, %s, %s)''',
507 507 (bugid, userid, now, text))
508 508 self.run('''insert into bugs_activity (bug_id, who, bug_when, fieldid)
509 509 values (%s, %s, %s, %s)''',
510 510 (bugid, userid, now, self.longdesc_id))
511 511 self.conn.commit()
512 512
513 513 class bzmysql_2_18(bzmysql):
514 514 '''support for bugzilla 2.18 series.'''
515 515
516 516 def __init__(self, ui):
517 517 bzmysql.__init__(self, ui)
518 518 self.default_notify = \
519 519 "cd %(bzdir)s && perl -T contrib/sendbugmail.pl %(id)s %(user)s"
520 520
521 521 class bzmysql_3_0(bzmysql_2_18):
522 522 '''support for bugzilla 3.0 series.'''
523 523
524 524 def __init__(self, ui):
525 525 bzmysql_2_18.__init__(self, ui)
526 526
527 527 def get_longdesc_id(self):
528 528 '''get identity of longdesc field'''
529 529 self.run('select id from fielddefs where name = "longdesc"')
530 530 ids = self.cursor.fetchall()
531 531 if len(ids) != 1:
532 532 raise error.Abort(_('unknown database schema'))
533 533 return ids[0][0]
534 534
535 535 # Bugzilla via XMLRPC interface.
536 536
537 537 class cookietransportrequest(object):
538 538 """A Transport request method that retains cookies over its lifetime.
539 539
540 540 The regular xmlrpclib transports ignore cookies. Which causes
541 541 a bit of a problem when you need a cookie-based login, as with
542 542 the Bugzilla XMLRPC interface prior to 4.4.3.
543 543
544 544 So this is a helper for defining a Transport which looks for
545 545 cookies being set in responses and saves them to add to all future
546 546 requests.
547 547 """
548 548
549 549 # Inspiration drawn from
550 550 # http://blog.godson.in/2010/09/how-to-make-python-xmlrpclib-client.html
551 551 # http://www.itkovian.net/base/transport-class-for-pythons-xml-rpc-lib/
552 552
553 553 cookies = []
554 554 def send_cookies(self, connection):
555 555 if self.cookies:
556 556 for cookie in self.cookies:
557 557 connection.putheader("Cookie", cookie)
558 558
559 559 def request(self, host, handler, request_body, verbose=0):
560 560 self.verbose = verbose
561 561 self.accept_gzip_encoding = False
562 562
563 563 # issue XML-RPC request
564 564 h = self.make_connection(host)
565 565 if verbose:
566 566 h.set_debuglevel(1)
567 567
568 568 self.send_request(h, handler, request_body)
569 569 self.send_host(h, host)
570 570 self.send_cookies(h)
571 571 self.send_user_agent(h)
572 572 self.send_content(h, request_body)
573 573
574 574 # Deal with differences between Python 2.4-2.6 and 2.7.
575 575 # In the former h is a HTTP(S). In the latter it's a
576 576 # HTTP(S)Connection. Luckily, the 2.4-2.6 implementation of
577 577 # HTTP(S) has an underlying HTTP(S)Connection, so extract
578 578 # that and use it.
579 579 try:
580 580 response = h.getresponse()
581 581 except AttributeError:
582 582 response = h._conn.getresponse()
583 583
584 584 # Add any cookie definitions to our list.
585 585 for header in response.msg.getallmatchingheaders("Set-Cookie"):
586 586 val = header.split(": ", 1)[1]
587 587 cookie = val.split(";", 1)[0]
588 588 self.cookies.append(cookie)
589 589
590 590 if response.status != 200:
591 591 raise xmlrpclib.ProtocolError(host + handler, response.status,
592 592 response.reason, response.msg.headers)
593 593
594 594 payload = response.read()
595 595 parser, unmarshaller = self.getparser()
596 596 parser.feed(payload)
597 597 parser.close()
598 598
599 599 return unmarshaller.close()
600 600
601 601 # The explicit calls to the underlying xmlrpclib __init__() methods are
602 602 # necessary. The xmlrpclib.Transport classes are old-style classes, and
603 603 # it turns out their __init__() doesn't get called when doing multiple
604 604 # inheritance with a new-style class.
605 605 class cookietransport(cookietransportrequest, xmlrpclib.Transport):
606 606 def __init__(self, use_datetime=0):
607 607 if util.safehasattr(xmlrpclib.Transport, "__init__"):
608 608 xmlrpclib.Transport.__init__(self, use_datetime)
609 609
610 610 class cookiesafetransport(cookietransportrequest, xmlrpclib.SafeTransport):
611 611 def __init__(self, use_datetime=0):
612 612 if util.safehasattr(xmlrpclib.Transport, "__init__"):
613 613 xmlrpclib.SafeTransport.__init__(self, use_datetime)
614 614
615 615 class bzxmlrpc(bzaccess):
616 616 """Support for access to Bugzilla via the Bugzilla XMLRPC API.
617 617
618 618 Requires a minimum Bugzilla version 3.4.
619 619 """
620 620
621 621 def __init__(self, ui):
622 622 bzaccess.__init__(self, ui)
623 623
624 624 bzweb = self.ui.config('bugzilla', 'bzurl',
625 625 'http://localhost/bugzilla/')
626 626 bzweb = bzweb.rstrip("/") + "/xmlrpc.cgi"
627 627
628 628 user = self.ui.config('bugzilla', 'user', 'bugs')
629 629 passwd = self.ui.config('bugzilla', 'password')
630 630
631 631 self.fixstatus = self.ui.config('bugzilla', 'fixstatus', 'RESOLVED')
632 632 self.fixresolution = self.ui.config('bugzilla', 'fixresolution',
633 633 'FIXED')
634 634
635 635 self.bzproxy = xmlrpclib.ServerProxy(bzweb, self.transport(bzweb))
636 636 ver = self.bzproxy.Bugzilla.version()['version'].split('.')
637 637 self.bzvermajor = int(ver[0])
638 638 self.bzverminor = int(ver[1])
639 639 login = self.bzproxy.User.login({'login': user, 'password': passwd,
640 640 'restrict_login': True})
641 641 self.bztoken = login.get('token', '')
642 642
643 643 def transport(self, uri):
644 644 if urlparse.urlparse(uri, "http")[0] == "https":
645 645 return cookiesafetransport()
646 646 else:
647 647 return cookietransport()
648 648
649 649 def get_bug_comments(self, id):
650 650 """Return a string with all comment text for a bug."""
651 651 c = self.bzproxy.Bug.comments({'ids': [id],
652 652 'include_fields': ['text'],
653 653 'token': self.bztoken})
654 654 return ''.join([t['text'] for t in c['bugs'][str(id)]['comments']])
655 655
656 656 def filter_real_bug_ids(self, bugs):
657 657 probe = self.bzproxy.Bug.get({'ids': sorted(bugs.keys()),
658 658 'include_fields': [],
659 659 'permissive': True,
660 660 'token': self.bztoken,
661 661 })
662 662 for badbug in probe['faults']:
663 663 id = badbug['id']
664 664 self.ui.status(_('bug %d does not exist\n') % id)
665 665 del bugs[id]
666 666
667 667 def filter_cset_known_bug_ids(self, node, bugs):
668 668 for id in sorted(bugs.keys()):
669 669 if self.get_bug_comments(id).find(short(node)) != -1:
670 670 self.ui.status(_('bug %d already knows about changeset %s\n') %
671 671 (id, short(node)))
672 672 del bugs[id]
673 673
674 674 def updatebug(self, bugid, newstate, text, committer):
675 675 args = {}
676 676 if 'hours' in newstate:
677 677 args['work_time'] = newstate['hours']
678 678
679 679 if self.bzvermajor >= 4:
680 680 args['ids'] = [bugid]
681 681 args['comment'] = {'body' : text}
682 682 if 'fix' in newstate:
683 683 args['status'] = self.fixstatus
684 684 args['resolution'] = self.fixresolution
685 685 args['token'] = self.bztoken
686 686 self.bzproxy.Bug.update(args)
687 687 else:
688 688 if 'fix' in newstate:
689 689 self.ui.warn(_("Bugzilla/XMLRPC needs Bugzilla 4.0 or later "
690 690 "to mark bugs fixed\n"))
691 691 args['id'] = bugid
692 692 args['comment'] = text
693 693 self.bzproxy.Bug.add_comment(args)
694 694
695 695 class bzxmlrpcemail(bzxmlrpc):
696 696 """Read data from Bugzilla via XMLRPC, send updates via email.
697 697
698 698 Advantages of sending updates via email:
699 699 1. Comments can be added as any user, not just logged in user.
700 700 2. Bug statuses or other fields not accessible via XMLRPC can
701 701 potentially be updated.
702 702
703 703 There is no XMLRPC function to change bug status before Bugzilla
704 704 4.0, so bugs cannot be marked fixed via XMLRPC before Bugzilla 4.0.
705 705 But bugs can be marked fixed via email from 3.4 onwards.
706 706 """
707 707
708 708 # The email interface changes subtly between 3.4 and 3.6. In 3.4,
709 709 # in-email fields are specified as '@<fieldname> = <value>'. In
710 710 # 3.6 this becomes '@<fieldname> <value>'. And fieldname @bug_id
711 711 # in 3.4 becomes @id in 3.6. 3.6 and 4.0 both maintain backwards
712 712 # compatibility, but rather than rely on this use the new format for
713 713 # 4.0 onwards.
714 714
715 715 def __init__(self, ui):
716 716 bzxmlrpc.__init__(self, ui)
717 717
718 718 self.bzemail = self.ui.config('bugzilla', 'bzemail')
719 719 if not self.bzemail:
720 720 raise error.Abort(_("configuration 'bzemail' missing"))
721 721 mail.validateconfig(self.ui)
722 722
723 723 def makecommandline(self, fieldname, value):
724 724 if self.bzvermajor >= 4:
725 725 return "@%s %s" % (fieldname, str(value))
726 726 else:
727 727 if fieldname == "id":
728 728 fieldname = "bug_id"
729 729 return "@%s = %s" % (fieldname, str(value))
730 730
731 731 def send_bug_modify_email(self, bugid, commands, comment, committer):
732 732 '''send modification message to Bugzilla bug via email.
733 733
734 734 The message format is documented in the Bugzilla email_in.pl
735 735 specification. commands is a list of command lines, comment is the
736 736 comment text.
737 737
738 738 To stop users from crafting commit comments with
739 739 Bugzilla commands, specify the bug ID via the message body, rather
740 740 than the subject line, and leave a blank line after it.
741 741 '''
742 742 user = self.map_committer(committer)
743 743 matches = self.bzproxy.User.get({'match': [user],
744 744 'token': self.bztoken})
745 745 if not matches['users']:
746 746 user = self.ui.config('bugzilla', 'user', 'bugs')
747 747 matches = self.bzproxy.User.get({'match': [user],
748 748 'token': self.bztoken})
749 749 if not matches['users']:
750 750 raise error.Abort(_("default bugzilla user %s email not found")
751 751 % user)
752 752 user = matches['users'][0]['email']
753 753 commands.append(self.makecommandline("id", bugid))
754 754
755 755 text = "\n".join(commands) + "\n\n" + comment
756 756
757 757 _charsets = mail._charsets(self.ui)
758 758 user = mail.addressencode(self.ui, user, _charsets)
759 759 bzemail = mail.addressencode(self.ui, self.bzemail, _charsets)
760 760 msg = mail.mimeencode(self.ui, text, _charsets)
761 761 msg['From'] = user
762 762 msg['To'] = bzemail
763 763 msg['Subject'] = mail.headencode(self.ui, "Bug modification", _charsets)
764 764 sendmail = mail.connect(self.ui)
765 765 sendmail(user, bzemail, msg.as_string())
766 766
767 767 def updatebug(self, bugid, newstate, text, committer):
768 768 cmds = []
769 769 if 'hours' in newstate:
770 770 cmds.append(self.makecommandline("work_time", newstate['hours']))
771 771 if 'fix' in newstate:
772 772 cmds.append(self.makecommandline("bug_status", self.fixstatus))
773 773 cmds.append(self.makecommandline("resolution", self.fixresolution))
774 774 self.send_bug_modify_email(bugid, cmds, text, committer)
775 775
776 776 class bugzilla(object):
777 777 # supported versions of bugzilla. different versions have
778 778 # different schemas.
779 779 _versions = {
780 780 '2.16': bzmysql,
781 781 '2.18': bzmysql_2_18,
782 782 '3.0': bzmysql_3_0,
783 783 'xmlrpc': bzxmlrpc,
784 784 'xmlrpc+email': bzxmlrpcemail
785 785 }
786 786
787 787 _default_bug_re = (r'bugs?\s*,?\s*(?:#|nos?\.?|num(?:ber)?s?)?\s*'
788 788 r'(?P<ids>(?:\d+\s*(?:,?\s*(?:and)?)?\s*)+)'
789 789 r'\.?\s*(?:h(?:ours?)?\s*(?P<hours>\d*(?:\.\d+)?))?')
790 790
791 791 _default_fix_re = (r'fix(?:es)?\s*(?:bugs?\s*)?,?\s*'
792 792 r'(?:nos?\.?|num(?:ber)?s?)?\s*'
793 793 r'(?P<ids>(?:#?\d+\s*(?:,?\s*(?:and)?)?\s*)+)'
794 794 r'\.?\s*(?:h(?:ours?)?\s*(?P<hours>\d*(?:\.\d+)?))?')
795 795
796 796 def __init__(self, ui, repo):
797 797 self.ui = ui
798 798 self.repo = repo
799 799
800 800 bzversion = self.ui.config('bugzilla', 'version')
801 801 try:
802 802 bzclass = bugzilla._versions[bzversion]
803 803 except KeyError:
804 804 raise error.Abort(_('bugzilla version %s not supported') %
805 805 bzversion)
806 806 self.bzdriver = bzclass(self.ui)
807 807
808 808 self.bug_re = re.compile(
809 809 self.ui.config('bugzilla', 'regexp',
810 810 bugzilla._default_bug_re), re.IGNORECASE)
811 811 self.fix_re = re.compile(
812 812 self.ui.config('bugzilla', 'fixregexp',
813 813 bugzilla._default_fix_re), re.IGNORECASE)
814 814 self.split_re = re.compile(r'\D+')
815 815
816 816 def find_bugs(self, ctx):
817 817 '''return bugs dictionary created from commit comment.
818 818
819 819 Extract bug info from changeset comments. Filter out any that are
820 820 not known to Bugzilla, and any that already have a reference to
821 821 the given changeset in their comments.
822 822 '''
823 823 start = 0
824 824 hours = 0.0
825 825 bugs = {}
826 826 bugmatch = self.bug_re.search(ctx.description(), start)
827 827 fixmatch = self.fix_re.search(ctx.description(), start)
828 828 while True:
829 829 bugattribs = {}
830 830 if not bugmatch and not fixmatch:
831 831 break
832 832 if not bugmatch:
833 833 m = fixmatch
834 834 elif not fixmatch:
835 835 m = bugmatch
836 836 else:
837 837 if bugmatch.start() < fixmatch.start():
838 838 m = bugmatch
839 839 else:
840 840 m = fixmatch
841 841 start = m.end()
842 842 if m is bugmatch:
843 843 bugmatch = self.bug_re.search(ctx.description(), start)
844 844 if 'fix' in bugattribs:
845 845 del bugattribs['fix']
846 846 else:
847 847 fixmatch = self.fix_re.search(ctx.description(), start)
848 848 bugattribs['fix'] = None
849 849
850 850 try:
851 851 ids = m.group('ids')
852 852 except IndexError:
853 853 ids = m.group(1)
854 854 try:
855 855 hours = float(m.group('hours'))
856 856 bugattribs['hours'] = hours
857 857 except IndexError:
858 858 pass
859 859 except TypeError:
860 860 pass
861 861 except ValueError:
862 862 self.ui.status(_("%s: invalid hours\n") % m.group('hours'))
863 863
864 864 for id in self.split_re.split(ids):
865 865 if not id:
866 866 continue
867 867 bugs[int(id)] = bugattribs
868 868 if bugs:
869 869 self.bzdriver.filter_real_bug_ids(bugs)
870 870 if bugs:
871 871 self.bzdriver.filter_cset_known_bug_ids(ctx.node(), bugs)
872 872 return bugs
873 873
874 874 def update(self, bugid, newstate, ctx):
875 875 '''update bugzilla bug with reference to changeset.'''
876 876
877 877 def webroot(root):
878 878 '''strip leading prefix of repo root and turn into
879 879 url-safe path.'''
880 880 count = int(self.ui.config('bugzilla', 'strip', 0))
881 881 root = util.pconvert(root)
882 882 while count > 0:
883 883 c = root.find('/')
884 884 if c == -1:
885 885 break
886 886 root = root[c + 1:]
887 887 count -= 1
888 888 return root
889 889
890 890 mapfile = None
891 891 tmpl = self.ui.config('bugzilla', 'template')
892 892 if not tmpl:
893 893 mapfile = self.ui.config('bugzilla', 'style')
894 894 if not mapfile and not tmpl:
895 895 tmpl = _('changeset {node|short} in repo {root} refers '
896 896 'to bug {bug}.\ndetails:\n\t{desc|tabindent}')
897 897 t = cmdutil.changeset_templater(self.ui, self.repo,
898 898 False, None, tmpl, mapfile, False)
899 899 self.ui.pushbuffer()
900 900 t.show(ctx, changes=ctx.changeset(),
901 901 bug=str(bugid),
902 902 hgweb=self.ui.config('web', 'baseurl'),
903 903 root=self.repo.root,
904 904 webroot=webroot(self.repo.root))
905 905 data = self.ui.popbuffer()
906 906 self.bzdriver.updatebug(bugid, newstate, data, util.email(ctx.user()))
907 907
908 908 def notify(self, bugs, committer):
909 909 '''ensure Bugzilla users are notified of bug change.'''
910 910 self.bzdriver.notify(bugs, committer)
911 911
912 912 def hook(ui, repo, hooktype, node=None, **kwargs):
913 913 '''add comment to bugzilla for each changeset that refers to a
914 914     bugzilla bug id. only add a comment once per bug, so the same change
915 915     seen multiple times does not fill the bug with duplicate data.'''
916 916 if node is None:
917 917 raise error.Abort(_('hook type %s does not pass a changeset id') %
918 918 hooktype)
919 919 try:
920 920 bz = bugzilla(ui, repo)
921 921 ctx = repo[node]
922 922 bugs = bz.find_bugs(ctx)
923 923 if bugs:
924 924 for bug in bugs:
925 925 bz.update(bug, bugs[bug], ctx)
926 926 bz.notify(bugs, util.email(ctx.user()))
927 927 except Exception as e:
928 928 raise error.Abort(_('Bugzilla error: %s') % e)
@@ -1,138 +1,145
1 1 # pycompat.py - portability shim for python 3
2 2 #
3 3 # This software may be used and distributed according to the terms of the
4 4 # GNU General Public License version 2 or any later version.
5 5
6 6 """Mercurial portability shim for python 3.
7 7
8 8 This contains aliases to hide python version-specific details from the core.
9 9 """
10 10
11 11 from __future__ import absolute_import
12 12
13 13 try:
14 14 import cPickle as pickle
15 15 pickle.dumps
16 16 except ImportError:
17 17 import pickle
18 18 pickle.dumps # silence pyflakes
19 19
20 20 try:
21 import xmlrpclib
22 xmlrpclib.Transport
23 except ImportError:
24 import xmlrpc.client as xmlrpclib
25 xmlrpclib.Transport
26
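The conditional import added above can be exercised standalone. This sketch repeats the try/except idiom and then builds a client with the py2 spelling; the server URL is a placeholder (no connection is attempted until a method is actually called):

```python
# py2/py3 conditional import: prefer the Python 2 module name,
# fall back to the Python 3 location under the old alias.
try:
    import xmlrpclib                    # Python 2 name
    xmlrpclib.Transport                 # attribute probe, silences pyflakes
except ImportError:
    import xmlrpc.client as xmlrpclib   # Python 3 name
    xmlrpclib.Transport

# Callers keep using the py2 spelling regardless of interpreter.
# The URL below is a placeholder; ServerProxy does not connect yet.
proxy = xmlrpclib.ServerProxy('http://localhost:8000/xmlrpc')
```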
27 try:
21 28 import urlparse
22 29 urlparse.urlparse
23 30 except ImportError:
24 31 import urllib.parse as urlparse
25 32 urlparse.urlparse
26 33
27 34 try:
28 35 import cStringIO as io
29 36 stringio = io.StringIO
30 37 except ImportError:
31 38 import io
32 39 stringio = io.StringIO
33 40
34 41 try:
35 42 import Queue as _queue
36 43 _queue.Queue
37 44 except ImportError:
38 45 import queue as _queue
39 46 empty = _queue.Empty
40 47 queue = _queue.Queue
41 48
42 49 class _pycompatstub(object):
43 50 pass
44 51
45 52 def _alias(alias, origin, items):
46 53 """ populate a _pycompatstub
47 54
48 55 copies items from origin to alias
49 56 """
50 57 def hgcase(item):
51 58 return item.replace('_', '').lower()
52 59 for item in items:
53 60 try:
54 61 setattr(alias, hgcase(item), getattr(origin, item))
55 62 except AttributeError:
56 63 pass
57 64
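The aliasing scheme can be seen in action with a self-contained restatement of `_alias` run against a real stdlib module; the stub object and the attribute list below are illustrative only (note the deliberately missing name, which is skipped silently):

```python
# Self-contained restatement of pycompat's _alias helper: copy selected
# attributes from an origin module onto a stub, renaming them to hg's
# all-lowercase, underscore-free convention; missing names are skipped.
import urllib.parse

class _stub(object):
    pass

def _alias(alias, origin, items):
    def hgcase(item):
        return item.replace('_', '').lower()
    for item in items:
        try:
            setattr(alias, hgcase(item), getattr(origin, item))
        except AttributeError:
            pass  # origin lacks this attribute; leave it off the stub

urlparse = _stub()
# 'no_such_name' does not exist on urllib.parse and is silently dropped.
_alias(urlparse, urllib.parse, ('quote', 'SplitResult', 'no_such_name'))
```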
58 65 urlreq = _pycompatstub()
59 66 urlerr = _pycompatstub()
60 67 try:
61 68 import urllib2
62 69 import urllib
63 70 _alias(urlreq, urllib, (
64 71 "addclosehook",
65 72 "addinfourl",
66 73 "ftpwrapper",
67 74 "pathname2url",
68 75 "quote",
69 76 "splitattr",
70 77 "splitpasswd",
71 78 "splitport",
72 79 "splituser",
73 80 "unquote",
74 81 "url2pathname",
 75 82         "urlencode",
77 84 ))
78 85 _alias(urlreq, urllib2, (
79 86 "AbstractHTTPHandler",
80 87 "BaseHandler",
81 88 "build_opener",
82 89 "FileHandler",
83 90 "FTPHandler",
84 91 "HTTPBasicAuthHandler",
85 92 "HTTPDigestAuthHandler",
86 93 "HTTPHandler",
87 94 "HTTPPasswordMgrWithDefaultRealm",
88 95 "HTTPSHandler",
89 96 "install_opener",
90 97 "ProxyHandler",
91 98 "Request",
92 99 "urlopen",
93 100 ))
94 101 _alias(urlerr, urllib2, (
95 102 "HTTPError",
96 103 "URLError",
97 104 ))
98 105
99 106 except ImportError:
100 107 import urllib.request
101 108 _alias(urlreq, urllib.request, (
102 109 "AbstractHTTPHandler",
103 110 "addclosehook",
104 111 "addinfourl",
105 112 "BaseHandler",
106 113 "build_opener",
107 114 "FileHandler",
108 115 "FTPHandler",
109 116 "ftpwrapper",
110 117 "HTTPHandler",
111 118 "HTTPSHandler",
112 119 "install_opener",
113 120 "pathname2url",
114 121 "HTTPBasicAuthHandler",
115 122 "HTTPDigestAuthHandler",
116 123 "HTTPPasswordMgrWithDefaultRealm",
117 124 "ProxyHandler",
118 125 "quote",
119 126 "Request",
120 127 "splitattr",
121 128 "splitpasswd",
122 129 "splitport",
123 130 "splituser",
124 131 "unquote",
125 132 "url2pathname",
126 133 "urlopen",
127 134 ))
128 135 import urllib.error
129 136 _alias(urlerr, urllib.error, (
130 137 "HTTPError",
131 138 "URLError",
132 139 ))
133 140
134 141 try:
135 142 xrange
136 143 except NameError:
137 144 import builtins
138 145 builtins.xrange = range
@@ -1,2854 +1,2855
1 1 # util.py - Mercurial utility functions and platform specific implementations
2 2 #
3 3 # Copyright 2005 K. Thananchayan <thananck@yahoo.com>
4 4 # Copyright 2005-2007 Matt Mackall <mpm@selenic.com>
5 5 # Copyright 2006 Vadim Gelfer <vadim.gelfer@gmail.com>
6 6 #
7 7 # This software may be used and distributed according to the terms of the
8 8 # GNU General Public License version 2 or any later version.
9 9
10 10 """Mercurial utility functions and platform specific implementations.
11 11
12 12 This contains helper routines that are independent of the SCM core and
13 13 hide platform-specific details from the core.
14 14 """
15 15
16 16 from __future__ import absolute_import
17 17
18 18 import bz2
19 19 import calendar
20 20 import collections
21 21 import datetime
22 22 import errno
23 23 import gc
24 24 import hashlib
25 25 import imp
26 26 import os
27 27 import re as remod
28 28 import shutil
29 29 import signal
30 30 import socket
31 31 import subprocess
32 32 import sys
33 33 import tempfile
34 34 import textwrap
35 35 import time
36 36 import traceback
37 37 import zlib
38 38
39 39 from . import (
40 40 encoding,
41 41 error,
42 42 i18n,
43 43 osutil,
44 44 parsers,
45 45 pycompat,
46 46 )
47 47
48 48 for attr in (
49 49 'empty',
50 50 'pickle',
51 51 'queue',
52 52 'urlerr',
53 53 'urlparse',
54 54 # we do import urlreq, but we do it outside the loop
55 55 #'urlreq',
56 56 'stringio',
57 'xmlrpclib',
57 58 ):
58 59 globals()[attr] = getattr(pycompat, attr)
59 60
60 61 # This line is to make pyflakes happy:
61 62 urlreq = pycompat.urlreq
62 63
63 64 if os.name == 'nt':
64 65 from . import windows as platform
65 66 else:
66 67 from . import posix as platform
67 68
68 69 _ = i18n._
69 70
70 71 cachestat = platform.cachestat
71 72 checkexec = platform.checkexec
72 73 checklink = platform.checklink
73 74 copymode = platform.copymode
74 75 executablepath = platform.executablepath
75 76 expandglobs = platform.expandglobs
76 77 explainexit = platform.explainexit
77 78 findexe = platform.findexe
78 79 gethgcmd = platform.gethgcmd
79 80 getuser = platform.getuser
80 81 getpid = os.getpid
81 82 groupmembers = platform.groupmembers
82 83 groupname = platform.groupname
83 84 hidewindow = platform.hidewindow
84 85 isexec = platform.isexec
85 86 isowner = platform.isowner
86 87 localpath = platform.localpath
87 88 lookupreg = platform.lookupreg
88 89 makedir = platform.makedir
89 90 nlinks = platform.nlinks
90 91 normpath = platform.normpath
91 92 normcase = platform.normcase
92 93 normcasespec = platform.normcasespec
93 94 normcasefallback = platform.normcasefallback
94 95 openhardlinks = platform.openhardlinks
95 96 oslink = platform.oslink
96 97 parsepatchoutput = platform.parsepatchoutput
97 98 pconvert = platform.pconvert
98 99 poll = platform.poll
99 100 popen = platform.popen
100 101 posixfile = platform.posixfile
101 102 quotecommand = platform.quotecommand
102 103 readpipe = platform.readpipe
103 104 rename = platform.rename
104 105 removedirs = platform.removedirs
105 106 samedevice = platform.samedevice
106 107 samefile = platform.samefile
107 108 samestat = platform.samestat
108 109 setbinary = platform.setbinary
109 110 setflags = platform.setflags
110 111 setsignalhandler = platform.setsignalhandler
111 112 shellquote = platform.shellquote
112 113 spawndetached = platform.spawndetached
113 114 split = platform.split
114 115 sshargs = platform.sshargs
115 116 statfiles = getattr(osutil, 'statfiles', platform.statfiles)
116 117 statisexec = platform.statisexec
117 118 statislink = platform.statislink
118 119 termwidth = platform.termwidth
119 120 testpid = platform.testpid
120 121 umask = platform.umask
121 122 unlink = platform.unlink
122 123 unlinkpath = platform.unlinkpath
123 124 username = platform.username
124 125
125 126 # Python compatibility
126 127
127 128 _notset = object()
128 129
129 130 # disable Python's problematic floating point timestamps (issue4836)
130 131 # (Python hypocritically says you shouldn't change this behavior in
131 132 # libraries, and sure enough Mercurial is not a library.)
132 133 os.stat_float_times(False)
133 134
134 135 def safehasattr(thing, attr):
135 136 return getattr(thing, attr, _notset) is not _notset
136 137
137 138 DIGESTS = {
138 139 'md5': hashlib.md5,
139 140 'sha1': hashlib.sha1,
140 141 'sha512': hashlib.sha512,
141 142 }
142 143 # List of digest types from strongest to weakest
143 144 DIGESTS_BY_STRENGTH = ['sha512', 'sha1', 'md5']
144 145
145 146 for k in DIGESTS_BY_STRENGTH:
146 147 assert k in DIGESTS
147 148
148 149 class digester(object):
149 150 """helper to compute digests.
150 151
151 152 This helper can be used to compute one or more digests given their name.
152 153
153 154 >>> d = digester(['md5', 'sha1'])
154 155 >>> d.update('foo')
155 156 >>> [k for k in sorted(d)]
156 157 ['md5', 'sha1']
157 158 >>> d['md5']
158 159 'acbd18db4cc2f85cedef654fccc4a4d8'
159 160 >>> d['sha1']
160 161 '0beec7b5ea3f0fdbc95d0dd47f3c5bc275da8a33'
161 162 >>> digester.preferred(['md5', 'sha1'])
162 163 'sha1'
163 164 """
164 165
165 166 def __init__(self, digests, s=''):
166 167 self._hashes = {}
167 168 for k in digests:
168 169 if k not in DIGESTS:
169 170 raise Abort(_('unknown digest type: %s') % k)
170 171 self._hashes[k] = DIGESTS[k]()
171 172 if s:
172 173 self.update(s)
173 174
174 175 def update(self, data):
175 176 for h in self._hashes.values():
176 177 h.update(data)
177 178
178 179 def __getitem__(self, key):
179 180 if key not in DIGESTS:
180 181             raise Abort(_('unknown digest type: %s') % key)
181 182 return self._hashes[key].hexdigest()
182 183
183 184 def __iter__(self):
184 185 return iter(self._hashes)
185 186
186 187 @staticmethod
187 188 def preferred(supported):
188 189 """returns the strongest digest type in both supported and DIGESTS."""
189 190
190 191 for k in DIGESTS_BY_STRENGTH:
191 192 if k in supported:
192 193 return k
193 194 return None
194 195
195 196 class digestchecker(object):
196 197 """file handle wrapper that additionally checks content against a given
197 198 size and digests.
198 199
199 200 d = digestchecker(fh, size, {'md5': '...'})
200 201
201 202 When multiple digests are given, all of them are validated.
202 203 """
203 204
204 205 def __init__(self, fh, size, digests):
205 206 self._fh = fh
206 207 self._size = size
207 208 self._got = 0
208 209 self._digests = dict(digests)
209 210 self._digester = digester(self._digests.keys())
210 211
211 212 def read(self, length=-1):
212 213 content = self._fh.read(length)
213 214 self._digester.update(content)
214 215 self._got += len(content)
215 216 return content
216 217
217 218 def validate(self):
218 219 if self._size != self._got:
219 220 raise Abort(_('size mismatch: expected %d, got %d') %
220 221 (self._size, self._got))
221 222 for k, v in self._digests.items():
222 223 if v != self._digester[k]:
223 224 # i18n: first parameter is a digest name
224 225 raise Abort(_('%s mismatch: expected %s, got %s') %
225 226 (k, v, self._digester[k]))
226 227
227 228 try:
228 229 buffer = buffer
229 230 except NameError:
230 231 if sys.version_info[0] < 3:
231 232 def buffer(sliceable, offset=0):
232 233 return sliceable[offset:]
233 234 else:
234 235 def buffer(sliceable, offset=0):
235 236 return memoryview(sliceable)[offset:]
236 237
237 238 closefds = os.name == 'posix'
238 239
239 240 _chunksize = 4096
240 241
241 242 class bufferedinputpipe(object):
242 243 """a manually buffered input pipe
243 244
244 245 Python will not let us use buffered IO and lazy reading with 'polling' at
245 246 the same time. We cannot probe the buffer state and select will not detect
246 247 that data are ready to read if they are already buffered.
247 248
248 249 This class let us work around that by implementing its own buffering
249 250 (allowing efficient readline) while offering a way to know if the buffer is
250 251 empty from the output (allowing collaboration of the buffer with polling).
251 252
252 253 This class lives in the 'util' module because it makes use of the 'os'
253 254 module from the python stdlib.
254 255 """
255 256
256 257 def __init__(self, input):
257 258 self._input = input
258 259 self._buffer = []
259 260 self._eof = False
260 261 self._lenbuf = 0
261 262
262 263 @property
263 264 def hasbuffer(self):
264 265         """True if any data is currently buffered
265 266
266 267         This will be used externally as a pre-step for polling IO. If there
267 268         is already buffered data then no polling should be set in place."""
268 269 return bool(self._buffer)
269 270
270 271 @property
271 272 def closed(self):
272 273 return self._input.closed
273 274
274 275 def fileno(self):
275 276 return self._input.fileno()
276 277
277 278 def close(self):
278 279 return self._input.close()
279 280
280 281 def read(self, size):
281 282 while (not self._eof) and (self._lenbuf < size):
282 283 self._fillbuffer()
283 284 return self._frombuffer(size)
284 285
285 286 def readline(self, *args, **kwargs):
286 287 if 1 < len(self._buffer):
287 288 # this should not happen because both read and readline end with a
288 289             # _frombuffer call that collapses it.
289 290 self._buffer = [''.join(self._buffer)]
290 291 self._lenbuf = len(self._buffer[0])
291 292 lfi = -1
292 293 if self._buffer:
293 294 lfi = self._buffer[-1].find('\n')
294 295 while (not self._eof) and lfi < 0:
295 296 self._fillbuffer()
296 297 if self._buffer:
297 298 lfi = self._buffer[-1].find('\n')
298 299 size = lfi + 1
299 300 if lfi < 0: # end of file
300 301 size = self._lenbuf
301 302 elif 1 < len(self._buffer):
302 303 # we need to take previous chunks into account
303 304 size += self._lenbuf - len(self._buffer[-1])
304 305 return self._frombuffer(size)
305 306
306 307 def _frombuffer(self, size):
307 308 """return at most 'size' data from the buffer
308 309
309 310 The data are removed from the buffer."""
310 311 if size == 0 or not self._buffer:
311 312 return ''
312 313 buf = self._buffer[0]
313 314 if 1 < len(self._buffer):
314 315 buf = ''.join(self._buffer)
315 316
316 317 data = buf[:size]
317 318 buf = buf[len(data):]
318 319 if buf:
319 320 self._buffer = [buf]
320 321 self._lenbuf = len(buf)
321 322 else:
322 323 self._buffer = []
323 324 self._lenbuf = 0
324 325 return data
325 326
326 327 def _fillbuffer(self):
327 328 """read data to the buffer"""
328 329 data = os.read(self._input.fileno(), _chunksize)
329 330 if not data:
330 331 self._eof = True
331 332 else:
332 333 self._lenbuf += len(data)
333 334 self._buffer.append(data)
334 335
335 336 def popen2(cmd, env=None, newlines=False):
336 337 # Setting bufsize to -1 lets the system decide the buffer size.
337 338 # The default for bufsize is 0, meaning unbuffered. This leads to
338 339 # poor performance on Mac OS X: http://bugs.python.org/issue4194
339 340 p = subprocess.Popen(cmd, shell=True, bufsize=-1,
340 341 close_fds=closefds,
341 342 stdin=subprocess.PIPE, stdout=subprocess.PIPE,
342 343 universal_newlines=newlines,
343 344 env=env)
344 345 return p.stdin, p.stdout
345 346
346 347 def popen3(cmd, env=None, newlines=False):
347 348 stdin, stdout, stderr, p = popen4(cmd, env, newlines)
348 349 return stdin, stdout, stderr
349 350
350 351 def popen4(cmd, env=None, newlines=False, bufsize=-1):
351 352 p = subprocess.Popen(cmd, shell=True, bufsize=bufsize,
352 353 close_fds=closefds,
353 354 stdin=subprocess.PIPE, stdout=subprocess.PIPE,
354 355 stderr=subprocess.PIPE,
355 356 universal_newlines=newlines,
356 357 env=env)
357 358 return p.stdin, p.stdout, p.stderr, p
358 359
359 360 def version():
360 361 """Return version information if available."""
361 362 try:
362 363 from . import __version__
363 364 return __version__.version
364 365 except ImportError:
365 366 return 'unknown'
366 367
367 368 def versiontuple(v=None, n=4):
368 369 """Parses a Mercurial version string into an N-tuple.
369 370
370 371 The version string to be parsed is specified with the ``v`` argument.
371 372 If it isn't defined, the current Mercurial version string will be parsed.
372 373
373 374 ``n`` can be 2, 3, or 4. Here is how some version strings map to
374 375 returned values:
375 376
376 377 >>> v = '3.6.1+190-df9b73d2d444'
377 378 >>> versiontuple(v, 2)
378 379 (3, 6)
379 380 >>> versiontuple(v, 3)
380 381 (3, 6, 1)
381 382 >>> versiontuple(v, 4)
382 383 (3, 6, 1, '190-df9b73d2d444')
383 384
384 385 >>> versiontuple('3.6.1+190-df9b73d2d444+20151118')
385 386 (3, 6, 1, '190-df9b73d2d444+20151118')
386 387
387 388 >>> v = '3.6'
388 389 >>> versiontuple(v, 2)
389 390 (3, 6)
390 391 >>> versiontuple(v, 3)
391 392 (3, 6, None)
392 393 >>> versiontuple(v, 4)
393 394 (3, 6, None, None)
394 395 """
395 396 if not v:
396 397 v = version()
397 398 parts = v.split('+', 1)
398 399 if len(parts) == 1:
399 400 vparts, extra = parts[0], None
400 401 else:
401 402 vparts, extra = parts
402 403
403 404 vints = []
404 405 for i in vparts.split('.'):
405 406 try:
406 407 vints.append(int(i))
407 408 except ValueError:
408 409 break
409 410 # (3, 6) -> (3, 6, None)
410 411 while len(vints) < 3:
411 412 vints.append(None)
412 413
413 414 if n == 2:
414 415 return (vints[0], vints[1])
415 416 if n == 3:
416 417 return (vints[0], vints[1], vints[2])
417 418 if n == 4:
418 419 return (vints[0], vints[1], vints[2], extra)
419 420
420 421 # used by parsedate
421 422 defaultdateformats = (
422 423 '%Y-%m-%d %H:%M:%S',
423 424 '%Y-%m-%d %I:%M:%S%p',
424 425 '%Y-%m-%d %H:%M',
425 426 '%Y-%m-%d %I:%M%p',
426 427 '%Y-%m-%d',
427 428 '%m-%d',
428 429 '%m/%d',
429 430 '%m/%d/%y',
430 431 '%m/%d/%Y',
431 432 '%a %b %d %H:%M:%S %Y',
432 433 '%a %b %d %I:%M:%S%p %Y',
433 434 '%a, %d %b %Y %H:%M:%S', # GNU coreutils "/bin/date --rfc-2822"
434 435 '%b %d %H:%M:%S %Y',
435 436 '%b %d %I:%M:%S%p %Y',
436 437 '%b %d %H:%M:%S',
437 438 '%b %d %I:%M:%S%p',
438 439 '%b %d %H:%M',
439 440 '%b %d %I:%M%p',
440 441 '%b %d %Y',
441 442 '%b %d',
442 443 '%H:%M:%S',
443 444 '%I:%M:%S%p',
444 445 '%H:%M',
445 446 '%I:%M%p',
446 447 )
447 448
448 449 extendeddateformats = defaultdateformats + (
449 450 "%Y",
450 451 "%Y-%m",
451 452 "%b",
452 453 "%b %Y",
453 454 )
454 455
455 456 def cachefunc(func):
456 457 '''cache the result of function calls'''
457 458 # XXX doesn't handle keywords args
458 459 if func.__code__.co_argcount == 0:
459 460 cache = []
460 461 def f():
461 462 if len(cache) == 0:
462 463 cache.append(func())
463 464 return cache[0]
464 465 return f
465 466 cache = {}
466 467 if func.__code__.co_argcount == 1:
467 468 # we gain a small amount of time because
468 469 # we don't need to pack/unpack the list
469 470 def f(arg):
470 471 if arg not in cache:
471 472 cache[arg] = func(arg)
472 473 return cache[arg]
473 474 else:
474 475 def f(*args):
475 476 if args not in cache:
476 477 cache[args] = func(*args)
477 478 return cache[args]
478 479
479 480 return f
480 481
481 482 class sortdict(dict):
482 483     '''a simple dictionary that preserves insertion order'''
483 484 def __init__(self, data=None):
484 485 self._list = []
485 486 if data:
486 487 self.update(data)
487 488 def copy(self):
488 489 return sortdict(self)
489 490 def __setitem__(self, key, val):
490 491 if key in self:
491 492 self._list.remove(key)
492 493 self._list.append(key)
493 494 dict.__setitem__(self, key, val)
494 495 def __iter__(self):
495 496 return self._list.__iter__()
496 497 def update(self, src):
497 498 if isinstance(src, dict):
498 499 src = src.iteritems()
499 500 for k, v in src:
500 501 self[k] = v
501 502 def clear(self):
502 503 dict.clear(self)
503 504 self._list = []
504 505 def items(self):
505 506 return [(k, self[k]) for k in self._list]
506 507 def __delitem__(self, key):
507 508 dict.__delitem__(self, key)
508 509 self._list.remove(key)
509 510 def pop(self, key, *args, **kwargs):
510 511         value = dict.pop(self, key, *args, **kwargs)
511 512         try:
512 513             self._list.remove(key)
513 514         except ValueError:
514 515             pass
515 516         return value
515 516 def keys(self):
516 517 return self._list
517 518 def iterkeys(self):
518 519 return self._list.__iter__()
519 520 def iteritems(self):
520 521 for k in self._list:
521 522 yield k, self[k]
522 523 def insert(self, index, key, val):
523 524 self._list.insert(index, key)
524 525 dict.__setitem__(self, key, val)
525 526
526 527 class _lrucachenode(object):
527 528 """A node in a doubly linked list.
528 529
529 530 Holds a reference to nodes on either side as well as a key-value
530 531 pair for the dictionary entry.
531 532 """
532 533 __slots__ = ('next', 'prev', 'key', 'value')
533 534
534 535 def __init__(self):
535 536 self.next = None
536 537 self.prev = None
537 538
538 539 self.key = _notset
539 540 self.value = None
540 541
541 542 def markempty(self):
542 543 """Mark the node as emptied."""
543 544 self.key = _notset
544 545
545 546 class lrucachedict(object):
546 547 """Dict that caches most recent accesses and sets.
547 548
548 549 The dict consists of an actual backing dict - indexed by original
549 550 key - and a doubly linked circular list defining the order of entries in
550 551 the cache.
551 552
552 553 The head node is the newest entry in the cache. If the cache is full,
553 554 we recycle head.prev and make it the new head. Cache accesses result in
554 555 the node being moved to before the existing head and being marked as the
555 556 new head node.
556 557 """
557 558 def __init__(self, max):
558 559 self._cache = {}
559 560
560 561 self._head = head = _lrucachenode()
561 562 head.prev = head
562 563 head.next = head
563 564 self._size = 1
564 565 self._capacity = max
565 566
566 567 def __len__(self):
567 568 return len(self._cache)
568 569
569 570 def __contains__(self, k):
570 571 return k in self._cache
571 572
572 573 def __iter__(self):
573 574 # We don't have to iterate in cache order, but why not.
574 575 n = self._head
575 576 for i in range(len(self._cache)):
576 577 yield n.key
577 578 n = n.next
578 579
579 580 def __getitem__(self, k):
580 581 node = self._cache[k]
581 582 self._movetohead(node)
582 583 return node.value
583 584
584 585 def __setitem__(self, k, v):
585 586 node = self._cache.get(k)
586 587 # Replace existing value and mark as newest.
587 588 if node is not None:
588 589 node.value = v
589 590 self._movetohead(node)
590 591 return
591 592
592 593 if self._size < self._capacity:
593 594 node = self._addcapacity()
594 595 else:
595 596 # Grab the last/oldest item.
596 597 node = self._head.prev
597 598
598 599 # At capacity. Kill the old entry.
599 600 if node.key is not _notset:
600 601 del self._cache[node.key]
601 602
602 603 node.key = k
603 604 node.value = v
604 605 self._cache[k] = node
605 606 # And mark it as newest entry. No need to adjust order since it
606 607 # is already self._head.prev.
607 608 self._head = node
608 609
609 610 def __delitem__(self, k):
610 611 node = self._cache.pop(k)
611 612 node.markempty()
612 613
613 614 # Temporarily mark as newest item before re-adjusting head to make
614 615 # this node the oldest item.
615 616 self._movetohead(node)
616 617 self._head = node.next
617 618
618 619 # Additional dict methods.
619 620
620 621 def get(self, k, default=None):
621 622 try:
622 623             return self._cache[k].value
623 624 except KeyError:
624 625 return default
625 626
626 627 def clear(self):
627 628 n = self._head
628 629 while n.key is not _notset:
629 630 n.markempty()
630 631 n = n.next
631 632
632 633 self._cache.clear()
633 634
634 635 def copy(self):
635 636 result = lrucachedict(self._capacity)
636 637 n = self._head.prev
637 638 # Iterate in oldest-to-newest order, so the copy has the right ordering
638 639 for i in range(len(self._cache)):
639 640 result[n.key] = n.value
640 641 n = n.prev
641 642 return result
642 643
643 644 def _movetohead(self, node):
644 645 """Mark a node as the newest, making it the new head.
645 646
646 647 When a node is accessed, it becomes the freshest entry in the LRU
647 648 list, which is denoted by self._head.
648 649
649 650 Visually, let's make ``N`` the new head node (* denotes head):
650 651
651 652 previous/oldest <-> head <-> next/next newest
652 653
653 654 ----<->--- A* ---<->-----
654 655 | |
655 656 E <-> D <-> N <-> C <-> B
656 657
657 658 To:
658 659
659 660 ----<->--- N* ---<->-----
660 661 | |
661 662 E <-> D <-> C <-> B <-> A
662 663
663 664 This requires the following moves:
664 665
665 666 C.next = D (node.prev.next = node.next)
666 667 D.prev = C (node.next.prev = node.prev)
667 668 E.next = N (head.prev.next = node)
668 669 N.prev = E (node.prev = head.prev)
669 670 N.next = A (node.next = head)
670 671 A.prev = N (head.prev = node)
671 672 """
672 673 head = self._head
673 674 # C.next = D
674 675 node.prev.next = node.next
675 676 # D.prev = C
676 677 node.next.prev = node.prev
677 678 # N.prev = E
678 679 node.prev = head.prev
679 680 # N.next = A
680 681 # It is tempting to do just "head" here, however if node is
681 682 # adjacent to head, this will do bad things.
682 683 node.next = head.prev.next
683 684 # E.next = N
684 685 node.next.prev = node
685 686 # A.prev = N
686 687 node.prev.next = node
687 688
688 689 self._head = node
689 690
690 691 def _addcapacity(self):
691 692 """Add a node to the circular linked list.
692 693
693 694 The new node is inserted before the head node.
694 695 """
695 696 head = self._head
696 697 node = _lrucachenode()
697 698 head.prev.next = node
698 699 node.prev = head.prev
699 700 node.next = head
700 701 head.prev = node
701 702 self._size += 1
702 703 return node
703 704
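The eviction discipline described in the class docstring (reads and writes mark an entry newest; inserts beyond capacity recycle the oldest node) can be sketched with a tiny Python 3 stand-in built on collections.OrderedDict. `MiniLRU` is hypothetical and not part of Mercurial; it only mirrors the ordering semantics, not the linked-list implementation:

```python
from collections import OrderedDict

class MiniLRU:
    """Toy LRU with the same discipline as util.lrucachedict:
    reads and writes refresh an entry; inserts past capacity
    evict the oldest entry."""
    def __init__(self, capacity):
        self._d = OrderedDict()
        self._capacity = capacity
    def __getitem__(self, k):
        self._d.move_to_end(k)          # access refreshes the entry
        return self._d[k]
    def __setitem__(self, k, v):
        if k in self._d:
            self._d.move_to_end(k)      # overwrite also refreshes
        elif len(self._d) >= self._capacity:
            self._d.popitem(last=False)  # evict the oldest entry
        self._d[k] = v
    def __contains__(self, k):
        return k in self._d

c = MiniLRU(2)
c['a'] = 1
c['b'] = 2
_ = c['a']          # 'a' is now the newest entry
c['c'] = 3          # evicts 'b', the oldest
```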
704 705 def lrucachefunc(func):
705 706 '''cache most recent results of function calls'''
706 707 cache = {}
707 708 order = collections.deque()
708 709 if func.__code__.co_argcount == 1:
709 710 def f(arg):
710 711 if arg not in cache:
711 712 if len(cache) > 20:
712 713 del cache[order.popleft()]
713 714 cache[arg] = func(arg)
714 715 else:
715 716 order.remove(arg)
716 717 order.append(arg)
717 718 return cache[arg]
718 719 else:
719 720 def f(*args):
720 721 if args not in cache:
721 722 if len(cache) > 20:
722 723 del cache[order.popleft()]
723 724 cache[args] = func(*args)
724 725 else:
725 726 order.remove(args)
726 727 order.append(args)
727 728 return cache[args]
728 729
729 730 return f
730 731
731 732 class propertycache(object):
    def __init__(self, func):
        self.func = func
        self.name = func.__name__
    def __get__(self, obj, type=None):
        result = self.func(obj)
        self.cachevalue(obj, result)
        return result

    def cachevalue(self, obj, value):
        # __dict__ assignment required to bypass __setattr__ (eg: repoview)
        obj.__dict__[self.name] = value

def pipefilter(s, cmd):
    '''filter string S through command CMD, returning its output'''
    p = subprocess.Popen(cmd, shell=True, close_fds=closefds,
                         stdin=subprocess.PIPE, stdout=subprocess.PIPE)
    pout, perr = p.communicate(s)
    return pout

def tempfilter(s, cmd):
    '''filter string S through a pair of temporary files with CMD.
    CMD is used as a template to create the real command to be run,
    with the strings INFILE and OUTFILE replaced by the real names of
    the temporary files generated.'''
    inname, outname = None, None
    try:
        infd, inname = tempfile.mkstemp(prefix='hg-filter-in-')
        fp = os.fdopen(infd, 'wb')
        fp.write(s)
        fp.close()
        outfd, outname = tempfile.mkstemp(prefix='hg-filter-out-')
        os.close(outfd)
        cmd = cmd.replace('INFILE', inname)
        cmd = cmd.replace('OUTFILE', outname)
        code = os.system(cmd)
        if sys.platform == 'OpenVMS' and code & 1:
            code = 0
        if code:
            raise Abort(_("command '%s' failed: %s") %
                        (cmd, explainexit(code)))
        return readfile(outname)
    finally:
        try:
            if inname:
                os.unlink(inname)
        except OSError:
            pass
        try:
            if outname:
                os.unlink(outname)
        except OSError:
            pass

filtertable = {
    'tempfile:': tempfilter,
    'pipe:': pipefilter,
    }

def filter(s, cmd):
    "filter a string through a command that transforms its input to its output"
    for name, fn in filtertable.iteritems():
        if cmd.startswith(name):
            return fn(s, cmd[len(name):].lstrip())
    return pipefilter(s, cmd)

def binary(s):
    """return true if a string is binary data"""
    return bool(s and '\0' in s)

def increasingchunks(source, min=1024, max=65536):
    '''return no less than min bytes per chunk while data remains,
    doubling min after each chunk until it reaches max'''
    def log2(x):
        if not x:
            return 0
        i = 0
        while x:
            x >>= 1
            i += 1
        return i - 1

    buf = []
    blen = 0
    for chunk in source:
        buf.append(chunk)
        blen += len(chunk)
        if blen >= min:
            if min < max:
                min = min << 1
                nmin = 1 << log2(blen)
                if nmin > min:
                    min = nmin
                if min > max:
                    min = max
            yield ''.join(buf)
            blen = 0
            buf = []
    if buf:
        yield ''.join(buf)
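The buffering strategy above can be exercised with a small standalone sketch (the name `increasing_chunks` and the tiny thresholds are illustrative, not Mercurial's API): chunks are accumulated until at least `min_size` bytes are buffered, and the threshold roughly doubles after each yield, capped at `max_size`.

```python
# Standalone sketch of increasingchunks(): yield at least min_size bytes
# per chunk, growing the threshold toward max_size as data flows.
def increasing_chunks(source, min_size=4, max_size=16):
    buf, blen = [], 0
    for chunk in source:
        buf.append(chunk)
        blen += len(chunk)
        if blen >= min_size:
            if min_size < max_size:
                # double the threshold, or jump to the largest power of
                # two not exceeding the buffered length, capped at max_size
                min_size = min(max_size, max(min_size * 2,
                                             1 << (blen.bit_length() - 1)))
            yield b''.join(buf)
            buf, blen = [], 0
    if buf:
        yield b''.join(buf)

chunks = list(increasing_chunks([b'ab'] * 8))
# early chunks are small; later chunks must meet the grown threshold
```

The point of the growing threshold is to keep per-chunk overhead low for large streams while still delivering the first data promptly.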

Abort = error.Abort

def always(fn):
    return True

def never(fn):
    return False

def nogc(func):
    """disable garbage collector

    Python's garbage collector triggers a GC each time a certain number of
    container objects (the number being defined by gc.get_threshold()) are
    allocated even when marked not to be tracked by the collector. Tracking has
    no effect on when GCs are triggered, only on what objects the GC looks
    into. As a workaround, disable GC while building complex (huge)
    containers.

    This garbage collector issue has been fixed in 2.7.
    """
    def wrapper(*args, **kwargs):
        gcenabled = gc.isenabled()
        gc.disable()
        try:
            return func(*args, **kwargs)
        finally:
            if gcenabled:
                gc.enable()
    return wrapper
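A self-contained sketch of the decorator pattern above (mirroring `nogc`, but independent of this module) shows the collector being switched off only for the duration of the decorated call and restored afterwards:

```python
import gc

# Minimal sketch of the nogc decorator: disable the collector while the
# wrapped function runs, restoring its previous state on exit.
def nogc(func):
    def wrapper(*args, **kwargs):
        gcenabled = gc.isenabled()
        gc.disable()
        try:
            return func(*args, **kwargs)
        finally:
            if gcenabled:
                gc.enable()
    return wrapper

@nogc
def build_big_dict(n):
    # the collector stays off while the container is populated
    assert not gc.isenabled()
    return {i: str(i) for i in range(n)}

d = build_big_dict(1000)
```

Because the previous state is saved and restored in a `finally` block, nesting such calls or raising from the wrapped function leaves the collector as it was found.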

def pathto(root, n1, n2):
    '''return the relative path from one place to another.
    root should use os.sep to separate directories
    n1 should use os.sep to separate directories
    n2 should use "/" to separate directories
    returns an os.sep-separated path.

    If n1 is a relative path, it's assumed it's
    relative to root.
    n2 should always be relative to root.
    '''
    if not n1:
        return localpath(n2)
    if os.path.isabs(n1):
        if os.path.splitdrive(root)[0] != os.path.splitdrive(n1)[0]:
            return os.path.join(root, localpath(n2))
        n2 = '/'.join((pconvert(root), n2))
    a, b = splitpath(n1), n2.split('/')
    a.reverse()
    b.reverse()
    while a and b and a[-1] == b[-1]:
        a.pop()
        b.pop()
    b.reverse()
    return os.sep.join((['..'] * len(a)) + b) or '.'
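The core idea of `pathto` — strip the common prefix of the two paths, then climb out of what remains of the first with `..` entries — can be sketched standalone (the name `relative_to` is illustrative and ignores the root/drive handling above):

```python
# Hedged sketch of pathto()'s prefix-stripping core, using '/'-separated
# paths only; drive letters and absolute-path handling are omitted.
def relative_to(n1, n2):
    a, b = n1.split('/'), n2.split('/')
    while a and b and a[0] == b[0]:
        a.pop(0)
        b.pop(0)
    return '/'.join(['..'] * len(a) + b) or '.'

print(relative_to('src/main', 'docs/index'))  # ../../docs/index
```

When both paths are identical the loop consumes everything and the function falls back to `'.'`, matching the `or '.'` at the end of the real implementation.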

def mainfrozen():
    """return True if we are a frozen executable.

    The code supports py2exe (most common, Windows only) and tools/freeze
    (portable, not much used).
    """
    return (safehasattr(sys, "frozen") or # new py2exe
            safehasattr(sys, "importers") or # old py2exe
            imp.is_frozen("__main__")) # tools/freeze

# the location of data files matching the source code
if mainfrozen() and getattr(sys, 'frozen', None) != 'macosx_app':
    # executable version (py2exe) doesn't support __file__
    datapath = os.path.dirname(sys.executable)
else:
    datapath = os.path.dirname(__file__)

i18n.setdatapath(datapath)

_hgexecutable = None

def hgexecutable():
    """return location of the 'hg' executable.

    Defaults to $HG or 'hg' in the search path.
    """
    if _hgexecutable is None:
        hg = os.environ.get('HG')
        mainmod = sys.modules['__main__']
        if hg:
            _sethgexecutable(hg)
        elif mainfrozen():
            if getattr(sys, 'frozen', None) == 'macosx_app':
                # Env variable set by py2app
                _sethgexecutable(os.environ['EXECUTABLEPATH'])
            else:
                _sethgexecutable(sys.executable)
        elif os.path.basename(getattr(mainmod, '__file__', '')) == 'hg':
            _sethgexecutable(mainmod.__file__)
        else:
            exe = findexe('hg') or os.path.basename(sys.argv[0])
            _sethgexecutable(exe)
    return _hgexecutable

def _sethgexecutable(path):
    """set location of the 'hg' executable"""
    global _hgexecutable
    _hgexecutable = path

def _isstdout(f):
    fileno = getattr(f, 'fileno', None)
    return fileno and fileno() == sys.__stdout__.fileno()

def system(cmd, environ=None, cwd=None, onerr=None, errprefix=None, out=None):
    '''enhanced shell command execution.
    run with environment maybe modified, maybe in different dir.

    if command fails and onerr is None, return status, else raise onerr
    object as exception.

    if out is specified, it is assumed to be a file-like object that has a
    write() method. stdout and stderr will be redirected to out.'''
    if environ is None:
        environ = {}
    try:
        sys.stdout.flush()
    except Exception:
        pass
    def py2shell(val):
        'convert python object into string that is useful to shell'
        if val is None or val is False:
            return '0'
        if val is True:
            return '1'
        return str(val)
    origcmd = cmd
    cmd = quotecommand(cmd)
    if sys.platform == 'plan9' and (sys.version_info[0] == 2
                                    and sys.version_info[1] < 7):
        # subprocess kludge to work around issues in half-baked Python
        # ports, notably bichued/python:
        if cwd is not None:
            os.chdir(cwd)
        rc = os.system(cmd)
    else:
        env = dict(os.environ)
        env.update((k, py2shell(v)) for k, v in environ.iteritems())
        env['HG'] = hgexecutable()
        if out is None or _isstdout(out):
            rc = subprocess.call(cmd, shell=True, close_fds=closefds,
                                 env=env, cwd=cwd)
        else:
            proc = subprocess.Popen(cmd, shell=True, close_fds=closefds,
                                    env=env, cwd=cwd, stdout=subprocess.PIPE,
                                    stderr=subprocess.STDOUT)
            while True:
                line = proc.stdout.readline()
                if not line:
                    break
                out.write(line)
            proc.wait()
            rc = proc.returncode
    if sys.platform == 'OpenVMS' and rc & 1:
        rc = 0
    if rc and onerr:
        errmsg = '%s %s' % (os.path.basename(origcmd.split(None, 1)[0]),
                            explainexit(rc)[0])
        if errprefix:
            errmsg = '%s: %s' % (errprefix, errmsg)
        raise onerr(errmsg)
    return rc

def checksignature(func):
    '''wrap a function with code to check for calling errors'''
    def check(*args, **kwargs):
        try:
            return func(*args, **kwargs)
        except TypeError:
            if len(traceback.extract_tb(sys.exc_info()[2])) == 1:
                raise error.SignatureError
            raise

    return check

def copyfile(src, dest, hardlink=False, copystat=False, checkambig=False):
    '''copy a file, preserving mode and optionally other stat info like
    atime/mtime

    checkambig argument is used with filestat, and is useful only if
    destination file is guarded by any lock (e.g. repo.lock or
    repo.wlock).

    copystat and checkambig should be exclusive.
    '''
    assert not (copystat and checkambig)
    oldstat = None
    if os.path.lexists(dest):
        if checkambig:
            oldstat = checkambig and filestat(dest)
        unlink(dest)
    # hardlinks are problematic on CIFS, quietly ignore this flag
    # until we find a way to work around it cleanly (issue4546)
    if False and hardlink:
        try:
            oslink(src, dest)
            return
        except (IOError, OSError):
            pass # fall back to normal copy
    if os.path.islink(src):
        os.symlink(os.readlink(src), dest)
        # copytime is ignored for symlinks, but in general copytime isn't needed
        # for them anyway
    else:
        try:
            shutil.copyfile(src, dest)
            if copystat:
                # copystat also copies mode
                shutil.copystat(src, dest)
            else:
                shutil.copymode(src, dest)
                if oldstat and oldstat.stat:
                    newstat = filestat(dest)
                    if newstat.isambig(oldstat):
                        # stat of copied file is ambiguous to original one
                        advanced = (oldstat.stat.st_mtime + 1) & 0x7fffffff
                        os.utime(dest, (advanced, advanced))
        except shutil.Error as inst:
            raise Abort(str(inst))

def copyfiles(src, dst, hardlink=None, progress=lambda t, pos: None):
    """Copy a directory tree using hardlinks if possible."""
    num = 0

    if hardlink is None:
        hardlink = (os.stat(src).st_dev ==
                    os.stat(os.path.dirname(dst)).st_dev)
    if hardlink:
        topic = _('linking')
    else:
        topic = _('copying')

    if os.path.isdir(src):
        os.mkdir(dst)
        for name, kind in osutil.listdir(src):
            srcname = os.path.join(src, name)
            dstname = os.path.join(dst, name)
            def nprog(t, pos):
                if pos is not None:
                    return progress(t, pos + num)
            hardlink, n = copyfiles(srcname, dstname, hardlink, progress=nprog)
            num += n
    else:
        if hardlink:
            try:
                oslink(src, dst)
            except (IOError, OSError):
                hardlink = False
                shutil.copy(src, dst)
        else:
            shutil.copy(src, dst)
        num += 1
        progress(topic, num)
    progress(topic, None)

    return hardlink, num

_winreservednames = '''con prn aux nul
    com1 com2 com3 com4 com5 com6 com7 com8 com9
    lpt1 lpt2 lpt3 lpt4 lpt5 lpt6 lpt7 lpt8 lpt9'''.split()
_winreservedchars = ':*?"<>|'
def checkwinfilename(path):
    r'''Check that the base-relative path is a valid filename on Windows.
    Returns None if the path is ok, or a UI string describing the problem.

    >>> checkwinfilename("just/a/normal/path")
    >>> checkwinfilename("foo/bar/con.xml")
    "filename contains 'con', which is reserved on Windows"
    >>> checkwinfilename("foo/con.xml/bar")
    "filename contains 'con', which is reserved on Windows"
    >>> checkwinfilename("foo/bar/xml.con")
    >>> checkwinfilename("foo/bar/AUX/bla.txt")
    "filename contains 'AUX', which is reserved on Windows"
    >>> checkwinfilename("foo/bar/bla:.txt")
    "filename contains ':', which is reserved on Windows"
    >>> checkwinfilename("foo/bar/b\07la.txt")
    "filename contains '\\x07', which is invalid on Windows"
    >>> checkwinfilename("foo/bar/bla ")
    "filename ends with ' ', which is not allowed on Windows"
    >>> checkwinfilename("../bar")
    >>> checkwinfilename("foo\\")
    "filename ends with '\\', which is invalid on Windows"
    >>> checkwinfilename("foo\\/bar")
    "directory name ends with '\\', which is invalid on Windows"
    '''
    if path.endswith('\\'):
        return _("filename ends with '\\', which is invalid on Windows")
    if '\\/' in path:
        return _("directory name ends with '\\', which is invalid on Windows")
    for n in path.replace('\\', '/').split('/'):
        if not n:
            continue
        for c in n:
            if c in _winreservedchars:
                return _("filename contains '%s', which is reserved "
                         "on Windows") % c
            if ord(c) <= 31:
                return _("filename contains %r, which is invalid "
                         "on Windows") % c
        base = n.split('.')[0]
        if base and base.lower() in _winreservednames:
            return _("filename contains '%s', which is reserved "
                     "on Windows") % base
        t = n[-1]
        if t in '. ' and n not in '..':
            return _("filename ends with '%s', which is not allowed "
                     "on Windows") % t

if os.name == 'nt':
    checkosfilename = checkwinfilename
else:
    checkosfilename = platform.checkosfilename

def makelock(info, pathname):
    try:
        return os.symlink(info, pathname)
    except OSError as why:
        if why.errno == errno.EEXIST:
            raise
    except AttributeError: # no symlink in os
        pass

    ld = os.open(pathname, os.O_CREAT | os.O_WRONLY | os.O_EXCL)
    os.write(ld, info)
    os.close(ld)

def readlock(pathname):
    try:
        return os.readlink(pathname)
    except OSError as why:
        if why.errno not in (errno.EINVAL, errno.ENOSYS):
            raise
    except AttributeError: # no symlink in os
        pass
    fp = posixfile(pathname)
    r = fp.read()
    fp.close()
    return r

def fstat(fp):
    '''stat file object that may not have fileno method.'''
    try:
        return os.fstat(fp.fileno())
    except AttributeError:
        return os.stat(fp.name)

# File system features

def checkcase(path):
    """
    Return true if the given path is on a case-sensitive filesystem

    Requires a path (like /foo/.hg) ending with a foldable final
    directory component.
    """
    s1 = os.lstat(path)
    d, b = os.path.split(path)
    b2 = b.upper()
    if b == b2:
        b2 = b.lower()
        if b == b2:
            return True # no evidence against case sensitivity
    p2 = os.path.join(d, b2)
    try:
        s2 = os.lstat(p2)
        if s2 == s1:
            return False
        return True
    except OSError:
        return True

try:
    import re2
    _re2 = None
except ImportError:
    _re2 = False

class _re(object):
    def _checkre2(self):
        global _re2
        try:
            # check if match works, see issue3964
            _re2 = bool(re2.match(r'\[([^\[]+)\]', '[ui]'))
        except ImportError:
            _re2 = False

    def compile(self, pat, flags=0):
        '''Compile a regular expression, using re2 if possible

        For best performance, use only re2-compatible regexp features. The
        only flags from the re module that are re2-compatible are
        IGNORECASE and MULTILINE.'''
        if _re2 is None:
            self._checkre2()
        if _re2 and (flags & ~(remod.IGNORECASE | remod.MULTILINE)) == 0:
            if flags & remod.IGNORECASE:
                pat = '(?i)' + pat
            if flags & remod.MULTILINE:
                pat = '(?m)' + pat
            try:
                return re2.compile(pat)
            except re2.error:
                pass
        return remod.compile(pat, flags)

    @propertycache
    def escape(self):
        '''Return the version of escape corresponding to self.compile.

        This is imperfect because whether re2 or re is used for a particular
        function depends on the flags, etc, but it's the best we can do.
        '''
        global _re2
        if _re2 is None:
            self._checkre2()
        if _re2:
            return re2.escape
        else:
            return remod.escape

re = _re()

_fspathcache = {}
def fspath(name, root):
    '''Get name in the case stored in the filesystem

    The name should be relative to root, and be normcase-ed for efficiency.

    Note that this function is unnecessary, and should not be
    called, for case-sensitive filesystems (simply because it's expensive).

    The root should be normcase-ed, too.
    '''
    def _makefspathcacheentry(dir):
        return dict((normcase(n), n) for n in os.listdir(dir))

    seps = os.sep
    if os.altsep:
        seps = seps + os.altsep
    # Protect backslashes. This gets silly very quickly.
    seps = seps.replace('\\', '\\\\')
    pattern = remod.compile(r'([^%s]+)|([%s]+)' % (seps, seps))
    dir = os.path.normpath(root)
    result = []
    for part, sep in pattern.findall(name):
        if sep:
            result.append(sep)
            continue

        if dir not in _fspathcache:
            _fspathcache[dir] = _makefspathcacheentry(dir)
        contents = _fspathcache[dir]

        found = contents.get(part)
        if not found:
            # retry "once per directory" per "dirstate.walk" which
            # may take place for each patch of "hg qpush", for example
            _fspathcache[dir] = contents = _makefspathcacheentry(dir)
            found = contents.get(part)

        result.append(found or part)
        dir = os.path.join(dir, part)

    return ''.join(result)

def checknlink(testfile):
    '''check whether hardlink count reporting works properly'''

    # testfile may be open, so we need a separate file for checking to
    # work around issue2543 (or testfile may get lost on Samba shares)
    f1 = testfile + ".hgtmp1"
    if os.path.lexists(f1):
        return False
    try:
        posixfile(f1, 'w').close()
    except IOError:
        return False

    f2 = testfile + ".hgtmp2"
    fd = None
    try:
        oslink(f1, f2)
        # nlinks() may behave differently for files on Windows shares if
        # the file is open.
        fd = posixfile(f2)
        return nlinks(f2) > 1
    except OSError:
        return False
    finally:
        if fd is not None:
            fd.close()
        for f in (f1, f2):
            try:
                os.unlink(f)
            except OSError:
                pass

def endswithsep(path):
    '''Check path ends with os.sep or os.altsep.'''
    return path.endswith(os.sep) or os.altsep and path.endswith(os.altsep)

def splitpath(path):
    '''Split path by os.sep.
    Note that this function does not use os.altsep because it is
    intended as an alternative to the simple "xxx.split(os.sep)".
    It is recommended to use os.path.normpath() before using this
    function if needed.'''
    return path.split(os.sep)

def gui():
    '''Are we running in a GUI?'''
    if sys.platform == 'darwin':
        if 'SSH_CONNECTION' in os.environ:
            # handle SSH access to a box where the user is logged in
            return False
        elif getattr(osutil, 'isgui', None):
            # check if a CoreGraphics session is available
            return osutil.isgui()
        else:
            # pure build; use a safe default
            return True
    else:
        return os.name == "nt" or os.environ.get("DISPLAY")

def mktempcopy(name, emptyok=False, createmode=None):
    """Create a temporary file with the same contents from name

    The permission bits are copied from the original file.

    If the temporary file is going to be truncated immediately, you
    can use emptyok=True as an optimization.

    Returns the name of the temporary file.
    """
    d, fn = os.path.split(name)
    fd, temp = tempfile.mkstemp(prefix='.%s-' % fn, dir=d)
    os.close(fd)
    # Temporary files are created with mode 0600, which is usually not
    # what we want. If the original file already exists, just copy
    # its mode. Otherwise, manually obey umask.
    copymode(name, temp, createmode)
    if emptyok:
        return temp
    try:
        try:
            ifp = posixfile(name, "rb")
        except IOError as inst:
            if inst.errno == errno.ENOENT:
                return temp
            if not getattr(inst, 'filename', None):
                inst.filename = name
            raise
        ofp = posixfile(temp, "wb")
        for chunk in filechunkiter(ifp):
            ofp.write(chunk)
        ifp.close()
        ofp.close()
    except: # re-raises
        try: os.unlink(temp)
        except OSError: pass
        raise
    return temp

class filestat(object):
    """help to exactly detect change of a file

    'stat' attribute is result of 'os.stat()' if specified 'path'
    exists. Otherwise, it is None. This can avoid preparative
    'exists()' examination on client side of this class.
    """
    def __init__(self, path):
        try:
            self.stat = os.stat(path)
        except OSError as err:
            if err.errno != errno.ENOENT:
                raise
            self.stat = None

    __hash__ = object.__hash__

    def __eq__(self, old):
        try:
            # if ambiguity between stat of new and old file is
            # avoided, comparison of size, ctime and mtime is enough
            # to exactly detect change of a file regardless of platform
            return (self.stat.st_size == old.stat.st_size and
                    self.stat.st_ctime == old.stat.st_ctime and
                    self.stat.st_mtime == old.stat.st_mtime)
        except AttributeError:
            return False

    def isambig(self, old):
        """Examine whether new (= self) stat is ambiguous against old one

        "S[N]" below means stat of a file at N-th change:

        - S[n-1].ctime < S[n].ctime: can detect change of a file
        - S[n-1].ctime == S[n].ctime
          - S[n-1].ctime < S[n].mtime: means natural advancing (*1)
          - S[n-1].ctime == S[n].mtime: is ambiguous (*2)
          - S[n-1].ctime > S[n].mtime: never occurs naturally (don't care)
        - S[n-1].ctime > S[n].ctime: never occurs naturally (don't care)

        Case (*2) above means that a file was changed twice or more at
        same time in sec (= S[n-1].ctime), and comparison of timestamp
        is ambiguous.

        Base idea to avoid such ambiguity is "advance mtime 1 sec, if
        timestamp is ambiguous".

        But advancing mtime only in case (*2) doesn't work as
        expected, because naturally advanced S[n].mtime in case (*1)
        might be equal to manually advanced S[n-1 or earlier].mtime.

        Therefore, all "S[n-1].ctime == S[n].ctime" cases should be
        treated as ambiguous regardless of mtime, so that a change is
        not overlooked because of a conflict between such mtimes.

        Advancing mtime "if isambig(oldstat)" ensures "S[n-1].mtime !=
        S[n].mtime", even if size of a file isn't changed.
        """
        try:
            return (self.stat.st_ctime == old.stat.st_ctime)
        except AttributeError:
            return False

    def __ne__(self, other):
        return not self == other

class atomictempfile(object):
    '''writable file object that atomically updates a file

    All writes will go to a temporary copy of the original file. Call
    close() when you are done writing, and atomictempfile will rename
    the temporary copy to the original name, making the changes
    visible. If the object is destroyed without being closed, all your
    writes are discarded.

    checkambig argument of constructor is used with filestat, and is
    useful only if target file is guarded by any lock (e.g. repo.lock
    or repo.wlock).
    '''
    def __init__(self, name, mode='w+b', createmode=None, checkambig=False):
        self.__name = name # permanent name
        self._tempname = mktempcopy(name, emptyok=('w' in mode),
                                    createmode=createmode)
        self._fp = posixfile(self._tempname, mode)
        self._checkambig = checkambig

        # delegated methods
        self.read = self._fp.read
        self.write = self._fp.write
        self.seek = self._fp.seek
        self.tell = self._fp.tell
        self.fileno = self._fp.fileno

    def close(self):
        if not self._fp.closed:
            self._fp.close()
            filename = localpath(self.__name)
            oldstat = self._checkambig and filestat(filename)
            if oldstat and oldstat.stat:
                rename(self._tempname, filename)
                newstat = filestat(filename)
                if newstat.isambig(oldstat):
                    # stat of changed file is ambiguous to original one
                    advanced = (oldstat.stat.st_mtime + 1) & 0x7fffffff
                    os.utime(filename, (advanced, advanced))
            else:
                rename(self._tempname, filename)

    def discard(self):
        if not self._fp.closed:
            try:
                os.unlink(self._tempname)
            except OSError:
                pass
            self._fp.close()

    def __del__(self):
        if safehasattr(self, '_fp'): # constructor actually did something
            self.discard()

    def __enter__(self):
        return self

    def __exit__(self, exctype, excvalue, traceback):
        if exctype is not None:
            self.discard()
        else:
            self.close()

def makedirs(name, mode=None, notindexed=False):
    """recursive directory creation with parent mode inheritance

    Newly created directories are marked as "not to be indexed by
    the content indexing service", if ``notindexed`` is specified
    for "write" mode access.
    """
    try:
        makedir(name, notindexed)
    except OSError as err:
        if err.errno == errno.EEXIST:
            return
        if err.errno != errno.ENOENT or not name:
            raise
        parent = os.path.dirname(os.path.abspath(name))
        if parent == name:
            raise
        makedirs(parent, mode, notindexed)
        try:
            makedir(name, notindexed)
        except OSError as err:
            # Catch EEXIST to handle races
            if err.errno == errno.EEXIST:
                return
            raise
    if mode is not None:
        os.chmod(name, mode)

def readfile(path):
    with open(path, 'rb') as fp:
        return fp.read()

def writefile(path, text):
    with open(path, 'wb') as fp:
        fp.write(text)

def appendfile(path, text):
    with open(path, 'ab') as fp:
        fp.write(text)

class chunkbuffer(object):
    """Allow arbitrary sized chunks of data to be efficiently read from an
    iterator over chunks of arbitrary size."""

    def __init__(self, in_iter):
        """in_iter is the iterator that's iterating over the input chunks.
        targetsize is how big a buffer to try to maintain."""
        def splitbig(chunks):
            for chunk in chunks:
                if len(chunk) > 2**20:
                    pos = 0
                    while pos < len(chunk):
                        end = pos + 2 ** 18
                        yield chunk[pos:end]
                        pos = end
                else:
                    yield chunk
        self.iter = splitbig(in_iter)
        self._queue = collections.deque()
        self._chunkoffset = 0

    def read(self, l=None):
        """Read L bytes of data from the iterator of chunks of data.
        Returns less than L bytes if the iterator runs dry.

        If size parameter is omitted, read everything"""
        if l is None:
            return ''.join(self.iter)

        left = l
        buf = []
        queue = self._queue
        while left > 0:
            # refill the queue
            if not queue:
                target = 2**18
                for chunk in self.iter:
                    queue.append(chunk)
                    target -= len(chunk)
                    if target <= 0:
                        break
                if not queue:
                    break

            # The easy way to do this would be to queue.popleft(), modify the
            # chunk (if necessary), then queue.appendleft(). However, for cases
            # where we read partial chunk content, this incurs 2 dequeue
            # mutations and creates a new str for the remaining chunk in the
            # queue. Our code below avoids this overhead.

            chunk = queue[0]
            chunkl = len(chunk)
            offset = self._chunkoffset

            # Use full chunk.
            if offset == 0 and left >= chunkl:
                left -= chunkl
                queue.popleft()
                buf.append(chunk)
                # self._chunkoffset remains at 0.
                continue

            chunkremaining = chunkl - offset

            # Use all of unconsumed part of chunk.
            if left >= chunkremaining:
                left -= chunkremaining
                queue.popleft()
                # offset == 0 is enabled by block above, so this won't merely
                # copy via ``chunk[0:]``.
                buf.append(chunk[offset:])
                self._chunkoffset = 0

            # Partial chunk needed.
            else:
                buf.append(chunk[offset:offset + left])
                self._chunkoffset += left
                left -= chunkremaining

        return ''.join(buf)
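The offset-tracking read loop above can be condensed into a toy standalone version (class name `ChunkBuffer` and byte-string chunks are illustrative; the `splitbig` pre-splitting and queue refill target are omitted for brevity):

```python
from collections import deque

# Toy sketch of chunkbuffer: satisfy fixed-size reads from an iterator of
# arbitrarily sized chunks, tracking an offset into the head chunk instead
# of re-slicing and re-queueing it on every partial read.
class ChunkBuffer(object):
    def __init__(self, in_iter):
        self._iter = iter(in_iter)
        self._queue = deque()
        self._offset = 0  # consumed bytes of the head chunk

    def read(self, l):
        buf, left = [], l
        while left > 0:
            if not self._queue:
                try:
                    self._queue.append(next(self._iter))
                except StopIteration:
                    break  # iterator ran dry; return a short read
            chunk = self._queue[0]
            take = min(left, len(chunk) - self._offset)
            buf.append(chunk[self._offset:self._offset + take])
            self._offset += take
            left -= take
            if self._offset == len(chunk):
                self._queue.popleft()
                self._offset = 0
        return b''.join(buf)

cb = ChunkBuffer([b'hello', b' ', b'world'])
```

Successive reads straddle chunk boundaries transparently: `cb.read(3)` yields `b'hel'`, the next read continues from offset 3 of the first chunk.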

def filechunkiter(f, size=65536, limit=None):
    """Create a generator that produces the data in the file size
    (default 65536) bytes at a time, up to optional limit (default is
    to read all data). Chunks may be less than size bytes if the
    chunk is the last chunk in the file, or the file is a socket or
    some other type of file that sometimes reads less data than is
    requested."""
    assert size >= 0
    assert limit is None or limit >= 0
    while True:
        if limit is None:
            nbytes = size
        else:
            nbytes = min(limit, size)
        s = nbytes and f.read(nbytes)
        if not s:
            break
        if limit:
            limit -= len(s)
        yield s

def makedate(timestamp=None):
    '''Return a unix timestamp (or the current time) as a (unixtime,
    offset) tuple based off the local timezone.'''
    if timestamp is None:
        timestamp = time.time()
    if timestamp < 0:
        hint = _("check your clock")
        raise Abort(_("negative timestamp: %d") % timestamp, hint=hint)
    delta = (datetime.datetime.utcfromtimestamp(timestamp) -
             datetime.datetime.fromtimestamp(timestamp))
    tz = delta.days * 86400 + delta.seconds
    return timestamp, tz
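The offset convention above (UTC minus local time, so zones east of UTC get a negative offset) can be checked with a standalone sketch (`make_date` is an illustrative name, and the validation of negative timestamps is omitted):

```python
import datetime
import time

# Sketch of makedate(): the offset is UTC wall-clock minus local
# wall-clock for the same instant, in seconds.
def make_date(timestamp=None):
    if timestamp is None:
        timestamp = time.time()
    delta = (datetime.datetime.utcfromtimestamp(timestamp) -
             datetime.datetime.fromtimestamp(timestamp))
    return timestamp, delta.days * 86400 + delta.seconds

ts, tz = make_date(0)
```

For a host in UTC the offset is 0; for UTC+2 it is -7200. Note that `tz` depends on the host's timezone database at the given instant, which is why the value cannot be asserted exactly in portable code.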
1683 1684
1684 1685 def datestr(date=None, format='%a %b %d %H:%M:%S %Y %1%2'):
1685 1686 """represent a (unixtime, offset) tuple as a localized time.
1686 1687 unixtime is seconds since the epoch, and offset is the time zone's
1687 1688 number of seconds away from UTC.
1688 1689
1689 1690 >>> datestr((0, 0))
1690 1691 'Thu Jan 01 00:00:00 1970 +0000'
1691 1692 >>> datestr((42, 0))
1692 1693 'Thu Jan 01 00:00:42 1970 +0000'
1693 1694 >>> datestr((-42, 0))
1694 1695 'Wed Dec 31 23:59:18 1969 +0000'
1695 1696 >>> datestr((0x7fffffff, 0))
1696 1697 'Tue Jan 19 03:14:07 2038 +0000'
1697 1698 >>> datestr((-0x80000000, 0))
1698 1699 'Fri Dec 13 20:45:52 1901 +0000'
1699 1700 """
1700 1701 t, tz = date or makedate()
1701 1702 if "%1" in format or "%2" in format or "%z" in format:
1702 1703 sign = (tz > 0) and "-" or "+"
1703 1704 minutes = abs(tz) // 60
1704 1705 q, r = divmod(minutes, 60)
1705 1706 format = format.replace("%z", "%1%2")
1706 1707 format = format.replace("%1", "%c%02d" % (sign, q))
1707 1708 format = format.replace("%2", "%02d" % r)
1708 1709 d = t - tz
1709 1710 if d > 0x7fffffff:
1710 1711 d = 0x7fffffff
1711 1712 elif d < -0x80000000:
1712 1713 d = -0x80000000
1713 1714 # Never use time.gmtime() and datetime.datetime.fromtimestamp()
1714 1715 # because they use the gmtime() system call which is buggy on Windows
1715 1716 # for negative values.
1716 1717 t = datetime.datetime(1970, 1, 1) + datetime.timedelta(seconds=d)
1717 1718 s = t.strftime(format)
1718 1719 return s
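The "%1%2" substitution above can be exercised in isolation. This sketch (the helper name `tzstr` is mine, not Mercurial's) reproduces just the sign/hour/minute arithmetic for an offset in seconds:

```python
def tzstr(tz):
    # Same arithmetic as datestr(): a positive offset means west of UTC,
    # so it renders with a '-' sign; hours and minutes are zero-padded.
    sign = (tz > 0) and "-" or "+"
    minutes = abs(tz) // 60
    q, r = divmod(minutes, 60)
    return "%c%02d%02d" % (sign, q, r)

print(tzstr(0))       # UTC
print(tzstr(-19800))  # UTC+05:30 (e.g. IST)
print(tzstr(18000))   # UTC-05:00 (e.g. EST)
```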

def shortdate(date=None):
    """turn (timestamp, tzoff) tuple into ISO 8601 date."""
    return datestr(date, format='%Y-%m-%d')

def parsetimezone(tz):
    """parse a timezone string and return an offset integer"""
    if tz[0] in "+-" and len(tz) == 5 and tz[1:].isdigit():
        sign = (tz[0] == "+") and 1 or -1
        hours = int(tz[1:3])
        minutes = int(tz[3:5])
        return -sign * (hours * 60 + minutes) * 60
    if tz == "GMT" or tz == "UTC":
        return 0
    return None
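A standalone copy of the parser shows the sign convention: a `+HHMM` string east of UTC yields a negative offset (seconds west of UTC), matching the `unixtime = localunixtime + offset` convention used by strdate below.

```python
def parsetimezone(tz):
    # Copy of the function above: "+HHMM"/"-HHMM" become an offset in
    # seconds west of UTC; "GMT"/"UTC" are zero; anything else is None.
    if tz[0] in "+-" and len(tz) == 5 and tz[1:].isdigit():
        sign = (tz[0] == "+") and 1 or -1
        hours = int(tz[1:3])
        minutes = int(tz[3:5])
        return -sign * (hours * 60 + minutes) * 60
    if tz == "GMT" or tz == "UTC":
        return 0
    return None
```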

def strdate(string, format, defaults=[]):
    """parse a localized time string and return a (unixtime, offset) tuple.
    if the string cannot be parsed, ValueError is raised."""
    # NOTE: unixtime = localunixtime + offset
    offset, date = parsetimezone(string.split()[-1]), string
    if offset is not None:
        date = " ".join(string.split()[:-1])

    # add missing elements from defaults
    usenow = False # default to using biased defaults
    for part in ("S", "M", "HI", "d", "mb", "yY"): # decreasing specificity
        found = [True for p in part if ("%" + p) in format]
        if not found:
            date += "@" + defaults[part][usenow]
            format += "@%" + part[0]
        else:
            # We've found a specific time element; less specific time
            # elements are relative to today.
            usenow = True

    timetuple = time.strptime(date, format)
    localunixtime = int(calendar.timegm(timetuple))
    if offset is None:
        # local timezone
        unixtime = int(time.mktime(timetuple))
        offset = unixtime - localunixtime
    else:
        unixtime = localunixtime + offset
    return unixtime, offset

def parsedate(date, formats=None, bias=None):
    """parse a localized date/time and return a (unixtime, offset) tuple.

    The date may be a "unixtime offset" string or in one of the specified
    formats. If the date already is a (unixtime, offset) tuple, it is returned.

    >>> parsedate(' today ') == parsedate(\
                                  datetime.date.today().strftime('%b %d'))
    True
    >>> parsedate('yesterday ') == parsedate((datetime.date.today() -\
                                              datetime.timedelta(days=1)\
                                             ).strftime('%b %d'))
    True
    >>> now, tz = makedate()
    >>> strnow, strtz = parsedate('now')
    >>> (strnow - now) < 1
    True
    >>> tz == strtz
    True
    """
    if bias is None:
        bias = {}
    if not date:
        return 0, 0
    if isinstance(date, tuple) and len(date) == 2:
        return date
    if not formats:
        formats = defaultdateformats
    date = date.strip()

    if date == 'now' or date == _('now'):
        return makedate()
    if date == 'today' or date == _('today'):
        date = datetime.date.today().strftime('%b %d')
    elif date == 'yesterday' or date == _('yesterday'):
        date = (datetime.date.today() -
                datetime.timedelta(days=1)).strftime('%b %d')

    try:
        when, offset = map(int, date.split(' '))
    except ValueError:
        # fill out defaults
        now = makedate()
        defaults = {}
        for part in ("d", "mb", "yY", "HI", "M", "S"):
            # this piece is for rounding the specific end of unknowns
            b = bias.get(part)
            if b is None:
                if part[0] in "HMS":
                    b = "00"
                else:
                    b = "0"

            # this piece is for matching the generic end to today's date
            n = datestr(now, "%" + part[0])

            defaults[part] = (b, n)

        for format in formats:
            try:
                when, offset = strdate(date, format, defaults)
            except (ValueError, OverflowError):
                pass
            else:
                break
        else:
            raise Abort(_('invalid date: %r') % date)
    # validate explicit (probably user-specified) date and
    # time zone offset. values must fit in signed 32 bits for
    # current 32-bit linux runtimes. timezones go from UTC-12
    # to UTC+14
    if when < -0x80000000 or when > 0x7fffffff:
        raise Abort(_('date exceeds 32 bits: %d') % when)
    if offset < -50400 or offset > 43200:
        raise Abort(_('impossible time zone offset: %d') % offset)
    return when, offset

def matchdate(date):
    """Return a function that matches a given date match specifier

    Formats include:

    '{date}' match a given date to the accuracy provided

    '<{date}' on or before a given date

    '>{date}' on or after a given date

    >>> p1 = parsedate("10:29:59")
    >>> p2 = parsedate("10:30:00")
    >>> p3 = parsedate("10:30:59")
    >>> p4 = parsedate("10:31:00")
    >>> p5 = parsedate("Sep 15 10:30:00 1999")
    >>> f = matchdate("10:30")
    >>> f(p1[0])
    False
    >>> f(p2[0])
    True
    >>> f(p3[0])
    True
    >>> f(p4[0])
    False
    >>> f(p5[0])
    False
    """

    def lower(date):
        d = {'mb': "1", 'd': "1"}
        return parsedate(date, extendeddateformats, d)[0]

    def upper(date):
        d = {'mb': "12", 'HI': "23", 'M': "59", 'S': "59"}
        for days in ("31", "30", "29"):
            try:
                d["d"] = days
                return parsedate(date, extendeddateformats, d)[0]
            except Abort:
                pass
        d["d"] = "28"
        return parsedate(date, extendeddateformats, d)[0]

    date = date.strip()

    if not date:
        raise Abort(_("dates cannot consist entirely of whitespace"))
    elif date[0] == "<":
        if not date[1:]:
            raise Abort(_("invalid day spec, use '<DATE'"))
        when = upper(date[1:])
        return lambda x: x <= when
    elif date[0] == ">":
        if not date[1:]:
            raise Abort(_("invalid day spec, use '>DATE'"))
        when = lower(date[1:])
        return lambda x: x >= when
    elif date[0] == "-":
        try:
            days = int(date[1:])
        except ValueError:
            raise Abort(_("invalid day spec: %s") % date[1:])
        if days < 0:
            raise Abort(_('%s must be nonnegative (see "hg help dates")')
                        % date[1:])
        when = makedate()[0] - days * 3600 * 24
        return lambda x: x >= when
    elif " to " in date:
        a, b = date.split(" to ")
        start, stop = lower(a), upper(b)
        return lambda x: x >= start and x <= stop
    else:
        start, stop = lower(date), upper(date)
        return lambda x: x >= start and x <= stop

def stringmatcher(pattern):
    """
    accepts a string, possibly starting with 're:' or 'literal:' prefix.
    returns the matcher name, pattern, and matcher function.
    missing or unknown prefixes are treated as literal matches.

    helper for tests:
    >>> def test(pattern, *tests):
    ...     kind, pattern, matcher = stringmatcher(pattern)
    ...     return (kind, pattern, [bool(matcher(t)) for t in tests])

    exact matching (no prefix):
    >>> test('abcdefg', 'abc', 'def', 'abcdefg')
    ('literal', 'abcdefg', [False, False, True])

    regex matching ('re:' prefix)
    >>> test('re:a.+b', 'nomatch', 'fooadef', 'fooadefbar')
    ('re', 'a.+b', [False, False, True])

    force exact matches ('literal:' prefix)
    >>> test('literal:re:foobar', 'foobar', 're:foobar')
    ('literal', 're:foobar', [False, True])

    unknown prefixes are ignored and treated as literals
    >>> test('foo:bar', 'foo', 'bar', 'foo:bar')
    ('literal', 'foo:bar', [False, False, True])
    """
    if pattern.startswith('re:'):
        pattern = pattern[3:]
        try:
            regex = remod.compile(pattern)
        except remod.error as e:
            raise error.ParseError(_('invalid regular expression: %s')
                                   % e)
        return 're', pattern, regex.search
    elif pattern.startswith('literal:'):
        pattern = pattern[8:]
    return 'literal', pattern, pattern.__eq__
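The prefix dispatch can be sketched standalone. This copy substitutes the stdlib `re` for the module's `remod` alias and a plain `ValueError` for `error.ParseError`; otherwise the logic is the same:

```python
import re

def stringmatcher(pattern):
    # 're:' yields a regex search function; 'literal:' strips the prefix;
    # anything else falls through to exact string equality.
    if pattern.startswith('re:'):
        pattern = pattern[3:]
        try:
            regex = re.compile(pattern)
        except re.error as e:
            raise ValueError('invalid regular expression: %s' % e)
        return 're', pattern, regex.search
    elif pattern.startswith('literal:'):
        pattern = pattern[8:]
    return 'literal', pattern, pattern.__eq__

kind, pat, matcher = stringmatcher('re:a.+b')
```

The returned matcher is just `regex.search` or `pattern.__eq__`, so callers treat both kinds uniformly as a predicate.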

def shortuser(user):
    """Return a short representation of a user name or email address."""
    f = user.find('@')
    if f >= 0:
        user = user[:f]
    f = user.find('<')
    if f >= 0:
        user = user[f + 1:]
    f = user.find(' ')
    if f >= 0:
        user = user[:f]
    f = user.find('.')
    if f >= 0:
        user = user[:f]
    return user

def emailuser(user):
    """Return the user portion of an email address."""
    f = user.find('@')
    if f >= 0:
        user = user[:f]
    f = user.find('<')
    if f >= 0:
        user = user[f + 1:]
    return user

def email(author):
    '''get email of author.'''
    r = author.find('>')
    if r == -1:
        r = None
    return author[author.find('<') + 1:r]
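These string helpers are pure functions, so they can be checked directly. The copies below are verbatim; note how email() degrades gracefully when no angle brackets are present (`find` returns -1, so the slice starts at 0 and runs to the end):

```python
def shortuser(user):
    # Copy of the function above: strip the domain, any "Name <" prefix,
    # and anything after the first space or dot.
    for sep in '@', '<', ' ', '.':
        f = user.find(sep)
        if f >= 0:
            user = user[f + 1:] if sep == '<' else user[:f]
    return user

def email(author):
    # Copy of the function above: take what sits between '<' and '>',
    # or the whole string when there are no angle brackets.
    r = author.find('>')
    if r == -1:
        r = None
    return author[author.find('<') + 1:r]
```

The loop in this sketch is a compaction of the repeated find/slice pairs above; behavior is identical for the common cases.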

def ellipsis(text, maxlength=400):
    """Trim string to at most maxlength (default: 400) columns in display."""
    return encoding.trim(text, maxlength, ellipsis='...')

def unitcountfn(*unittable):
    '''return a function that renders a readable count of some quantity'''

    def go(count):
        for multiplier, divisor, format in unittable:
            if count >= divisor * multiplier:
                return format % (count / float(divisor))
        return unittable[-1][2] % count

    return go

bytecount = unitcountfn(
    (100, 1 << 30, _('%.0f GB')),
    (10, 1 << 30, _('%.1f GB')),
    (1, 1 << 30, _('%.2f GB')),
    (100, 1 << 20, _('%.0f MB')),
    (10, 1 << 20, _('%.1f MB')),
    (1, 1 << 20, _('%.2f MB')),
    (100, 1 << 10, _('%.0f KB')),
    (10, 1 << 10, _('%.1f KB')),
    (1, 1 << 10, _('%.2f KB')),
    (1, 1, _('%.0f bytes')),
    )
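The table-driven formatter is easy to verify standalone; this copy drops the `_()` translation wrapper, which is the only non-local dependency:

```python
def unitcountfn(*unittable):
    # Pick the first (multiplier, divisor, format) row whose threshold
    # the count reaches; fall back to the last row for tiny values.
    def go(count):
        for multiplier, divisor, format in unittable:
            if count >= divisor * multiplier:
                return format % (count / float(divisor))
        return unittable[-1][2] % count
    return go

bytecount = unitcountfn(
    (100, 1 << 30, '%.0f GB'),
    (10, 1 << 30, '%.1f GB'),
    (1, 1 << 30, '%.2f GB'),
    (100, 1 << 20, '%.0f MB'),
    (10, 1 << 20, '%.1f MB'),
    (1, 1 << 20, '%.2f MB'),
    (100, 1 << 10, '%.0f KB'),
    (10, 1 << 10, '%.1f KB'),
    (1, 1 << 10, '%.2f KB'),
    (1, 1, '%.0f bytes'),
)
```

The row ordering encodes precision: the larger the count relative to a unit, the fewer decimal places are shown.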

def uirepr(s):
    # Avoid double backslash in Windows path repr()
    return repr(s).replace('\\\\', '\\')

# delay import of textwrap
def MBTextWrapper(**kwargs):
    class tw(textwrap.TextWrapper):
        """
        Extend TextWrapper for width-awareness.

        Neither the number of 'bytes' in any encoding nor the number of
        'characters' is appropriate for calculating the terminal columns
        of a given string.

        The original TextWrapper implementation uses the built-in 'len()'
        directly, so overriding is needed to use the width information of
        each character.

        In addition, characters classified as 'ambiguous' width are
        treated as wide in East Asian locales, but as narrow elsewhere.

        This requires a per-use decision to determine the width of such
        characters.
        """
        def _cutdown(self, ucstr, space_left):
            l = 0
            colwidth = encoding.ucolwidth
            for i in xrange(len(ucstr)):
                l += colwidth(ucstr[i])
                if space_left < l:
                    return (ucstr[:i], ucstr[i:])
            return ucstr, ''

        # overriding of base class
        def _handle_long_word(self, reversed_chunks, cur_line, cur_len, width):
            space_left = max(width - cur_len, 1)

            if self.break_long_words:
                cut, res = self._cutdown(reversed_chunks[-1], space_left)
                cur_line.append(cut)
                reversed_chunks[-1] = res
            elif not cur_line:
                cur_line.append(reversed_chunks.pop())

        # this overriding code is imported from TextWrapper of Python 2.6
        # to calculate columns of string by 'encoding.ucolwidth()'
        def _wrap_chunks(self, chunks):
            colwidth = encoding.ucolwidth

            lines = []
            if self.width <= 0:
                raise ValueError("invalid width %r (must be > 0)" % self.width)

            # Arrange in reverse order so items can be efficiently popped
            # from a stack of chunks.
            chunks.reverse()

            while chunks:

                # Start the list of chunks that will make up the current line.
                # cur_len is just the length of all the chunks in cur_line.
                cur_line = []
                cur_len = 0

                # Figure out which static string will prefix this line.
                if lines:
                    indent = self.subsequent_indent
                else:
                    indent = self.initial_indent

                # Maximum width for this line.
                width = self.width - len(indent)

                # First chunk on line is whitespace -- drop it, unless this
                # is the very beginning of the text (i.e. no lines started yet).
                if self.drop_whitespace and chunks[-1].strip() == '' and lines:
                    del chunks[-1]

                while chunks:
                    l = colwidth(chunks[-1])

                    # Can at least squeeze this chunk onto the current line.
                    if cur_len + l <= width:
                        cur_line.append(chunks.pop())
                        cur_len += l

                    # Nope, this line is full.
                    else:
                        break

                # The current line is full, and the next chunk is too big to
                # fit on *any* line (not just this one).
                if chunks and colwidth(chunks[-1]) > width:
                    self._handle_long_word(chunks, cur_line, cur_len, width)

                # If the last chunk on this line is all whitespace, drop it.
                if (self.drop_whitespace and
                    cur_line and cur_line[-1].strip() == ''):
                    del cur_line[-1]

                # Convert current line back to a string and store it in list
                # of all lines (return value).
                if cur_line:
                    lines.append(indent + ''.join(cur_line))

            return lines

    global MBTextWrapper
    MBTextWrapper = tw
    return tw(**kwargs)

def wrap(line, width, initindent='', hangindent=''):
    maxindent = max(len(hangindent), len(initindent))
    if width <= maxindent:
        # adjust for weird terminal size
        width = max(78, maxindent + 1)
    line = line.decode(encoding.encoding, encoding.encodingmode)
    initindent = initindent.decode(encoding.encoding, encoding.encodingmode)
    hangindent = hangindent.decode(encoding.encoding, encoding.encodingmode)
    wrapper = MBTextWrapper(width=width,
                            initial_indent=initindent,
                            subsequent_indent=hangindent)
    return wrapper.fill(line).encode(encoding.encoding)

def iterlines(iterator):
    for chunk in iterator:
        for line in chunk.splitlines():
            yield line

def expandpath(path):
    return os.path.expanduser(os.path.expandvars(path))

def hgcmd():
    """Return the command used to execute current hg

    This is different from hgexecutable() because on Windows we want
    to avoid things that open new shell windows, like batch files, so
    we get either the python call or the current executable.
    """
    if mainfrozen():
        if getattr(sys, 'frozen', None) == 'macosx_app':
            # Env variable set by py2app
            return [os.environ['EXECUTABLEPATH']]
        else:
            return [sys.executable]
    return gethgcmd()

def rundetached(args, condfn):
    """Execute the argument list in a detached process.

    condfn is a callable which is called repeatedly and should return
    True once the child process is known to have started successfully.
    At this point, the child process PID is returned. If the child
    process fails to start or finishes before condfn() evaluates to
    True, return -1.
    """
    # The Windows case is easier because the child process is either
    # successfully starting and validating the condition or exiting
    # on failure. We just poll on its PID. On Unix, if the child
    # process fails to start, it will be left in a zombie state until
    # the parent waits on it, which we cannot do since we expect a
    # long-running process on success. Instead we listen for SIGCHLD
    # telling us our child process terminated.
    terminated = set()
    def handler(signum, frame):
        terminated.add(os.wait())
    prevhandler = None
    SIGCHLD = getattr(signal, 'SIGCHLD', None)
    if SIGCHLD is not None:
        prevhandler = signal.signal(SIGCHLD, handler)
    try:
        pid = spawndetached(args)
        while not condfn():
            if ((pid in terminated or not testpid(pid))
                and not condfn()):
                return -1
            time.sleep(0.1)
        return pid
    finally:
        if prevhandler is not None:
            signal.signal(signal.SIGCHLD, prevhandler)

def interpolate(prefix, mapping, s, fn=None, escape_prefix=False):
    """Return the result of interpolating items in the mapping into string s.

    prefix is a single character string, or a two character string with
    a backslash as the first character if the prefix needs to be escaped in
    a regular expression.

    fn is an optional function that will be applied to the replacement text
    just before replacement.

    escape_prefix is an optional flag that allows using doubled prefix for
    its escaping.
    """
    fn = fn or (lambda s: s)
    patterns = '|'.join(mapping.keys())
    if escape_prefix:
        patterns += '|' + prefix
        if len(prefix) > 1:
            prefix_char = prefix[1:]
        else:
            prefix_char = prefix
        mapping[prefix_char] = prefix_char
    r = remod.compile(r'%s(%s)' % (prefix, patterns))
    return r.sub(lambda x: fn(mapping[x.group()[1:]]), s)
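A standalone copy of interpolate (stdlib `re` in place of the module's `remod` alias) shows both the plain and the escaped-prefix modes; with escape_prefix a doubled prefix collapses to a single literal prefix character:

```python
import re

def interpolate(prefix, mapping, s, fn=None, escape_prefix=False):
    # Build one alternation of all keys; each match '<prefix><key>' is
    # replaced by mapping[key], optionally passed through fn first.
    fn = fn or (lambda s: s)
    patterns = '|'.join(mapping.keys())
    if escape_prefix:
        patterns += '|' + prefix
        # a two-character prefix like r'\$' maps the bare '$' to itself
        prefix_char = prefix[1:] if len(prefix) > 1 else prefix
        mapping[prefix_char] = prefix_char
    r = re.compile(r'%s(%s)' % (prefix, patterns))
    return r.sub(lambda x: fn(mapping[x.group()[1:]]), s)
```

Note that keys are inserted into the pattern unescaped, so callers are expected to use regex-safe key names.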

def getport(port):
    """Return the port for a given network service.

    If port is an integer, it's returned as is. If it's a string, it's
    looked up using socket.getservbyname(). If there's no matching
    service, error.Abort is raised.
    """
    try:
        return int(port)
    except ValueError:
        pass

    try:
        return socket.getservbyname(port)
    except socket.error:
        raise Abort(_("no port number associated with service '%s'") % port)

_booleans = {'1': True, 'yes': True, 'true': True, 'on': True, 'always': True,
             '0': False, 'no': False, 'false': False, 'off': False,
             'never': False}

def parsebool(s):
    """Parse s into a boolean.

    If s is not a valid boolean, returns None.
    """
    return _booleans.get(s.lower(), None)
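The boolean parser is a plain case-insensitive table lookup; copied standalone:

```python
_booleans = {'1': True, 'yes': True, 'true': True, 'on': True, 'always': True,
             '0': False, 'no': False, 'false': False, 'off': False,
             'never': False}

def parsebool(s):
    # Same lookup as above: case-insensitive, None for anything unknown,
    # which lets callers distinguish "unset/invalid" from False.
    return _booleans.get(s.lower(), None)
```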

_hexdig = '0123456789ABCDEFabcdef'
_hextochr = dict((a + b, chr(int(a + b, 16)))
                 for a in _hexdig for b in _hexdig)

def _urlunquote(s):
    """Decode HTTP/HTML % encoding.

    >>> _urlunquote('abc%20def')
    'abc def'
    """
    res = s.split('%')
    # fastpath
    if len(res) == 1:
        return s
    s = res[0]
    for item in res[1:]:
        try:
            s += _hextochr[item[:2]] + item[2:]
        except KeyError:
            s += '%' + item
        except UnicodeDecodeError:
            s += unichr(int(item[:2], 16)) + item[2:]
    return s
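The decoder works by splitting on '%' and translating each leading two-hex-digit pair through a precomputed table. This copy omits the Python 2-only UnicodeDecodeError/unichr branch (the public name `urlunquote` here is mine; the module's helper is private):

```python
_hexdig = '0123456789ABCDEFabcdef'
_hextochr = dict((a + b, chr(int(a + b, 16)))
                 for a in _hexdig for b in _hexdig)

def urlunquote(s):
    # Split on '%'; each remaining piece starts with a candidate escape.
    # Malformed escapes (missing or non-hex digits) are left untouched.
    res = s.split('%')
    if len(res) == 1:
        return s
    s = res[0]
    for item in res[1:]:
        try:
            s += _hextochr[item[:2]] + item[2:]
        except KeyError:
            s += '%' + item
    return s
```

Precomputing all 484 two-digit combinations (upper, lower, and mixed case) trades a small dict for avoiding an int() parse per escape.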

class url(object):
    r"""Reliable URL parser.

    This parses URLs and provides attributes for the following
    components:

    <scheme>://<user>:<passwd>@<host>:<port>/<path>?<query>#<fragment>

    Missing components are set to None. The only exception is
    fragment, which is set to '' if present but empty.

    If parsefragment is False, fragment is included in query. If
    parsequery is False, query is included in path. If both are
    False, both fragment and query are included in path.

    See http://www.ietf.org/rfc/rfc2396.txt for more information.

    Note that for backward compatibility reasons, bundle URLs do not
    take host names. That means 'bundle://../' has a path of '../'.

    Examples:

    >>> url('http://www.ietf.org/rfc/rfc2396.txt')
    <url scheme: 'http', host: 'www.ietf.org', path: 'rfc/rfc2396.txt'>
    >>> url('ssh://[::1]:2200//home/joe/repo')
    <url scheme: 'ssh', host: '[::1]', port: '2200', path: '/home/joe/repo'>
    >>> url('file:///home/joe/repo')
    <url scheme: 'file', path: '/home/joe/repo'>
    >>> url('file:///c:/temp/foo/')
    <url scheme: 'file', path: 'c:/temp/foo/'>
    >>> url('bundle:foo')
    <url scheme: 'bundle', path: 'foo'>
    >>> url('bundle://../foo')
    <url scheme: 'bundle', path: '../foo'>
    >>> url(r'c:\foo\bar')
    <url path: 'c:\\foo\\bar'>
    >>> url(r'\\blah\blah\blah')
    <url path: '\\\\blah\\blah\\blah'>
    >>> url(r'\\blah\blah\blah#baz')
    <url path: '\\\\blah\\blah\\blah', fragment: 'baz'>
    >>> url(r'file:///C:\users\me')
    <url scheme: 'file', path: 'C:\\users\\me'>

    Authentication credentials:

    >>> url('ssh://joe:xyz@x/repo')
    <url scheme: 'ssh', user: 'joe', passwd: 'xyz', host: 'x', path: 'repo'>
    >>> url('ssh://joe@x/repo')
    <url scheme: 'ssh', user: 'joe', host: 'x', path: 'repo'>

    Query strings and fragments:

    >>> url('http://host/a?b#c')
    <url scheme: 'http', host: 'host', path: 'a', query: 'b', fragment: 'c'>
    >>> url('http://host/a?b#c', parsequery=False, parsefragment=False)
    <url scheme: 'http', host: 'host', path: 'a?b#c'>
    """

    _safechars = "!~*'()+"
    _safepchars = "/!~*'()+:\\"
    _matchscheme = remod.compile(r'^[a-zA-Z0-9+.\-]+:').match

    def __init__(self, path, parsequery=True, parsefragment=True):
        # We slowly chomp away at path until we have only the path left
        self.scheme = self.user = self.passwd = self.host = None
        self.port = self.path = self.query = self.fragment = None
        self._localpath = True
        self._hostport = ''
        self._origpath = path

        if parsefragment and '#' in path:
            path, self.fragment = path.split('#', 1)
            if not path:
                path = None

        # special case for Windows drive letters and UNC paths
        if hasdriveletter(path) or path.startswith(r'\\'):
            self.path = path
            return

        # For compatibility reasons, we can't handle bundle paths as
        # normal URLs
        if path.startswith('bundle:'):
            self.scheme = 'bundle'
            path = path[7:]
            if path.startswith('//'):
                path = path[2:]
            self.path = path
            return

        if self._matchscheme(path):
            parts = path.split(':', 1)
            if parts[0]:
                self.scheme, path = parts
                self._localpath = False

        if not path:
            path = None
            if self._localpath:
                self.path = ''
                return
        else:
            if self._localpath:
                self.path = path
                return

            if parsequery and '?' in path:
                path, self.query = path.split('?', 1)
                if not path:
                    path = None
                if not self.query:
                    self.query = None

            # // is required to specify a host/authority
            if path and path.startswith('//'):
                parts = path[2:].split('/', 1)
                if len(parts) > 1:
                    self.host, path = parts
                else:
                    self.host = parts[0]
                    path = None
                if not self.host:
                    self.host = None
                    # path of file:///d is /d
                    # path of file:///d:/ is d:/, not /d:/
                    if path and not hasdriveletter(path):
                        path = '/' + path

            if self.host and '@' in self.host:
                self.user, self.host = self.host.rsplit('@', 1)
                if ':' in self.user:
                    self.user, self.passwd = self.user.split(':', 1)
                if not self.host:
                    self.host = None

            # Don't split on colons in IPv6 addresses without ports
            if (self.host and ':' in self.host and
                not (self.host.startswith('[') and self.host.endswith(']'))):
                self._hostport = self.host
                self.host, self.port = self.host.rsplit(':', 1)
                if not self.host:
                    self.host = None

            if (self.host and self.scheme == 'file' and
                self.host not in ('localhost', '127.0.0.1', '[::1]')):
                raise Abort(_('file:// URLs can only refer to localhost'))

        self.path = path

        # leave the query string escaped
        for a in ('user', 'passwd', 'host', 'port',
                  'path', 'fragment'):
            v = getattr(self, a)
            if v is not None:
                setattr(self, a, _urlunquote(v))

    def __repr__(self):
        attrs = []
        for a in ('scheme', 'user', 'passwd', 'host', 'port', 'path',
                  'query', 'fragment'):
            v = getattr(self, a)
            if v is not None:
                attrs.append('%s: %r' % (a, v))
        return '<url %s>' % ', '.join(attrs)

    def __str__(self):
        r"""Join the URL's components back into a URL string.

        Examples:

        >>> str(url('http://user:pw@host:80/c:/bob?fo:oo#ba:ar'))
        'http://user:pw@host:80/c:/bob?fo:oo#ba:ar'
        >>> str(url('http://user:pw@host:80/?foo=bar&baz=42'))
        'http://user:pw@host:80/?foo=bar&baz=42'
        >>> str(url('http://user:pw@host:80/?foo=bar%3dbaz'))
        'http://user:pw@host:80/?foo=bar%3dbaz'
        >>> str(url('ssh://user:pw@[::1]:2200//home/joe#'))
        'ssh://user:pw@[::1]:2200//home/joe#'
        >>> str(url('http://localhost:80//'))
        'http://localhost:80//'
        >>> str(url('http://localhost:80/'))
        'http://localhost:80/'
        >>> str(url('http://localhost:80'))
        'http://localhost:80/'
        >>> str(url('bundle:foo'))
        'bundle:foo'
        >>> str(url('bundle://../foo'))
        'bundle:../foo'
        >>> str(url('path'))
        'path'
        >>> str(url('file:///tmp/foo/bar'))
        'file:///tmp/foo/bar'
        >>> str(url('file:///c:/tmp/foo/bar'))
        'file:///c:/tmp/foo/bar'
        >>> print url(r'bundle:foo\bar')
        bundle:foo\bar
        >>> print url(r'file:///D:\data\hg')
        file:///D:\data\hg
        """
        if self._localpath:
            s = self.path
            if self.scheme == 'bundle':
                s = 'bundle:' + s
            if self.fragment:
                s += '#' + self.fragment
            return s

        s = self.scheme + ':'
        if self.user or self.passwd or self.host:
            s += '//'
        elif self.scheme and (not self.path or self.path.startswith('/')
                              or hasdriveletter(self.path)):
            s += '//'
            if hasdriveletter(self.path):
                s += '/'
        if self.user:
            s += urlreq.quote(self.user, safe=self._safechars)
        if self.passwd:
            s += ':' + urlreq.quote(self.passwd, safe=self._safechars)
        if self.user or self.passwd:
            s += '@'
        if self.host:
            if not (self.host.startswith('[') and self.host.endswith(']')):
                s += urlreq.quote(self.host)
            else:
                s += self.host
        if self.port:
            s += ':' + urlreq.quote(self.port)
        if self.host:
            s += '/'
        if self.path:
            # TODO: similar to the query string, we should not unescape the
            # path when we store it, the path might contain '%2f' = '/',
            # which we should *not* escape.
            s += urlreq.quote(self.path, safe=self._safepchars)
        if self.query:
            # we store the query in escaped form.
            s += '?' + self.query
        if self.fragment is not None:
            s += '#' + urlreq.quote(self.fragment, safe=self._safepchars)
        return s

    def authinfo(self):
        user, passwd = self.user, self.passwd
        try:
            self.user, self.passwd = None, None
            s = str(self)
        finally:
            self.user, self.passwd = user, passwd
        if not self.user:
            return (s, None)
        # authinfo[1] is passed to urllib2 password manager, and its
        # URIs must not contain credentials. The host is passed in the
        # URIs list because Python < 2.4.3 uses only that to search for
        # a password.
        return (s, (None, (s, self.host),
                    self.user, self.passwd or ''))

    def isabs(self):
        if self.scheme and self.scheme != 'file':
            return True # remote URL
        if hasdriveletter(self.path):
            return True # absolute for our purposes - can't be joined()
        if self.path.startswith(r'\\'):
            return True # Windows UNC path
        if self.path.startswith('/'):
            return True # POSIX-style
        return False

    def localpath(self):
        if self.scheme == 'file' or self.scheme == 'bundle':
            path = self.path or '/'
            # For Windows, we need to promote hosts containing drive
            # letters to paths with drive letters.
            if hasdriveletter(self._hostport):
                path = self._hostport + '/' + self.path
            elif (self.host is not None and self.path
                  and not hasdriveletter(path)):
                path = '/' + path
            return path
        return self._origpath

    def islocal(self):
        '''whether localpath will return something that posixfile can open'''
        return (not self.scheme or self.scheme == 'file'
                or self.scheme == 'bundle')
2560 2561
2561 2562 def hasscheme(path):
2562 2563 return bool(url(path).scheme)
2563 2564
2564 2565 def hasdriveletter(path):
2565 2566 return path and path[1:2] == ':' and path[0:1].isalpha()
2566 2567
2567 2568 def urllocalpath(path):
2568 2569 return url(path, parsequery=False, parsefragment=False).localpath()
2569 2570
2570 2571 def hidepassword(u):
2571 2572 '''hide user credentials in a url string'''
2572 2573 u = url(u)
2573 2574 if u.passwd:
2574 2575 u.passwd = '***'
2575 2576 return str(u)
2576 2577
2577 2578 def removeauth(u):
2578 2579 '''remove all authentication information from a url string'''
2579 2580 u = url(u)
2580 2581 u.user = u.passwd = None
2581 2582 return str(u)
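The hidepassword/removeauth helpers above rely on the url class defined earlier in this file. As an illustration only, the same idea can be sketched with the standard library's urllib.parse; the helper name is reused here for clarity and is not this file's actual implementation:

```python
from urllib.parse import urlsplit, urlunsplit

def hidepassword(u):
    # Standalone sketch of the credential-masking idea above, built on
    # urllib.parse instead of the url class from this file.
    parts = urlsplit(u)
    if parts.password is None:
        return u
    # replace only the password portion of the netloc with '***'
    netloc = parts.netloc.replace(':' + parts.password, ':***', 1)
    return urlunsplit(parts._replace(netloc=netloc))

print(hidepassword('http://bob:secret@example.com/repo'))
# http://bob:***@example.com/repo
print(hidepassword('http://example.com/repo'))  # unchanged, no password
```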
2582 2583
2583 2584 def isatty(fp):
2584 2585 try:
2585 2586 return fp.isatty()
2586 2587 except AttributeError:
2587 2588 return False
2588 2589
2589 2590 timecount = unitcountfn(
2590 2591 (1, 1e3, _('%.0f s')),
2591 2592 (100, 1, _('%.1f s')),
2592 2593 (10, 1, _('%.2f s')),
2593 2594 (1, 1, _('%.3f s')),
2594 2595 (100, 0.001, _('%.1f ms')),
2595 2596 (10, 0.001, _('%.2f ms')),
2596 2597 (1, 0.001, _('%.3f ms')),
2597 2598 (100, 0.000001, _('%.1f us')),
2598 2599 (10, 0.000001, _('%.2f us')),
2599 2600 (1, 0.000001, _('%.3f us')),
2600 2601 (100, 0.000000001, _('%.1f ns')),
2601 2602 (10, 0.000000001, _('%.2f ns')),
2602 2603 (1, 0.000000001, _('%.3f ns')),
2603 2604 )
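timecount is built from unitcountfn (defined earlier in this file), which returns the first (multiplier, divisor, format) triple whose threshold the value meets. A standalone sketch of that first-match selection rule, using a subset of the triples above:

```python
def timecount(seconds):
    # Sketch of unitcountfn's selection rule: pick the first entry
    # where value >= multiplier * divisor, then format value / divisor.
    units = [
        (1, 1e3, '%.0f s'), (100, 1, '%.1f s'), (10, 1, '%.2f s'),
        (1, 1, '%.3f s'), (100, 0.001, '%.1f ms'), (10, 0.001, '%.2f ms'),
        (1, 0.001, '%.3f ms'),
    ]
    for multiplier, divisor, fmt in units:
        if seconds >= multiplier * divisor:
            return fmt % (seconds / divisor)
    # fall back to the smallest unit
    return units[-1][2] % (seconds / units[-1][1])

print(timecount(2.5))     # 2.500 s
print(timecount(0.0042))  # 4.200 ms
```

The descending thresholds are what keep the printed precision roughly constant: larger values get fewer decimal places in the same unit.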
2604 2605
2605 2606 _timenesting = [0]
2606 2607
2607 2608 def timed(func):
2608 2609 '''Report the execution time of a function call to stderr.
2609 2610
2610 2611 During development, use as a decorator when you need to measure
2611 2612 the cost of a function, e.g. as follows:
2612 2613
2613 2614 @util.timed
2614 2615 def foo(a, b, c):
2615 2616 pass
2616 2617 '''
2617 2618
2618 2619 def wrapper(*args, **kwargs):
2619 2620 start = time.time()
2620 2621 indent = 2
2621 2622 _timenesting[0] += indent
2622 2623 try:
2623 2624 return func(*args, **kwargs)
2624 2625 finally:
2625 2626 elapsed = time.time() - start
2626 2627 _timenesting[0] -= indent
2627 2628 sys.stderr.write('%s%s: %s\n' %
2628 2629 (' ' * _timenesting[0], func.__name__,
2629 2630 timecount(elapsed)))
2630 2631 return wrapper
2631 2632
2632 2633 _sizeunits = (('m', 2**20), ('k', 2**10), ('g', 2**30),
2633 2634 ('kb', 2**10), ('mb', 2**20), ('gb', 2**30), ('b', 1))
2634 2635
2635 2636 def sizetoint(s):
2636 2637 '''Convert a space specifier to a byte count.
2637 2638
2638 2639 >>> sizetoint('30')
2639 2640 30
2640 2641 >>> sizetoint('2.2kb')
2641 2642 2252
2642 2643 >>> sizetoint('6M')
2643 2644 6291456
2644 2645 '''
2645 2646 t = s.strip().lower()
2646 2647 try:
2647 2648 for k, u in _sizeunits:
2648 2649 if t.endswith(k):
2649 2650 return int(float(t[:-len(k)]) * u)
2650 2651 return int(t)
2651 2652 except ValueError:
2652 2653 raise error.ParseError(_("couldn't parse size: %s") % s)
2653 2654
2654 2655 class hooks(object):
2655 2656 '''A collection of hook functions that can be used to extend a
2656 2657 function's behavior. Hooks are called in lexicographic order,
2657 2658 based on the names of their sources.'''
2658 2659
2659 2660 def __init__(self):
2660 2661 self._hooks = []
2661 2662
2662 2663 def add(self, source, hook):
2663 2664 self._hooks.append((source, hook))
2664 2665
2665 2666 def __call__(self, *args):
2666 2667 self._hooks.sort(key=lambda x: x[0])
2667 2668 results = []
2668 2669 for source, hook in self._hooks:
2669 2670 results.append(hook(*args))
2670 2671 return results
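A minimal usage sketch of the hooks class above, showing that call order follows the lexicographic order of source names rather than registration order (the source names 'alpha' and 'zeta' are made up for the example):

```python
class hooks(object):
    # Same structure as the class above: (source, hook) pairs, sorted
    # by source name at call time so ordering is deterministic.
    def __init__(self):
        self._hooks = []

    def add(self, source, hook):
        self._hooks.append((source, hook))

    def __call__(self, *args):
        self._hooks.sort(key=lambda x: x[0])
        return [hook(*args) for _source, hook in self._hooks]

h = hooks()
h.add('zeta', lambda x: x + 1)
h.add('alpha', lambda x: x * 2)
print(h(10))  # [20, 11] -- 'alpha' runs first despite being added second
```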
2671 2672
2672 2673 def getstackframes(skip=0, line=' %-*s in %s\n', fileline='%s:%s'):
2673 2674 '''Yields lines for a nicely formatted stacktrace.
2674 2675 Skips the 'skip' last entries.
2675 2676 Each file+linenumber is formatted according to fileline.
2676 2677 Each line is formatted according to line.
2677 2678 If line is None, it yields:
2678 2679 length of longest filepath+line number,
2679 2680 filepath+linenumber,
2680 2681 function
2681 2682
2682 2683 Not to be used in production code but very convenient while developing.
2683 2684 '''
2684 2685 entries = [(fileline % (fn, ln), func)
2685 2686 for fn, ln, func, _text in traceback.extract_stack()[:-skip - 1]]
2686 2687 if entries:
2687 2688 fnmax = max(len(entry[0]) for entry in entries)
2688 2689 for fnln, func in entries:
2689 2690 if line is None:
2690 2691 yield (fnmax, fnln, func)
2691 2692 else:
2692 2693 yield line % (fnmax, fnln, func)
2693 2694
2694 2695 def debugstacktrace(msg='stacktrace', skip=0, f=sys.stderr, otherf=sys.stdout):
2695 2696 '''Writes a message to f (stderr) with a nicely formatted stacktrace.
2696 2697 Skips the 'skip' last entries. By default it will flush stdout first.
2697 2698 It can be used everywhere and intentionally does not require an ui object.
2698 2699 Not to be used in production code but very convenient while developing.
2699 2700 '''
2700 2701 if otherf:
2701 2702 otherf.flush()
2702 2703 f.write('%s at:\n' % msg)
2703 2704 for line in getstackframes(skip + 1):
2704 2705 f.write(line)
2705 2706 f.flush()
2706 2707
2707 2708 class dirs(object):
2708 2709 '''a multiset of directory names from a dirstate or manifest'''
2709 2710
2710 2711 def __init__(self, map, skip=None):
2711 2712 self._dirs = {}
2712 2713 addpath = self.addpath
2713 2714 if safehasattr(map, 'iteritems') and skip is not None:
2714 2715 for f, s in map.iteritems():
2715 2716 if s[0] != skip:
2716 2717 addpath(f)
2717 2718 else:
2718 2719 for f in map:
2719 2720 addpath(f)
2720 2721
2721 2722 def addpath(self, path):
2722 2723 dirs = self._dirs
2723 2724 for base in finddirs(path):
2724 2725 if base in dirs:
2725 2726 dirs[base] += 1
2726 2727 return
2727 2728 dirs[base] = 1
2728 2729
2729 2730 def delpath(self, path):
2730 2731 dirs = self._dirs
2731 2732 for base in finddirs(path):
2732 2733 if dirs[base] > 1:
2733 2734 dirs[base] -= 1
2734 2735 return
2735 2736 del dirs[base]
2736 2737
2737 2738 def __iter__(self):
2738 2739 return self._dirs.iterkeys()
2739 2740
2740 2741 def __contains__(self, d):
2741 2742 return d in self._dirs
2742 2743
2743 2744 if safehasattr(parsers, 'dirs'):
2744 2745 dirs = parsers.dirs
2745 2746
2746 2747 def finddirs(path):
2747 2748 pos = path.rfind('/')
2748 2749 while pos != -1:
2749 2750 yield path[:pos]
2750 2751 pos = path.rfind('/', 0, pos)
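The dirs multiset above reference-counts every ancestor directory yielded by finddirs, so addpath/delpath can stop early once a shared prefix is reached. A standalone sketch of the pure-Python behavior (parsers.dirs is the C fast path that replaces it when available):

```python
def finddirs(path):
    # yield every ancestor directory of path, deepest first
    pos = path.rfind('/')
    while pos != -1:
        yield path[:pos]
        pos = path.rfind('/', 0, pos)

print(list(finddirs('a/b/c.txt')))  # ['a/b', 'a']

# the multiset idea: count how many files live under each directory
counts = {}
for f in ['a/b/x', 'a/b/y', 'a/z']:
    for d in finddirs(f):
        counts[d] = counts.get(d, 0) + 1
print(counts)  # {'a/b': 2, 'a': 3}
```

With these counts, deleting 'a/b/y' only decrements 'a/b' and 'a'; a directory disappears from the set exactly when its count reaches zero, which is what delpath above implements.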
2751 2752
2752 2753 # compression utility
2753 2754
2754 2755 class nocompress(object):
2755 2756 def compress(self, x):
2756 2757 return x
2757 2758 def flush(self):
2758 2759 return ""
2759 2760
2760 2761 compressors = {
2761 2762 None: nocompress,
2762 2763 # lambda to prevent early import
2763 2764 'BZ': lambda: bz2.BZ2Compressor(),
2764 2765 'GZ': lambda: zlib.compressobj(),
2765 2766 }
2766 2767 # also support the old form as a courtesy
2767 2768 compressors['UN'] = compressors[None]
2768 2769
2769 2770 def _makedecompressor(decompcls):
2770 2771 def generator(f):
2771 2772 d = decompcls()
2772 2773 for chunk in filechunkiter(f):
2773 2774 yield d.decompress(chunk)
2774 2775 def func(fh):
2775 2776 return chunkbuffer(generator(fh))
2776 2777 return func
2777 2778
2778 2779 class ctxmanager(object):
2779 2780 '''A context manager for use in 'with' blocks to allow multiple
2780 2781 contexts to be entered at once. This is both safer and more
2781 2782 flexible than contextlib.nested.
2782 2783
2783 2784 Once Mercurial supports Python 2.7+, this will become mostly
2784 2785 unnecessary.
2785 2786 '''
2786 2787
2787 2788 def __init__(self, *args):
2788 2789 '''Accepts a list of no-argument functions that return context
2789 2790 managers. These will be invoked at __call__ time.'''
2790 2791 self._pending = args
2791 2792 self._atexit = []
2792 2793
2793 2794 def __enter__(self):
2794 2795 return self
2795 2796
2796 2797 def enter(self):
2797 2798 '''Create and enter context managers in the order in which they were
2798 2799 passed to the constructor.'''
2799 2800 values = []
2800 2801 for func in self._pending:
2801 2802 obj = func()
2802 2803 values.append(obj.__enter__())
2803 2804 self._atexit.append(obj.__exit__)
2804 2805 del self._pending
2805 2806 return values
2806 2807
2807 2808 def atexit(self, func, *args, **kwargs):
2808 2809 '''Add a function to call when this context manager exits. The
2809 2810 ordering of multiple atexit calls is unspecified, save that
2810 2811 they will happen before any __exit__ functions.'''
2811 2812 def wrapper(exc_type, exc_val, exc_tb):
2812 2813 func(*args, **kwargs)
2813 2814 self._atexit.append(wrapper)
2814 2815 return func
2815 2816
2816 2817 def __exit__(self, exc_type, exc_val, exc_tb):
2817 2818 '''Context managers are exited in the reverse order from which
2818 2819 they were created.'''
2819 2820 received = exc_type is not None
2820 2821 suppressed = False
2821 2822 pending = None
2822 2823 self._atexit.reverse()
2823 2824 for exitfunc in self._atexit:
2824 2825 try:
2825 2826 if exitfunc(exc_type, exc_val, exc_tb):
2826 2827 suppressed = True
2827 2828 exc_type = None
2828 2829 exc_val = None
2829 2830 exc_tb = None
2830 2831 except BaseException:
2832 2833 exc_type, exc_val, exc_tb = pending = sys.exc_info()
2833 2834 del self._atexit
2834 2835 if pending:
2835 2836 raise exc_val
2836 2837 return received and suppressed
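A usage sketch of the ctxmanager pattern above, using a trimmed re-implementation so the example is self-contained (atexit support and the exception-suppression plumbing are omitted here):

```python
import contextlib

class ctxmanager(object):
    # Trimmed illustration of the class above: no-argument factories
    # are entered in order by enter(), exited in reverse by __exit__.
    def __init__(self, *args):
        self._pending = args
        self._exits = []

    def __enter__(self):
        return self

    def enter(self):
        values = []
        for func in self._pending:
            obj = func()
            values.append(obj.__enter__())
            self._exits.append(obj.__exit__)
        return values

    def __exit__(self, exc_type, exc_val, exc_tb):
        for exitfunc in reversed(self._exits):
            exitfunc(exc_type, exc_val, exc_tb)

@contextlib.contextmanager
def tag(name, log):
    log.append('enter ' + name)
    yield name
    log.append('exit ' + name)

log = []
with ctxmanager(lambda: tag('a', log), lambda: tag('b', log)) as c:
    a, b = c.enter()
print(log)  # ['enter a', 'enter b', 'exit b', 'exit a']
```

Deferring construction to no-argument factories is the key design choice: if a later factory raises, the earlier contexts have already been entered and can still be unwound, which contextlib.nested could not guarantee.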
2837 2838
2838 2839 def _bz2():
2839 2840 d = bz2.BZ2Decompressor()
2840 2841 # Bzip2 streams start with BZ, but we stripped it.
2841 2842 # we put it back for good measure.
2842 2843 d.decompress('BZ')
2843 2844 return d
2844 2845
2845 2846 decompressors = {None: lambda fh: fh,
2846 2847 '_truncatedBZ': _makedecompressor(_bz2),
2847 2848 'BZ': _makedecompressor(lambda: bz2.BZ2Decompressor()),
2848 2849 'GZ': _makedecompressor(lambda: zlib.decompressobj()),
2849 2850 }
2850 2851 # also support the old form as a courtesy
2851 2852 decompressors['UN'] = decompressors[None]
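The compressor/decompressor tables above can be exercised with a round trip. This sketch mirrors the '_truncatedBZ' trick from _bz2(): bundle payloads have the leading 'BZ' magic stripped, so it is fed back to the incremental decompressor before the real data:

```python
import bz2
import zlib

data = b'hello compression' * 10

# 'GZ': plain zlib round trip
gz = zlib.compressobj()
zpacked = gz.compress(data) + gz.flush()
assert zlib.decompressobj().decompress(zpacked) == data

# 'BZ'/'_truncatedBZ': strip the magic, then restore it via the
# decompressor, as _bz2() does above
bzc = bz2.BZ2Compressor()
bpacked = bzc.compress(data) + bzc.flush()
truncated = bpacked[2:]   # drop the leading 'BZ' magic
d = bz2.BZ2Decompressor()
d.decompress(b'BZ')       # put it back for good measure
restored = d.decompress(truncated)
assert restored == data
```

Both decompressor objects accept input incrementally, which is why _makedecompressor above can feed them chunk by chunk from filechunkiter.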
2852 2853
2853 2854 # convenient shortcut
2854 2855 dst = debugstacktrace
@@ -1,151 +1,151
1 1 #require test-repo
2 2
3 3 $ . "$TESTDIR/helpers-testrepo.sh"
4 4 $ cd "$TESTDIR"/..
5 5
6 6 $ hg files 'set:(**.py)' | sed 's|\\|/|g' | xargs python contrib/check-py3-compat.py
7 7 hgext/fsmonitor/pywatchman/__init__.py not using absolute_import
8 8 hgext/fsmonitor/pywatchman/__init__.py requires print_function
9 9 hgext/fsmonitor/pywatchman/capabilities.py not using absolute_import
10 10 hgext/fsmonitor/pywatchman/pybser.py not using absolute_import
11 11 hgext/highlight/__init__.py not using absolute_import
12 12 hgext/highlight/highlight.py not using absolute_import
13 13 hgext/share.py not using absolute_import
14 14 hgext/win32text.py not using absolute_import
15 15 i18n/check-translation.py not using absolute_import
16 16 i18n/polib.py not using absolute_import
17 17 setup.py not using absolute_import
18 18 tests/heredoctest.py requires print_function
19 19 tests/md5sum.py not using absolute_import
20 20 tests/readlink.py not using absolute_import
21 21 tests/run-tests.py not using absolute_import
22 22 tests/test-demandimport.py not using absolute_import
23 23
24 24 #if py3exe
25 25 $ hg files 'set:(**.py)' | sed 's|\\|/|g' | xargs $PYTHON3 contrib/check-py3-compat.py
26 26 doc/hgmanpage.py: invalid syntax: invalid syntax (<unknown>, line *) (glob)
27 27 hgext/automv.py: error importing module: <SyntaxError> invalid syntax (commands.py, line *) (line *) (glob)
28 28 hgext/blackbox.py: error importing: <AttributeError> 'dict' object has no attribute 'iteritems' (error at revset.py:*) (glob)
29 hgext/bugzilla.py: error importing module: <ImportError> No module named 'xmlrpclib' (line *) (glob)
29 hgext/bugzilla.py: error importing: <AttributeError> 'dict' object has no attribute 'iteritems' (error at revset.py:*) (glob)
30 30 hgext/censor.py: error importing: <AttributeError> 'dict' object has no attribute 'iteritems' (error at revset.py:*) (glob)
31 31 hgext/chgserver.py: error importing module: <ImportError> No module named 'SocketServer' (line *) (glob)
32 32 hgext/children.py: error importing: <AttributeError> 'dict' object has no attribute 'iteritems' (error at revset.py:*) (glob)
33 33 hgext/churn.py: error importing: <AttributeError> 'dict' object has no attribute 'iteritems' (error at revset.py:*) (glob)
34 34 hgext/clonebundles.py: error importing: <AttributeError> 'dict' object has no attribute 'iteritems' (error at revset.py:*) (glob)
35 35 hgext/color.py: invalid syntax: invalid syntax (<unknown>, line *) (glob)
36 36 hgext/convert/bzr.py: error importing module: <SystemError> Parent module 'hgext.convert' not loaded, cannot perform relative import (line *) (glob)
37 37 hgext/convert/convcmd.py: error importing: <SyntaxError> invalid syntax (bundle*.py, line *) (error at bundlerepo.py:*) (glob)
38 38 hgext/convert/cvs.py: error importing module: <SystemError> Parent module 'hgext.convert' not loaded, cannot perform relative import (line *) (glob)
39 39 hgext/convert/cvsps.py: error importing: <AttributeError> 'dict' object has no attribute 'iteritems' (error at revset.py:*) (glob)
40 40 hgext/convert/darcs.py: error importing module: <SystemError> Parent module 'hgext.convert' not loaded, cannot perform relative import (line *) (glob)
41 41 hgext/convert/filemap.py: error importing module: <SystemError> Parent module 'hgext.convert' not loaded, cannot perform relative import (line *) (glob)
42 42 hgext/convert/git.py: error importing module: <SystemError> Parent module 'hgext.convert' not loaded, cannot perform relative import (line *) (glob)
43 43 hgext/convert/gnuarch.py: error importing module: <SystemError> Parent module 'hgext.convert' not loaded, cannot perform relative import (line *) (glob)
44 44 hgext/convert/hg.py: error importing: <AttributeError> 'dict' object has no attribute 'iteritems' (error at revset.py:*) (glob)
45 45 hgext/convert/monotone.py: error importing module: <SystemError> Parent module 'hgext.convert' not loaded, cannot perform relative import (line *) (glob)
46 46 hgext/convert/p*.py: error importing module: <SystemError> Parent module 'hgext.convert' not loaded, cannot perform relative import (line *) (glob)
47 47 hgext/convert/subversion.py: error importing: <AttributeError> 'dict' object has no attribute 'iteritems' (error at revset.py:*) (glob)
48 48 hgext/convert/transport.py: error importing module: <ImportError> No module named 'svn.client' (line *) (glob)
49 49 hgext/eol.py: error importing: <AttributeError> 'dict' object has no attribute 'iteritems' (error at revset.py:*) (glob)
50 50 hgext/extdiff.py: error importing module: <SyntaxError> invalid syntax (archival.py, line *) (line *) (glob)
51 51 hgext/factotum.py: error importing: <ImportError> No module named 'rfc822' (error at __init__.py:*) (glob)
52 52 hgext/fetch.py: error importing: <AttributeError> 'dict' object has no attribute 'iteritems' (error at revset.py:*) (glob)
53 53 hgext/fsmonitor/watchmanclient.py: error importing module: <SystemError> Parent module 'hgext.fsmonitor' not loaded, cannot perform relative import (line *) (glob)
54 54 hgext/gpg.py: error importing: <AttributeError> 'dict' object has no attribute 'iteritems' (error at revset.py:*) (glob)
55 55 hgext/graphlog.py: error importing: <AttributeError> 'dict' object has no attribute 'iteritems' (error at revset.py:*) (glob)
56 56 hgext/hgk.py: error importing: <AttributeError> 'dict' object has no attribute 'iteritems' (error at revset.py:*) (glob)
57 57 hgext/histedit.py: error importing module: <SyntaxError> invalid syntax (bundle*.py, line *) (line *) (glob)
58 58 hgext/keyword.py: error importing: <ImportError> No module named 'BaseHTTPServer' (error at common.py:*) (glob)
59 59 hgext/largefiles/basestore.py: error importing module: <SystemError> Parent module 'hgext.largefiles' not loaded, cannot perform relative import (line *) (glob)
60 60 hgext/largefiles/lfcommands.py: error importing: <AttributeError> 'dict' object has no attribute 'iteritems' (error at revset.py:*) (glob)
61 61 hgext/largefiles/lfutil.py: error importing: <AttributeError> 'dict' object has no attribute 'iteritems' (error at revset.py:*) (glob)
62 62 hgext/largefiles/localstore.py: error importing module: <SystemError> Parent module 'hgext.largefiles' not loaded, cannot perform relative import (line *) (glob)
63 63 hgext/largefiles/overrides.py: error importing module: <SyntaxError> invalid syntax (archival.py, line *) (line *) (glob)
64 64 hgext/largefiles/proto.py: error importing: <ImportError> No module named 'httplib' (error at httppeer.py:*) (glob)
65 65 hgext/largefiles/remotestore.py: error importing: <SyntaxError> invalid syntax (bundle*.py, line *) (error at wireproto.py:*) (glob)
66 66 hgext/largefiles/reposetup.py: error importing: <AttributeError> 'dict' object has no attribute 'iteritems' (error at revset.py:*) (glob)
67 67 hgext/largefiles/storefactory.py: error importing: <SyntaxError> invalid syntax (bundle2.py, line *) (error at bundlerepo.py:*) (glob)
68 68 hgext/largefiles/uisetup.py: error importing: <ImportError> No module named 'BaseHTTPServer' (error at common.py:*) (glob)
69 69 hgext/largefiles/wirestore.py: error importing module: <SystemError> Parent module 'hgext.largefiles' not loaded, cannot perform relative import (line *) (glob)
70 70 hgext/mq.py: error importing: <AttributeError> 'dict' object has no attribute 'iteritems' (error at revset.py:*) (glob)
71 71 hgext/notify.py: error importing: <AttributeError> 'dict' object has no attribute 'iteritems' (error at revset.py:*) (glob)
72 72 hgext/pager.py: error importing: <AttributeError> 'dict' object has no attribute 'iteritems' (error at revset.py:*) (glob)
73 73 hgext/patchbomb.py: error importing: <AttributeError> 'dict' object has no attribute 'iteritems' (error at revset.py:*) (glob)
74 74 hgext/purge.py: error importing: <AttributeError> 'dict' object has no attribute 'iteritems' (error at revset.py:*) (glob)
75 75 hgext/rebase.py: error importing: <AttributeError> 'dict' object has no attribute 'iteritems' (error at revset.py:*) (glob)
76 76 hgext/record.py: error importing: <AttributeError> 'dict' object has no attribute 'iteritems' (error at revset.py:*) (glob)
77 77 hgext/relink.py: error importing: <AttributeError> 'dict' object has no attribute 'iteritems' (error at revset.py:*) (glob)
78 78 hgext/schemes.py: error importing: <AttributeError> 'dict' object has no attribute 'iteritems' (error at revset.py:*) (glob)
79 79 hgext/share.py: error importing: <AttributeError> 'dict' object has no attribute 'iteritems' (error at revset.py:*) (glob)
80 80 hgext/shelve.py: error importing module: <SyntaxError> invalid syntax (bundle*.py, line *) (line *) (glob)
81 81 hgext/strip.py: error importing: <AttributeError> 'dict' object has no attribute 'iteritems' (error at revset.py:*) (glob)
82 82 hgext/transplant.py: error importing: <SyntaxError> invalid syntax (bundle*.py, line *) (error at bundlerepo.py:*) (glob)
83 83 mercurial/archival.py: invalid syntax: invalid syntax (<unknown>, line *) (glob)
84 84 mercurial/branchmap.py: error importing: <AttributeError> 'dict' object has no attribute 'iteritems' (error at revset.py:*) (glob)
85 85 mercurial/bundle*.py: invalid syntax: invalid syntax (<unknown>, line *) (glob)
86 86 mercurial/bundlerepo.py: error importing module: <SyntaxError> invalid syntax (bundle*.py, line *) (line *) (glob)
87 87 mercurial/changegroup.py: error importing: <AttributeError> 'dict' object has no attribute 'iteritems' (error at revset.py:*) (glob)
88 88 mercurial/changelog.py: error importing: <AttributeError> 'dict' object has no attribute 'iteritems' (error at revset.py:*) (glob)
89 89 mercurial/cmdutil.py: error importing: <AttributeError> 'dict' object has no attribute 'iteritems' (error at revset.py:*) (glob)
90 90 mercurial/commands.py: invalid syntax: invalid syntax (<unknown>, line *) (glob)
91 91 mercurial/commandserver.py: error importing module: <ImportError> No module named 'SocketServer' (line *) (glob)
92 92 mercurial/context.py: error importing: <AttributeError> 'dict' object has no attribute 'iteritems' (error at revset.py:*) (glob)
93 93 mercurial/copies.py: error importing: <AttributeError> 'dict' object has no attribute 'iteritems' (error at revset.py:*) (glob)
94 94 mercurial/crecord.py: error importing: <AttributeError> 'dict' object has no attribute 'iteritems' (error at revset.py:*) (glob)
95 95 mercurial/dirstate.py: error importing: <AttributeError> 'dict' object has no attribute 'iteritems' (error at revset.py:*) (glob)
96 96 mercurial/discovery.py: error importing: <AttributeError> 'dict' object has no attribute 'iteritems' (error at revset.py:*) (glob)
97 97 mercurial/dispatch.py: error importing: <AttributeError> 'dict' object has no attribute 'iteritems' (error at revset.py:*) (glob)
98 98 mercurial/exchange.py: error importing module: <SyntaxError> invalid syntax (bundle*.py, line *) (line *) (glob)
99 99 mercurial/extensions.py: error importing: <AttributeError> 'dict' object has no attribute 'iteritems' (error at revset.py:*) (glob)
100 100 mercurial/filelog.py: error importing: <AttributeError> 'dict' object has no attribute 'iteritems' (error at revset.py:*) (glob)
101 101 mercurial/filemerge.py: error importing: <AttributeError> 'dict' object has no attribute 'iteritems' (error at revset.py:*) (glob)
102 102 mercurial/fileset.py: error importing: <AttributeError> 'dict' object has no attribute 'iteritems' (error at revset.py:*) (glob)
103 103 mercurial/formatter.py: error importing: <AttributeError> 'dict' object has no attribute 'iteritems' (error at revset.py:*) (glob)
104 104 mercurial/graphmod.py: error importing: <AttributeError> 'dict' object has no attribute 'iteritems' (error at revset.py:*) (glob)
105 105 mercurial/help.py: error importing: <AttributeError> 'dict' object has no attribute 'iteritems' (error at revset.py:*) (glob)
106 106 mercurial/hg.py: error importing: <SyntaxError> invalid syntax (bundle*.py, line *) (error at bundlerepo.py:*) (glob)
107 107 mercurial/hgweb/common.py: error importing module: <ImportError> No module named 'BaseHTTPServer' (line *) (glob)
108 108 mercurial/hgweb/hgweb_mod.py: error importing module: <SystemError> Parent module 'mercurial.hgweb' not loaded, cannot perform relative import (line *) (glob)
109 109 mercurial/hgweb/hgwebdir_mod.py: error importing module: <SystemError> Parent module 'mercurial.hgweb' not loaded, cannot perform relative import (line *) (glob)
110 110 mercurial/hgweb/protocol.py: error importing module: <SystemError> Parent module 'mercurial.hgweb' not loaded, cannot perform relative import (line *) (glob)
111 111 mercurial/hgweb/request.py: error importing module: <SystemError> Parent module 'mercurial.hgweb' not loaded, cannot perform relative import (line *) (glob)
112 112 mercurial/hgweb/server.py: error importing module: <ImportError> No module named 'BaseHTTPServer' (line *) (glob)
113 113 mercurial/hgweb/webcommands.py: error importing module: <SystemError> Parent module 'mercurial.hgweb' not loaded, cannot perform relative import (line *) (glob)
114 114 mercurial/hgweb/webutil.py: error importing module: <SystemError> Parent module 'mercurial.hgweb' not loaded, cannot perform relative import (line *) (glob)
115 115 mercurial/hgweb/wsgicgi.py: error importing module: <SystemError> Parent module 'mercurial.hgweb' not loaded, cannot perform relative import (line *) (glob)
116 116 mercurial/hook.py: error importing: <AttributeError> 'dict' object has no attribute 'iteritems' (error at revset.py:*) (glob)
117 117 mercurial/httpconnection.py: error importing: <ImportError> No module named 'rfc822' (error at __init__.py:*) (glob)
118 118 mercurial/httppeer.py: error importing module: <ImportError> No module named 'httplib' (line *) (glob)
119 119 mercurial/keepalive.py: error importing module: <ImportError> No module named 'httplib' (line *) (glob)
120 120 mercurial/localrepo.py: error importing: <AttributeError> 'dict' object has no attribute 'iteritems' (error at revset.py:*) (glob)
121 121 mercurial/mail.py: error importing module: <AttributeError> module 'email' has no attribute 'Header' (line *) (glob)
122 122 mercurial/manifest.py: error importing: <AttributeError> 'dict' object has no attribute 'iteritems' (error at revset.py:*) (glob)
123 123 mercurial/merge.py: error importing: <AttributeError> 'dict' object has no attribute 'iteritems' (error at revset.py:*) (glob)
124 124 mercurial/namespaces.py: error importing: <AttributeError> 'dict' object has no attribute 'iteritems' (error at revset.py:*) (glob)
125 125 mercurial/patch.py: error importing: <AttributeError> 'dict' object has no attribute 'iteritems' (error at revset.py:*) (glob)
126 126 mercurial/pure/mpatch.py: error importing module: <ImportError> cannot import name 'pycompat' (line *) (glob)
127 127 mercurial/pure/parsers.py: error importing module: <ImportError> No module named 'mercurial.pure.node' (line *) (glob)
128 128 mercurial/repair.py: error importing module: <SyntaxError> invalid syntax (bundle*.py, line *) (line *) (glob)
129 129 mercurial/revlog.py: error importing: <AttributeError> 'dict' object has no attribute 'iteritems' (error at revset.py:*) (glob)
130 130 mercurial/revset.py: error importing module: <AttributeError> 'dict' object has no attribute 'iteritems' (line *) (glob)
131 131 mercurial/scmutil.py: error importing: <AttributeError> 'dict' object has no attribute 'iteritems' (error at revset.py:*) (glob)
132 132 mercurial/scmwindows.py: error importing module: <ImportError> No module named '_winreg' (line *) (glob)
133 133 mercurial/simplemerge.py: error importing: <AttributeError> 'dict' object has no attribute 'iteritems' (error at revset.py:*) (glob)
134 134 mercurial/sshpeer.py: error importing: <SyntaxError> invalid syntax (bundle*.py, line *) (error at wireproto.py:*) (glob)
135 135 mercurial/sshserver.py: error importing: <AttributeError> 'dict' object has no attribute 'iteritems' (error at revset.py:*) (glob)
136 136 mercurial/statichttprepo.py: error importing: <AttributeError> 'dict' object has no attribute 'iteritems' (error at revset.py:*) (glob)
137 137 mercurial/store.py: error importing: <AttributeError> 'dict' object has no attribute 'iteritems' (error at revset.py:*) (glob)
138 138 mercurial/streamclone.py: error importing: <AttributeError> 'dict' object has no attribute 'iteritems' (error at revset.py:*) (glob)
139 139 mercurial/subrepo.py: error importing: <AttributeError> 'dict' object has no attribute 'iteritems' (error at revset.py:*) (glob)
140 140 mercurial/templatefilters.py: error importing: <AttributeError> 'dict' object has no attribute 'iteritems' (error at revset.py:*) (glob)
141 141 mercurial/templatekw.py: error importing: <AttributeError> 'dict' object has no attribute 'iteritems' (error at revset.py:*) (glob)
142 142 mercurial/templater.py: error importing: <AttributeError> 'dict' object has no attribute 'iteritems' (error at revset.py:*) (glob)
143 143 mercurial/ui.py: error importing: <AttributeError> 'dict' object has no attribute 'iteritems' (error at revset.py:*) (glob)
144 144 mercurial/unionrepo.py: error importing: <AttributeError> 'dict' object has no attribute 'iteritems' (error at revset.py:*) (glob)
145 145 mercurial/url.py: error importing module: <ImportError> No module named 'httplib' (line *) (glob)
146 146 mercurial/verify.py: error importing: <AttributeError> 'dict' object has no attribute 'iteritems' (error at revset.py:*) (glob)
147 147 mercurial/win*.py: error importing module: <ImportError> No module named 'msvcrt' (line *) (glob)
148 148 mercurial/windows.py: error importing module: <ImportError> No module named '_winreg' (line *) (glob)
149 149 mercurial/wireproto.py: error importing module: <SyntaxError> invalid syntax (bundle*.py, line *) (line *) (glob)
150 150
151 151 #endif