rename explain_exit to explainexit
Adrian Buehlmann
r14234:600e6400 default
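This changeset mechanically renames util.explain_exit to util.explainexit; the hunks below update every caller shown here (bugzilla.py, common.py, hook.py and mail.py). A minimal sketch of the call pattern those callers rely on, assuming only that the helper keeps returning the (description, exitcode) pair that hook.py unpacks below; report_failure is a hypothetical wrapper mirroring commandline.checkexit:

    from mercurial import util

    def report_failure(command, status):
        # hypothetical helper; mirrors commandline.checkexit() in common.py
        if status:
            desc, code = util.explainexit(status)   # was util.explain_exit(status)
            raise util.Abort('%s %s' % (command, desc))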
@@ -1,756 +1,756 b''
1 1 # bugzilla.py - bugzilla integration for mercurial
2 2 #
3 3 # Copyright 2006 Vadim Gelfer <vadim.gelfer@gmail.com>
4 4 # Copyright 2011 Jim Hague <jim.hague@acm.org>
5 5 #
6 6 # This software may be used and distributed according to the terms of the
7 7 # GNU General Public License version 2 or any later version.
8 8
9 9 '''hooks for integrating with the Bugzilla bug tracker
10 10
11 11 This hook extension adds comments on bugs in Bugzilla when changesets
12 12 that refer to bugs by Bugzilla ID are seen. The comment is formatted using
13 13 the Mercurial template mechanism.
14 14
15 15 The hook does not change bug status.
16 16
17 17 Three basic modes of access to Bugzilla are provided:
18 18
19 19 1. Access via the Bugzilla XMLRPC interface. Requires Bugzilla 3.4 or later.
20 20
21 21 2. Check data via the Bugzilla XMLRPC interface and submit bug change
22 22 via email to Bugzilla email interface. Requires Bugzilla 3.4 or later.
23 23
24 24 3. Writing directly to the Bugzilla database. Only Bugzilla installations
25 25 using MySQL are supported. Requires Python MySQLdb.
26 26
27 27 Writing directly to the database is susceptible to schema changes, and
28 28 relies on a Bugzilla contrib script to send out bug change
29 29 notification emails. This script runs as the user running Mercurial,
30 30 must be run on the host with the Bugzilla install, and requires
31 31 permission to read Bugzilla configuration details and the necessary
32 32 MySQL user and password to have full access rights to the Bugzilla
33 33 database. For these reasons this access mode is now considered
34 34 deprecated, and will not be updated for new Bugzilla versions going
35 35 forward.
36 36
37 37 Access via XMLRPC needs a Bugzilla username and password to be specified
38 38 in the configuration. Comments are added under that username. Since the
39 39 configuration must be readable by all Mercurial users, it is recommended
40 40 that the rights of that user are restricted in Bugzilla to the minimum
41 41 necessary to add comments.
42 42
43 43 Access via XMLRPC/email uses XMLRPC to query Bugzilla, but sends
44 44 email to the Bugzilla email interface to submit comments to bugs.
45 45 The From: address in the email is set to the email address of the Mercurial
46 46 user, so the comment appears to come from the Mercurial user. In the event
47 47 that the Mercurial user email is not recognised by Bugzilla as a Bugzilla
48 48 user, the email associated with the Bugzilla username used to log into
49 49 Bugzilla is used instead as the source of the comment.
50 50
51 51 Configuration items common to all access modes:
52 52
53 53 bugzilla.version
54 54 The access type to use. Values recognised are:
55 55
56 56 :``xmlrpc``: Bugzilla XMLRPC interface.
57 57 :``xmlrpc+email``: Bugzilla XMLRPC and email interfaces.
58 58 :``3.0``: MySQL access, Bugzilla 3.0 and later.
59 59 :``2.18``: MySQL access, Bugzilla 2.18 and up to but not
60 60 including 3.0.
61 61 :``2.16``: MySQL access, Bugzilla 2.16 and up to but not
62 62 including 2.18.
63 63
64 64 bugzilla.regexp
65 65 Regular expression to match bug IDs in changeset commit message.
66 66 Must contain one "()" group. The default expression matches ``Bug
67 67 1234``, ``Bug no. 1234``, ``Bug number 1234``, ``Bugs 1234,5678``,
68 68 ``Bug 1234 and 5678`` and variations thereof. Matching is case
69 69 insensitive.
70 70
71 71 bugzilla.style
72 72 The style file to use when formatting comments.
73 73
74 74 bugzilla.template
75 75 Template to use when formatting comments. Overrides style if
76 76 specified. In addition to the usual Mercurial keywords, the
77 77 extension specifies:
78 78
79 79 :``{bug}``: The Bugzilla bug ID.
80 80 :``{root}``: The full pathname of the Mercurial repository.
81 81 :``{webroot}``: Stripped pathname of the Mercurial repository.
82 82 :``{hgweb}``: Base URL for browsing Mercurial repositories.
83 83
84 84 Default ``changeset {node|short} in repo {root} refers to bug
85 85 {bug}.\\ndetails:\\n\\t{desc|tabindent}``
86 86
87 87 bugzilla.strip
88 88 The number of path separator characters to strip from the front of
89 89 the Mercurial repository path (``{root}`` in templates) to produce
90 90 ``{webroot}``. For example, a repository with ``{root}``
91 91 ``/var/local/my-project`` with a strip of 2 gives a value for
92 92 ``{webroot}`` of ``my-project``. Default 0.
93 93
94 94 web.baseurl
95 95 Base URL for browsing Mercurial repositories. Referenced from
96 96 templates as ``{hgweb}``.
97 97
98 98 Configuration items common to XMLRPC+email and MySQL access modes:
99 99
100 100 bugzilla.usermap
101 101 Path of file containing Mercurial committer email to Bugzilla user email
102 102 mappings. If specified, the file should contain one mapping per
103 103 line::
104 104
105 105 committer = Bugzilla user
106 106
107 107 See also the ``[usermap]`` section.
108 108
109 109 The ``[usermap]`` section is used to specify mappings of Mercurial
110 110 committer email to Bugzilla user email. See also ``bugzilla.usermap``.
111 111 Contains entries of the form ``committer = Bugzilla user``.
112 112
113 113 XMLRPC access mode configuration:
114 114
115 115 bugzilla.bzurl
116 116 The base URL for the Bugzilla installation.
117 117 Default ``http://localhost/bugzilla``.
118 118
119 119 bugzilla.user
120 120 The username to use to log into Bugzilla via XMLRPC. Default
121 121 ``bugs``.
122 122
123 123 bugzilla.password
124 124 The password for Bugzilla login.
125 125
126 126 XMLRPC+email access mode uses the XMLRPC access mode configuration items,
127 127 and also:
128 128
129 129 bugzilla.bzemail
130 130 The Bugzilla email address.
131 131
132 132 In addition, the Mercurial email settings must be configured. See the
133 133 documentation in hgrc(5), sections ``[email]`` and ``[smtp]``.
134 134
135 135 MySQL access mode configuration:
136 136
137 137 bugzilla.host
138 138 Hostname of the MySQL server holding the Bugzilla database.
139 139 Default ``localhost``.
140 140
141 141 bugzilla.db
142 142 Name of the Bugzilla database in MySQL. Default ``bugs``.
143 143
144 144 bugzilla.user
145 145 Username to use to access MySQL server. Default ``bugs``.
146 146
147 147 bugzilla.password
148 148 Password to use to access MySQL server.
149 149
150 150 bugzilla.timeout
151 151 Database connection timeout (seconds). Default 5.
152 152
153 153 bugzilla.bzuser
154 154 Fallback Bugzilla user name to record comments with, if changeset
155 155 committer cannot be found as a Bugzilla user.
156 156
157 157 bugzilla.bzdir
158 158 Bugzilla install directory. Used by default notify. Default
159 159 ``/var/www/html/bugzilla``.
160 160
161 161 bugzilla.notify
162 162 The command to run to get Bugzilla to send bug change notification
163 163 emails. Substitutes from a map with 3 keys, ``bzdir``, ``id`` (bug
164 164 id) and ``user`` (committer bugzilla email). Default depends on
165 165 version; from 2.18 it is "cd %(bzdir)s && perl -T
166 166 contrib/sendbugmail.pl %(id)s %(user)s".
167 167
168 168 Activating the extension::
169 169
170 170 [extensions]
171 171 bugzilla =
172 172
173 173 [hooks]
174 174 # run bugzilla hook on every change pulled or pushed in here
175 175 incoming.bugzilla = python:hgext.bugzilla.hook
176 176
177 177 Example configurations:
178 178
179 179 XMLRPC example configuration. This uses the Bugzilla at
180 180 ``http://my-project.org/bugzilla``, logging in as user
181 181 ``bugmail@my-project.org`` with password ``plugh``. It is used with a
182 182 collection of Mercurial repositories in ``/var/local/hg/repos/``,
183 183 with a web interface at ``http://my-project.org/hg``. ::
184 184
185 185 [bugzilla]
186 186 bzurl=http://my-project.org/bugzilla
187 187 user=bugmail@my-project.org
188 188 password=plugh
189 189 version=xmlrpc
190 190 template=Changeset {node|short} in {root|basename}.
191 191 {hgweb}/{webroot}/rev/{node|short}\\n
192 192 {desc}\\n
193 193 strip=5
194 194
195 195 [web]
196 196 baseurl=http://my-project.org/hg
197 197
198 198 XMLRPC+email example configuration. This uses the Bugzilla at
199 199 ``http://my-project.org/bugzilla``, logging in as user
200 200 ``bugmail@my-project.org`` with password ``plugh``. It is used with a
201 201 collection of Mercurial repositories in ``/var/local/hg/repos/``,
202 202 with a web interface at ``http://my-project.org/hg``. Bug comments
203 203 are sent to the Bugzilla email address
204 204 ``bugzilla@my-project.org``. ::
205 205
206 206 [bugzilla]
207 207 bzurl=http://my-project.org/bugzilla
208 208 user=bugmail@my-project.org
209 209 password=plugh
210 210 version=xmlrpc
211 211 bzemail=bugzilla@my-project.org
212 212 template=Changeset {node|short} in {root|basename}.
213 213 {hgweb}/{webroot}/rev/{node|short}\\n
214 214 {desc}\\n
215 215 strip=5
216 216
217 217 [web]
218 218 baseurl=http://my-project.org/hg
219 219
220 220 [usermap]
221 221 user@emaildomain.com=user.name@bugzilladomain.com
222 222
223 223 MySQL example configuration. This has a local Bugzilla 3.2 installation
224 224 in ``/opt/bugzilla-3.2``. The MySQL database is on ``localhost``,
225 225 the Bugzilla database name is ``bugs`` and MySQL is
226 226 accessed with MySQL username ``bugs`` password ``XYZZY``. It is used
227 227 with a collection of Mercurial repositories in ``/var/local/hg/repos/``,
228 228 with a web interface at ``http://my-project.org/hg``. ::
229 229
230 230 [bugzilla]
231 231 host=localhost
232 232 password=XYZZY
233 233 version=3.0
234 234 bzuser=unknown@domain.com
235 235 bzdir=/opt/bugzilla-3.2
236 236 template=Changeset {node|short} in {root|basename}.
237 237 {hgweb}/{webroot}/rev/{node|short}\\n
238 238 {desc}\\n
239 239 strip=5
240 240
241 241 [web]
242 242 baseurl=http://my-project.org/hg
243 243
244 244 [usermap]
245 245 user@emaildomain.com=user.name@bugzilladomain.com
246 246
247 247 All the above add a comment to the Bugzilla bug record of the form::
248 248
249 249 Changeset 3b16791d6642 in repository-name.
250 250 http://my-project.org/hg/repository-name/rev/3b16791d6642
251 251
252 252 Changeset commit comment. Bug 1234.
253 253 '''
254 254
255 255 from mercurial.i18n import _
256 256 from mercurial.node import short
257 257 from mercurial import cmdutil, mail, templater, util
258 258 import re, time, xmlrpclib
259 259
260 260 class bzaccess(object):
261 261 '''Base class for access to Bugzilla.'''
262 262
263 263 def __init__(self, ui):
264 264 self.ui = ui
265 265 usermap = self.ui.config('bugzilla', 'usermap')
266 266 if usermap:
267 267 self.ui.readconfig(usermap, sections=['usermap'])
268 268
269 269 def map_committer(self, user):
270 270 '''map name of committer to Bugzilla user name.'''
271 271 for committer, bzuser in self.ui.configitems('usermap'):
272 272 if committer.lower() == user.lower():
273 273 return bzuser
274 274 return user
275 275
276 276 # Methods to be implemented by access classes.
277 277 def filter_real_bug_ids(self, ids):
278 278 '''remove bug IDs that do not exist in Bugzilla from set.'''
279 279 pass
280 280
281 281 def filter_cset_known_bug_ids(self, node, ids):
282 282 '''remove bug IDs where node occurs in comment text from set.'''
283 283 pass
284 284
285 285 def add_comment(self, bugid, text, committer):
286 286 '''add comment to bug.
287 287
288 288 If possible add the comment as being from the committer of
289 289 the changeset. Otherwise use the default Bugzilla user.
290 290 '''
291 291 pass
292 292
293 293 def notify(self, ids, committer):
294 294 '''Force sending of Bugzilla notification emails.'''
295 295 pass
296 296
297 297 # Bugzilla via direct access to MySQL database.
298 298 class bzmysql(bzaccess):
299 299 '''Support for direct MySQL access to Bugzilla.
300 300
301 301 The earliest Bugzilla version this is tested with is version 2.16.
302 302
303 303 If your Bugzilla is version 3.2 or above, you are strongly
304 304 recommended to use the XMLRPC access method instead.
305 305 '''
306 306
307 307 @staticmethod
308 308 def sql_buglist(ids):
309 309 '''return SQL-friendly list of bug ids'''
310 310 return '(' + ','.join(map(str, ids)) + ')'
311 311
312 312 _MySQLdb = None
313 313
314 314 def __init__(self, ui):
315 315 try:
316 316 import MySQLdb as mysql
317 317 bzmysql._MySQLdb = mysql
318 318 except ImportError, err:
319 319 raise util.Abort(_('python mysql support not available: %s') % err)
320 320
321 321 bzaccess.__init__(self, ui)
322 322
323 323 host = self.ui.config('bugzilla', 'host', 'localhost')
324 324 user = self.ui.config('bugzilla', 'user', 'bugs')
325 325 passwd = self.ui.config('bugzilla', 'password')
326 326 db = self.ui.config('bugzilla', 'db', 'bugs')
327 327 timeout = int(self.ui.config('bugzilla', 'timeout', 5))
328 328 self.ui.note(_('connecting to %s:%s as %s, password %s\n') %
329 329 (host, db, user, '*' * len(passwd)))
330 330 self.conn = bzmysql._MySQLdb.connect(host=host,
331 331 user=user, passwd=passwd,
332 332 db=db,
333 333 connect_timeout=timeout)
334 334 self.cursor = self.conn.cursor()
335 335 self.longdesc_id = self.get_longdesc_id()
336 336 self.user_ids = {}
337 337 self.default_notify = "cd %(bzdir)s && ./processmail %(id)s %(user)s"
338 338
339 339 def run(self, *args, **kwargs):
340 340 '''run a query.'''
341 341 self.ui.note(_('query: %s %s\n') % (args, kwargs))
342 342 try:
343 343 self.cursor.execute(*args, **kwargs)
344 344 except bzmysql._MySQLdb.MySQLError:
345 345 self.ui.note(_('failed query: %s %s\n') % (args, kwargs))
346 346 raise
347 347
348 348 def get_longdesc_id(self):
349 349 '''get identity of longdesc field'''
350 350 self.run('select fieldid from fielddefs where name = "longdesc"')
351 351 ids = self.cursor.fetchall()
352 352 if len(ids) != 1:
353 353 raise util.Abort(_('unknown database schema'))
354 354 return ids[0][0]
355 355
356 356 def filter_real_bug_ids(self, ids):
357 357 '''filter not-existing bug ids from set.'''
358 358 self.run('select bug_id from bugs where bug_id in %s' %
359 359 bzmysql.sql_buglist(ids))
360 360 return set([c[0] for c in self.cursor.fetchall()])
361 361
362 362 def filter_cset_known_bug_ids(self, node, ids):
363 363 '''filter bug ids that already refer to this changeset from set.'''
364 364
365 365 self.run('''select bug_id from longdescs where
366 366 bug_id in %s and thetext like "%%%s%%"''' %
367 367 (bzmysql.sql_buglist(ids), short(node)))
368 368 for (id,) in self.cursor.fetchall():
369 369 self.ui.status(_('bug %d already knows about changeset %s\n') %
370 370 (id, short(node)))
371 371 ids.discard(id)
372 372 return ids
373 373
374 374 def notify(self, ids, committer):
375 375 '''tell bugzilla to send mail.'''
376 376
377 377 self.ui.status(_('telling bugzilla to send mail:\n'))
378 378 (user, userid) = self.get_bugzilla_user(committer)
379 379 for id in ids:
380 380 self.ui.status(_(' bug %s\n') % id)
381 381 cmdfmt = self.ui.config('bugzilla', 'notify', self.default_notify)
382 382 bzdir = self.ui.config('bugzilla', 'bzdir', '/var/www/html/bugzilla')
383 383 try:
384 384 # Backwards-compatible with old notify string, which
385 385 # took one string. This will throw with a new format
386 386 # string.
387 387 cmd = cmdfmt % id
388 388 except TypeError:
389 389 cmd = cmdfmt % {'bzdir': bzdir, 'id': id, 'user': user}
390 390 self.ui.note(_('running notify command %s\n') % cmd)
391 391 fp = util.popen('(%s) 2>&1' % cmd)
392 392 out = fp.read()
393 393 ret = fp.close()
394 394 if ret:
395 395 self.ui.warn(out)
396 396 raise util.Abort(_('bugzilla notify command %s') %
397 util.explain_exit(ret)[0])
397 util.explainexit(ret)[0])
398 398 self.ui.status(_('done\n'))
399 399
400 400 def get_user_id(self, user):
401 401 '''look up numeric bugzilla user id.'''
402 402 try:
403 403 return self.user_ids[user]
404 404 except KeyError:
405 405 try:
406 406 userid = int(user)
407 407 except ValueError:
408 408 self.ui.note(_('looking up user %s\n') % user)
409 409 self.run('''select userid from profiles
410 410 where login_name like %s''', user)
411 411 all = self.cursor.fetchall()
412 412 if len(all) != 1:
413 413 raise KeyError(user)
414 414 userid = int(all[0][0])
415 415 self.user_ids[user] = userid
416 416 return userid
417 417
418 418 def get_bugzilla_user(self, committer):
419 419 '''See if committer is a registered bugzilla user. Return
420 420 bugzilla username and userid if so. If not, return default
421 421 bugzilla username and userid.'''
422 422 user = self.map_committer(committer)
423 423 try:
424 424 userid = self.get_user_id(user)
425 425 except KeyError:
426 426 try:
427 427 defaultuser = self.ui.config('bugzilla', 'bzuser')
428 428 if not defaultuser:
429 429 raise util.Abort(_('cannot find bugzilla user id for %s') %
430 430 user)
431 431 userid = self.get_user_id(defaultuser)
432 432 user = defaultuser
433 433 except KeyError:
434 434 raise util.Abort(_('cannot find bugzilla user id for %s or %s') %
435 435 (user, defaultuser))
436 436 return (user, userid)
437 437
438 438 def add_comment(self, bugid, text, committer):
439 439 '''add comment to bug. try adding comment as committer of
440 440 changeset, otherwise as default bugzilla user.'''
441 441 (user, userid) = self.get_bugzilla_user(committer)
442 442 now = time.strftime('%Y-%m-%d %H:%M:%S')
443 443 self.run('''insert into longdescs
444 444 (bug_id, who, bug_when, thetext)
445 445 values (%s, %s, %s, %s)''',
446 446 (bugid, userid, now, text))
447 447 self.run('''insert into bugs_activity (bug_id, who, bug_when, fieldid)
448 448 values (%s, %s, %s, %s)''',
449 449 (bugid, userid, now, self.longdesc_id))
450 450 self.conn.commit()
451 451
452 452 class bzmysql_2_18(bzmysql):
453 453 '''support for bugzilla 2.18 series.'''
454 454
455 455 def __init__(self, ui):
456 456 bzmysql.__init__(self, ui)
457 457 self.default_notify = \
458 458 "cd %(bzdir)s && perl -T contrib/sendbugmail.pl %(id)s %(user)s"
459 459
460 460 class bzmysql_3_0(bzmysql_2_18):
461 461 '''support for bugzilla 3.0 series.'''
462 462
463 463 def __init__(self, ui):
464 464 bzmysql_2_18.__init__(self, ui)
465 465
466 466 def get_longdesc_id(self):
467 467 '''get identity of longdesc field'''
468 468 self.run('select id from fielddefs where name = "longdesc"')
469 469 ids = self.cursor.fetchall()
470 470 if len(ids) != 1:
471 471 raise util.Abort(_('unknown database schema'))
472 472 return ids[0][0]
473 473
474 474 # Bugzilla via XMLRPC interface.
475 475
476 476 class CookieSafeTransport(xmlrpclib.SafeTransport):
477 477 """A SafeTransport that retains cookies over its lifetime.
478 478
479 479 The regular xmlrpclib transports ignore cookies. Which causes
480 480 a bit of a problem when you need a cookie-based login, as with
481 481 the Bugzilla XMLRPC interface.
482 482
483 483 So this is a SafeTransport which looks for cookies being set
484 484 in responses and saves them to add to all future requests.
485 485 It appears a SafeTransport can do both HTTP and HTTPS sessions,
486 486 which saves us having to do a CookieTransport too.
487 487 """
488 488
489 489 # Inspiration drawn from
490 490 # http://blog.godson.in/2010/09/how-to-make-python-xmlrpclib-client.html
491 491 # http://www.itkovian.net/base/transport-class-for-pythons-xml-rpc-lib/
492 492
493 493 cookies = []
494 494 def send_cookies(self, connection):
495 495 if self.cookies:
496 496 for cookie in self.cookies:
497 497 connection.putheader("Cookie", cookie)
498 498
499 499 def request(self, host, handler, request_body, verbose=0):
500 500 self.verbose = verbose
501 501
502 502 # issue XML-RPC request
503 503 h = self.make_connection(host)
504 504 if verbose:
505 505 h.set_debuglevel(1)
506 506
507 507 self.send_request(h, handler, request_body)
508 508 self.send_host(h, host)
509 509 self.send_cookies(h)
510 510 self.send_user_agent(h)
511 511 self.send_content(h, request_body)
512 512
513 513 # Deal with differences between Python 2.4-2.6 and 2.7.
514 514 # In the former h is a HTTP(S). In the latter it's a
515 515 # HTTP(S)Connection. Luckily, the 2.4-2.6 implementation of
516 516 # HTTP(S) has an underlying HTTP(S)Connection, so extract
517 517 # that and use it.
518 518 try:
519 519 response = h.getresponse()
520 520 except AttributeError:
521 521 response = h._conn.getresponse()
522 522
523 523 # Add any cookie definitions to our list.
524 524 for header in response.msg.getallmatchingheaders("Set-Cookie"):
525 525 val = header.split(": ", 1)[1]
526 526 cookie = val.split(";", 1)[0]
527 527 self.cookies.append(cookie)
528 528
529 529 if response.status != 200:
530 530 raise xmlrpclib.ProtocolError(host + handler, response.status,
531 531 response.reason, response.msg.headers)
532 532
533 533 payload = response.read()
534 534 parser, unmarshaller = self.getparser()
535 535 parser.feed(payload)
536 536 parser.close()
537 537
538 538 return unmarshaller.close()
539 539
540 540 class bzxmlrpc(bzaccess):
541 541 """Support for access to Bugzilla via the Bugzilla XMLRPC API.
542 542
543 543 Requires a minimum Bugzilla version 3.4.
544 544 """
545 545
546 546 def __init__(self, ui):
547 547 bzaccess.__init__(self, ui)
548 548
549 549 bzweb = self.ui.config('bugzilla', 'bzurl',
550 550 'http://localhost/bugzilla/')
551 551 bzweb = bzweb.rstrip("/") + "/xmlrpc.cgi"
552 552
553 553 user = self.ui.config('bugzilla', 'user', 'bugs')
554 554 passwd = self.ui.config('bugzilla', 'password')
555 555
556 556 self.bzproxy = xmlrpclib.ServerProxy(bzweb, CookieSafeTransport())
557 557 self.bzproxy.User.login(dict(login=user, password=passwd))
558 558
559 559 def get_bug_comments(self, id):
560 560 """Return a string with all comment text for a bug."""
561 561 c = self.bzproxy.Bug.comments(dict(ids=[id]))
562 562 return ''.join([t['text'] for t in c['bugs'][str(id)]['comments']])
563 563
564 564 def filter_real_bug_ids(self, ids):
565 565 res = set()
566 566 bugs = self.bzproxy.Bug.get(dict(ids=sorted(ids), permissive=True))
567 567 for bug in bugs['bugs']:
568 568 res.add(bug['id'])
569 569 return res
570 570
571 571 def filter_cset_known_bug_ids(self, node, ids):
572 572 for id in sorted(ids):
573 573 if self.get_bug_comments(id).find(short(node)) != -1:
574 574 self.ui.status(_('bug %d already knows about changeset %s\n') %
575 575 (id, short(node)))
576 576 ids.discard(id)
577 577 return ids
578 578
579 579 def add_comment(self, bugid, text, committer):
580 580 self.bzproxy.Bug.add_comment(dict(id=bugid, comment=text))
581 581
582 582 class bzxmlrpcemail(bzxmlrpc):
583 583 """Read data from Bugzilla via XMLRPC, send updates via email.
584 584
585 585 Advantages of sending updates via email:
586 586 1. Comments can be added as any user, not just logged in user.
587 587 2. Bug statuses and other fields not accessible via XMLRPC can
588 588 be updated. This is not currently used.
589 589 """
590 590
591 591 def __init__(self, ui):
592 592 bzxmlrpc.__init__(self, ui)
593 593
594 594 self.bzemail = self.ui.config('bugzilla', 'bzemail')
595 595 if not self.bzemail:
596 596 raise util.Abort(_("configuration 'bzemail' missing"))
597 597 mail.validateconfig(self.ui)
598 598
599 599 def send_bug_modify_email(self, bugid, commands, comment, committer):
600 600 '''send modification message to Bugzilla bug via email.
601 601
602 602 The message format is documented in the Bugzilla email_in.pl
603 603 specification. commands is a list of command lines, comment is the
604 604 comment text.
605 605
606 606 To stop users from crafting commit comments with
607 607 Bugzilla commands, specify the bug ID via the message body, rather
608 608 than the subject line, and leave a blank line after it.
609 609 '''
610 610 user = self.map_committer(committer)
611 611 matches = self.bzproxy.User.get(dict(match=[user]))
612 612 if not matches['users']:
613 613 user = self.ui.config('bugzilla', 'user', 'bugs')
614 614 matches = self.bzproxy.User.get(dict(match=[user]))
615 615 if not matches['users']:
616 616 raise util.Abort(_("default bugzilla user %s email not found") %
617 617 user)
618 618 user = matches['users'][0]['email']
619 619
620 620 text = "\n".join(commands) + "\n@bug_id = %d\n\n" % bugid + comment
621 621
622 622 _charsets = mail._charsets(self.ui)
623 623 user = mail.addressencode(self.ui, user, _charsets)
624 624 bzemail = mail.addressencode(self.ui, self.bzemail, _charsets)
625 625 msg = mail.mimeencode(self.ui, text, _charsets)
626 626 msg['From'] = user
627 627 msg['To'] = bzemail
628 628 msg['Subject'] = mail.headencode(self.ui, "Bug modification", _charsets)
629 629 sendmail = mail.connect(self.ui)
630 630 sendmail(user, bzemail, msg.as_string())
631 631
632 632 def add_comment(self, bugid, text, committer):
633 633 self.send_bug_modify_email(bugid, [], text, committer)
634 634
635 635 class bugzilla(object):
636 636 # supported versions of bugzilla. different versions have
637 637 # different schemas.
638 638 _versions = {
639 639 '2.16': bzmysql,
640 640 '2.18': bzmysql_2_18,
641 641 '3.0': bzmysql_3_0,
642 642 'xmlrpc': bzxmlrpc,
643 643 'xmlrpc+email': bzxmlrpcemail
644 644 }
645 645
646 646 _default_bug_re = (r'bugs?\s*,?\s*(?:#|nos?\.?|num(?:ber)?s?)?\s*'
647 647 r'((?:\d+\s*(?:,?\s*(?:and)?)?\s*)+)')
648 648
649 649 _bz = None
650 650
651 651 def __init__(self, ui, repo):
652 652 self.ui = ui
653 653 self.repo = repo
654 654
655 655 def bz(self):
656 656 '''return object that knows how to talk to bugzilla version in
657 657 use.'''
658 658
659 659 if bugzilla._bz is None:
660 660 bzversion = self.ui.config('bugzilla', 'version')
661 661 try:
662 662 bzclass = bugzilla._versions[bzversion]
663 663 except KeyError:
664 664 raise util.Abort(_('bugzilla version %s not supported') %
665 665 bzversion)
666 666 bugzilla._bz = bzclass(self.ui)
667 667 return bugzilla._bz
668 668
669 669 def __getattr__(self, key):
670 670 return getattr(self.bz(), key)
671 671
672 672 _bug_re = None
673 673 _split_re = None
674 674
675 675 def find_bug_ids(self, ctx):
676 676 '''return set of integer bug IDs from commit comment.
677 677
678 678 Extract bug IDs from changeset comments. Filter out any that are
679 679 not known to Bugzilla, and any that already have a reference to
680 680 the given changeset in their comments.
681 681 '''
682 682 if bugzilla._bug_re is None:
683 683 bugzilla._bug_re = re.compile(
684 684 self.ui.config('bugzilla', 'regexp', bugzilla._default_bug_re),
685 685 re.IGNORECASE)
686 686 bugzilla._split_re = re.compile(r'\D+')
687 687 start = 0
688 688 ids = set()
689 689 while True:
690 690 m = bugzilla._bug_re.search(ctx.description(), start)
691 691 if not m:
692 692 break
693 693 start = m.end()
694 694 for id in bugzilla._split_re.split(m.group(1)):
695 695 if not id:
696 696 continue
697 697 ids.add(int(id))
698 698 if ids:
699 699 ids = self.filter_real_bug_ids(ids)
700 700 if ids:
701 701 ids = self.filter_cset_known_bug_ids(ctx.node(), ids)
702 702 return ids
703 703
704 704 def update(self, bugid, ctx):
705 705 '''update bugzilla bug with reference to changeset.'''
706 706
707 707 def webroot(root):
708 708 '''strip leading prefix of repo root and turn into
709 709 url-safe path.'''
710 710 count = int(self.ui.config('bugzilla', 'strip', 0))
711 711 root = util.pconvert(root)
712 712 while count > 0:
713 713 c = root.find('/')
714 714 if c == -1:
715 715 break
716 716 root = root[c + 1:]
717 717 count -= 1
718 718 return root
719 719
720 720 mapfile = self.ui.config('bugzilla', 'style')
721 721 tmpl = self.ui.config('bugzilla', 'template')
722 722 t = cmdutil.changeset_templater(self.ui, self.repo,
723 723 False, None, mapfile, False)
724 724 if not mapfile and not tmpl:
725 725 tmpl = _('changeset {node|short} in repo {root} refers '
726 726 'to bug {bug}.\ndetails:\n\t{desc|tabindent}')
727 727 if tmpl:
728 728 tmpl = templater.parsestring(tmpl, quoted=False)
729 729 t.use_template(tmpl)
730 730 self.ui.pushbuffer()
731 731 t.show(ctx, changes=ctx.changeset(),
732 732 bug=str(bugid),
733 733 hgweb=self.ui.config('web', 'baseurl'),
734 734 root=self.repo.root,
735 735 webroot=webroot(self.repo.root))
736 736 data = self.ui.popbuffer()
737 737 self.add_comment(bugid, data, util.email(ctx.user()))
738 738
739 739 def hook(ui, repo, hooktype, node=None, **kwargs):
740 740 '''add comment to bugzilla for each changeset that refers to a
741 741 bugzilla bug id. only add a comment once per bug, so same change
742 742 seen multiple times does not fill bug with duplicate data.'''
743 743 if node is None:
744 744 raise util.Abort(_('hook type %s does not pass a changeset id') %
745 745 hooktype)
746 746 try:
747 747 bz = bugzilla(ui, repo)
748 748 ctx = repo[node]
749 749 ids = bz.find_bug_ids(ctx)
750 750 if ids:
751 751 for id in ids:
752 752 bz.update(id, ctx)
753 753 bz.notify(ids, util.email(ctx.user()))
754 754 except Exception, e:
755 755 raise util.Abort(_('Bugzilla error: %s') % e)
756 756
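A standalone sketch of the notify-command substitution done by bzmysql.notify() above: the old-style single-value format is attempted first for backwards compatibility, and the TypeError raised by a map-style string triggers the three-key substitution described in the docstring (the bzdir, id and user values here are illustrative):

    cmdfmt = "cd %(bzdir)s && perl -T contrib/sendbugmail.pl %(id)s %(user)s"
    bzdir, id, user = '/var/www/html/bugzilla', 1234, 'committer@example.com'
    try:
        cmd = cmdfmt % id                # old notify strings took one value
    except TypeError:                    # map-style strings land here
        cmd = cmdfmt % {'bzdir': bzdir, 'id': id, 'user': user}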
@@ -1,411 +1,411 b''
1 1 # common.py - common code for the convert extension
2 2 #
3 3 # Copyright 2005-2009 Matt Mackall <mpm@selenic.com> and others
4 4 #
5 5 # This software may be used and distributed according to the terms of the
6 6 # GNU General Public License version 2 or any later version.
7 7
8 8 import base64, errno
9 9 import os
10 10 import cPickle as pickle
11 11 from mercurial import util
12 12 from mercurial.i18n import _
13 13
14 14 def encodeargs(args):
15 15 def encodearg(s):
16 16 lines = base64.encodestring(s)
17 17 lines = [l.splitlines()[0] for l in lines]
18 18 return ''.join(lines)
19 19
20 20 s = pickle.dumps(args)
21 21 return encodearg(s)
22 22
23 23 def decodeargs(s):
24 24 s = base64.decodestring(s)
25 25 return pickle.loads(s)
26 26
27 27 class MissingTool(Exception):
28 28 pass
29 29
30 30 def checktool(exe, name=None, abort=True):
31 31 name = name or exe
32 32 if not util.find_exe(exe):
33 33 exc = abort and util.Abort or MissingTool
34 34 raise exc(_('cannot find required "%s" tool') % name)
35 35
36 36 class NoRepo(Exception):
37 37 pass
38 38
39 39 SKIPREV = 'SKIP'
40 40
41 41 class commit(object):
42 42 def __init__(self, author, date, desc, parents, branch=None, rev=None,
43 43 extra={}, sortkey=None):
44 44 self.author = author or 'unknown'
45 45 self.date = date or '0 0'
46 46 self.desc = desc
47 47 self.parents = parents
48 48 self.branch = branch
49 49 self.rev = rev
50 50 self.extra = extra
51 51 self.sortkey = sortkey
52 52
53 53 class converter_source(object):
54 54 """Conversion source interface"""
55 55
56 56 def __init__(self, ui, path=None, rev=None):
57 57 """Initialize conversion source (or raise NoRepo("message")
58 58 exception if path is not a valid repository)"""
59 59 self.ui = ui
60 60 self.path = path
61 61 self.rev = rev
62 62
63 63 self.encoding = 'utf-8'
64 64
65 65 def before(self):
66 66 pass
67 67
68 68 def after(self):
69 69 pass
70 70
71 71 def setrevmap(self, revmap):
72 72 """set the map of already-converted revisions"""
73 73 pass
74 74
75 75 def getheads(self):
76 76 """Return a list of this repository's heads"""
77 77 raise NotImplementedError()
78 78
79 79 def getfile(self, name, rev):
80 80 """Return a pair (data, mode) where data is the file content
81 81 as a string and mode one of '', 'x' or 'l'. rev is the
82 82 identifier returned by a previous call to getchanges(). Raise
83 83 IOError to indicate that name was deleted in rev.
84 84 """
85 85 raise NotImplementedError()
86 86
87 87 def getchanges(self, version):
88 88 """Returns a tuple of (files, copies).
89 89
90 90 files is a sorted list of (filename, id) tuples for all files
91 91 changed between version and its first parent returned by
92 92 getcommit(). id is the source revision id of the file.
93 93
94 94 copies is a dictionary of dest: source
95 95 """
96 96 raise NotImplementedError()
97 97
98 98 def getcommit(self, version):
99 99 """Return the commit object for version"""
100 100 raise NotImplementedError()
101 101
102 102 def gettags(self):
103 103 """Return the tags as a dictionary of name: revision
104 104
105 105 Tag names must be UTF-8 strings.
106 106 """
107 107 raise NotImplementedError()
108 108
109 109 def recode(self, s, encoding=None):
110 110 if not encoding:
111 111 encoding = self.encoding or 'utf-8'
112 112
113 113 if isinstance(s, unicode):
114 114 return s.encode("utf-8")
115 115 try:
116 116 return s.decode(encoding).encode("utf-8")
117 117 except:
118 118 try:
119 119 return s.decode("latin-1").encode("utf-8")
120 120 except:
121 121 return s.decode(encoding, "replace").encode("utf-8")
122 122
123 123 def getchangedfiles(self, rev, i):
124 124 """Return the files changed by rev compared to parent[i].
125 125
126 126 i is an index selecting one of the parents of rev. The return
127 127 value should be the list of files that are different in rev and
128 128 this parent.
129 129
130 130 If rev has no parents, i is None.
131 131
132 132 This function is only needed to support --filemap
133 133 """
134 134 raise NotImplementedError()
135 135
136 136 def converted(self, rev, sinkrev):
137 137 '''Notify the source that a revision has been converted.'''
138 138 pass
139 139
140 140 def hasnativeorder(self):
141 141 """Return true if this source has a meaningful, native revision
142 142 order. For instance, Mercurial revisions are stored sequentially
143 143 while there is no such global ordering with Darcs.
144 144 """
145 145 return False
146 146
147 147 def lookuprev(self, rev):
148 148 """If rev is a meaningful revision reference in source, return
149 149 the referenced identifier in the same format used by getcommit().
150 150 return None otherwise.
151 151 """
152 152 return None
153 153
154 154 def getbookmarks(self):
155 155 """Return the bookmarks as a dictionary of name: revision
156 156
157 157 Bookmark names are to be UTF-8 strings.
158 158 """
159 159 return {}
160 160
161 161 class converter_sink(object):
162 162 """Conversion sink (target) interface"""
163 163
164 164 def __init__(self, ui, path):
165 165 """Initialize conversion sink (or raise NoRepo("message")
166 166 exception if path is not a valid repository)
167 167
168 168 created is a list of paths to remove if a fatal error occurs
169 169 later"""
170 170 self.ui = ui
171 171 self.path = path
172 172 self.created = []
173 173
174 174 def getheads(self):
175 175 """Return a list of this repository's heads"""
176 176 raise NotImplementedError()
177 177
178 178 def revmapfile(self):
179 179 """Path to a file that will contain lines
180 180 source_rev_id sink_rev_id
181 181 mapping equivalent revision identifiers for each system."""
182 182 raise NotImplementedError()
183 183
184 184 def authorfile(self):
185 185 """Path to a file that will contain lines
186 186 srcauthor=dstauthor
187 187 mapping equivalent author identifiers for each system."""
188 188 return None
189 189
190 190 def putcommit(self, files, copies, parents, commit, source, revmap):
191 191 """Create a revision with all changed files listed in 'files'
192 192 and having listed parents. 'commit' is a commit object
193 193 containing at a minimum the author, date, and message for this
194 194 changeset. 'files' is a list of (path, version) tuples,
195 195 'copies' is a dictionary mapping destinations to sources,
196 196 'source' is the source repository, and 'revmap' is a mapfile
197 197 of source revisions to converted revisions. Only getfile() and
198 198 lookuprev() should be called on 'source'.
199 199
200 200 Note that the sink repository is not told to update itself to
201 201 a particular revision (or even what that revision would be)
202 202 before it receives the file data.
203 203 """
204 204 raise NotImplementedError()
205 205
206 206 def puttags(self, tags):
207 207 """Put tags into sink.
208 208
209 209 tags: {tagname: sink_rev_id, ...} where tagname is a UTF-8 string.
210 210 Return a pair (tag_revision, tag_parent_revision), or (None, None)
211 211 if nothing was changed.
212 212 """
213 213 raise NotImplementedError()
214 214
215 215 def setbranch(self, branch, pbranches):
216 216 """Set the current branch name. Called before the first putcommit
217 217 on the branch.
218 218 branch: branch name for subsequent commits
219 219 pbranches: (converted parent revision, parent branch) tuples"""
220 220 pass
221 221
222 222 def setfilemapmode(self, active):
223 223 """Tell the destination that we're using a filemap
224 224
225 225 Some converter_sources (svn in particular) can claim that a file
226 226 was changed in a revision, even if there was no change. This method
227 227 tells the destination that we're using a filemap and that it should
228 228 filter empty revisions.
229 229 """
230 230 pass
231 231
232 232 def before(self):
233 233 pass
234 234
235 235 def after(self):
236 236 pass
237 237
238 238 def putbookmarks(self, bookmarks):
239 239 """Put bookmarks into sink.
240 240
241 241 bookmarks: {bookmarkname: sink_rev_id, ...}
242 242 where bookmarkname is a UTF-8 string.
243 243 """
244 244 pass
245 245
246 246 class commandline(object):
247 247 def __init__(self, ui, command):
248 248 self.ui = ui
249 249 self.command = command
250 250
251 251 def prerun(self):
252 252 pass
253 253
254 254 def postrun(self):
255 255 pass
256 256
257 257 def _cmdline(self, cmd, closestdin, *args, **kwargs):
258 258 cmdline = [self.command, cmd] + list(args)
259 259 for k, v in kwargs.iteritems():
260 260 if len(k) == 1:
261 261 cmdline.append('-' + k)
262 262 else:
263 263 cmdline.append('--' + k.replace('_', '-'))
264 264 try:
265 265 if len(k) == 1:
266 266 cmdline.append('' + v)
267 267 else:
268 268 cmdline[-1] += '=' + v
269 269 except TypeError:
270 270 pass
271 271 cmdline = [util.shellquote(arg) for arg in cmdline]
272 272 if not self.ui.debugflag:
273 273 cmdline += ['2>', util.nulldev]
274 274 if closestdin:
275 275 cmdline += ['<', util.nulldev]
276 276 cmdline = ' '.join(cmdline)
277 277 return cmdline
278 278
279 279 def _run(self, cmd, *args, **kwargs):
280 280 return self._dorun(util.popen, cmd, True, *args, **kwargs)
281 281
282 282 def _run2(self, cmd, *args, **kwargs):
283 283 return self._dorun(util.popen2, cmd, False, *args, **kwargs)
284 284
285 285 def _dorun(self, openfunc, cmd, closestdin, *args, **kwargs):
286 286 cmdline = self._cmdline(cmd, closestdin, *args, **kwargs)
287 287 self.ui.debug('running: %s\n' % (cmdline,))
288 288 self.prerun()
289 289 try:
290 290 return openfunc(cmdline)
291 291 finally:
292 292 self.postrun()
293 293
294 294 def run(self, cmd, *args, **kwargs):
295 295 fp = self._run(cmd, *args, **kwargs)
296 296 output = fp.read()
297 297 self.ui.debug(output)
298 298 return output, fp.close()
299 299
300 300 def runlines(self, cmd, *args, **kwargs):
301 301 fp = self._run(cmd, *args, **kwargs)
302 302 output = fp.readlines()
303 303 self.ui.debug(''.join(output))
304 304 return output, fp.close()
305 305
306 306 def checkexit(self, status, output=''):
307 307 if status:
308 308 if output:
309 309 self.ui.warn(_('%s error:\n') % self.command)
310 310 self.ui.warn(output)
311 msg = util.explain_exit(status)[0]
311 msg = util.explainexit(status)[0]
312 312 raise util.Abort('%s %s' % (self.command, msg))
313 313
314 314 def run0(self, cmd, *args, **kwargs):
315 315 output, status = self.run(cmd, *args, **kwargs)
316 316 self.checkexit(status, output)
317 317 return output
318 318
319 319 def runlines0(self, cmd, *args, **kwargs):
320 320 output, status = self.runlines(cmd, *args, **kwargs)
321 321 self.checkexit(status, ''.join(output))
322 322 return output
323 323
324 324 def getargmax(self):
325 325 if '_argmax' in self.__dict__:
326 326 return self._argmax
327 327
328 328 # POSIX requires at least 4096 bytes for ARG_MAX
329 329 self._argmax = 4096
330 330 try:
331 331 self._argmax = os.sysconf("SC_ARG_MAX")
332 332 except:
333 333 pass
334 334
335 335 # Windows shells impose their own limits on command line length,
336 336 # down to 2047 bytes for cmd.exe under Windows NT/2k and 2500 bytes
337 337 # for older 4nt.exe. See http://support.microsoft.com/kb/830473 for
338 338 # details about cmd.exe limitations.
339 339
340 340 # Since ARG_MAX is for command line _and_ environment, lower our limit
341 341 # (and make happy Windows shells while doing this).
342 342
343 343 self._argmax = self._argmax / 2 - 1
344 344 return self._argmax
345 345
346 346 def limit_arglist(self, arglist, cmd, closestdin, *args, **kwargs):
347 347 cmdlen = len(self._cmdline(cmd, closestdin, *args, **kwargs))
348 348 limit = self.getargmax() - cmdlen
349 349 bytes = 0
350 350 fl = []
351 351 for fn in arglist:
352 352 b = len(fn) + 3
353 353 if bytes + b < limit or len(fl) == 0:
354 354 fl.append(fn)
355 355 bytes += b
356 356 else:
357 357 yield fl
358 358 fl = [fn]
359 359 bytes = b
360 360 if fl:
361 361 yield fl
362 362
363 363 def xargs(self, arglist, cmd, *args, **kwargs):
364 364 for l in self.limit_arglist(arglist, cmd, True, *args, **kwargs):
365 365 self.run0(cmd, *(list(args) + l), **kwargs)
366 366
367 367 class mapfile(dict):
368 368 def __init__(self, ui, path):
369 369 super(mapfile, self).__init__()
370 370 self.ui = ui
371 371 self.path = path
372 372 self.fp = None
373 373 self.order = []
374 374 self._read()
375 375
376 376 def _read(self):
377 377 if not self.path:
378 378 return
379 379 try:
380 380 fp = open(self.path, 'r')
381 381 except IOError, err:
382 382 if err.errno != errno.ENOENT:
383 383 raise
384 384 return
385 385 for i, line in enumerate(fp):
386 386 try:
387 387 key, value = line.splitlines()[0].rsplit(' ', 1)
388 388 except ValueError:
389 389 raise util.Abort(
390 390 _('syntax error in %s(%d): key/value pair expected')
391 391 % (self.path, i + 1))
392 392 if key not in self:
393 393 self.order.append(key)
394 394 super(mapfile, self).__setitem__(key, value)
395 395 fp.close()
396 396
397 397 def __setitem__(self, key, value):
398 398 if self.fp is None:
399 399 try:
400 400 self.fp = open(self.path, 'a')
401 401 except IOError, err:
402 402 raise util.Abort(_('could not open map file %r: %s') %
403 403 (self.path, err.strerror))
404 404 self.fp.write('%s %s\n' % (key, value))
405 405 self.fp.flush()
406 406 super(mapfile, self).__setitem__(key, value)
407 407
408 408 def close(self):
409 409 if self.fp:
410 410 self.fp.close()
411 411 self.fp = None
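The commandline.getargmax() and limit_arglist() methods above keep xargs-style command lines under the ARG_MAX-derived budget; the following self-contained snippet illustrates the same chunking rule with a made-up byte limit:

    def chunks(names, limit):
        # mirror limit_arglist(): add names until the next one would exceed
        # the budget, but always emit at least one name per chunk
        used, current = 0, []
        for name in names:
            cost = len(name) + 3                # same "+ 3" padding as above
            if used + cost < limit or not current:
                current.append(name)
                used += cost
            else:
                yield current
                current, used = [name], cost
        if current:
            yield current

    print list(chunks(['a.txt', 'b.txt', 'some/longer/path/c.txt'], 20))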
@@ -1,159 +1,159 b''
1 1 # hook.py - hook support for mercurial
2 2 #
3 3 # Copyright 2007 Matt Mackall <mpm@selenic.com>
4 4 #
5 5 # This software may be used and distributed according to the terms of the
6 6 # GNU General Public License version 2 or any later version.
7 7
8 8 from i18n import _
9 9 import os, sys
10 10 import extensions, util
11 11
12 12 def _pythonhook(ui, repo, name, hname, funcname, args, throw):
13 13 '''call python hook. hook is callable object, looked up as
14 14 name in python module. if callable returns "true", hook
15 15 fails, else passes. if hook raises exception, treated as
16 16 hook failure. exception propagates if throw is "true".
17 17
18 18 reason for "true" meaning "hook failed" is so that
19 19 unmodified commands (e.g. mercurial.commands.update) can
20 20 be run as hooks without wrappers to convert return values.'''
21 21
22 22 ui.note(_("calling hook %s: %s\n") % (hname, funcname))
23 23 obj = funcname
24 24 if not hasattr(obj, '__call__'):
25 25 d = funcname.rfind('.')
26 26 if d == -1:
27 27 raise util.Abort(_('%s hook is invalid ("%s" not in '
28 28 'a module)') % (hname, funcname))
29 29 modname = funcname[:d]
30 30 oldpaths = sys.path
31 31 if hasattr(sys, "frozen"):
32 32 # binary installs require sys.path manipulation
33 33 modpath, modfile = os.path.split(modname)
34 34 if modpath and modfile:
35 35 sys.path = sys.path[:] + [modpath]
36 36 modname = modfile
37 37 try:
38 38 obj = __import__(modname)
39 39 except ImportError:
40 40 e1 = sys.exc_type, sys.exc_value, sys.exc_traceback
41 41 try:
42 42 # extensions are loaded with hgext_ prefix
43 43 obj = __import__("hgext_%s" % modname)
44 44 except ImportError:
45 45 e2 = sys.exc_type, sys.exc_value, sys.exc_traceback
46 46 if ui.tracebackflag:
47 47 ui.warn(_('exception from first failed import attempt:\n'))
48 48 ui.traceback(e1)
49 49 if ui.tracebackflag:
50 50 ui.warn(_('exception from second failed import attempt:\n'))
51 51 ui.traceback(e2)
52 52 raise util.Abort(_('%s hook is invalid '
53 53 '(import of "%s" failed)') %
54 54 (hname, modname))
55 55 sys.path = oldpaths
56 56 try:
57 57 for p in funcname.split('.')[1:]:
58 58 obj = getattr(obj, p)
59 59 except AttributeError:
60 60 raise util.Abort(_('%s hook is invalid '
61 61 '("%s" is not defined)') %
62 62 (hname, funcname))
63 63 if not hasattr(obj, '__call__'):
64 64 raise util.Abort(_('%s hook is invalid '
65 65 '("%s" is not callable)') %
66 66 (hname, funcname))
67 67 try:
68 68 r = obj(ui=ui, repo=repo, hooktype=name, **args)
69 69 except KeyboardInterrupt:
70 70 raise
71 71 except Exception, exc:
72 72 if isinstance(exc, util.Abort):
73 73 ui.warn(_('error: %s hook failed: %s\n') %
74 74 (hname, exc.args[0]))
75 75 else:
76 76 ui.warn(_('error: %s hook raised an exception: '
77 77 '%s\n') % (hname, exc))
78 78 if throw:
79 79 raise
80 80 ui.traceback()
81 81 return True
82 82 if r:
83 83 if throw:
84 84 raise util.Abort(_('%s hook failed') % hname)
85 85 ui.warn(_('warning: %s hook failed\n') % hname)
86 86 return r
87 87
88 88 def _exthook(ui, repo, name, cmd, args, throw):
89 89 ui.note(_("running hook %s: %s\n") % (name, cmd))
90 90
91 91 env = {}
92 92 for k, v in args.iteritems():
93 93 if hasattr(v, '__call__'):
94 94 v = v()
95 95 if isinstance(v, dict):
96 96 # make the dictionary element order stable across Python
97 97 # implementations
98 98 v = ('{' +
99 99 ', '.join('%r: %r' % i for i in sorted(v.iteritems())) +
100 100 '}')
101 101 env['HG_' + k.upper()] = v
102 102
103 103 if repo:
104 104 cwd = repo.root
105 105 else:
106 106 cwd = os.getcwd()
107 107 if 'HG_URL' in env and env['HG_URL'].startswith('remote:http'):
108 108 r = util.system(cmd, environ=env, cwd=cwd, out=ui)
109 109 else:
110 110 r = util.system(cmd, environ=env, cwd=cwd)
111 111 if r:
112 desc, r = util.explain_exit(r)
112 desc, r = util.explainexit(r)
113 113 if throw:
114 114 raise util.Abort(_('%s hook %s') % (name, desc))
115 115 ui.warn(_('warning: %s hook %s\n') % (name, desc))
116 116 return r
117 117
118 118 _redirect = False
119 119 def redirect(state):
120 120 global _redirect
121 121 _redirect = state
122 122
123 123 def hook(ui, repo, name, throw=False, **args):
124 124 r = False
125 125
126 126 oldstdout = -1
127 127 if _redirect:
128 128 stdoutno = sys.__stdout__.fileno()
129 129 stderrno = sys.__stderr__.fileno()
130 130 # temporarily redirect stdout to stderr, if possible
131 131 if stdoutno >= 0 and stderrno >= 0:
132 132 oldstdout = os.dup(stdoutno)
133 133 os.dup2(stderrno, stdoutno)
134 134
135 135 try:
136 136 for hname, cmd in ui.configitems('hooks'):
137 137 if hname.split('.')[0] != name or not cmd:
138 138 continue
139 139 if hasattr(cmd, '__call__'):
140 140 r = _pythonhook(ui, repo, name, hname, cmd, args, throw) or r
141 141 elif cmd.startswith('python:'):
142 142 if cmd.count(':') >= 2:
143 143 path, cmd = cmd[7:].rsplit(':', 1)
144 144 path = util.expandpath(path)
145 145 if repo:
146 146 path = os.path.join(repo.root, path)
147 147 mod = extensions.loadpath(path, 'hghook.%s' % hname)
148 148 hookfn = getattr(mod, cmd)
149 149 else:
150 150 hookfn = cmd[7:].strip()
151 151 r = _pythonhook(ui, repo, name, hname, hookfn, args, throw) or r
152 152 else:
153 153 r = _exthook(ui, repo, hname, cmd, args, throw) or r
154 154 finally:
155 155 if _redirect and oldstdout >= 0:
156 156 os.dup2(oldstdout, stdoutno)
157 157 os.close(oldstdout)
158 158
159 159 return r
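_exthook() above exports the hook's keyword arguments to the external command as HG_* environment variables, resolving callables first; a rough standalone illustration (argument names and values are made up, and the dict-flattening branch is omitted):

    args = {'node': '3b16791d6642', 'url': 'file:///tmp/repo'}
    env = {}
    for k, v in args.iteritems():
        if hasattr(v, '__call__'):
            v = v()                      # lazily computed values are resolved
        env['HG_' + k.upper()] = v
    # env is now {'HG_NODE': '3b16791d6642', 'HG_URL': 'file:///tmp/repo'}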
@@ -1,233 +1,233 b''
1 1 # mail.py - mail sending bits for mercurial
2 2 #
3 3 # Copyright 2006 Matt Mackall <mpm@selenic.com>
4 4 #
5 5 # This software may be used and distributed according to the terms of the
6 6 # GNU General Public License version 2 or any later version.
7 7
8 8 from i18n import _
9 9 import util, encoding
10 10 import os, smtplib, socket, quopri
11 11 import email.Header, email.MIMEText, email.Utils
12 12
13 13 _oldheaderinit = email.Header.Header.__init__
14 14 def _unifiedheaderinit(self, *args, **kw):
15 15 """
16 16 Python2.7 introduces a backwards incompatible change
17 17 (Python issue1974, r70772) in email.Generator.Generator code:
18 18 pre-2.7 code passed "continuation_ws='\t'" to the Header
19 19 constructor, and 2.7 removed this parameter.
20 20
21 21 Default argument is continuation_ws=' ', which means that the
22 22 behaviour is different in <2.7 and 2.7
23 23
24 24 We consider the 2.7 behaviour to be preferable, but need
25 25 to have a unified behaviour for versions 2.4 to 2.7
26 26 """
27 27 # override continuation_ws
28 28 kw['continuation_ws'] = ' '
29 29 _oldheaderinit(self, *args, **kw)
30 30
31 31 email.Header.Header.__dict__['__init__'] = _unifiedheaderinit
32 32
33 33 def _smtp(ui):
34 34 '''build an smtp connection and return a function to send mail'''
35 35 local_hostname = ui.config('smtp', 'local_hostname')
36 36 tls = ui.config('smtp', 'tls', 'none')
37 37 # backward compatible: when tls = true, we use starttls.
38 38 starttls = tls == 'starttls' or util.parsebool(tls)
39 39 smtps = tls == 'smtps'
40 40 if (starttls or smtps) and not hasattr(socket, 'ssl'):
41 41 raise util.Abort(_("can't use TLS: Python SSL support not installed"))
42 42 if smtps:
43 43 ui.note(_('(using smtps)\n'))
44 44 s = smtplib.SMTP_SSL(local_hostname=local_hostname)
45 45 else:
46 46 s = smtplib.SMTP(local_hostname=local_hostname)
47 47 mailhost = ui.config('smtp', 'host')
48 48 if not mailhost:
49 49 raise util.Abort(_('smtp.host not configured - cannot send mail'))
50 50 mailport = util.getport(ui.config('smtp', 'port', 25))
51 51 ui.note(_('sending mail: smtp host %s, port %s\n') %
52 52 (mailhost, mailport))
53 53 s.connect(host=mailhost, port=mailport)
54 54 if starttls:
55 55 ui.note(_('(using starttls)\n'))
56 56 s.ehlo()
57 57 s.starttls()
58 58 s.ehlo()
59 59 username = ui.config('smtp', 'username')
60 60 password = ui.config('smtp', 'password')
61 61 if username and not password:
62 62 password = ui.getpass()
63 63 if username and password:
64 64 ui.note(_('(authenticating to mail server as %s)\n') %
65 65 (username))
66 66 try:
67 67 s.login(username, password)
68 68 except smtplib.SMTPException, inst:
69 69 raise util.Abort(inst)
70 70
71 71 def send(sender, recipients, msg):
72 72 try:
73 73 return s.sendmail(sender, recipients, msg)
74 74 except smtplib.SMTPRecipientsRefused, inst:
75 75 recipients = [r[1] for r in inst.recipients.values()]
76 76 raise util.Abort('\n' + '\n'.join(recipients))
77 77 except smtplib.SMTPException, inst:
78 78 raise util.Abort(inst)
79 79
80 80 return send
81 81
82 82 def _sendmail(ui, sender, recipients, msg):
83 83 '''send mail using sendmail.'''
84 84 program = ui.config('email', 'method')
85 85 cmdline = '%s -f %s %s' % (program, util.email(sender),
86 86 ' '.join(map(util.email, recipients)))
87 87 ui.note(_('sending mail: %s\n') % cmdline)
88 88 fp = util.popen(cmdline, 'w')
89 89 fp.write(msg)
90 90 ret = fp.close()
91 91 if ret:
92 92 raise util.Abort('%s %s' % (
93 93 os.path.basename(program.split(None, 1)[0]),
94 util.explain_exit(ret)[0]))
94 util.explainexit(ret)[0]))
95 95
96 96 def connect(ui):
97 97 '''make a mail connection. return a function to send mail.
98 98 call as sendmail(sender, list-of-recipients, msg).'''
99 99 if ui.config('email', 'method', 'smtp') == 'smtp':
100 100 return _smtp(ui)
101 101 return lambda s, r, m: _sendmail(ui, s, r, m)
102 102
103 103 def sendmail(ui, sender, recipients, msg):
104 104 send = connect(ui)
105 105 return send(sender, recipients, msg)
106 106
107 107 def validateconfig(ui):
108 108 '''determine if we have enough config data to try sending email.'''
109 109 method = ui.config('email', 'method', 'smtp')
110 110 if method == 'smtp':
111 111 if not ui.config('smtp', 'host'):
112 112 raise util.Abort(_('smtp specified as email transport, '
113 113 'but no smtp host configured'))
114 114 else:
115 115 if not util.find_exe(method):
116 116 raise util.Abort(_('%r specified as email transport, '
117 117 'but not in PATH') % method)
118 118
119 119 def mimetextpatch(s, subtype='plain', display=False):
120 120 '''If patch in utf-8 transfer-encode it.'''
121 121
122 122 enc = None
123 123 for line in s.splitlines():
124 124 if len(line) > 950:
125 125 s = quopri.encodestring(s)
126 126 enc = "quoted-printable"
127 127 break
128 128
129 129 cs = 'us-ascii'
130 130 if not display:
131 131 try:
132 132 s.decode('us-ascii')
133 133 except UnicodeDecodeError:
134 134 try:
135 135 s.decode('utf-8')
136 136 cs = 'utf-8'
137 137 except UnicodeDecodeError:
138 138 # We'll go with us-ascii as a fallback.
139 139 pass
140 140
141 141 msg = email.MIMEText.MIMEText(s, subtype, cs)
142 142 if enc:
143 143 del msg['Content-Transfer-Encoding']
144 144 msg['Content-Transfer-Encoding'] = enc
145 145 return msg
146 146
147 147 def _charsets(ui):
148 148 '''Obtains charsets to send mail parts not containing patches.'''
149 149 charsets = [cs.lower() for cs in ui.configlist('email', 'charsets')]
150 150 fallbacks = [encoding.fallbackencoding.lower(),
151 151 encoding.encoding.lower(), 'utf-8']
152 152 for cs in fallbacks: # find unique charsets while keeping order
153 153 if cs not in charsets:
154 154 charsets.append(cs)
155 155 return [cs for cs in charsets if not cs.endswith('ascii')]
156 156
157 157 def _encode(ui, s, charsets):
158 158 '''Returns (converted) string, charset tuple.
159 159 Finds out best charset by cycling through sendcharsets in descending
160 160 order. Tries both encoding and fallbackencoding for input. Only as
161 161 last resort send as is in fake ascii.
162 162 Caveat: Do not use for mail parts containing patches!'''
163 163 try:
164 164 s.decode('ascii')
165 165 except UnicodeDecodeError:
166 166 sendcharsets = charsets or _charsets(ui)
167 167 for ics in (encoding.encoding, encoding.fallbackencoding):
168 168 try:
169 169 u = s.decode(ics)
170 170 except UnicodeDecodeError:
171 171 continue
172 172 for ocs in sendcharsets:
173 173 try:
174 174 return u.encode(ocs), ocs
175 175 except UnicodeEncodeError:
176 176 pass
177 177 except LookupError:
178 178 ui.warn(_('ignoring invalid sendcharset: %s\n') % ocs)
179 179 # if ascii, or all conversion attempts fail, send (broken) ascii
180 180 return s, 'us-ascii'
181 181
182 182 def headencode(ui, s, charsets=None, display=False):
183 183 '''Returns RFC-2047 compliant header from given string.'''
184 184 if not display:
185 185 # split into words?
186 186 s, cs = _encode(ui, s, charsets)
187 187 return str(email.Header.Header(s, cs))
188 188 return s
189 189
190 190 def _addressencode(ui, name, addr, charsets=None):
191 191 name = headencode(ui, name, charsets)
192 192 try:
193 193 acc, dom = addr.split('@')
194 194 acc = acc.encode('ascii')
195 195 dom = dom.decode(encoding.encoding).encode('idna')
196 196 addr = '%s@%s' % (acc, dom)
197 197 except UnicodeDecodeError:
198 198 raise util.Abort(_('invalid email address: %s') % addr)
199 199 except ValueError:
200 200 try:
201 201 # too strict?
202 202 addr = addr.encode('ascii')
203 203 except UnicodeDecodeError:
204 204 raise util.Abort(_('invalid local address: %s') % addr)
205 205 return email.Utils.formataddr((name, addr))
206 206
207 207 def addressencode(ui, address, charsets=None, display=False):
208 208 '''Turns address into RFC-2047 compliant header.'''
209 209 if display or not address:
210 210 return address or ''
211 211 name, addr = email.Utils.parseaddr(address)
212 212 return _addressencode(ui, name, addr, charsets)
213 213
214 214 def addrlistencode(ui, addrs, charsets=None, display=False):
215 215 '''Turns a list of addresses into a list of RFC-2047 compliant headers.
216 216 A single element of input list may contain multiple addresses, but output
217 217 always has one address per item'''
218 218 if display:
219 219 return [a.strip() for a in addrs if a.strip()]
220 220
221 221 result = []
222 222 for name, addr in email.Utils.getaddresses(addrs):
223 223 if name or addr:
224 224 result.append(_addressencode(ui, name, addr, charsets))
225 225 return result
226 226
227 227 def mimeencode(ui, s, charsets=None, display=False):
228 228 '''creates mime text object, encodes it if needed, and sets
229 229 charset and transfer-encoding accordingly.'''
230 230 cs = 'us-ascii'
231 231 if not display:
232 232 s, cs = _encode(ui, s, charsets)
233 233 return email.MIMEText.MIMEText(s, 'plain', cs)
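A minimal sketch of driving this module programmatically, assuming a ui object configured in code rather than from an hgrc; the host and addresses are placeholders, and connect() would instead shell out to the program named by email.method when that is not 'smtp':

    from mercurial import ui as uimod, mail

    u = uimod.ui()
    u.setconfig('email', 'method', 'smtp')
    u.setconfig('smtp', 'host', 'mail.example.com')    # placeholder host

    send = mail.connect(u)                 # returns send(sender, recipients, msg)
    body = mail.mimeencode(u, 'test message')
    send('me@example.com', ['you@example.com'], body.as_string())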
@@ -1,1618 +1,1618 b''
1 1 # patch.py - patch file parsing routines
2 2 #
3 3 # Copyright 2006 Brendan Cully <brendan@kublai.com>
4 4 # Copyright 2007 Chris Mason <chris.mason@oracle.com>
5 5 #
6 6 # This software may be used and distributed according to the terms of the
7 7 # GNU General Public License version 2 or any later version.
8 8
9 9 import cStringIO, email.Parser, os, errno, re
10 10 import tempfile, zlib
11 11
12 12 from i18n import _
13 13 from node import hex, nullid, short
14 14 import base85, mdiff, scmutil, util, diffhelpers, copies, encoding
15 15
16 16 gitre = re.compile('diff --git a/(.*) b/(.*)')
17 17
18 18 class PatchError(Exception):
19 19 pass
20 20
21 21 # helper functions
22 22
23 23 def copyfile(src, dst, basedir):
24 24 abssrc, absdst = [scmutil.canonpath(basedir, basedir, x)
25 25 for x in [src, dst]]
26 26 if os.path.lexists(absdst):
27 27 raise util.Abort(_("cannot create %s: destination already exists") %
28 28 dst)
29 29
30 30 dstdir = os.path.dirname(absdst)
31 31 if dstdir and not os.path.isdir(dstdir):
32 32 try:
33 33 os.makedirs(dstdir)
34 34 except IOError:
35 35 raise util.Abort(
36 36 _("cannot create %s: unable to create destination directory")
37 37 % dst)
38 38
39 39 util.copyfile(abssrc, absdst)
40 40
41 41 # public functions
42 42
43 43 def split(stream):
44 44 '''return an iterator of individual patches from a stream'''
45 45 def isheader(line, inheader):
46 46 if inheader and line[0] in (' ', '\t'):
47 47 # continuation
48 48 return True
49 49 if line[0] in (' ', '-', '+'):
50 50 # diff line - don't check for header pattern in there
51 51 return False
52 52 l = line.split(': ', 1)
53 53 return len(l) == 2 and ' ' not in l[0]
54 54
55 55 def chunk(lines):
56 56 return cStringIO.StringIO(''.join(lines))
57 57
58 58 def hgsplit(stream, cur):
59 59 inheader = True
60 60
61 61 for line in stream:
62 62 if not line.strip():
63 63 inheader = False
64 64 if not inheader and line.startswith('# HG changeset patch'):
65 65 yield chunk(cur)
66 66 cur = []
67 67 inheader = True
68 68
69 69 cur.append(line)
70 70
71 71 if cur:
72 72 yield chunk(cur)
73 73
74 74 def mboxsplit(stream, cur):
75 75 for line in stream:
76 76 if line.startswith('From '):
77 77 for c in split(chunk(cur[1:])):
78 78 yield c
79 79 cur = []
80 80
81 81 cur.append(line)
82 82
83 83 if cur:
84 84 for c in split(chunk(cur[1:])):
85 85 yield c
86 86
87 87 def mimesplit(stream, cur):
88 88 def msgfp(m):
89 89 fp = cStringIO.StringIO()
90 90 g = email.Generator.Generator(fp, mangle_from_=False)
91 91 g.flatten(m)
92 92 fp.seek(0)
93 93 return fp
94 94
95 95 for line in stream:
96 96 cur.append(line)
97 97 c = chunk(cur)
98 98
99 99 m = email.Parser.Parser().parse(c)
100 100 if not m.is_multipart():
101 101 yield msgfp(m)
102 102 else:
103 103 ok_types = ('text/plain', 'text/x-diff', 'text/x-patch')
104 104 for part in m.walk():
105 105 ct = part.get_content_type()
106 106 if ct not in ok_types:
107 107 continue
108 108 yield msgfp(part)
109 109
110 110 def headersplit(stream, cur):
111 111 inheader = False
112 112
113 113 for line in stream:
114 114 if not inheader and isheader(line, inheader):
115 115 yield chunk(cur)
116 116 cur = []
117 117 inheader = True
118 118 if inheader and not isheader(line, inheader):
119 119 inheader = False
120 120
121 121 cur.append(line)
122 122
123 123 if cur:
124 124 yield chunk(cur)
125 125
126 126 def remainder(cur):
127 127 yield chunk(cur)
128 128
129 129 class fiter(object):
130 130 def __init__(self, fp):
131 131 self.fp = fp
132 132
133 133 def __iter__(self):
134 134 return self
135 135
136 136 def next(self):
137 137 l = self.fp.readline()
138 138 if not l:
139 139 raise StopIteration
140 140 return l
141 141
142 142 inheader = False
143 143 cur = []
144 144
145 145 mimeheaders = ['content-type']
146 146
147 147 if not hasattr(stream, 'next'):
148 148 # http responses, for example, have readline but not next
149 149 stream = fiter(stream)
150 150
151 151 for line in stream:
152 152 cur.append(line)
153 153 if line.startswith('# HG changeset patch'):
154 154 return hgsplit(stream, cur)
155 155 elif line.startswith('From '):
156 156 return mboxsplit(stream, cur)
157 157 elif isheader(line, inheader):
158 158 inheader = True
159 159 if line.split(':', 1)[0].lower() in mimeheaders:
160 160 # let email parser handle this
161 161 return mimesplit(stream, cur)
162 162 elif line.startswith('--- ') and inheader:
163 163 # No evil headers seen by diff start, split by hand
164 164 return headersplit(stream, cur)
165 165 # Not enough info, keep reading
166 166
167 167 # if we are here, we have a very plain patch
168 168 return remainder(cur)
169 169
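A hypothetical usage sketch for split() above; the mbox file name is invented, and any file-like object with readline()/next() would do:

    import cStringIO
    data = open('incoming.mbox').read()            # hypothetical input
    for i, patchfp in enumerate(split(cStringIO.StringIO(data))):
        print('patch %d: %d bytes' % (i, len(patchfp.read())))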
170 170 def extract(ui, fileobj):
171 171 '''extract patch from data read from fileobj.
172 172
173 173 patch can be a normal patch or contained in an email message.
174 174
175 175 return tuple (filename, message, user, date, branch, node, p1, p2).
176 176 Any item in the returned tuple can be None. If filename is None,
177 177 fileobj did not contain a patch. Caller must unlink filename when done.'''
178 178
179 179 # attempt to detect the start of a patch
180 180 # (this heuristic is borrowed from quilt)
181 181 diffre = re.compile(r'^(?:Index:[ \t]|diff[ \t]|RCS file: |'
182 182 r'retrieving revision [0-9]+(\.[0-9]+)*$|'
183 183 r'---[ \t].*?^\+\+\+[ \t]|'
184 184 r'\*\*\*[ \t].*?^---[ \t])', re.MULTILINE|re.DOTALL)
185 185
186 186 fd, tmpname = tempfile.mkstemp(prefix='hg-patch-')
187 187 tmpfp = os.fdopen(fd, 'w')
188 188 try:
189 189 msg = email.Parser.Parser().parse(fileobj)
190 190
191 191 subject = msg['Subject']
192 192 user = msg['From']
193 193 if not subject and not user:
194 194 # Not an email, restore parsed headers if any
195 195 subject = '\n'.join(': '.join(h) for h in msg.items()) + '\n'
196 196
197 197 gitsendmail = 'git-send-email' in msg.get('X-Mailer', '')
198 198 # should try to parse msg['Date']
199 199 date = None
200 200 nodeid = None
201 201 branch = None
202 202 parents = []
203 203
204 204 if subject:
205 205 if subject.startswith('[PATCH'):
206 206 pend = subject.find(']')
207 207 if pend >= 0:
208 208 subject = subject[pend + 1:].lstrip()
209 209 subject = subject.replace('\n\t', ' ')
210 210 ui.debug('Subject: %s\n' % subject)
211 211 if user:
212 212 ui.debug('From: %s\n' % user)
213 213 diffs_seen = 0
214 214 ok_types = ('text/plain', 'text/x-diff', 'text/x-patch')
215 215 message = ''
216 216 for part in msg.walk():
217 217 content_type = part.get_content_type()
218 218 ui.debug('Content-Type: %s\n' % content_type)
219 219 if content_type not in ok_types:
220 220 continue
221 221 payload = part.get_payload(decode=True)
222 222 m = diffre.search(payload)
223 223 if m:
224 224 hgpatch = False
225 225 hgpatchheader = False
226 226 ignoretext = False
227 227
228 228 ui.debug('found patch at byte %d\n' % m.start(0))
229 229 diffs_seen += 1
230 230 cfp = cStringIO.StringIO()
231 231 for line in payload[:m.start(0)].splitlines():
232 232 if line.startswith('# HG changeset patch') and not hgpatch:
233 233 ui.debug('patch generated by hg export\n')
234 234 hgpatch = True
235 235 hgpatchheader = True
236 236 # drop earlier commit message content
237 237 cfp.seek(0)
238 238 cfp.truncate()
239 239 subject = None
240 240 elif hgpatchheader:
241 241 if line.startswith('# User '):
242 242 user = line[7:]
243 243 ui.debug('From: %s\n' % user)
244 244 elif line.startswith("# Date "):
245 245 date = line[7:]
246 246 elif line.startswith("# Branch "):
247 247 branch = line[9:]
248 248 elif line.startswith("# Node ID "):
249 249 nodeid = line[10:]
250 250 elif line.startswith("# Parent "):
251 251 parents.append(line[10:])
252 252 elif not line.startswith("# "):
253 253 hgpatchheader = False
254 254 elif line == '---' and gitsendmail:
255 255 ignoretext = True
256 256 if not hgpatchheader and not ignoretext:
257 257 cfp.write(line)
258 258 cfp.write('\n')
259 259 message = cfp.getvalue()
260 260 if tmpfp:
261 261 tmpfp.write(payload)
262 262 if not payload.endswith('\n'):
263 263 tmpfp.write('\n')
264 264 elif not diffs_seen and message and content_type == 'text/plain':
265 265 message += '\n' + payload
266 266 except:
267 267 tmpfp.close()
268 268 os.unlink(tmpname)
269 269 raise
270 270
271 271 if subject and not message.startswith(subject):
272 272 message = '%s\n%s' % (subject, message)
273 273 tmpfp.close()
274 274 if not diffs_seen:
275 275 os.unlink(tmpname)
276 276 return None, message, user, date, branch, None, None, None
277 277 p1 = parents and parents.pop(0) or None
278 278 p2 = parents and parents.pop(0) or None
279 279 return tmpname, message, user, date, branch, nodeid, p1, p2
280 280
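A rough usage sketch for extract(); the ui object and input file are placeholders, and, per the docstring, the caller must unlink the temporary file:

    import os
    fp = open('incoming.patch')                    # hypothetical input
    tmpname, message, user, date, branch, node, p1, p2 = extract(ui, fp)
    try:
        if tmpname is None:
            print('fileobj did not contain a patch')
        else:
            print('patch from %r: %r' % (user, message.split('\n')[0]))
    finally:
        if tmpname:
            os.unlink(tmpname)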
281 281 class patchmeta(object):
282 282 """Patched file metadata
283 283
284 284 'op' is the performed operation, one of ADD, DELETE, RENAME, MODIFY
285 285 or COPY. 'path' is the patched file path. 'oldpath' is set to the
286 286 origin file when 'op' is either COPY or RENAME, None otherwise. If
287 287 file mode is changed, 'mode' is a tuple (islink, isexec) where
288 288 'islink' is True if the file is a symlink and 'isexec' is True if
289 289 the file is executable. Otherwise, 'mode' is None.
290 290 """
291 291 def __init__(self, path):
292 292 self.path = path
293 293 self.oldpath = None
294 294 self.mode = None
295 295 self.op = 'MODIFY'
296 296 self.binary = False
297 297
298 298 def setmode(self, mode):
299 299 islink = mode & 020000
300 300 isexec = mode & 0100
301 301 self.mode = (islink, isexec)
302 302
303 303 def __repr__(self):
304 304 return "<patchmeta %s %r>" % (self.op, self.path)
305 305
306 306 def readgitpatch(lr):
307 307 """extract git-style metadata about patches from <patchname>"""
308 308
309 309 # Filter patch for git information
310 310 gp = None
311 311 gitpatches = []
312 312 for line in lr:
313 313 line = line.rstrip(' \r\n')
314 314 if line.startswith('diff --git'):
315 315 m = gitre.match(line)
316 316 if m:
317 317 if gp:
318 318 gitpatches.append(gp)
319 319 dst = m.group(2)
320 320 gp = patchmeta(dst)
321 321 elif gp:
322 322 if line.startswith('--- '):
323 323 gitpatches.append(gp)
324 324 gp = None
325 325 continue
326 326 if line.startswith('rename from '):
327 327 gp.op = 'RENAME'
328 328 gp.oldpath = line[12:]
329 329 elif line.startswith('rename to '):
330 330 gp.path = line[10:]
331 331 elif line.startswith('copy from '):
332 332 gp.op = 'COPY'
333 333 gp.oldpath = line[10:]
334 334 elif line.startswith('copy to '):
335 335 gp.path = line[8:]
336 336 elif line.startswith('deleted file'):
337 337 gp.op = 'DELETE'
338 338 elif line.startswith('new file mode '):
339 339 gp.op = 'ADD'
340 340 gp.setmode(int(line[-6:], 8))
341 341 elif line.startswith('new mode '):
342 342 gp.setmode(int(line[-6:], 8))
343 343 elif line.startswith('GIT binary patch'):
344 344 gp.binary = True
345 345 if gp:
346 346 gitpatches.append(gp)
347 347
348 348 return gitpatches
349 349
350 350 class linereader(object):
351 351 # simple class to allow pushing lines back into the input stream
352 352 def __init__(self, fp, textmode=False):
353 353 self.fp = fp
354 354 self.buf = []
355 355 self.textmode = textmode
356 356 self.eol = None
357 357
358 358 def push(self, line):
359 359 if line is not None:
360 360 self.buf.append(line)
361 361
362 362 def readline(self):
363 363 if self.buf:
364 364 l = self.buf[0]
365 365 del self.buf[0]
366 366 return l
367 367 l = self.fp.readline()
368 368 if not self.eol:
369 369 if l.endswith('\r\n'):
370 370 self.eol = '\r\n'
371 371 elif l.endswith('\n'):
372 372 self.eol = '\n'
373 373 if self.textmode and l.endswith('\r\n'):
374 374 l = l[:-2] + '\n'
375 375 return l
376 376
377 377 def __iter__(self):
378 378 while 1:
379 379 l = self.readline()
380 380 if not l:
381 381 break
382 382 yield l
383 383
384 384 # @@ -start,len +start,len @@ or @@ -start +start @@ if len is 1
385 385 unidesc = re.compile('@@ -(\d+)(,(\d+))? \+(\d+)(,(\d+))? @@')
386 386 contextdesc = re.compile('(---|\*\*\*) (\d+)(,(\d+))? (---|\*\*\*)')
387 387 eolmodes = ['strict', 'crlf', 'lf', 'auto']
388 388
389 389 class patchfile(object):
390 390 def __init__(self, ui, fname, opener, missing=False, eolmode='strict'):
391 391 self.fname = fname
392 392 self.eolmode = eolmode
393 393 self.eol = None
394 394 self.opener = opener
395 395 self.ui = ui
396 396 self.lines = []
397 397 self.exists = False
398 398 self.missing = missing
399 399 if not missing:
400 400 try:
401 401 self.lines = self.readlines(fname)
402 402 self.exists = True
403 403 except IOError:
404 404 pass
405 405 else:
406 406 self.ui.warn(_("unable to find '%s' for patching\n") % self.fname)
407 407
408 408 self.hash = {}
409 409 self.dirty = False
410 410 self.offset = 0
411 411 self.skew = 0
412 412 self.rej = []
413 413 self.fileprinted = False
414 414 self.printfile(False)
415 415 self.hunks = 0
416 416
417 417 def readlines(self, fname):
418 418 if os.path.islink(fname):
419 419 return [os.readlink(fname)]
420 420 fp = self.opener(fname, 'r')
421 421 try:
422 422 lr = linereader(fp, self.eolmode != 'strict')
423 423 lines = list(lr)
424 424 self.eol = lr.eol
425 425 return lines
426 426 finally:
427 427 fp.close()
428 428
429 429 def writelines(self, fname, lines):
430 430 # Ensure the supplied data ends up in fname, whether a regular file or
431 431 # a symlink. cmdutil.updatedir will -too magically- take care
432 432 # of setting it to the proper type afterwards.
433 433 st_mode = None
434 434 islink = os.path.islink(fname)
435 435 if islink:
436 436 fp = cStringIO.StringIO()
437 437 else:
438 438 try:
439 439 st_mode = os.lstat(fname).st_mode & 0777
440 440 except OSError, e:
441 441 if e.errno != errno.ENOENT:
442 442 raise
443 443 fp = self.opener(fname, 'w')
444 444 try:
445 445 if self.eolmode == 'auto':
446 446 eol = self.eol
447 447 elif self.eolmode == 'crlf':
448 448 eol = '\r\n'
449 449 else:
450 450 eol = '\n'
451 451
452 452 if self.eolmode != 'strict' and eol and eol != '\n':
453 453 for l in lines:
454 454 if l and l[-1] == '\n':
455 455 l = l[:-1] + eol
456 456 fp.write(l)
457 457 else:
458 458 fp.writelines(lines)
459 459 if islink:
460 460 self.opener.symlink(fp.getvalue(), fname)
461 461 if st_mode is not None:
462 462 os.chmod(fname, st_mode)
463 463 finally:
464 464 fp.close()
465 465
466 466 def unlink(self, fname):
467 467 os.unlink(fname)
468 468
469 469 def printfile(self, warn):
470 470 if self.fileprinted:
471 471 return
472 472 if warn or self.ui.verbose:
473 473 self.fileprinted = True
474 474 s = _("patching file %s\n") % self.fname
475 475 if warn:
476 476 self.ui.warn(s)
477 477 else:
478 478 self.ui.note(s)
479 479
480 480
481 481 def findlines(self, l, linenum):
482 482 # looks through the hash and finds candidate lines. The
483 483 # result is a list of line numbers sorted based on distance
484 484 # from linenum
485 485
486 486 cand = self.hash.get(l, [])
487 487 if len(cand) > 1:
488 488 # resort our list of potentials forward then back.
489 489 cand.sort(key=lambda x: abs(x - linenum))
490 490 return cand
491 491
492 492 def makerejlines(self, fname):
493 493 base = os.path.basename(fname)
494 494 yield "--- %s\n+++ %s\n" % (base, base)
495 495 for x in self.rej:
496 496 for l in x.hunk:
497 497 yield l
498 498 if l[-1] != '\n':
499 499 yield "\n\ No newline at end of file\n"
500 500
501 501 def write_rej(self):
502 502 # our rejects are a little different from patch(1). This always
503 503 # creates rejects in the same form as the original patch. A file
504 504 # header is inserted so that you can run the reject through patch again
505 505 # without having to type the filename.
506 506
507 507 if not self.rej:
508 508 return
509 509
510 510 fname = self.fname + ".rej"
511 511 self.ui.warn(
512 512 _("%d out of %d hunks FAILED -- saving rejects to file %s\n") %
513 513 (len(self.rej), self.hunks, fname))
514 514
515 515 fp = self.opener(fname, 'w')
516 516 fp.writelines(self.makerejlines(self.fname))
517 517 fp.close()
518 518
519 519 def apply(self, h):
520 520 if not h.complete():
521 521 raise PatchError(_("bad hunk #%d %s (%d %d %d %d)") %
522 522 (h.number, h.desc, len(h.a), h.lena, len(h.b),
523 523 h.lenb))
524 524
525 525 self.hunks += 1
526 526
527 527 if self.missing:
528 528 self.rej.append(h)
529 529 return -1
530 530
531 531 if self.exists and h.createfile():
532 532 self.ui.warn(_("file %s already exists\n") % self.fname)
533 533 self.rej.append(h)
534 534 return -1
535 535
536 536 if isinstance(h, binhunk):
537 537 if h.rmfile():
538 538 self.unlink(self.fname)
539 539 else:
540 540 self.lines[:] = h.new()
541 541 self.offset += len(h.new())
542 542 self.dirty = True
543 543 return 0
544 544
545 545 horig = h
546 546 if (self.eolmode in ('crlf', 'lf')
547 547 or self.eolmode == 'auto' and self.eol):
548 548 # If new eols are going to be normalized, then normalize
549 549 # hunk data before patching. Otherwise, preserve input
550 550 # line-endings.
551 551 h = h.getnormalized()
552 552
553 553 # fast case first, no offsets, no fuzz
554 554 old = h.old()
555 555 # patch starts counting at 1 unless we are adding the file
556 556 if h.starta == 0:
557 557 start = 0
558 558 else:
559 559 start = h.starta + self.offset - 1
560 560 orig_start = start
561 561 # if there's skew we want to emit the "(offset %d lines)" even
562 562 # when the hunk cleanly applies at start + skew, so skip the
563 563 # fast case code
564 564 if self.skew == 0 and diffhelpers.testhunk(old, self.lines, start) == 0:
565 565 if h.rmfile():
566 566 self.unlink(self.fname)
567 567 else:
568 568 self.lines[start : start + h.lena] = h.new()
569 569 self.offset += h.lenb - h.lena
570 570 self.dirty = True
571 571 return 0
572 572
573 573 # ok, we couldn't match the hunk. Let's look for offsets and fuzz it
574 574 self.hash = {}
575 575 for x, s in enumerate(self.lines):
576 576 self.hash.setdefault(s, []).append(x)
577 577 if h.hunk[-1][0] != ' ':
578 578 # if the hunk tried to put something at the bottom of the file
579 579 # override the start line and use eof here
580 580 search_start = len(self.lines)
581 581 else:
582 582 search_start = orig_start + self.skew
583 583
584 584 for fuzzlen in xrange(3):
585 585 for toponly in [True, False]:
586 586 old = h.old(fuzzlen, toponly)
587 587
588 588 cand = self.findlines(old[0][1:], search_start)
589 589 for l in cand:
590 590 if diffhelpers.testhunk(old, self.lines, l) == 0:
591 591 newlines = h.new(fuzzlen, toponly)
592 592 self.lines[l : l + len(old)] = newlines
593 593 self.offset += len(newlines) - len(old)
594 594 self.skew = l - orig_start
595 595 self.dirty = True
596 596 offset = l - orig_start - fuzzlen
597 597 if fuzzlen:
598 598 msg = _("Hunk #%d succeeded at %d "
599 599 "with fuzz %d "
600 600 "(offset %d lines).\n")
601 601 self.printfile(True)
602 602 self.ui.warn(msg %
603 603 (h.number, l + 1, fuzzlen, offset))
604 604 else:
605 605 msg = _("Hunk #%d succeeded at %d "
606 606 "(offset %d lines).\n")
607 607 self.ui.note(msg % (h.number, l + 1, offset))
608 608 return fuzzlen
609 609 self.printfile(True)
610 610 self.ui.warn(_("Hunk #%d FAILED at %d\n") % (h.number, orig_start))
611 611 self.rej.append(horig)
612 612 return -1
613 613
614 614 def close(self):
615 615 if self.dirty:
616 616 self.writelines(self.fname, self.lines)
617 617 self.write_rej()
618 618 return len(self.rej)
619 619
620 620 class hunk(object):
621 621 def __init__(self, desc, num, lr, context, create=False, remove=False):
622 622 self.number = num
623 623 self.desc = desc
624 624 self.hunk = [desc]
625 625 self.a = []
626 626 self.b = []
627 627 self.starta = self.lena = None
628 628 self.startb = self.lenb = None
629 629 if lr is not None:
630 630 if context:
631 631 self.read_context_hunk(lr)
632 632 else:
633 633 self.read_unified_hunk(lr)
634 634 self.create = create
635 635 self.remove = remove and not create
636 636
637 637 def getnormalized(self):
638 638 """Return a copy with line endings normalized to LF."""
639 639
640 640 def normalize(lines):
641 641 nlines = []
642 642 for line in lines:
643 643 if line.endswith('\r\n'):
644 644 line = line[:-2] + '\n'
645 645 nlines.append(line)
646 646 return nlines
647 647
648 648 # Dummy object, it is rebuilt manually
649 649 nh = hunk(self.desc, self.number, None, None, False, False)
650 650 nh.number = self.number
651 651 nh.desc = self.desc
652 652 nh.hunk = self.hunk
653 653 nh.a = normalize(self.a)
654 654 nh.b = normalize(self.b)
655 655 nh.starta = self.starta
656 656 nh.startb = self.startb
657 657 nh.lena = self.lena
658 658 nh.lenb = self.lenb
659 659 nh.create = self.create
660 660 nh.remove = self.remove
661 661 return nh
662 662
663 663 def read_unified_hunk(self, lr):
664 664 m = unidesc.match(self.desc)
665 665 if not m:
666 666 raise PatchError(_("bad hunk #%d") % self.number)
667 667 self.starta, foo, self.lena, self.startb, foo2, self.lenb = m.groups()
668 668 if self.lena is None:
669 669 self.lena = 1
670 670 else:
671 671 self.lena = int(self.lena)
672 672 if self.lenb is None:
673 673 self.lenb = 1
674 674 else:
675 675 self.lenb = int(self.lenb)
676 676 self.starta = int(self.starta)
677 677 self.startb = int(self.startb)
678 678 diffhelpers.addlines(lr, self.hunk, self.lena, self.lenb, self.a, self.b)
679 679 # if we hit eof before finishing out the hunk, the last line will
680 680 # be zero length. Let's try to fix it up.
681 681 while len(self.hunk[-1]) == 0:
682 682 del self.hunk[-1]
683 683 del self.a[-1]
684 684 del self.b[-1]
685 685 self.lena -= 1
686 686 self.lenb -= 1
687 687 self._fixnewline(lr)
688 688
689 689 def read_context_hunk(self, lr):
690 690 self.desc = lr.readline()
691 691 m = contextdesc.match(self.desc)
692 692 if not m:
693 693 raise PatchError(_("bad hunk #%d") % self.number)
694 694 foo, self.starta, foo2, aend, foo3 = m.groups()
695 695 self.starta = int(self.starta)
696 696 if aend is None:
697 697 aend = self.starta
698 698 self.lena = int(aend) - self.starta
699 699 if self.starta:
700 700 self.lena += 1
701 701 for x in xrange(self.lena):
702 702 l = lr.readline()
703 703 if l.startswith('---'):
704 704 # lines addition, old block is empty
705 705 lr.push(l)
706 706 break
707 707 s = l[2:]
708 708 if l.startswith('- ') or l.startswith('! '):
709 709 u = '-' + s
710 710 elif l.startswith(' '):
711 711 u = ' ' + s
712 712 else:
713 713 raise PatchError(_("bad hunk #%d old text line %d") %
714 714 (self.number, x))
715 715 self.a.append(u)
716 716 self.hunk.append(u)
717 717
718 718 l = lr.readline()
719 719 if l.startswith('\ '):
720 720 s = self.a[-1][:-1]
721 721 self.a[-1] = s
722 722 self.hunk[-1] = s
723 723 l = lr.readline()
724 724 m = contextdesc.match(l)
725 725 if not m:
726 726 raise PatchError(_("bad hunk #%d") % self.number)
727 727 foo, self.startb, foo2, bend, foo3 = m.groups()
728 728 self.startb = int(self.startb)
729 729 if bend is None:
730 730 bend = self.startb
731 731 self.lenb = int(bend) - self.startb
732 732 if self.startb:
733 733 self.lenb += 1
734 734 hunki = 1
735 735 for x in xrange(self.lenb):
736 736 l = lr.readline()
737 737 if l.startswith('\ '):
738 738 # XXX: the only way to hit this is with an invalid line range.
739 739 # The no-eol marker is not counted in the line range, but I
740 740 # guess there are diff(1) implementations out there which behave differently.
741 741 s = self.b[-1][:-1]
742 742 self.b[-1] = s
743 743 self.hunk[hunki - 1] = s
744 744 continue
745 745 if not l:
746 746 # line deletions, new block is empty and we hit EOF
747 747 lr.push(l)
748 748 break
749 749 s = l[2:]
750 750 if l.startswith('+ ') or l.startswith('! '):
751 751 u = '+' + s
752 752 elif l.startswith(' '):
753 753 u = ' ' + s
754 754 elif len(self.b) == 0:
755 755 # line deletions, new block is empty
756 756 lr.push(l)
757 757 break
758 758 else:
759 759 raise PatchError(_("bad hunk #%d old text line %d") %
760 760 (self.number, x))
761 761 self.b.append(s)
762 762 while True:
763 763 if hunki >= len(self.hunk):
764 764 h = ""
765 765 else:
766 766 h = self.hunk[hunki]
767 767 hunki += 1
768 768 if h == u:
769 769 break
770 770 elif h.startswith('-'):
771 771 continue
772 772 else:
773 773 self.hunk.insert(hunki - 1, u)
774 774 break
775 775
776 776 if not self.a:
777 777 # this happens when lines were only added to the hunk
778 778 for x in self.hunk:
779 779 if x.startswith('-') or x.startswith(' '):
780 780 self.a.append(x)
781 781 if not self.b:
782 782 # this happens when lines were only deleted from the hunk
783 783 for x in self.hunk:
784 784 if x.startswith('+') or x.startswith(' '):
785 785 self.b.append(x[1:])
786 786 # @@ -start,len +start,len @@
787 787 self.desc = "@@ -%d,%d +%d,%d @@\n" % (self.starta, self.lena,
788 788 self.startb, self.lenb)
789 789 self.hunk[0] = self.desc
790 790 self._fixnewline(lr)
791 791
792 792 def _fixnewline(self, lr):
793 793 l = lr.readline()
794 794 if l.startswith('\ '):
795 795 diffhelpers.fix_newline(self.hunk, self.a, self.b)
796 796 else:
797 797 lr.push(l)
798 798
799 799 def complete(self):
800 800 return len(self.a) == self.lena and len(self.b) == self.lenb
801 801
802 802 def createfile(self):
803 803 return self.starta == 0 and self.lena == 0 and self.create
804 804
805 805 def rmfile(self):
806 806 return self.startb == 0 and self.lenb == 0 and self.remove
807 807
808 808 def fuzzit(self, l, fuzz, toponly):
809 809 # this removes context lines from the top and bottom of list 'l'. It
810 810 # checks the hunk to make sure only context lines are removed, and then
811 811 # returns a new shortened list of lines.
812 812 fuzz = min(fuzz, len(l)-1)
813 813 if fuzz:
814 814 top = 0
815 815 bot = 0
816 816 hlen = len(self.hunk)
817 817 for x in xrange(hlen - 1):
818 818 # the hunk starts with the @@ line, so use x+1
819 819 if self.hunk[x + 1][0] == ' ':
820 820 top += 1
821 821 else:
822 822 break
823 823 if not toponly:
824 824 for x in xrange(hlen - 1):
825 825 if self.hunk[hlen - bot - 1][0] == ' ':
826 826 bot += 1
827 827 else:
828 828 break
829 829
830 830 # top and bot now count context in the hunk
831 831 # adjust them if either one is short
832 832 context = max(top, bot, 3)
833 833 if bot < context:
834 834 bot = max(0, fuzz - (context - bot))
835 835 else:
836 836 bot = min(fuzz, bot)
837 837 if top < context:
838 838 top = max(0, fuzz - (context - top))
839 839 else:
840 840 top = min(fuzz, top)
841 841
842 842 return l[top:len(l)-bot]
843 843 return l
844 844
845 845 def old(self, fuzz=0, toponly=False):
846 846 return self.fuzzit(self.a, fuzz, toponly)
847 847
848 848 def new(self, fuzz=0, toponly=False):
849 849 return self.fuzzit(self.b, fuzz, toponly)
850 850
851 851 class binhunk:
852 852 'A binary patch file. Only understands literals so far.'
853 853 def __init__(self, gitpatch):
854 854 self.gitpatch = gitpatch
855 855 self.text = None
856 856 self.hunk = ['GIT binary patch\n']
857 857
858 858 def createfile(self):
859 859 return self.gitpatch.op in ('ADD', 'RENAME', 'COPY')
860 860
861 861 def rmfile(self):
862 862 return self.gitpatch.op == 'DELETE'
863 863
864 864 def complete(self):
865 865 return self.text is not None
866 866
867 867 def new(self):
868 868 return [self.text]
869 869
870 870 def extract(self, lr):
871 871 line = lr.readline()
872 872 self.hunk.append(line)
873 873 while line and not line.startswith('literal '):
874 874 line = lr.readline()
875 875 self.hunk.append(line)
876 876 if not line:
877 877 raise PatchError(_('could not extract binary patch'))
878 878 size = int(line[8:].rstrip())
879 879 dec = []
880 880 line = lr.readline()
881 881 self.hunk.append(line)
882 882 while len(line) > 1:
883 883 l = line[0]
884 884 if l <= 'Z' and l >= 'A':
885 885 l = ord(l) - ord('A') + 1
886 886 else:
887 887 l = ord(l) - ord('a') + 27
888 888 dec.append(base85.b85decode(line[1:-1])[:l])
889 889 line = lr.readline()
890 890 self.hunk.append(line)
891 891 text = zlib.decompress(''.join(dec))
892 892 if len(text) != size:
893 893 raise PatchError(_('binary patch is %d bytes, not %d') %
894 894 len(text), size)
895 895 self.text = text
896 896
897 897 def parsefilename(str):
898 898 # --- filename \t|space stuff
899 899 s = str[4:].rstrip('\r\n')
900 900 i = s.find('\t')
901 901 if i < 0:
902 902 i = s.find(' ')
903 903 if i < 0:
904 904 return s
905 905 return s[:i]
906 906
907 907 def pathstrip(path, strip):
908 908 pathlen = len(path)
909 909 i = 0
910 910 if strip == 0:
911 911 return '', path.rstrip()
912 912 count = strip
913 913 while count > 0:
914 914 i = path.find('/', i)
915 915 if i == -1:
916 916 raise PatchError(_("unable to strip away %d of %d dirs from %s") %
917 917 (count, strip, path))
918 918 i += 1
919 919 # consume '//' in the path
920 920 while i < pathlen - 1 and path[i] == '/':
921 921 i += 1
922 922 count -= 1
923 923 return path[:i].lstrip(), path[i:].rstrip()
924 924
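For illustration, pathstrip() behaves like the -p option of patch(1); the paths below are invented:

    print(pathstrip('a/b/foo.c', 0))   # ('', 'a/b/foo.c')
    print(pathstrip('a/b/foo.c', 1))   # ('a/', 'b/foo.c')
    # asking for more components than the path has raises PatchError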
925 925 def selectfile(afile_orig, bfile_orig, hunk, strip):
926 926 nulla = afile_orig == "/dev/null"
927 927 nullb = bfile_orig == "/dev/null"
928 928 abase, afile = pathstrip(afile_orig, strip)
929 929 gooda = not nulla and os.path.lexists(afile)
930 930 bbase, bfile = pathstrip(bfile_orig, strip)
931 931 if afile == bfile:
932 932 goodb = gooda
933 933 else:
934 934 goodb = not nullb and os.path.lexists(bfile)
935 935 createfunc = hunk.createfile
936 936 missing = not goodb and not gooda and not createfunc()
937 937
938 938 # some diff programs apparently produce patches where the afile is
939 939 # not /dev/null, but afile starts with bfile
940 940 abasedir = afile[:afile.rfind('/') + 1]
941 941 bbasedir = bfile[:bfile.rfind('/') + 1]
942 942 if missing and abasedir == bbasedir and afile.startswith(bfile):
943 943 # this isn't very pretty
944 944 hunk.create = True
945 945 if createfunc():
946 946 missing = False
947 947 else:
948 948 hunk.create = False
949 949
950 950 # If afile is "a/b/foo" and bfile is "a/b/foo.orig" we assume the
951 951 # diff is between a file and its backup. In this case, the original
952 952 # file should be patched (see original mpatch code).
953 953 isbackup = (abase == bbase and bfile.startswith(afile))
954 954 fname = None
955 955 if not missing:
956 956 if gooda and goodb:
957 957 fname = isbackup and afile or bfile
958 958 elif gooda:
959 959 fname = afile
960 960
961 961 if not fname:
962 962 if not nullb:
963 963 fname = isbackup and afile or bfile
964 964 elif not nulla:
965 965 fname = afile
966 966 else:
967 967 raise PatchError(_("undefined source and destination files"))
968 968
969 969 return fname, missing
970 970
971 971 def scangitpatch(lr, firstline):
972 972 """
973 973 Git patches can emit:
974 974 - rename a to b
975 975 - change b
976 976 - copy a to c
977 977 - change c
978 978
979 979 We cannot apply this sequence as-is: the renamed 'a' could not be
980 980 found, since it would have been renamed already. And we cannot copy
981 981 from 'b' instead because 'b' would have been changed already. So
982 982 we scan the git patch for copy and rename commands so we can
983 983 perform the copies ahead of time.
984 984 """
985 985 pos = 0
986 986 try:
987 987 pos = lr.fp.tell()
988 988 fp = lr.fp
989 989 except IOError:
990 990 fp = cStringIO.StringIO(lr.fp.read())
991 991 gitlr = linereader(fp, lr.textmode)
992 992 gitlr.push(firstline)
993 993 gitpatches = readgitpatch(gitlr)
994 994 fp.seek(pos)
995 995 return gitpatches
996 996
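A small self-contained sketch of the pre-scan described above; the git-style patch text is invented for illustration:

    import cStringIO
    text = ('diff --git a/a b/b\n'
            'rename from a\n'
            'rename to b\n'
            'diff --git a/a b/c\n'
            'copy from a\n'
            'copy to c\n')
    lr = linereader(cStringIO.StringIO(text))
    firstline = lr.readline()
    for gp in scangitpatch(lr, firstline):
        print('%s %s -> %s' % (gp.op, gp.oldpath, gp.path))
    # expected: "RENAME a -> b" then "COPY a -> c"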
997 997 def iterhunks(ui, fp):
998 998 """Read a patch and yield the following events:
999 999 - ("file", afile, bfile, firsthunk): select a new target file.
1000 1000 - ("hunk", hunk): a new hunk is ready to be applied, follows a
1001 1001 "file" event.
1002 1002 - ("git", gitchanges): current diff is in git format, gitchanges
1003 1003 maps filenames to gitpatch records. Unique event.
1004 1004 """
1005 1005 changed = {}
1006 1006 afile = ""
1007 1007 bfile = ""
1008 1008 state = None
1009 1009 hunknum = 0
1010 1010 emitfile = newfile = False
1011 1011 git = False
1012 1012
1013 1013 # our states
1014 1014 BFILE = 1
1015 1015 context = None
1016 1016 lr = linereader(fp)
1017 1017
1018 1018 while True:
1019 1019 x = lr.readline()
1020 1020 if not x:
1021 1021 break
1022 1022 if (state == BFILE and ((not context and x[0] == '@') or
1023 1023 ((context is not False) and x.startswith('***************')))):
1024 1024 if context is None and x.startswith('***************'):
1025 1025 context = True
1026 1026 gpatch = changed.get(bfile)
1027 1027 create = afile == '/dev/null' or gpatch and gpatch.op == 'ADD'
1028 1028 remove = bfile == '/dev/null' or gpatch and gpatch.op == 'DELETE'
1029 1029 h = hunk(x, hunknum + 1, lr, context, create, remove)
1030 1030 hunknum += 1
1031 1031 if emitfile:
1032 1032 emitfile = False
1033 1033 yield 'file', (afile, bfile, h)
1034 1034 yield 'hunk', h
1035 1035 elif state == BFILE and x.startswith('GIT binary patch'):
1036 1036 h = binhunk(changed[bfile])
1037 1037 hunknum += 1
1038 1038 if emitfile:
1039 1039 emitfile = False
1040 1040 yield 'file', ('a/' + afile, 'b/' + bfile, h)
1041 1041 h.extract(lr)
1042 1042 yield 'hunk', h
1043 1043 elif x.startswith('diff --git'):
1044 1044 # check for git diff, scanning the whole patch file if needed
1045 1045 m = gitre.match(x)
1046 1046 if m:
1047 1047 afile, bfile = m.group(1, 2)
1048 1048 if not git:
1049 1049 git = True
1050 1050 gitpatches = scangitpatch(lr, x)
1051 1051 yield 'git', gitpatches
1052 1052 for gp in gitpatches:
1053 1053 changed[gp.path] = gp
1054 1054 # else error?
1055 1055 # copy/rename + modify should modify target, not source
1056 1056 gp = changed.get(bfile)
1057 1057 if gp and (gp.op in ('COPY', 'DELETE', 'RENAME', 'ADD')
1058 1058 or gp.mode):
1059 1059 afile = bfile
1060 1060 newfile = True
1061 1061 elif x.startswith('---'):
1062 1062 # check for a unified diff
1063 1063 l2 = lr.readline()
1064 1064 if not l2.startswith('+++'):
1065 1065 lr.push(l2)
1066 1066 continue
1067 1067 newfile = True
1068 1068 context = False
1069 1069 afile = parsefilename(x)
1070 1070 bfile = parsefilename(l2)
1071 1071 elif x.startswith('***'):
1072 1072 # check for a context diff
1073 1073 l2 = lr.readline()
1074 1074 if not l2.startswith('---'):
1075 1075 lr.push(l2)
1076 1076 continue
1077 1077 l3 = lr.readline()
1078 1078 lr.push(l3)
1079 1079 if not l3.startswith("***************"):
1080 1080 lr.push(l2)
1081 1081 continue
1082 1082 newfile = True
1083 1083 context = True
1084 1084 afile = parsefilename(x)
1085 1085 bfile = parsefilename(l2)
1086 1086
1087 1087 if newfile:
1088 1088 newfile = False
1089 1089 emitfile = True
1090 1090 state = BFILE
1091 1091 hunknum = 0
1092 1092
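A rough sketch of consuming the event stream from iterhunks(); the diff text is invented, and the ui argument (unused in the code above) is a placeholder:

    import cStringIO
    sample = ('--- a/foo\n'
              '+++ b/foo\n'
              '@@ -1,1 +1,1 @@\n'
              '-old\n'
              '+new\n')
    for state, values in iterhunks(ui, cStringIO.StringIO(sample)):
        if state == 'file':
            print('file: %s -> %s' % (values[0], values[1]))
        elif state == 'hunk':
            print('hunk #%d' % values.number)
    # expected: "file: a/foo -> b/foo" followed by "hunk #1"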
1093 1093 def applydiff(ui, fp, changed, strip=1, eolmode='strict'):
1094 1094 """Reads a patch from fp and tries to apply it.
1095 1095
1096 1096 The dict 'changed' is filled in with all of the filenames changed
1097 1097 by the patch. Returns 0 for a clean patch, -1 if any rejects were
1098 1098 found and 1 if there was any fuzz.
1099 1099
1100 1100 If 'eolmode' is 'strict', the patch content and patched file are
1101 1101 read in binary mode. Otherwise, line endings are ignored when
1102 1102 patching then normalized according to 'eolmode'.
1103 1103
1104 1104 Callers probably want to call 'cmdutil.updatedir' after this to
1105 1105 apply certain categories of changes not done by this function.
1106 1106 """
1107 1107 return _applydiff(ui, fp, patchfile, copyfile, changed, strip=strip,
1108 1108 eolmode=eolmode)
1109 1109
1110 1110 def _applydiff(ui, fp, patcher, copyfn, changed, strip=1, eolmode='strict'):
1111 1111 rejects = 0
1112 1112 err = 0
1113 1113 current_file = None
1114 1114 cwd = os.getcwd()
1115 1115 opener = scmutil.opener(cwd)
1116 1116
1117 1117 for state, values in iterhunks(ui, fp):
1118 1118 if state == 'hunk':
1119 1119 if not current_file:
1120 1120 continue
1121 1121 ret = current_file.apply(values)
1122 1122 if ret >= 0:
1123 1123 changed.setdefault(current_file.fname, None)
1124 1124 if ret > 0:
1125 1125 err = 1
1126 1126 elif state == 'file':
1127 1127 if current_file:
1128 1128 rejects += current_file.close()
1129 1129 afile, bfile, first_hunk = values
1130 1130 try:
1131 1131 current_file, missing = selectfile(afile, bfile,
1132 1132 first_hunk, strip)
1133 1133 current_file = patcher(ui, current_file, opener,
1134 1134 missing=missing, eolmode=eolmode)
1135 1135 except PatchError, inst:
1136 1136 ui.warn(str(inst) + '\n')
1137 1137 current_file = None
1138 1138 rejects += 1
1139 1139 continue
1140 1140 elif state == 'git':
1141 1141 for gp in values:
1142 1142 gp.path = pathstrip(gp.path, strip - 1)[1]
1143 1143 if gp.oldpath:
1144 1144 gp.oldpath = pathstrip(gp.oldpath, strip - 1)[1]
1145 1145 # Binary patches really overwrite target files, copying them
1146 1146 # will just make it fail with "target file exists"
1147 1147 if gp.op in ('COPY', 'RENAME') and not gp.binary:
1148 1148 copyfn(gp.oldpath, gp.path, cwd)
1149 1149 changed[gp.path] = gp
1150 1150 else:
1151 1151 raise util.Abort(_('unsupported parser state: %s') % state)
1152 1152
1153 1153 if current_file:
1154 1154 rejects += current_file.close()
1155 1155
1156 1156 if rejects:
1157 1157 return -1
1158 1158 return err
1159 1159
1160 1160 def _externalpatch(patcher, patchname, ui, strip, cwd, files):
1161 1161 """use <patcher> to apply <patchname> to the working directory.
1162 1162 returns whether patch was applied with fuzz factor."""
1163 1163
1164 1164 fuzz = False
1165 1165 args = []
1166 1166 if cwd:
1167 1167 args.append('-d %s' % util.shellquote(cwd))
1168 1168 fp = util.popen('%s %s -p%d < %s' % (patcher, ' '.join(args), strip,
1169 1169 util.shellquote(patchname)))
1170 1170
1171 1171 for line in fp:
1172 1172 line = line.rstrip()
1173 1173 ui.note(line + '\n')
1174 1174 if line.startswith('patching file '):
1175 1175 pf = util.parsepatchoutput(line)
1176 1176 printed_file = False
1177 1177 files.setdefault(pf, None)
1178 1178 elif line.find('with fuzz') >= 0:
1179 1179 fuzz = True
1180 1180 if not printed_file:
1181 1181 ui.warn(pf + '\n')
1182 1182 printed_file = True
1183 1183 ui.warn(line + '\n')
1184 1184 elif line.find('saving rejects to file') >= 0:
1185 1185 ui.warn(line + '\n')
1186 1186 elif line.find('FAILED') >= 0:
1187 1187 if not printed_file:
1188 1188 ui.warn(pf + '\n')
1189 1189 printed_file = True
1190 1190 ui.warn(line + '\n')
1191 1191 code = fp.close()
1192 1192 if code:
1193 1193 raise PatchError(_("patch command failed: %s") %
1194 util.explain_exit(code)[0])
1194 util.explainexit(code)[0])
1195 1195 return fuzz
1196 1196
1197 1197 def internalpatch(patchobj, ui, strip, cwd, files=None, eolmode='strict'):
1198 1198 """use builtin patch to apply <patchobj> to the working directory.
1199 1199 returns whether patch was applied with fuzz factor."""
1200 1200
1201 1201 if files is None:
1202 1202 files = {}
1203 1203 if eolmode is None:
1204 1204 eolmode = ui.config('patch', 'eol', 'strict')
1205 1205 if eolmode.lower() not in eolmodes:
1206 1206 raise util.Abort(_('unsupported line endings type: %s') % eolmode)
1207 1207 eolmode = eolmode.lower()
1208 1208
1209 1209 try:
1210 1210 fp = open(patchobj, 'rb')
1211 1211 except TypeError:
1212 1212 fp = patchobj
1213 1213 if cwd:
1214 1214 curdir = os.getcwd()
1215 1215 os.chdir(cwd)
1216 1216 try:
1217 1217 ret = applydiff(ui, fp, files, strip=strip, eolmode=eolmode)
1218 1218 finally:
1219 1219 if cwd:
1220 1220 os.chdir(curdir)
1221 1221 if fp != patchobj:
1222 1222 fp.close()
1223 1223 if ret < 0:
1224 1224 raise PatchError(_('patch failed to apply'))
1225 1225 return ret > 0
1226 1226
1227 1227 def patch(patchname, ui, strip=1, cwd=None, files=None, eolmode='strict'):
1228 1228 """Apply <patchname> to the working directory.
1229 1229
1230 1230 'eolmode' specifies how end of lines should be handled. It can be:
1231 1231 - 'strict': inputs are read in binary mode, EOLs are preserved
1232 1232 - 'crlf': EOLs are ignored when patching and reset to CRLF
1233 1233 - 'lf': EOLs are ignored when patching and reset to LF
1234 1234 - None: get it from user settings, default to 'strict'
1235 1235 'eolmode' is ignored when using an external patcher program.
1236 1236
1237 1237 Returns whether patch was applied with fuzz factor.
1238 1238 """
1239 1239 patcher = ui.config('ui', 'patch')
1240 1240 if files is None:
1241 1241 files = {}
1242 1242 try:
1243 1243 if patcher:
1244 1244 return _externalpatch(patcher, patchname, ui, strip, cwd, files)
1245 1245 return internalpatch(patchname, ui, strip, cwd, files, eolmode)
1246 1246 except PatchError, err:
1247 1247 raise util.Abort(str(err))
1248 1248
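A hedged usage sketch for the patch() entry point above; the patch path, working directory and ui construction are placeholders:

    from mercurial import ui as uimod
    files = {}
    fuzz = patch('/tmp/fix.diff', uimod.ui(), strip=1,
                 cwd='/path/to/working/dir', files=files)
    print('fuzz: %r, files touched: %r' % (fuzz, sorted(files)))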
1249 1249 def b85diff(to, tn):
1250 1250 '''print base85-encoded binary diff'''
1251 1251 def gitindex(text):
1252 1252 if not text:
1253 1253 return hex(nullid)
1254 1254 l = len(text)
1255 1255 s = util.sha1('blob %d\0' % l)
1256 1256 s.update(text)
1257 1257 return s.hexdigest()
1258 1258
1259 1259 def fmtline(line):
1260 1260 l = len(line)
1261 1261 if l <= 26:
1262 1262 l = chr(ord('A') + l - 1)
1263 1263 else:
1264 1264 l = chr(l - 26 + ord('a') - 1)
1265 1265 return '%c%s\n' % (l, base85.b85encode(line, True))
1266 1266
1267 1267 def chunk(text, csize=52):
1268 1268 l = len(text)
1269 1269 i = 0
1270 1270 while i < l:
1271 1271 yield text[i:i + csize]
1272 1272 i += csize
1273 1273
1274 1274 tohash = gitindex(to)
1275 1275 tnhash = gitindex(tn)
1276 1276 if tohash == tnhash:
1277 1277 return ""
1278 1278
1279 1279 # TODO: deltas
1280 1280 ret = ['index %s..%s\nGIT binary patch\nliteral %s\n' %
1281 1281 (tohash, tnhash, len(tn))]
1282 1282 for l in chunk(zlib.compress(tn)):
1283 1283 ret.append(fmtline(l))
1284 1284 ret.append('\n')
1285 1285 return ''.join(ret)
1286 1286
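As a cross-check (not part of the change), gitindex() above follows git's blob hashing, i.e. SHA-1 over 'blob <size>\0' plus the content; for the empty blob:

    import hashlib
    print(hashlib.sha1('blob 0\0').hexdigest())
    # e69de29bb2d1d6434b8b29ae775ad8c2e48c5391, git's well-known empty-blob id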
1287 1287 class GitDiffRequired(Exception):
1288 1288 pass
1289 1289
1290 1290 def diffopts(ui, opts=None, untrusted=False):
1291 1291 def get(key, name=None, getter=ui.configbool):
1292 1292 return ((opts and opts.get(key)) or
1293 1293 getter('diff', name or key, None, untrusted=untrusted))
1294 1294 return mdiff.diffopts(
1295 1295 text=opts and opts.get('text'),
1296 1296 git=get('git'),
1297 1297 nodates=get('nodates'),
1298 1298 showfunc=get('show_function', 'showfunc'),
1299 1299 ignorews=get('ignore_all_space', 'ignorews'),
1300 1300 ignorewsamount=get('ignore_space_change', 'ignorewsamount'),
1301 1301 ignoreblanklines=get('ignore_blank_lines', 'ignoreblanklines'),
1302 1302 context=get('unified', getter=ui.config))
1303 1303
1304 1304 def diff(repo, node1=None, node2=None, match=None, changes=None, opts=None,
1305 1305 losedatafn=None, prefix=''):
1306 1306 '''yields diff of changes to files between two nodes, or node and
1307 1307 working directory.
1308 1308
1309 1309 if node1 is None, use first dirstate parent instead.
1310 1310 if node2 is None, compare node1 with working directory.
1311 1311
1312 1312 losedatafn(**kwarg) is a callable run when opts.upgrade=True and
1313 1313 every time some change cannot be represented with the current
1314 1314 patch format. Return False to upgrade to git patch format, True to
1315 1315 accept the loss or raise an exception to abort the diff. It is
1316 1316 called with the name of current file being diffed as 'fn'. If set
1317 1317 to None, patches will always be upgraded to git format when
1318 1318 necessary.
1319 1319
1320 1320 prefix is a filename prefix that is prepended to all filenames on
1321 1321 display (used for subrepos).
1322 1322 '''
1323 1323
1324 1324 if opts is None:
1325 1325 opts = mdiff.defaultopts
1326 1326
1327 1327 if not node1 and not node2:
1328 1328 node1 = repo.dirstate.p1()
1329 1329
1330 1330 def lrugetfilectx():
1331 1331 cache = {}
1332 1332 order = []
1333 1333 def getfilectx(f, ctx):
1334 1334 fctx = ctx.filectx(f, filelog=cache.get(f))
1335 1335 if f not in cache:
1336 1336 if len(cache) > 20:
1337 1337 del cache[order.pop(0)]
1338 1338 cache[f] = fctx.filelog()
1339 1339 else:
1340 1340 order.remove(f)
1341 1341 order.append(f)
1342 1342 return fctx
1343 1343 return getfilectx
1344 1344 getfilectx = lrugetfilectx()
1345 1345
1346 1346 ctx1 = repo[node1]
1347 1347 ctx2 = repo[node2]
1348 1348
1349 1349 if not changes:
1350 1350 changes = repo.status(ctx1, ctx2, match=match)
1351 1351 modified, added, removed = changes[:3]
1352 1352
1353 1353 if not modified and not added and not removed:
1354 1354 return []
1355 1355
1356 1356 revs = None
1357 1357 if not repo.ui.quiet:
1358 1358 hexfunc = repo.ui.debugflag and hex or short
1359 1359 revs = [hexfunc(node) for node in [node1, node2] if node]
1360 1360
1361 1361 copy = {}
1362 1362 if opts.git or opts.upgrade:
1363 1363 copy = copies.copies(repo, ctx1, ctx2, repo[nullid])[0]
1364 1364
1365 1365 difffn = lambda opts, losedata: trydiff(repo, revs, ctx1, ctx2,
1366 1366 modified, added, removed, copy, getfilectx, opts, losedata, prefix)
1367 1367 if opts.upgrade and not opts.git:
1368 1368 try:
1369 1369 def losedata(fn):
1370 1370 if not losedatafn or not losedatafn(fn=fn):
1371 1371 raise GitDiffRequired()
1372 1372 # Buffer the whole output until we are sure it can be generated
1373 1373 return list(difffn(opts.copy(git=False), losedata))
1374 1374 except GitDiffRequired:
1375 1375 return difffn(opts.copy(git=True), None)
1376 1376 else:
1377 1377 return difffn(opts, None)
1378 1378
1379 1379 def difflabel(func, *args, **kw):
1380 1380 '''yields 2-tuples of (output, label) based on the output of func()'''
1381 1381 prefixes = [('diff', 'diff.diffline'),
1382 1382 ('copy', 'diff.extended'),
1383 1383 ('rename', 'diff.extended'),
1384 1384 ('old', 'diff.extended'),
1385 1385 ('new', 'diff.extended'),
1386 1386 ('deleted', 'diff.extended'),
1387 1387 ('---', 'diff.file_a'),
1388 1388 ('+++', 'diff.file_b'),
1389 1389 ('@@', 'diff.hunk'),
1390 1390 ('-', 'diff.deleted'),
1391 1391 ('+', 'diff.inserted')]
1392 1392
1393 1393 for chunk in func(*args, **kw):
1394 1394 lines = chunk.split('\n')
1395 1395 for i, line in enumerate(lines):
1396 1396 if i != 0:
1397 1397 yield ('\n', '')
1398 1398 stripline = line
1399 1399 if line and line[0] in '+-':
1400 1400 # highlight trailing whitespace, but only in changed lines
1401 1401 stripline = line.rstrip()
1402 1402 for prefix, label in prefixes:
1403 1403 if stripline.startswith(prefix):
1404 1404 yield (stripline, label)
1405 1405 break
1406 1406 else:
1407 1407 yield (line, '')
1408 1408 if line != stripline:
1409 1409 yield (line[len(stripline):], 'diff.trailingwhitespace')
1410 1410
1411 1411 def diffui(*args, **kw):
1412 1412 '''like diff(), but yields 2-tuples of (output, label) for ui.write()'''
1413 1413 return difflabel(diff, *args, **kw)
1414 1414
1415 1415
1416 1416 def _addmodehdr(header, omode, nmode):
1417 1417 if omode != nmode:
1418 1418 header.append('old mode %s\n' % omode)
1419 1419 header.append('new mode %s\n' % nmode)
1420 1420
1421 1421 def trydiff(repo, revs, ctx1, ctx2, modified, added, removed,
1422 1422 copy, getfilectx, opts, losedatafn, prefix):
1423 1423
1424 1424 def join(f):
1425 1425 return os.path.join(prefix, f)
1426 1426
1427 1427 date1 = util.datestr(ctx1.date())
1428 1428 man1 = ctx1.manifest()
1429 1429
1430 1430 gone = set()
1431 1431 gitmode = {'l': '120000', 'x': '100755', '': '100644'}
1432 1432
1433 1433 copyto = dict([(v, k) for k, v in copy.items()])
1434 1434
1435 1435 if opts.git:
1436 1436 revs = None
1437 1437
1438 1438 for f in sorted(modified + added + removed):
1439 1439 to = None
1440 1440 tn = None
1441 1441 dodiff = True
1442 1442 header = []
1443 1443 if f in man1:
1444 1444 to = getfilectx(f, ctx1).data()
1445 1445 if f not in removed:
1446 1446 tn = getfilectx(f, ctx2).data()
1447 1447 a, b = f, f
1448 1448 if opts.git or losedatafn:
1449 1449 if f in added:
1450 1450 mode = gitmode[ctx2.flags(f)]
1451 1451 if f in copy or f in copyto:
1452 1452 if opts.git:
1453 1453 if f in copy:
1454 1454 a = copy[f]
1455 1455 else:
1456 1456 a = copyto[f]
1457 1457 omode = gitmode[man1.flags(a)]
1458 1458 _addmodehdr(header, omode, mode)
1459 1459 if a in removed and a not in gone:
1460 1460 op = 'rename'
1461 1461 gone.add(a)
1462 1462 else:
1463 1463 op = 'copy'
1464 1464 header.append('%s from %s\n' % (op, join(a)))
1465 1465 header.append('%s to %s\n' % (op, join(f)))
1466 1466 to = getfilectx(a, ctx1).data()
1467 1467 else:
1468 1468 losedatafn(f)
1469 1469 else:
1470 1470 if opts.git:
1471 1471 header.append('new file mode %s\n' % mode)
1472 1472 elif ctx2.flags(f):
1473 1473 losedatafn(f)
1474 1474 # In theory, if tn was copied or renamed we should check
1475 1475 # if the source is binary too but the copy record already
1476 1476 # forces git mode.
1477 1477 if util.binary(tn):
1478 1478 if opts.git:
1479 1479 dodiff = 'binary'
1480 1480 else:
1481 1481 losedatafn(f)
1482 1482 if not opts.git and not tn:
1483 1483 # regular diffs cannot represent new empty file
1484 1484 losedatafn(f)
1485 1485 elif f in removed:
1486 1486 if opts.git:
1487 1487 # have we already reported a copy above?
1488 1488 if ((f in copy and copy[f] in added
1489 1489 and copyto[copy[f]] == f) or
1490 1490 (f in copyto and copyto[f] in added
1491 1491 and copy[copyto[f]] == f)):
1492 1492 dodiff = False
1493 1493 else:
1494 1494 header.append('deleted file mode %s\n' %
1495 1495 gitmode[man1.flags(f)])
1496 1496 elif not to or util.binary(to):
1497 1497 # regular diffs cannot represent empty file deletion
1498 1498 losedatafn(f)
1499 1499 else:
1500 1500 oflag = man1.flags(f)
1501 1501 nflag = ctx2.flags(f)
1502 1502 binary = util.binary(to) or util.binary(tn)
1503 1503 if opts.git:
1504 1504 _addmodehdr(header, gitmode[oflag], gitmode[nflag])
1505 1505 if binary:
1506 1506 dodiff = 'binary'
1507 1507 elif binary or nflag != oflag:
1508 1508 losedatafn(f)
1509 1509 if opts.git:
1510 1510 header.insert(0, mdiff.diffline(revs, join(a), join(b), opts))
1511 1511
1512 1512 if dodiff:
1513 1513 if dodiff == 'binary':
1514 1514 text = b85diff(to, tn)
1515 1515 else:
1516 1516 text = mdiff.unidiff(to, date1,
1517 1517 # ctx2 date may be dynamic
1518 1518 tn, util.datestr(ctx2.date()),
1519 1519 join(a), join(b), revs, opts=opts)
1520 1520 if header and (text or len(header) > 1):
1521 1521 yield ''.join(header)
1522 1522 if text:
1523 1523 yield text
1524 1524
1525 1525 def diffstatdata(lines):
1526 1526 diffre = re.compile('^diff .*-r [a-z0-9]+\s(.*)$')
1527 1527
1528 1528 filename, adds, removes = None, 0, 0
1529 1529 for line in lines:
1530 1530 if line.startswith('diff'):
1531 1531 if filename:
1532 1532 isbinary = adds == 0 and removes == 0
1533 1533 yield (filename, adds, removes, isbinary)
1534 1534 # set numbers to 0 anyway when starting new file
1535 1535 adds, removes = 0, 0
1536 1536 if line.startswith('diff --git'):
1537 1537 filename = gitre.search(line).group(1)
1538 1538 elif line.startswith('diff -r'):
1539 1539 # format: "diff -r ... -r ... filename"
1540 1540 filename = diffre.search(line).group(1)
1541 1541 elif line.startswith('+') and not line.startswith('+++'):
1542 1542 adds += 1
1543 1543 elif line.startswith('-') and not line.startswith('---'):
1544 1544 removes += 1
1545 1545 if filename:
1546 1546 isbinary = adds == 0 and removes == 0
1547 1547 yield (filename, adds, removes, isbinary)
1548 1548
1549 1549 def diffstat(lines, width=80, git=False):
1550 1550 output = []
1551 1551 stats = list(diffstatdata(lines))
1552 1552
1553 1553 maxtotal, maxname = 0, 0
1554 1554 totaladds, totalremoves = 0, 0
1555 1555 hasbinary = False
1556 1556
1557 1557 sized = [(filename, adds, removes, isbinary, encoding.colwidth(filename))
1558 1558 for filename, adds, removes, isbinary in stats]
1559 1559
1560 1560 for filename, adds, removes, isbinary, namewidth in sized:
1561 1561 totaladds += adds
1562 1562 totalremoves += removes
1563 1563 maxname = max(maxname, namewidth)
1564 1564 maxtotal = max(maxtotal, adds + removes)
1565 1565 if isbinary:
1566 1566 hasbinary = True
1567 1567
1568 1568 countwidth = len(str(maxtotal))
1569 1569 if hasbinary and countwidth < 3:
1570 1570 countwidth = 3
1571 1571 graphwidth = width - countwidth - maxname - 6
1572 1572 if graphwidth < 10:
1573 1573 graphwidth = 10
1574 1574
1575 1575 def scale(i):
1576 1576 if maxtotal <= graphwidth:
1577 1577 return i
1578 1578 # If diffstat runs out of room it doesn't print anything,
1579 1579 # which isn't very useful, so always print at least one + or -
1580 1580 # if there were at least some changes.
1581 1581 return max(i * graphwidth // maxtotal, int(bool(i)))
1582 1582
1583 1583 for filename, adds, removes, isbinary, namewidth in sized:
1584 1584 if git and isbinary:
1585 1585 count = 'Bin'
1586 1586 else:
1587 1587 count = adds + removes
1588 1588 pluses = '+' * scale(adds)
1589 1589 minuses = '-' * scale(removes)
1590 1590 output.append(' %s%s | %*s %s%s\n' %
1591 1591 (filename, ' ' * (maxname - namewidth),
1592 1592 countwidth, count,
1593 1593 pluses, minuses))
1594 1594
1595 1595 if stats:
1596 1596 output.append(_(' %d files changed, %d insertions(+), %d deletions(-)\n')
1597 1597 % (len(stats), totaladds, totalremoves))
1598 1598
1599 1599 return ''.join(output)
1600 1600
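A small sketch of feeding diffstat() a list of diff output lines; the lines below are invented:

    lines = ['diff -r 000000000000 -r 111111111111 foo.c',
             '--- a/foo.c',
             '+++ b/foo.c',
             '@@ -1,2 +1,2 @@',
             '-old line',
             '+new line',
             '+another line']
    print(diffstat(lines))
    # roughly: " foo.c | 3 ++-" plus a
    # " 1 files changed, 2 insertions(+), 1 deletions(-)" summary line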
1601 1601 def diffstatui(*args, **kw):
1602 1602 '''like diffstat(), but yields 2-tuples of (output, label) for
1603 1603 ui.write()
1604 1604 '''
1605 1605
1606 1606 for line in diffstat(*args, **kw).splitlines():
1607 1607 if line and line[-1] in '+-':
1608 1608 name, graph = line.rsplit(' ', 1)
1609 1609 yield (name + ' ', '')
1610 1610 m = re.search(r'\++', graph)
1611 1611 if m:
1612 1612 yield (m.group(0), 'diffstat.inserted')
1613 1613 m = re.search(r'-+', graph)
1614 1614 if m:
1615 1615 yield (m.group(0), 'diffstat.deleted')
1616 1616 else:
1617 1617 yield (line, '')
1618 1618 yield ('\n', '')
@@ -1,331 +1,331 b''
1 1 # posix.py - Posix utility function implementations for Mercurial
2 2 #
3 3 # Copyright 2005-2009 Matt Mackall <mpm@selenic.com> and others
4 4 #
5 5 # This software may be used and distributed according to the terms of the
6 6 # GNU General Public License version 2 or any later version.
7 7
8 8 from i18n import _
9 9 import os, sys, errno, stat, getpass, pwd, grp, tempfile
10 10
11 11 posixfile = open
12 12 nulldev = '/dev/null'
13 13 normpath = os.path.normpath
14 14 samestat = os.path.samestat
15 15 os_link = os.link
16 16 unlink = os.unlink
17 17 rename = os.rename
18 18 expandglobs = False
19 19
20 20 umask = os.umask(0)
21 21 os.umask(umask)
22 22
23 23 def openhardlinks():
24 24 '''return true if it is safe to hold open file handles to hardlinks'''
25 25 return True
26 26
27 27 def nlinks(name):
28 28 '''return number of hardlinks for the given file'''
29 29 return os.lstat(name).st_nlink
30 30
31 31 def parsepatchoutput(output_line):
32 32 """parses the output produced by patch and returns the filename"""
33 33 pf = output_line[14:]
34 34 if os.sys.platform == 'OpenVMS':
35 35 if pf[0] == '`':
36 36 pf = pf[1:-1] # Remove the quotes
37 37 else:
38 38 if pf.startswith("'") and pf.endswith("'") and " " in pf:
39 39 pf = pf[1:-1] # Remove the quotes
40 40 return pf
41 41
42 42 def sshargs(sshcmd, host, user, port):
43 43 '''Build argument list for ssh'''
44 44 args = user and ("%s@%s" % (user, host)) or host
45 45 return port and ("%s -p %s" % (args, port)) or args
46 46
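For illustration, sshargs() only assembles the user/host/port portion of the command line; the values are made up:

    print(sshargs('ssh', 'example.com', 'bob', '2222'))   # bob@example.com -p 2222
    print(sshargs('ssh', 'example.com', None, None))      # example.com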
47 47 def is_exec(f):
48 48 """check whether a file is executable"""
49 49 return (os.lstat(f).st_mode & 0100 != 0)
50 50
51 51 def setflags(f, l, x):
52 52 s = os.lstat(f).st_mode
53 53 if l:
54 54 if not stat.S_ISLNK(s):
55 55 # switch file to link
56 56 fp = open(f)
57 57 data = fp.read()
58 58 fp.close()
59 59 os.unlink(f)
60 60 try:
61 61 os.symlink(data, f)
62 62 except OSError:
63 63 # failed to make a link, rewrite file
64 64 fp = open(f, "w")
65 65 fp.write(data)
66 66 fp.close()
67 67 # no chmod needed at this point
68 68 return
69 69 if stat.S_ISLNK(s):
70 70 # switch link to file
71 71 data = os.readlink(f)
72 72 os.unlink(f)
73 73 fp = open(f, "w")
74 74 fp.write(data)
75 75 fp.close()
76 76 s = 0666 & ~umask # avoid restatting for chmod
77 77
78 78 sx = s & 0100
79 79 if x and not sx:
80 80 # Turn on +x for every +r bit when making a file executable
81 81 # and obey umask.
82 82 os.chmod(f, s | (s & 0444) >> 2 & ~umask)
83 83 elif not x and sx:
84 84 # Turn off all +x bits
85 85 os.chmod(f, s & 0666)
86 86
87 87 def checkexec(path):
88 88 """
89 89 Check whether the given path is on a filesystem with UNIX-like exec flags
90 90
91 91 Requires a directory (like /foo/.hg)
92 92 """
93 93
94 94 # VFAT on some Linux versions can flip mode but it doesn't persist
95 95 # across a FS remount. Frequently we can detect it if files are created
96 96 # with exec bit on.
97 97
98 98 try:
99 99 EXECFLAGS = stat.S_IXUSR | stat.S_IXGRP | stat.S_IXOTH
100 100 fh, fn = tempfile.mkstemp(dir=path, prefix='hg-checkexec-')
101 101 try:
102 102 os.close(fh)
103 103 m = os.stat(fn).st_mode & 0777
104 104 new_file_has_exec = m & EXECFLAGS
105 105 os.chmod(fn, m ^ EXECFLAGS)
106 106 exec_flags_cannot_flip = ((os.stat(fn).st_mode & 0777) == m)
107 107 finally:
108 108 os.unlink(fn)
109 109 except (IOError, OSError):
110 110 # we don't care, the user probably won't be able to commit anyway
111 111 return False
112 112 return not (new_file_has_exec or exec_flags_cannot_flip)
113 113
114 114 def checklink(path):
115 115 """check whether the given path is on a symlink-capable filesystem"""
116 116 # mktemp is not racy because symlink creation will fail if the
117 117 # file already exists
118 118 name = tempfile.mktemp(dir=path, prefix='hg-checklink-')
119 119 try:
120 120 os.symlink(".", name)
121 121 os.unlink(name)
122 122 return True
123 123 except (OSError, AttributeError):
124 124 return False
125 125
126 126 def checkosfilename(path):
127 127 '''Check that the base-relative path is a valid filename on this platform.
128 128 Returns None if the path is ok, or a UI string describing the problem.'''
129 129 pass # on posix platforms, every path is ok
130 130
131 131 def setbinary(fd):
132 132 pass
133 133
134 134 def pconvert(path):
135 135 return path
136 136
137 137 def localpath(path):
138 138 return path
139 139
140 140 def samefile(fpath1, fpath2):
141 141 """Returns whether path1 and path2 refer to the same file. This is only
142 142 guaranteed to work for files, not directories."""
143 143 return os.path.samefile(fpath1, fpath2)
144 144
145 145 def samedevice(fpath1, fpath2):
146 146 """Returns whether fpath1 and fpath2 are on the same device. This is only
147 147 guaranteed to work for files, not directories."""
148 148 st1 = os.lstat(fpath1)
149 149 st2 = os.lstat(fpath2)
150 150 return st1.st_dev == st2.st_dev
151 151
152 152 if sys.platform == 'darwin':
153 153 import fcntl # only needed on darwin, missing on jython
154 154 def realpath(path):
155 155 '''
156 156 Returns the true, canonical file system path equivalent to the given
157 157 path.
158 158
159 159 Equivalent means, in this case, resulting in the same, unique
160 160 file system link to the path. Every file system entry, whether a file,
161 161 directory, hard link or symbolic link or special, will have a single
162 162 path preferred by the system, but may allow multiple, differing path
163 163 lookups to point to it.
164 164
165 165 Most regular UNIX file systems only allow a file system entry to be
166 166 looked up by its distinct path. Obviously, this does not apply to case
167 167 insensitive file systems, whether case preserving or not. The most
168 168 complex issue to deal with is file systems transparently reencoding the
169 169 path, such as the non-standard Unicode normalisation required for HFS+
170 170 and HFSX.
171 171 '''
172 172 # Constants copied from /usr/include/sys/fcntl.h
173 173 F_GETPATH = 50
174 174 O_SYMLINK = 0x200000
175 175
176 176 try:
177 177 fd = os.open(path, O_SYMLINK)
178 178 except OSError, err:
179 179 if err.errno == errno.ENOENT:
180 180 return path
181 181 raise
182 182
183 183 try:
184 184 return fcntl.fcntl(fd, F_GETPATH, '\0' * 1024).rstrip('\0')
185 185 finally:
186 186 os.close(fd)
187 187 else:
188 188 # Fallback to the likely inadequate Python builtin function.
189 189 realpath = os.path.realpath
190 190
191 191 def shellquote(s):
192 192 if os.sys.platform == 'OpenVMS':
193 193 return '"%s"' % s
194 194 else:
195 195 return "'%s'" % s.replace("'", "'\\''")
196 196
197 197 def quotecommand(cmd):
198 198 return cmd
199 199
200 200 def popen(command, mode='r'):
201 201 return os.popen(command, mode)
202 202
203 203 def testpid(pid):
204 204 '''return False if pid dead, True if running or not sure'''
205 205 if os.sys.platform == 'OpenVMS':
206 206 return True
207 207 try:
208 208 os.kill(pid, 0)
209 209 return True
210 210 except OSError, inst:
211 211 return inst.errno != errno.ESRCH
212 212
213 def explain_exit(code):
213 def explainexit(code):
214 214 """return a 2-tuple (desc, code) describing a subprocess status
215 215 (codes from kill are negative - not os.system/wait encoding)"""
216 216 if code >= 0:
217 217 return _("exited with status %d") % code, code
218 218 return _("killed by signal %d") % -code, -code
219 219
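An illustration of the renamed explainexit() (not from the original source); the literal messages assume an untranslated English gettext catalogue:

    >>> explainexit(0)
    ('exited with status 0', 0)
    >>> explainexit(-9)
    ('killed by signal 9', 9)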
220 220 def isowner(st):
221 221 """Return True if the stat object st is from the current user."""
222 222 return st.st_uid == os.getuid()
223 223
224 224 def find_exe(command):
225 225 '''Find executable for command searching like which does.
226 226 If command is a basename then PATH is searched for command.
227 227 PATH isn't searched if command is an absolute or relative path.
228 228 If command isn't found None is returned.'''
229 229 if sys.platform == 'OpenVMS':
230 230 return command
231 231
232 232 def findexisting(executable):
233 233 'Will return executable if existing file'
234 234 if os.path.exists(executable):
235 235 return executable
236 236 return None
237 237
238 238 if os.sep in command:
239 239 return findexisting(command)
240 240
241 241 for path in os.environ.get('PATH', '').split(os.pathsep):
242 242 executable = findexisting(os.path.join(path, command))
243 243 if executable is not None:
244 244 return executable
245 245 return None
246 246
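A sketch of find_exe() behaviour assuming a typical Unix layout; '/bin/sh' is only an example result, not a guarantee:

    >>> find_exe('sh')        # basename: searched on $PATH
    '/bin/sh'
    >>> find_exe('/bin/sh')   # contains os.sep: only checked for existence
    '/bin/sh'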
247 247 def set_signal_handler():
248 248 pass
249 249
250 250 def statfiles(files):
251 251 'Stat each file in files and yield stat or None if file does not exist.'
252 252 lstat = os.lstat
253 253 for nf in files:
254 254 try:
255 255 st = lstat(nf)
256 256 except OSError, err:
257 257 if err.errno not in (errno.ENOENT, errno.ENOTDIR):
258 258 raise
259 259 st = None
260 260 yield st
261 261
262 262 def getuser():
263 263 '''return name of current user'''
264 264 return getpass.getuser()
265 265
266 266 def expand_glob(pats):
267 267 '''On Windows, expand the implicit globs in a list of patterns'''
268 268 return list(pats)
269 269
270 270 def username(uid=None):
271 271 """Return the name of the user with the given uid.
272 272
273 273 If uid is None, return the name of the current user."""
274 274
275 275 if uid is None:
276 276 uid = os.getuid()
277 277 try:
278 278 return pwd.getpwuid(uid)[0]
279 279 except KeyError:
280 280 return str(uid)
281 281
282 282 def groupname(gid=None):
283 283 """Return the name of the group with the given gid.
284 284
285 285 If gid is None, return the name of the current group."""
286 286
287 287 if gid is None:
288 288 gid = os.getgid()
289 289 try:
290 290 return grp.getgrgid(gid)[0]
291 291 except KeyError:
292 292 return str(gid)
293 293
294 294 def groupmembers(name):
295 295 """Return the list of members of the group with the given
296 296 name, KeyError if the group does not exist.
297 297 """
298 298 return list(grp.getgrnam(name).gr_mem)
299 299
300 300 def spawndetached(args):
301 301 return os.spawnvp(os.P_NOWAIT | getattr(os, 'P_DETACH', 0),
302 302 args[0], args)
303 303
304 304 def gethgcmd():
305 305 return sys.argv[:1]
306 306
307 307 def termwidth():
308 308 try:
309 309 import termios, array, fcntl
310 310 for dev in (sys.stderr, sys.stdout, sys.stdin):
311 311 try:
312 312 try:
313 313 fd = dev.fileno()
314 314 except AttributeError:
315 315 continue
316 316 if not os.isatty(fd):
317 317 continue
318 318 arri = fcntl.ioctl(fd, termios.TIOCGWINSZ, '\0' * 8)
319 319 width = array.array('h', arri)[1]
320 320 if width > 0:
321 321 return width
322 322 except ValueError:
323 323 pass
324 324 except IOError, e:
325 325 if e[0] == errno.EINVAL:
326 326 pass
327 327 else:
328 328 raise
329 329 except ImportError:
330 330 pass
331 331 return 80
@@ -1,1590 +1,1590 b''
1 1 # util.py - Mercurial utility functions and platform specific implementations
2 2 #
3 3 # Copyright 2005 K. Thananchayan <thananck@yahoo.com>
4 4 # Copyright 2005-2007 Matt Mackall <mpm@selenic.com>
5 5 # Copyright 2006 Vadim Gelfer <vadim.gelfer@gmail.com>
6 6 #
7 7 # This software may be used and distributed according to the terms of the
8 8 # GNU General Public License version 2 or any later version.
9 9
10 10 """Mercurial utility functions and platform specfic implementations.
11 11
12 12 This contains helper routines that are independent of the SCM core and
13 13 hide platform-specific details from the core.
14 14 """
15 15
16 16 from i18n import _
17 17 import error, osutil, encoding
18 18 import errno, re, shutil, sys, tempfile, traceback
19 19 import os, time, calendar, textwrap, unicodedata, signal
20 20 import imp, socket, urllib
21 21
22 22 # Python compatibility
23 23
24 24 def sha1(s):
25 25 return _fastsha1(s)
26 26
27 27 def _fastsha1(s):
28 28 # This function will import sha1 from hashlib or sha (whichever is
29 29 # available) and overwrite itself with it on the first call.
30 30 # Subsequent calls will go directly to the imported function.
31 31 if sys.version_info >= (2, 5):
32 32 from hashlib import sha1 as _sha1
33 33 else:
34 34 from sha import sha as _sha1
35 35 global _fastsha1, sha1
36 36 _fastsha1 = sha1 = _sha1
37 37 return _sha1(s)
38 38
39 39 import __builtin__
40 40
41 41 if sys.version_info[0] < 3:
42 42 def fakebuffer(sliceable, offset=0):
43 43 return sliceable[offset:]
44 44 else:
45 45 def fakebuffer(sliceable, offset=0):
46 46 return memoryview(sliceable)[offset:]
47 47 try:
48 48 buffer
49 49 except NameError:
50 50 __builtin__.buffer = fakebuffer
51 51
52 52 import subprocess
53 53 closefds = os.name == 'posix'
54 54
55 55 def popen2(cmd, env=None, newlines=False):
56 56 # Setting bufsize to -1 lets the system decide the buffer size.
57 57 # The default for bufsize is 0, meaning unbuffered. This leads to
58 58 # poor performance on Mac OS X: http://bugs.python.org/issue4194
59 59 p = subprocess.Popen(cmd, shell=True, bufsize=-1,
60 60 close_fds=closefds,
61 61 stdin=subprocess.PIPE, stdout=subprocess.PIPE,
62 62 universal_newlines=newlines,
63 63 env=env)
64 64 return p.stdin, p.stdout
65 65
66 66 def popen3(cmd, env=None, newlines=False):
67 67 p = subprocess.Popen(cmd, shell=True, bufsize=-1,
68 68 close_fds=closefds,
69 69 stdin=subprocess.PIPE, stdout=subprocess.PIPE,
70 70 stderr=subprocess.PIPE,
71 71 universal_newlines=newlines,
72 72 env=env)
73 73 return p.stdin, p.stdout, p.stderr
74 74
75 75 def version():
76 76 """Return version information if available."""
77 77 try:
78 78 import __version__
79 79 return __version__.version
80 80 except ImportError:
81 81 return 'unknown'
82 82
83 83 # used by parsedate
84 84 defaultdateformats = (
85 85 '%Y-%m-%d %H:%M:%S',
86 86 '%Y-%m-%d %I:%M:%S%p',
87 87 '%Y-%m-%d %H:%M',
88 88 '%Y-%m-%d %I:%M%p',
89 89 '%Y-%m-%d',
90 90 '%m-%d',
91 91 '%m/%d',
92 92 '%m/%d/%y',
93 93 '%m/%d/%Y',
94 94 '%a %b %d %H:%M:%S %Y',
95 95 '%a %b %d %I:%M:%S%p %Y',
96 96 '%a, %d %b %Y %H:%M:%S', # GNU coreutils "/bin/date --rfc-2822"
97 97 '%b %d %H:%M:%S %Y',
98 98 '%b %d %I:%M:%S%p %Y',
99 99 '%b %d %H:%M:%S',
100 100 '%b %d %I:%M:%S%p',
101 101 '%b %d %H:%M',
102 102 '%b %d %I:%M%p',
103 103 '%b %d %Y',
104 104 '%b %d',
105 105 '%H:%M:%S',
106 106 '%I:%M:%S%p',
107 107 '%H:%M',
108 108 '%I:%M%p',
109 109 )
110 110
111 111 extendeddateformats = defaultdateformats + (
112 112 "%Y",
113 113 "%Y-%m",
114 114 "%b",
115 115 "%b %Y",
116 116 )
117 117
118 118 def cachefunc(func):
119 119 '''cache the result of function calls'''
120 120 # XXX doesn't handle keywords args
121 121 cache = {}
122 122 if func.func_code.co_argcount == 1:
123 123 # we gain a small amount of time because
124 124 # we don't need to pack/unpack the list
125 125 def f(arg):
126 126 if arg not in cache:
127 127 cache[arg] = func(arg)
128 128 return cache[arg]
129 129 else:
130 130 def f(*args):
131 131 if args not in cache:
132 132 cache[args] = func(*args)
133 133 return cache[args]
134 134
135 135 return f
136 136
137 137 def lrucachefunc(func):
138 138 '''cache most recent results of function calls'''
139 139 cache = {}
140 140 order = []
141 141 if func.func_code.co_argcount == 1:
142 142 def f(arg):
143 143 if arg not in cache:
144 144 if len(cache) > 20:
145 145 del cache[order.pop(0)]
146 146 cache[arg] = func(arg)
147 147 else:
148 148 order.remove(arg)
149 149 order.append(arg)
150 150 return cache[arg]
151 151 else:
152 152 def f(*args):
153 153 if args not in cache:
154 154 if len(cache) > 20:
155 155 del cache[order.pop(0)]
156 156 cache[args] = func(*args)
157 157 else:
158 158 order.remove(args)
159 159 order.append(args)
160 160 return cache[args]
161 161
162 162 return f
163 163
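A minimal sketch of lrucachefunc() (not in the original changeset); it keeps roughly the 20 most recently used argument sets and reuses cached results, so the wrapped function runs only once per argument here:

    >>> calls = []
    >>> def expensive(x):
    ...     calls.append(x)
    ...     return x * 2
    >>> cached = lrucachefunc(expensive)
    >>> cached(3), cached(3), len(calls)
    (6, 6, 1)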
164 164 class propertycache(object):
165 165 def __init__(self, func):
166 166 self.func = func
167 167 self.name = func.__name__
168 168 def __get__(self, obj, type=None):
169 169 result = self.func(obj)
170 170 setattr(obj, self.name, result)
171 171 return result
172 172
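A hypothetical use of propertycache: the first attribute access runs the function, then the computed value is stored on the instance and shadows the descriptor (the example class and value are made up):

    >>> class numbers(object):
    ...     @propertycache
    ...     def big(self):
    ...         print 'computing'
    ...         return 42
    >>> n = numbers()
    >>> n.big
    computing
    42
    >>> n.big        # served from the instance dict, no recomputation
    42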
173 173 def pipefilter(s, cmd):
174 174 '''filter string S through command CMD, returning its output'''
175 175 p = subprocess.Popen(cmd, shell=True, close_fds=closefds,
176 176 stdin=subprocess.PIPE, stdout=subprocess.PIPE)
177 177 pout, perr = p.communicate(s)
178 178 return pout
179 179
180 180 def tempfilter(s, cmd):
181 181 '''filter string S through a pair of temporary files with CMD.
182 182 CMD is used as a template to create the real command to be run,
183 183 with the strings INFILE and OUTFILE replaced by the real names of
184 184 the temporary files generated.'''
185 185 inname, outname = None, None
186 186 try:
187 187 infd, inname = tempfile.mkstemp(prefix='hg-filter-in-')
188 188 fp = os.fdopen(infd, 'wb')
189 189 fp.write(s)
190 190 fp.close()
191 191 outfd, outname = tempfile.mkstemp(prefix='hg-filter-out-')
192 192 os.close(outfd)
193 193 cmd = cmd.replace('INFILE', inname)
194 194 cmd = cmd.replace('OUTFILE', outname)
195 195 code = os.system(cmd)
196 196 if sys.platform == 'OpenVMS' and code & 1:
197 197 code = 0
198 198 if code:
199 199 raise Abort(_("command '%s' failed: %s") %
200 (cmd, explain_exit(code)))
200 (cmd, explainexit(code)))
201 201 fp = open(outname, 'rb')
202 202 r = fp.read()
203 203 fp.close()
204 204 return r
205 205 finally:
206 206 try:
207 207 if inname:
208 208 os.unlink(inname)
209 209 except OSError:
210 210 pass
211 211 try:
212 212 if outname:
213 213 os.unlink(outname)
214 214 except OSError:
215 215 pass
216 216
217 217 filtertable = {
218 218 'tempfile:': tempfilter,
219 219 'pipe:': pipefilter,
220 220 }
221 221
222 222 def filter(s, cmd):
223 223 "filter a string through a command that transforms its input to its output"
224 224 for name, fn in filtertable.iteritems():
225 225 if cmd.startswith(name):
226 226 return fn(s, cmd[len(name):].lstrip())
227 227 return pipefilter(s, cmd)
228 228
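An illustrative call to filter(), assuming a Unix 'tr' is on PATH; the 'pipe:' prefix routes through pipefilter, while a 'tempfile:' prefix would use tempfilter with INFILE/OUTFILE substitution:

    >>> filter('mercurial\n', 'pipe: tr a-z A-Z')
    'MERCURIAL\n'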
229 229 def binary(s):
230 230 """return true if a string is binary data"""
231 231 return bool(s and '\0' in s)
232 232
233 233 def increasingchunks(source, min=1024, max=65536):
234 234 '''return no less than min bytes per chunk while data remains,
235 235 doubling min after each chunk until it reaches max'''
236 236 def log2(x):
237 237 if not x:
238 238 return 0
239 239 i = 0
240 240 while x:
241 241 x >>= 1
242 242 i += 1
243 243 return i - 1
244 244
245 245 buf = []
246 246 blen = 0
247 247 for chunk in source:
248 248 buf.append(chunk)
249 249 blen += len(chunk)
250 250 if blen >= min:
251 251 if min < max:
252 252 min = min << 1
253 253 nmin = 1 << log2(blen)
254 254 if nmin > min:
255 255 min = nmin
256 256 if min > max:
257 257 min = max
258 258 yield ''.join(buf)
259 259 blen = 0
260 260 buf = []
261 261 if buf:
262 262 yield ''.join(buf)
263 263
264 264 Abort = error.Abort
265 265
266 266 def always(fn):
267 267 return True
268 268
269 269 def never(fn):
270 270 return False
271 271
272 272 def pathto(root, n1, n2):
273 273 '''return the relative path from one place to another.
274 274 root should use os.sep to separate directories
275 275 n1 should use os.sep to separate directories
276 276 n2 should use "/" to separate directories
277 277 returns an os.sep-separated path.
278 278
279 279 If n1 is a relative path, it's assumed it's
280 280 relative to root.
281 281 n2 should always be relative to root.
282 282 '''
283 283 if not n1:
284 284 return localpath(n2)
285 285 if os.path.isabs(n1):
286 286 if os.path.splitdrive(root)[0] != os.path.splitdrive(n1)[0]:
287 287 return os.path.join(root, localpath(n2))
288 288 n2 = '/'.join((pconvert(root), n2))
289 289 a, b = splitpath(n1), n2.split('/')
290 290 a.reverse()
291 291 b.reverse()
292 292 while a and b and a[-1] == b[-1]:
293 293 a.pop()
294 294 b.pop()
295 295 b.reverse()
296 296 return os.sep.join((['..'] * len(a)) + b) or '.'
297 297
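A sketch of pathto() on a POSIX system (the paths are invented): both names are taken relative to root, and the result is the os.sep-separated path from n1 to n2:

    >>> pathto('/repo', 'a/b', 'a/c/d')
    '../c/d'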
298 298 _hgexecutable = None
299 299
300 300 def mainfrozen():
301 301 """return True if we are a frozen executable.
302 302
303 303 The code supports py2exe (most common, Windows only) and tools/freeze
304 304 (portable, not much used).
305 305 """
306 306 return (hasattr(sys, "frozen") or # new py2exe
307 307 hasattr(sys, "importers") or # old py2exe
308 308 imp.is_frozen("__main__")) # tools/freeze
309 309
310 310 def hgexecutable():
311 311 """return location of the 'hg' executable.
312 312
313 313 Defaults to $HG or 'hg' in the search path.
314 314 """
315 315 if _hgexecutable is None:
316 316 hg = os.environ.get('HG')
317 317 if hg:
318 318 _sethgexecutable(hg)
319 319 elif mainfrozen():
320 320 _sethgexecutable(sys.executable)
321 321 else:
322 322 exe = find_exe('hg') or os.path.basename(sys.argv[0])
323 323 _sethgexecutable(exe)
324 324 return _hgexecutable
325 325
326 326 def _sethgexecutable(path):
327 327 """set location of the 'hg' executable"""
328 328 global _hgexecutable
329 329 _hgexecutable = path
330 330
331 331 def system(cmd, environ={}, cwd=None, onerr=None, errprefix=None, out=None):
332 332 '''enhanced shell command execution.
333 333 run with environment maybe modified, maybe in different dir.
334 334
335 335 if command fails and onerr is None, return status. if ui object,
336 336 print error message and return status, else raise onerr object as
337 337 exception.
338 338
339 339 if out is specified, it is assumed to be a file-like object that has a
340 340 write() method. stdout and stderr will be redirected to out.'''
341 341 try:
342 342 sys.stdout.flush()
343 343 except Exception:
344 344 pass
345 345 def py2shell(val):
346 346 'convert python object into string that is useful to shell'
347 347 if val is None or val is False:
348 348 return '0'
349 349 if val is True:
350 350 return '1'
351 351 return str(val)
352 352 origcmd = cmd
353 353 cmd = quotecommand(cmd)
354 354 env = dict(os.environ)
355 355 env.update((k, py2shell(v)) for k, v in environ.iteritems())
356 356 env['HG'] = hgexecutable()
357 357 if out is None:
358 358 rc = subprocess.call(cmd, shell=True, close_fds=closefds,
359 359 env=env, cwd=cwd)
360 360 else:
361 361 proc = subprocess.Popen(cmd, shell=True, close_fds=closefds,
362 362 env=env, cwd=cwd, stdout=subprocess.PIPE,
363 363 stderr=subprocess.STDOUT)
364 364 for line in proc.stdout:
365 365 out.write(line)
366 366 proc.wait()
367 367 rc = proc.returncode
368 368 if sys.platform == 'OpenVMS' and rc & 1:
369 369 rc = 0
370 370 if rc and onerr:
371 371 errmsg = '%s %s' % (os.path.basename(origcmd.split(None, 1)[0]),
372 explain_exit(rc)[0])
372 explainexit(rc)[0])
373 373 if errprefix:
374 374 errmsg = '%s: %s' % (errprefix, errmsg)
375 375 try:
376 376 onerr.warn(errmsg + '\n')
377 377 except AttributeError:
378 378 raise onerr(errmsg)
379 379 return rc
380 380
381 381 def checksignature(func):
382 382 '''wrap a function with code to check for calling errors'''
383 383 def check(*args, **kwargs):
384 384 try:
385 385 return func(*args, **kwargs)
386 386 except TypeError:
387 387 if len(traceback.extract_tb(sys.exc_info()[2])) == 1:
388 388 raise error.SignatureError
389 389 raise
390 390
391 391 return check
392 392
393 393 def makedir(path, notindexed):
394 394 os.mkdir(path)
395 395
396 396 def unlinkpath(f):
397 397 """unlink and remove the directory if it is empty"""
398 398 os.unlink(f)
399 399 # try removing directories that might now be empty
400 400 try:
401 401 os.removedirs(os.path.dirname(f))
402 402 except OSError:
403 403 pass
404 404
405 405 def copyfile(src, dest):
406 406 "copy a file, preserving mode and atime/mtime"
407 407 if os.path.islink(src):
408 408 try:
409 409 os.unlink(dest)
410 410 except OSError:
411 411 pass
412 412 os.symlink(os.readlink(src), dest)
413 413 else:
414 414 try:
415 415 shutil.copyfile(src, dest)
416 416 shutil.copymode(src, dest)
417 417 except shutil.Error, inst:
418 418 raise Abort(str(inst))
419 419
420 420 def copyfiles(src, dst, hardlink=None):
421 421 """Copy a directory tree using hardlinks if possible"""
422 422
423 423 if hardlink is None:
424 424 hardlink = (os.stat(src).st_dev ==
425 425 os.stat(os.path.dirname(dst)).st_dev)
426 426
427 427 num = 0
428 428 if os.path.isdir(src):
429 429 os.mkdir(dst)
430 430 for name, kind in osutil.listdir(src):
431 431 srcname = os.path.join(src, name)
432 432 dstname = os.path.join(dst, name)
433 433 hardlink, n = copyfiles(srcname, dstname, hardlink)
434 434 num += n
435 435 else:
436 436 if hardlink:
437 437 try:
438 438 os_link(src, dst)
439 439 except (IOError, OSError):
440 440 hardlink = False
441 441 shutil.copy(src, dst)
442 442 else:
443 443 shutil.copy(src, dst)
444 444 num += 1
445 445
446 446 return hardlink, num
447 447
448 448 _windows_reserved_filenames = '''con prn aux nul
449 449 com1 com2 com3 com4 com5 com6 com7 com8 com9
450 450 lpt1 lpt2 lpt3 lpt4 lpt5 lpt6 lpt7 lpt8 lpt9'''.split()
451 451 _windows_reserved_chars = ':*?"<>|'
452 452 def checkwinfilename(path):
453 453 '''Check that the base-relative path is a valid filename on Windows.
454 454 Returns None if the path is ok, or a UI string describing the problem.
455 455
456 456 >>> checkwinfilename("just/a/normal/path")
457 457 >>> checkwinfilename("foo/bar/con.xml")
458 458 "filename contains 'con', which is reserved on Windows"
459 459 >>> checkwinfilename("foo/con.xml/bar")
460 460 "filename contains 'con', which is reserved on Windows"
461 461 >>> checkwinfilename("foo/bar/xml.con")
462 462 >>> checkwinfilename("foo/bar/AUX/bla.txt")
463 463 "filename contains 'AUX', which is reserved on Windows"
464 464 >>> checkwinfilename("foo/bar/bla:.txt")
465 465 "filename contains ':', which is reserved on Windows"
466 466 >>> checkwinfilename("foo/bar/b\07la.txt")
467 467 "filename contains '\\\\x07', which is invalid on Windows"
468 468 >>> checkwinfilename("foo/bar/bla ")
469 469 "filename ends with ' ', which is not allowed on Windows"
470 470 '''
471 471 for n in path.replace('\\', '/').split('/'):
472 472 if not n:
473 473 continue
474 474 for c in n:
475 475 if c in _windows_reserved_chars:
476 476 return _("filename contains '%s', which is reserved "
477 477 "on Windows") % c
478 478 if ord(c) <= 31:
479 479 return _("filename contains %r, which is invalid "
480 480 "on Windows") % c
481 481 base = n.split('.')[0]
482 482 if base and base.lower() in _windows_reserved_filenames:
483 483 return _("filename contains '%s', which is reserved "
484 484 "on Windows") % base
485 485 t = n[-1]
486 486 if t in '. ':
487 487 return _("filename ends with '%s', which is not allowed "
488 488 "on Windows") % t
489 489
490 490 def lookupreg(key, name=None, scope=None):
491 491 return None
492 492
493 493 def hidewindow():
494 494 """Hide current shell window.
495 495
496 496 Used to hide the window opened when starting asynchronous
497 497 child process under Windows, unneeded on other systems.
498 498 """
499 499 pass
500 500
501 501 if os.name == 'nt':
502 502 checkosfilename = checkwinfilename
503 503 from windows import *
504 504 else:
505 505 from posix import *
506 506
507 507 def makelock(info, pathname):
508 508 try:
509 509 return os.symlink(info, pathname)
510 510 except OSError, why:
511 511 if why.errno == errno.EEXIST:
512 512 raise
513 513 except AttributeError: # no symlink in os
514 514 pass
515 515
516 516 ld = os.open(pathname, os.O_CREAT | os.O_WRONLY | os.O_EXCL)
517 517 os.write(ld, info)
518 518 os.close(ld)
519 519
520 520 def readlock(pathname):
521 521 try:
522 522 return os.readlink(pathname)
523 523 except OSError, why:
524 524 if why.errno not in (errno.EINVAL, errno.ENOSYS):
525 525 raise
526 526 except AttributeError: # no symlink in os
527 527 pass
528 528 fp = posixfile(pathname)
529 529 r = fp.read()
530 530 fp.close()
531 531 return r
532 532
533 533 def fstat(fp):
534 534 '''stat file object that may not have fileno method.'''
535 535 try:
536 536 return os.fstat(fp.fileno())
537 537 except AttributeError:
538 538 return os.stat(fp.name)
539 539
540 540 # File system features
541 541
542 542 def checkcase(path):
543 543 """
544 544 Check whether the given path is on a case-sensitive filesystem
545 545
546 546 Requires a path (like /foo/.hg) ending with a foldable final
547 547 directory component.
548 548 """
549 549 s1 = os.stat(path)
550 550 d, b = os.path.split(path)
551 551 p2 = os.path.join(d, b.upper())
552 552 if path == p2:
553 553 p2 = os.path.join(d, b.lower())
554 554 try:
555 555 s2 = os.stat(p2)
556 556 if s2 == s1:
557 557 return False
558 558 return True
559 559 except OSError:
560 560 return True
561 561
562 562 _fspathcache = {}
563 563 def fspath(name, root):
564 564 '''Get name in the case stored in the filesystem
565 565
566 566 The name is either relative to root, or it is an absolute path starting
567 567 with root. Note that this function is unnecessary, and should not be
568 568 called, for case-sensitive filesystems (simply because it's expensive).
569 569 '''
570 570 # If name is absolute, make it relative
571 571 if name.lower().startswith(root.lower()):
572 572 l = len(root)
573 573 if name[l] == os.sep or name[l] == os.altsep:
574 574 l = l + 1
575 575 name = name[l:]
576 576
577 577 if not os.path.lexists(os.path.join(root, name)):
578 578 return None
579 579
580 580 seps = os.sep
581 581 if os.altsep:
582 582 seps = seps + os.altsep
583 583 # Protect backslashes. This gets silly very quickly.
584 584 seps = seps.replace('\\','\\\\')
585 585 pattern = re.compile(r'([^%s]+)|([%s]+)' % (seps, seps))
586 586 dir = os.path.normcase(os.path.normpath(root))
587 587 result = []
588 588 for part, sep in pattern.findall(name):
589 589 if sep:
590 590 result.append(sep)
591 591 continue
592 592
593 593 if dir not in _fspathcache:
594 594 _fspathcache[dir] = os.listdir(dir)
595 595 contents = _fspathcache[dir]
596 596
597 597 lpart = part.lower()
598 598 lenp = len(part)
599 599 for n in contents:
600 600 if lenp == len(n) and n.lower() == lpart:
601 601 result.append(n)
602 602 break
603 603 else:
604 604 # Cannot happen, as the file exists!
605 605 result.append(part)
606 606 dir = os.path.join(dir, lpart)
607 607
608 608 return ''.join(result)
609 609
610 610 def checknlink(testfile):
611 611 '''check whether hardlink count reporting works properly'''
612 612
613 613 # testfile may be open, so we need a separate file for checking to
614 614 # work around issue2543 (or testfile may get lost on Samba shares)
615 615 f1 = testfile + ".hgtmp1"
616 616 if os.path.lexists(f1):
617 617 return False
618 618 try:
619 619 posixfile(f1, 'w').close()
620 620 except IOError:
621 621 return False
622 622
623 623 f2 = testfile + ".hgtmp2"
624 624 fd = None
625 625 try:
626 626 try:
627 627 os_link(f1, f2)
628 628 except OSError:
629 629 return False
630 630
631 631 # nlinks() may behave differently for files on Windows shares if
632 632 # the file is open.
633 633 fd = posixfile(f2)
634 634 return nlinks(f2) > 1
635 635 finally:
636 636 if fd is not None:
637 637 fd.close()
638 638 for f in (f1, f2):
639 639 try:
640 640 os.unlink(f)
641 641 except OSError:
642 642 pass
643 643
644 644 return False
645 645
646 646 def endswithsep(path):
647 647 '''Check path ends with os.sep or os.altsep.'''
648 648 return path.endswith(os.sep) or os.altsep and path.endswith(os.altsep)
649 649
650 650 def splitpath(path):
651 651 '''Split path by os.sep.
652 652 Note that this function does not use os.altsep because this is
653 653 an alternative of simple "xxx.split(os.sep)".
654 654 It is recommended to use os.path.normpath() before using this
655 655 function if need.'''
656 656 return path.split(os.sep)
657 657
658 658 def gui():
659 659 '''Are we running in a GUI?'''
660 660 if sys.platform == 'darwin':
661 661 if 'SSH_CONNECTION' in os.environ:
662 662 # handle SSH access to a box where the user is logged in
663 663 return False
664 664 elif getattr(osutil, 'isgui', None):
665 665 # check if a CoreGraphics session is available
666 666 return osutil.isgui()
667 667 else:
668 668 # pure build; use a safe default
669 669 return True
670 670 else:
671 671 return os.name == "nt" or os.environ.get("DISPLAY")
672 672
673 673 def mktempcopy(name, emptyok=False, createmode=None):
674 674 """Create a temporary file with the same contents from name
675 675
676 676 The permission bits are copied from the original file.
677 677
678 678 If the temporary file is going to be truncated immediately, you
679 679 can use emptyok=True as an optimization.
680 680
681 681 Returns the name of the temporary file.
682 682 """
683 683 d, fn = os.path.split(name)
684 684 fd, temp = tempfile.mkstemp(prefix='.%s-' % fn, dir=d)
685 685 os.close(fd)
686 686 # Temporary files are created with mode 0600, which is usually not
687 687 # what we want. If the original file already exists, just copy
688 688 # its mode. Otherwise, manually obey umask.
689 689 try:
690 690 st_mode = os.lstat(name).st_mode & 0777
691 691 except OSError, inst:
692 692 if inst.errno != errno.ENOENT:
693 693 raise
694 694 st_mode = createmode
695 695 if st_mode is None:
696 696 st_mode = ~umask
697 697 st_mode &= 0666
698 698 os.chmod(temp, st_mode)
699 699 if emptyok:
700 700 return temp
701 701 try:
702 702 try:
703 703 ifp = posixfile(name, "rb")
704 704 except IOError, inst:
705 705 if inst.errno == errno.ENOENT:
706 706 return temp
707 707 if not getattr(inst, 'filename', None):
708 708 inst.filename = name
709 709 raise
710 710 ofp = posixfile(temp, "wb")
711 711 for chunk in filechunkiter(ifp):
712 712 ofp.write(chunk)
713 713 ifp.close()
714 714 ofp.close()
715 715 except:
716 716 try: os.unlink(temp)
717 717 except: pass
718 718 raise
719 719 return temp
720 720
721 721 class atomictempfile(object):
722 722 '''writeable file object that atomically updates a file
723 723
724 724 All writes will go to a temporary copy of the original file. Call
725 725 rename() when you are done writing, and atomictempfile will rename
726 726 the temporary copy to the original name, making the changes visible.
727 727
728 728 Unlike other file-like objects, close() discards your writes by
729 729 simply deleting the temporary file.
730 730 '''
731 731 def __init__(self, name, mode='w+b', createmode=None):
732 732 self.__name = name # permanent name
733 733 self._tempname = mktempcopy(name, emptyok=('w' in mode),
734 734 createmode=createmode)
735 735 self._fp = posixfile(self._tempname, mode)
736 736
737 737 # delegated methods
738 738 self.write = self._fp.write
739 739 self.fileno = self._fp.fileno
740 740
741 741 def rename(self):
742 742 if not self._fp.closed:
743 743 self._fp.close()
744 744 rename(self._tempname, localpath(self.__name))
745 745
746 746 def close(self):
747 747 if not self._fp.closed:
748 748 try:
749 749 os.unlink(self._tempname)
750 750 except OSError:
751 751 pass
752 752 self._fp.close()
753 753
754 754 def __del__(self):
755 755 if hasattr(self, '_fp'): # constructor actually did something
756 756 self.close()
757 757
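A hedged usage sketch of atomictempfile (the filename is hypothetical); writes land in a temporary copy until rename() publishes them:

    f = atomictempfile('somefile.txt')
    f.write('all or nothing\n')
    f.rename()   # atomically replaces somefile.txt
    # calling close() before rename() would discard the writes instead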
758 758 def makedirs(name, mode=None):
759 759 """recursive directory creation with parent mode inheritance"""
760 760 parent = os.path.abspath(os.path.dirname(name))
761 761 try:
762 762 os.mkdir(name)
763 763 if mode is not None:
764 764 os.chmod(name, mode)
765 765 return
766 766 except OSError, err:
767 767 if err.errno == errno.EEXIST:
768 768 return
769 769 if not name or parent == name or err.errno != errno.ENOENT:
770 770 raise
771 771 makedirs(parent, mode)
772 772 makedirs(name, mode)
773 773
774 774 def readfile(path):
775 775 fp = open(path)
776 776 try:
777 777 return fp.read()
778 778 finally:
779 779 fp.close()
780 780
781 781 def writefile(path, text):
782 782 fp = open(path, 'wb')
783 783 try:
784 784 fp.write(text)
785 785 finally:
786 786 fp.close()
787 787
788 788 def appendfile(path, text):
789 789 fp = open(path, 'ab')
790 790 try:
791 791 fp.write(text)
792 792 finally:
793 793 fp.close()
794 794
795 795 class chunkbuffer(object):
796 796 """Allow arbitrary sized chunks of data to be efficiently read from an
797 797 iterator over chunks of arbitrary size."""
798 798
799 799 def __init__(self, in_iter):
800 800 """in_iter is the iterator that's iterating over the input chunks.
801 801 targetsize is how big a buffer to try to maintain."""
802 802 def splitbig(chunks):
803 803 for chunk in chunks:
804 804 if len(chunk) > 2**20:
805 805 pos = 0
806 806 while pos < len(chunk):
807 807 end = pos + 2 ** 18
808 808 yield chunk[pos:end]
809 809 pos = end
810 810 else:
811 811 yield chunk
812 812 self.iter = splitbig(in_iter)
813 813 self._queue = []
814 814
815 815 def read(self, l):
816 816 """Read L bytes of data from the iterator of chunks of data.
817 817 Returns less than L bytes if the iterator runs dry."""
818 818 left = l
819 819 buf = ''
820 820 queue = self._queue
821 821 while left > 0:
822 822 # refill the queue
823 823 if not queue:
824 824 target = 2**18
825 825 for chunk in self.iter:
826 826 queue.append(chunk)
827 827 target -= len(chunk)
828 828 if target <= 0:
829 829 break
830 830 if not queue:
831 831 break
832 832
833 833 chunk = queue.pop(0)
834 834 left -= len(chunk)
835 835 if left < 0:
836 836 queue.insert(0, chunk[left:])
837 837 buf += chunk[:left]
838 838 else:
839 839 buf += chunk
840 840
841 841 return buf
842 842
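A small doctest-style sketch of chunkbuffer (input chunks invented): arbitrary-sized chunks can be read back in fixed-size pieces, with a short read once the iterator runs dry:

    >>> cb = chunkbuffer(iter(['abc', 'defg', 'hi']))
    >>> cb.read(4)
    'abcd'
    >>> cb.read(10)
    'efghi'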
843 843 def filechunkiter(f, size=65536, limit=None):
844 844 """Create a generator that produces the data in the file size
845 845 (default 65536) bytes at a time, up to optional limit (default is
846 846 to read all data). Chunks may be less than size bytes if the
847 847 chunk is the last chunk in the file, or the file is a socket or
848 848 some other type of file that sometimes reads less data than is
849 849 requested."""
850 850 assert size >= 0
851 851 assert limit is None or limit >= 0
852 852 while True:
853 853 if limit is None:
854 854 nbytes = size
855 855 else:
856 856 nbytes = min(limit, size)
857 857 s = nbytes and f.read(nbytes)
858 858 if not s:
859 859 break
860 860 if limit:
861 861 limit -= len(s)
862 862 yield s
863 863
864 864 def makedate():
865 865 lt = time.localtime()
866 866 if lt[8] == 1 and time.daylight:
867 867 tz = time.altzone
868 868 else:
869 869 tz = time.timezone
870 870 t = time.mktime(lt)
871 871 if t < 0:
872 872 hint = _("check your clock")
873 873 raise Abort(_("negative timestamp: %d") % t, hint=hint)
874 874 return t, tz
875 875
876 876 def datestr(date=None, format='%a %b %d %H:%M:%S %Y %1%2'):
877 877 """represent a (unixtime, offset) tuple as a localized time.
878 878 unixtime is seconds since the epoch, and offset is the time zone's
879 879 number of seconds away from UTC. The %1 and %2 sequences in the
880 880 format string expand to the signed hours and the minutes of the offset.
881 881 t, tz = date or makedate()
882 882 if t < 0:
883 883 t = 0 # time.gmtime(lt) fails on Windows for lt < -43200
884 884 tz = 0
885 885 if "%1" in format or "%2" in format:
886 886 sign = (tz > 0) and "-" or "+"
887 887 minutes = abs(tz) // 60
888 888 format = format.replace("%1", "%c%02d" % (sign, minutes // 60))
889 889 format = format.replace("%2", "%02d" % (minutes % 60))
890 890 s = time.strftime(format, time.gmtime(float(t) - tz))
891 891 return s
892 892
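An example of datestr() with the default format (the weekday and month names assume a C/English locale):

    >>> datestr((0, 0))
    'Thu Jan 01 00:00:00 1970 +0000'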
893 893 def shortdate(date=None):
894 894 """turn (timestamp, tzoff) tuple into iso 8631 date."""
895 895 return datestr(date, format='%Y-%m-%d')
896 896
897 897 def strdate(string, format, defaults=[]):
898 898 """parse a localized time string and return a (unixtime, offset) tuple.
899 899 if the string cannot be parsed, ValueError is raised."""
900 900 def timezone(string):
901 901 tz = string.split()[-1]
902 902 if tz[0] in "+-" and len(tz) == 5 and tz[1:].isdigit():
903 903 sign = (tz[0] == "+") and 1 or -1
904 904 hours = int(tz[1:3])
905 905 minutes = int(tz[3:5])
906 906 return -sign * (hours * 60 + minutes) * 60
907 907 if tz == "GMT" or tz == "UTC":
908 908 return 0
909 909 return None
910 910
911 911 # NOTE: unixtime = localunixtime + offset
912 912 offset, date = timezone(string), string
913 913 if offset is not None:
914 914 date = " ".join(string.split()[:-1])
915 915
916 916 # add missing elements from defaults
917 917 usenow = False # default to using biased defaults
918 918 for part in ("S", "M", "HI", "d", "mb", "yY"): # decreasing specificity
919 919 found = [True for p in part if ("%"+p) in format]
920 920 if not found:
921 921 date += "@" + defaults[part][usenow]
922 922 format += "@%" + part[0]
923 923 else:
924 924 # We've found a specific time element, less specific time
925 925 # elements are relative to today
926 926 usenow = True
927 927
928 928 timetuple = time.strptime(date, format)
929 929 localunixtime = int(calendar.timegm(timetuple))
930 930 if offset is None:
931 931 # local timezone
932 932 unixtime = int(time.mktime(timetuple))
933 933 offset = unixtime - localunixtime
934 934 else:
935 935 unixtime = localunixtime + offset
936 936 return unixtime, offset
937 937
938 938 def parsedate(date, formats=None, bias={}):
939 939 """parse a localized date/time and return a (unixtime, offset) tuple.
940 940
941 941 The date may be a "unixtime offset" string or in one of the specified
942 942 formats. If the date already is a (unixtime, offset) tuple, it is returned.
943 943 """
944 944 if not date:
945 945 return 0, 0
946 946 if isinstance(date, tuple) and len(date) == 2:
947 947 return date
948 948 if not formats:
949 949 formats = defaultdateformats
950 950 date = date.strip()
951 951 try:
952 952 when, offset = map(int, date.split(' '))
953 953 except ValueError:
954 954 # fill out defaults
955 955 now = makedate()
956 956 defaults = {}
957 957 for part in ("d", "mb", "yY", "HI", "M", "S"):
958 958 # this piece is for rounding the specific end of unknowns
959 959 b = bias.get(part)
960 960 if b is None:
961 961 if part[0] in "HMS":
962 962 b = "00"
963 963 else:
964 964 b = "0"
965 965
966 966 # this piece is for matching the generic end to today's date
967 967 n = datestr(now, "%" + part[0])
968 968
969 969 defaults[part] = (b, n)
970 970
971 971 for format in formats:
972 972 try:
973 973 when, offset = strdate(date, format, defaults)
974 974 except (ValueError, OverflowError):
975 975 pass
976 976 else:
977 977 break
978 978 else:
979 979 raise Abort(_('invalid date: %r') % date)
980 980 # validate explicit (probably user-specified) date and
981 981 # time zone offset. values must fit in signed 32 bits for
982 982 # current 32-bit linux runtimes. timezones go from UTC-12
983 983 # to UTC+14
984 984 if abs(when) > 0x7fffffff:
985 985 raise Abort(_('date exceeds 32 bits: %d') % when)
986 986 if when < 0:
987 987 raise Abort(_('negative date value: %d') % when)
988 988 if offset < -50400 or offset > 43200:
989 989 raise Abort(_('impossible time zone offset: %d') % offset)
990 990 return when, offset
991 991
992 992 def matchdate(date):
993 993 """Return a function that matches a given date match specifier
994 994
995 995 Formats include:
996 996
997 997 '{date}' match a given date to the accuracy provided
998 998
999 999 '<{date}' on or before a given date
1000 1000
1001 1001 '>{date}' on or after a given date
1002 1002
1003 1003 >>> p1 = parsedate("10:29:59")
1004 1004 >>> p2 = parsedate("10:30:00")
1005 1005 >>> p3 = parsedate("10:30:59")
1006 1006 >>> p4 = parsedate("10:31:00")
1007 1007 >>> p5 = parsedate("Sep 15 10:30:00 1999")
1008 1008 >>> f = matchdate("10:30")
1009 1009 >>> f(p1[0])
1010 1010 False
1011 1011 >>> f(p2[0])
1012 1012 True
1013 1013 >>> f(p3[0])
1014 1014 True
1015 1015 >>> f(p4[0])
1016 1016 False
1017 1017 >>> f(p5[0])
1018 1018 False
1019 1019 """
1020 1020
1021 1021 def lower(date):
1022 1022 d = dict(mb="1", d="1")
1023 1023 return parsedate(date, extendeddateformats, d)[0]
1024 1024
1025 1025 def upper(date):
1026 1026 d = dict(mb="12", HI="23", M="59", S="59")
1027 1027 for days in ("31", "30", "29"):
1028 1028 try:
1029 1029 d["d"] = days
1030 1030 return parsedate(date, extendeddateformats, d)[0]
1031 1031 except:
1032 1032 pass
1033 1033 d["d"] = "28"
1034 1034 return parsedate(date, extendeddateformats, d)[0]
1035 1035
1036 1036 date = date.strip()
1037 1037
1038 1038 if not date:
1039 1039 raise Abort(_("dates cannot consist entirely of whitespace"))
1040 1040 elif date[0] == "<":
1041 1041 if not date[1:]:
1042 1042 raise Abort(_("invalid day spec, use '<DATE'"))
1043 1043 when = upper(date[1:])
1044 1044 return lambda x: x <= when
1045 1045 elif date[0] == ">":
1046 1046 if not date[1:]:
1047 1047 raise Abort(_("invalid day spec, use '>DATE'"))
1048 1048 when = lower(date[1:])
1049 1049 return lambda x: x >= when
1050 1050 elif date[0] == "-":
1051 1051 try:
1052 1052 days = int(date[1:])
1053 1053 except ValueError:
1054 1054 raise Abort(_("invalid day spec: %s") % date[1:])
1055 1055 if days < 0:
1056 1056 raise Abort(_("%s must be nonnegative (see 'hg help dates')")
1057 1057 % date[1:])
1058 1058 when = makedate()[0] - days * 3600 * 24
1059 1059 return lambda x: x >= when
1060 1060 elif " to " in date:
1061 1061 a, b = date.split(" to ")
1062 1062 start, stop = lower(a), upper(b)
1063 1063 return lambda x: x >= start and x <= stop
1064 1064 else:
1065 1065 start, stop = lower(date), upper(date)
1066 1066 return lambda x: x >= start and x <= stop
1067 1067
1068 1068 def shortuser(user):
1069 1069 """Return a short representation of a user name or email address."""
1070 1070 f = user.find('@')
1071 1071 if f >= 0:
1072 1072 user = user[:f]
1073 1073 f = user.find('<')
1074 1074 if f >= 0:
1075 1075 user = user[f + 1:]
1076 1076 f = user.find(' ')
1077 1077 if f >= 0:
1078 1078 user = user[:f]
1079 1079 f = user.find('.')
1080 1080 if f >= 0:
1081 1081 user = user[:f]
1082 1082 return user
1083 1083
1084 1084 def email(author):
1085 1085 '''get email of author.'''
1086 1086 r = author.find('>')
1087 1087 if r == -1:
1088 1088 r = None
1089 1089 return author[author.find('<') + 1:r]
1090 1090
1091 1091 def _ellipsis(text, maxlength):
1092 1092 if len(text) <= maxlength:
1093 1093 return text, False
1094 1094 else:
1095 1095 return "%s..." % (text[:maxlength - 3]), True
1096 1096
1097 1097 def ellipsis(text, maxlength=400):
1098 1098 """Trim string to at most maxlength (default: 400) characters."""
1099 1099 try:
1100 1100 # use unicode not to split at intermediate multi-byte sequence
1101 1101 utext, truncated = _ellipsis(text.decode(encoding.encoding),
1102 1102 maxlength)
1103 1103 if not truncated:
1104 1104 return text
1105 1105 return utext.encode(encoding.encoding)
1106 1106 except (UnicodeDecodeError, UnicodeEncodeError):
1107 1107 return _ellipsis(text, maxlength)[0]
1108 1108
1109 1109 def bytecount(nbytes):
1110 1110 '''return byte count formatted as readable string, with units'''
1111 1111
1112 1112 units = (
1113 1113 (100, 1 << 30, _('%.0f GB')),
1114 1114 (10, 1 << 30, _('%.1f GB')),
1115 1115 (1, 1 << 30, _('%.2f GB')),
1116 1116 (100, 1 << 20, _('%.0f MB')),
1117 1117 (10, 1 << 20, _('%.1f MB')),
1118 1118 (1, 1 << 20, _('%.2f MB')),
1119 1119 (100, 1 << 10, _('%.0f KB')),
1120 1120 (10, 1 << 10, _('%.1f KB')),
1121 1121 (1, 1 << 10, _('%.2f KB')),
1122 1122 (1, 1, _('%.0f bytes')),
1123 1123 )
1124 1124
1125 1125 for multiplier, divisor, format in units:
1126 1126 if nbytes >= divisor * multiplier:
1127 1127 return format % (nbytes / float(divisor))
1128 1128 return units[-1][2] % nbytes
1129 1129
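A couple of illustrative bytecount() values (inputs chosen arbitrarily):

    >>> bytecount(1024)
    '1.00 KB'
    >>> bytecount(150 * (1 << 20))
    '150 MB'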
1130 1130 def uirepr(s):
1131 1131 # Avoid double backslash in Windows path repr()
1132 1132 return repr(s).replace('\\\\', '\\')
1133 1133
1134 1134 # delay import of textwrap
1135 1135 def MBTextWrapper(**kwargs):
1136 1136 class tw(textwrap.TextWrapper):
1137 1137 """
1138 1138 Extend TextWrapper for double-width characters.
1139 1139
1140 1140 Some Asian characters use two terminal columns instead of one.
1141 1141 A good example of this behavior can be seen with u'\u65e5\u672c',
1142 1142 the two Japanese characters for "Japan":
1143 1143 len() returns 2, but when printed to a terminal, they eat 4 columns.
1144 1144
1145 1145 (Note that this has nothing to do whatsoever with unicode
1146 1146 representation, or encoding of the underlying string)
1147 1147 """
1148 1148 def __init__(self, **kwargs):
1149 1149 textwrap.TextWrapper.__init__(self, **kwargs)
1150 1150
1151 1151 def _cutdown(self, str, space_left):
1152 1152 l = 0
1153 1153 ucstr = unicode(str, encoding.encoding)
1154 1154 colwidth = unicodedata.east_asian_width
1155 1155 for i in xrange(len(ucstr)):
1156 1156 l += colwidth(ucstr[i]) in 'WFA' and 2 or 1
1157 1157 if space_left < l:
1158 1158 return (ucstr[:i].encode(encoding.encoding),
1159 1159 ucstr[i:].encode(encoding.encoding))
1160 1160 return str, ''
1161 1161
1162 1162 # overriding of base class
1163 1163 def _handle_long_word(self, reversed_chunks, cur_line, cur_len, width):
1164 1164 space_left = max(width - cur_len, 1)
1165 1165
1166 1166 if self.break_long_words:
1167 1167 cut, res = self._cutdown(reversed_chunks[-1], space_left)
1168 1168 cur_line.append(cut)
1169 1169 reversed_chunks[-1] = res
1170 1170 elif not cur_line:
1171 1171 cur_line.append(reversed_chunks.pop())
1172 1172
1173 1173 global MBTextWrapper
1174 1174 MBTextWrapper = tw
1175 1175 return tw(**kwargs)
1176 1176
1177 1177 def wrap(line, width, initindent='', hangindent=''):
1178 1178 maxindent = max(len(hangindent), len(initindent))
1179 1179 if width <= maxindent:
1180 1180 # adjust for weird terminal size
1181 1181 width = max(78, maxindent + 1)
1182 1182 wrapper = MBTextWrapper(width=width,
1183 1183 initial_indent=initindent,
1184 1184 subsequent_indent=hangindent)
1185 1185 return wrapper.fill(line)
1186 1186
1187 1187 def iterlines(iterator):
1188 1188 for chunk in iterator:
1189 1189 for line in chunk.splitlines():
1190 1190 yield line
1191 1191
1192 1192 def expandpath(path):
1193 1193 return os.path.expanduser(os.path.expandvars(path))
1194 1194
1195 1195 def hgcmd():
1196 1196 """Return the command used to execute current hg
1197 1197
1198 1198 This is different from hgexecutable() because on Windows we want
1199 1199 to avoid things opening new shell windows like batch files, so we
1200 1200 get either the python call or current executable.
1201 1201 """
1202 1202 if mainfrozen():
1203 1203 return [sys.executable]
1204 1204 return gethgcmd()
1205 1205
1206 1206 def rundetached(args, condfn):
1207 1207 """Execute the argument list in a detached process.
1208 1208
1209 1209 condfn is a callable which is called repeatedly and should return
1210 1210 True once the child process is known to have started successfully.
1211 1211 At this point, the child process PID is returned. If the child
1212 1212 process fails to start or finishes before condfn() evaluates to
1213 1213 True, return -1.
1214 1214 """
1215 1215 # Windows case is easier because the child process is either
1216 1216 # successfully starting and validating the condition or exiting
1217 1217 # on failure. We just poll on its PID. On Unix, if the child
1218 1218 # process fails to start, it will be left in a zombie state until
1219 1219 # the parent waits on it, which we cannot do since we expect a long
1220 1220 # running process on success. Instead we listen for SIGCHLD telling
1221 1221 # us our child process terminated.
1222 1222 terminated = set()
1223 1223 def handler(signum, frame):
1224 1224 terminated.add(os.wait())
1225 1225 prevhandler = None
1226 1226 if hasattr(signal, 'SIGCHLD'):
1227 1227 prevhandler = signal.signal(signal.SIGCHLD, handler)
1228 1228 try:
1229 1229 pid = spawndetached(args)
1230 1230 while not condfn():
1231 1231 if ((pid in terminated or not testpid(pid))
1232 1232 and not condfn()):
1233 1233 return -1
1234 1234 time.sleep(0.1)
1235 1235 return pid
1236 1236 finally:
1237 1237 if prevhandler is not None:
1238 1238 signal.signal(signal.SIGCHLD, prevhandler)
1239 1239
1240 1240 try:
1241 1241 any, all = any, all
1242 1242 except NameError:
1243 1243 def any(iterable):
1244 1244 for i in iterable:
1245 1245 if i:
1246 1246 return True
1247 1247 return False
1248 1248
1249 1249 def all(iterable):
1250 1250 for i in iterable:
1251 1251 if not i:
1252 1252 return False
1253 1253 return True
1254 1254
1255 1255 def interpolate(prefix, mapping, s, fn=None, escape_prefix=False):
1256 1256 """Return the result of interpolating items in the mapping into string s.
1257 1257
1258 1258 prefix is a single character string, or a two character string with
1259 1259 a backslash as the first character if the prefix needs to be escaped in
1260 1260 a regular expression.
1261 1261
1262 1262 fn is an optional function that will be applied to the replacement text
1263 1263 just before replacement.
1264 1264
1265 1265 escape_prefix is an optional flag that allows using doubled prefix for
1266 1266 its escaping.
1267 1267 """
1268 1268 fn = fn or (lambda s: s)
1269 1269 patterns = '|'.join(mapping.keys())
1270 1270 if escape_prefix:
1271 1271 patterns += '|' + prefix
1272 1272 if len(prefix) > 1:
1273 1273 prefix_char = prefix[1:]
1274 1274 else:
1275 1275 prefix_char = prefix
1276 1276 mapping[prefix_char] = prefix_char
1277 1277 r = re.compile(r'%s(%s)' % (prefix, patterns))
1278 1278 return r.sub(lambda x: fn(mapping[x.group()[1:]]), s)
1279 1279
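A minimal sketch of interpolate() (mapping and string invented); each prefixed key found in s is replaced by its mapping value:

    >>> interpolate('%', {'foo': 'bar'}, 'say %foo')
    'say bar'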
1280 1280 def getport(port):
1281 1281 """Return the port for a given network service.
1282 1282
1283 1283 If port is an integer, it's returned as is. If it's a string, it's
1284 1284 looked up using socket.getservbyname(). If there's no matching
1285 1285 service, util.Abort is raised.
1286 1286 """
1287 1287 try:
1288 1288 return int(port)
1289 1289 except ValueError:
1290 1290 pass
1291 1291
1292 1292 try:
1293 1293 return socket.getservbyname(port)
1294 1294 except socket.error:
1295 1295 raise Abort(_("no port number associated with service '%s'") % port)
1296 1296
1297 1297 _booleans = {'1': True, 'yes': True, 'true': True, 'on': True, 'always': True,
1298 1298 '0': False, 'no': False, 'false': False, 'off': False,
1299 1299 'never': False}
1300 1300
1301 1301 def parsebool(s):
1302 1302 """Parse s into a boolean.
1303 1303
1304 1304 If s is not a valid boolean, returns None.
1305 1305 """
1306 1306 return _booleans.get(s.lower(), None)
1307 1307
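A quick illustration of parsebool() (inputs invented); unrecognised strings yield None rather than raising:

    >>> parsebool('YES'), parsebool('off'), parsebool('maybe')
    (True, False, None)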
1308 1308 _hexdig = '0123456789ABCDEFabcdef'
1309 1309 _hextochr = dict((a + b, chr(int(a + b, 16)))
1310 1310 for a in _hexdig for b in _hexdig)
1311 1311
1312 1312 def _urlunquote(s):
1313 1313 """unquote('abc%20def') -> 'abc def'."""
1314 1314 res = s.split('%')
1315 1315 # fastpath
1316 1316 if len(res) == 1:
1317 1317 return s
1318 1318 s = res[0]
1319 1319 for item in res[1:]:
1320 1320 try:
1321 1321 s += _hextochr[item[:2]] + item[2:]
1322 1322 except KeyError:
1323 1323 s += '%' + item
1324 1324 except UnicodeDecodeError:
1325 1325 s += unichr(int(item[:2], 16)) + item[2:]
1326 1326 return s
1327 1327
1328 1328 class url(object):
1329 1329 r"""Reliable URL parser.
1330 1330
1331 1331 This parses URLs and provides attributes for the following
1332 1332 components:
1333 1333
1334 1334 <scheme>://<user>:<passwd>@<host>:<port>/<path>?<query>#<fragment>
1335 1335
1336 1336 Missing components are set to None. The only exception is
1337 1337 fragment, which is set to '' if present but empty.
1338 1338
1339 1339 If parsefragment is False, fragment is included in query. If
1340 1340 parsequery is False, query is included in path. If both are
1341 1341 False, both fragment and query are included in path.
1342 1342
1343 1343 See http://www.ietf.org/rfc/rfc2396.txt for more information.
1344 1344
1345 1345 Note that for backward compatibility reasons, bundle URLs do not
1346 1346 take host names. That means 'bundle://../' has a path of '../'.
1347 1347
1348 1348 Examples:
1349 1349
1350 1350 >>> url('http://www.ietf.org/rfc/rfc2396.txt')
1351 1351 <url scheme: 'http', host: 'www.ietf.org', path: 'rfc/rfc2396.txt'>
1352 1352 >>> url('ssh://[::1]:2200//home/joe/repo')
1353 1353 <url scheme: 'ssh', host: '[::1]', port: '2200', path: '/home/joe/repo'>
1354 1354 >>> url('file:///home/joe/repo')
1355 1355 <url scheme: 'file', path: '/home/joe/repo'>
1356 1356 >>> url('bundle:foo')
1357 1357 <url scheme: 'bundle', path: 'foo'>
1358 1358 >>> url('bundle://../foo')
1359 1359 <url scheme: 'bundle', path: '../foo'>
1360 1360 >>> url(r'c:\foo\bar')
1361 1361 <url path: 'c:\\foo\\bar'>
1362 1362
1363 1363 Authentication credentials:
1364 1364
1365 1365 >>> url('ssh://joe:xyz@x/repo')
1366 1366 <url scheme: 'ssh', user: 'joe', passwd: 'xyz', host: 'x', path: 'repo'>
1367 1367 >>> url('ssh://joe@x/repo')
1368 1368 <url scheme: 'ssh', user: 'joe', host: 'x', path: 'repo'>
1369 1369
1370 1370 Query strings and fragments:
1371 1371
1372 1372 >>> url('http://host/a?b#c')
1373 1373 <url scheme: 'http', host: 'host', path: 'a', query: 'b', fragment: 'c'>
1374 1374 >>> url('http://host/a?b#c', parsequery=False, parsefragment=False)
1375 1375 <url scheme: 'http', host: 'host', path: 'a?b#c'>
1376 1376 """
1377 1377
1378 1378 _safechars = "!~*'()+"
1379 1379 _safepchars = "/!~*'()+"
1380 1380 _matchscheme = re.compile(r'^[a-zA-Z0-9+.\-]+:').match
1381 1381
1382 1382 def __init__(self, path, parsequery=True, parsefragment=True):
1383 1383 # We slowly chomp away at path until we have only the path left
1384 1384 self.scheme = self.user = self.passwd = self.host = None
1385 1385 self.port = self.path = self.query = self.fragment = None
1386 1386 self._localpath = True
1387 1387 self._hostport = ''
1388 1388 self._origpath = path
1389 1389
1390 1390 # special case for Windows drive letters
1391 1391 if hasdriveletter(path):
1392 1392 self.path = path
1393 1393 return
1394 1394
1395 1395 # For compatibility reasons, we can't handle bundle paths as
1396 1396 # normal URLs
1397 1397 if path.startswith('bundle:'):
1398 1398 self.scheme = 'bundle'
1399 1399 path = path[7:]
1400 1400 if path.startswith('//'):
1401 1401 path = path[2:]
1402 1402 self.path = path
1403 1403 return
1404 1404
1405 1405 if self._matchscheme(path):
1406 1406 parts = path.split(':', 1)
1407 1407 if parts[0]:
1408 1408 self.scheme, path = parts
1409 1409 self._localpath = False
1410 1410
1411 1411 if not path:
1412 1412 path = None
1413 1413 if self._localpath:
1414 1414 self.path = ''
1415 1415 return
1416 1416 else:
1417 1417 if parsefragment and '#' in path:
1418 1418 path, self.fragment = path.split('#', 1)
1419 1419 if not path:
1420 1420 path = None
1421 1421 if self._localpath:
1422 1422 self.path = path
1423 1423 return
1424 1424
1425 1425 if parsequery and '?' in path:
1426 1426 path, self.query = path.split('?', 1)
1427 1427 if not path:
1428 1428 path = None
1429 1429 if not self.query:
1430 1430 self.query = None
1431 1431
1432 1432 # // is required to specify a host/authority
1433 1433 if path and path.startswith('//'):
1434 1434 parts = path[2:].split('/', 1)
1435 1435 if len(parts) > 1:
1436 1436 self.host, path = parts
1437 1437 path = path
1438 1438 else:
1439 1439 self.host = parts[0]
1440 1440 path = None
1441 1441 if not self.host:
1442 1442 self.host = None
1443 1443 if path:
1444 1444 path = '/' + path
1445 1445
1446 1446 if self.host and '@' in self.host:
1447 1447 self.user, self.host = self.host.rsplit('@', 1)
1448 1448 if ':' in self.user:
1449 1449 self.user, self.passwd = self.user.split(':', 1)
1450 1450 if not self.host:
1451 1451 self.host = None
1452 1452
1453 1453 # Don't split on colons in IPv6 addresses without ports
1454 1454 if (self.host and ':' in self.host and
1455 1455 not (self.host.startswith('[') and self.host.endswith(']'))):
1456 1456 self._hostport = self.host
1457 1457 self.host, self.port = self.host.rsplit(':', 1)
1458 1458 if not self.host:
1459 1459 self.host = None
1460 1460
1461 1461 if (self.host and self.scheme == 'file' and
1462 1462 self.host not in ('localhost', '127.0.0.1', '[::1]')):
1463 1463 raise Abort(_('file:// URLs can only refer to localhost'))
1464 1464
1465 1465 self.path = path
1466 1466
1467 1467 for a in ('user', 'passwd', 'host', 'port',
1468 1468 'path', 'query', 'fragment'):
1469 1469 v = getattr(self, a)
1470 1470 if v is not None:
1471 1471 setattr(self, a, _urlunquote(v))
1472 1472
1473 1473 def __repr__(self):
1474 1474 attrs = []
1475 1475 for a in ('scheme', 'user', 'passwd', 'host', 'port', 'path',
1476 1476 'query', 'fragment'):
1477 1477 v = getattr(self, a)
1478 1478 if v is not None:
1479 1479 attrs.append('%s: %r' % (a, v))
1480 1480 return '<url %s>' % ', '.join(attrs)
1481 1481
1482 1482 def __str__(self):
1483 1483 r"""Join the URL's components back into a URL string.
1484 1484
1485 1485 Examples:
1486 1486
1487 1487 >>> str(url('http://user:pw@host:80/?foo#bar'))
1488 1488 'http://user:pw@host:80/?foo#bar'
1489 1489 >>> str(url('ssh://user:pw@[::1]:2200//home/joe#'))
1490 1490 'ssh://user:pw@[::1]:2200//home/joe#'
1491 1491 >>> str(url('http://localhost:80//'))
1492 1492 'http://localhost:80//'
1493 1493 >>> str(url('http://localhost:80/'))
1494 1494 'http://localhost:80/'
1495 1495 >>> str(url('http://localhost:80'))
1496 1496 'http://localhost:80/'
1497 1497 >>> str(url('bundle:foo'))
1498 1498 'bundle:foo'
1499 1499 >>> str(url('bundle://../foo'))
1500 1500 'bundle:../foo'
1501 1501 >>> str(url('path'))
1502 1502 'path'
1503 1503 >>> print url(r'bundle:foo\bar')
1504 1504 bundle:foo\bar
1505 1505 """
1506 1506 if self._localpath:
1507 1507 s = self.path
1508 1508 if self.scheme == 'bundle':
1509 1509 s = 'bundle:' + s
1510 1510 if self.fragment:
1511 1511 s += '#' + self.fragment
1512 1512 return s
1513 1513
1514 1514 s = self.scheme + ':'
1515 1515 if (self.user or self.passwd or self.host or
1516 1516 self.scheme and not self.path):
1517 1517 s += '//'
1518 1518 if self.user:
1519 1519 s += urllib.quote(self.user, safe=self._safechars)
1520 1520 if self.passwd:
1521 1521 s += ':' + urllib.quote(self.passwd, safe=self._safechars)
1522 1522 if self.user or self.passwd:
1523 1523 s += '@'
1524 1524 if self.host:
1525 1525 if not (self.host.startswith('[') and self.host.endswith(']')):
1526 1526 s += urllib.quote(self.host)
1527 1527 else:
1528 1528 s += self.host
1529 1529 if self.port:
1530 1530 s += ':' + urllib.quote(self.port)
1531 1531 if self.host:
1532 1532 s += '/'
1533 1533 if self.path:
1534 1534 s += urllib.quote(self.path, safe=self._safepchars)
1535 1535 if self.query:
1536 1536 s += '?' + urllib.quote(self.query, safe=self._safepchars)
1537 1537 if self.fragment is not None:
1538 1538 s += '#' + urllib.quote(self.fragment, safe=self._safepchars)
1539 1539 return s
1540 1540
1541 1541 def authinfo(self):
1542 1542 user, passwd = self.user, self.passwd
1543 1543 try:
1544 1544 self.user, self.passwd = None, None
1545 1545 s = str(self)
1546 1546 finally:
1547 1547 self.user, self.passwd = user, passwd
1548 1548 if not self.user:
1549 1549 return (s, None)
1550 1550 return (s, (None, (str(self), self.host),
1551 1551 self.user, self.passwd or ''))
1552 1552
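# Illustrative sketch (not part of the original source), with hypothetical
# credentials; the first element is the URL with authentication stripped,
# the second carries what a urllib2-style password manager needs:
#
#   url('http://joe:xyzzy@example.com/repo').authinfo()
#   -> ('http://example.com/repo',
#       (None, ('http://joe:xyzzy@example.com/repo', 'example.com'),
#        'joe', 'xyzzy'))
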
1553 1553 def localpath(self):
1554 1554 if self.scheme == 'file' or self.scheme == 'bundle':
1555 1555 path = self.path or '/'
1556 1556 # For Windows, we need to promote hosts containing drive
1557 1557 # letters to paths with drive letters.
1558 1558 if hasdriveletter(self._hostport):
1559 1559 path = self._hostport + '/' + self.path
1560 1560 elif self.host is not None and self.path:
1561 1561 path = '/' + path
1562 1562 # We also need to handle the case of file:///C:/, which
1563 1563 # should return C:/, not /C:/.
1564 1564 elif hasdriveletter(path):
1565 1565 # Strip leading slash from paths with drive names
1566 1566 return path[1:]
1567 1567 return path
1568 1568 return self._origpath
1569 1569
1570 1570 def hasscheme(path):
1571 1571 return bool(url(path).scheme)
1572 1572
1573 1573 def hasdriveletter(path):
1574 1574 return path[1:2] == ':' and path[0:1].isalpha()
1575 1575
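# Illustrative sketch (not part of the original source):
#
#   hasdriveletter('c:\\foo')  -> True
#   hasdriveletter('/tmp/foo') -> False
#   hasdriveletter('repo')     -> False
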
1576 1576 def localpath(path):
1577 1577 return url(path, parsequery=False, parsefragment=False).localpath()
1578 1578
1579 1579 def hidepassword(u):
1580 1580 '''hide user credential in a url string'''
1581 1581 u = url(u)
1582 1582 if u.passwd:
1583 1583 u.passwd = '***'
1584 1584 return str(u)
1585 1585
1586 1586 def removeauth(u):
1587 1587 '''remove all authentication information from a url string'''
1588 1588 u = url(u)
1589 1589 u.user = u.passwd = None
1590 1590 return str(u)
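
A minimal usage sketch of the two helpers above (hypothetical URL; the
import assumes these functions live in Mercurial's util module, as the
surrounding code suggests):

    from mercurial import util

    u = 'https://alice:secret@example.com/repo?style=raw#tip'
    print util.hidepassword(u)  # https://alice:***@example.com/repo?style=raw#tip
    print util.removeauth(u)    # https://example.com/repo?style=raw#tip
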
@@ -1,286 +1,286 b''
1 1 # windows.py - Windows utility function implementations for Mercurial
2 2 #
3 3 # Copyright 2005-2009 Matt Mackall <mpm@selenic.com> and others
4 4 #
5 5 # This software may be used and distributed according to the terms of the
6 6 # GNU General Public License version 2 or any later version.
7 7
8 8 from i18n import _
9 9 import osutil
10 10 import errno, msvcrt, os, re, sys
11 11
12 12 nulldev = 'NUL:'
13 13 umask = 002
14 14
15 15 # wrap osutil.posixfile to provide friendlier exceptions
16 16 def posixfile(name, mode='r', buffering=-1):
17 17 try:
18 18 return osutil.posixfile(name, mode, buffering)
19 19 except WindowsError, err:
20 20 raise IOError(err.errno, '%s: %s' % (name, err.strerror))
21 21 posixfile.__doc__ = osutil.posixfile.__doc__
22 22
23 23 class winstdout(object):
24 24 '''stdout on windows misbehaves if sent through a pipe'''
25 25
26 26 def __init__(self, fp):
27 27 self.fp = fp
28 28
29 29 def __getattr__(self, key):
30 30 return getattr(self.fp, key)
31 31
32 32 def close(self):
33 33 try:
34 34 self.fp.close()
35 35 except IOError:
36 36 pass
37 37
38 38 def write(self, s):
39 39 try:
40 40 # This is a workaround for the "Not enough space" error when
41 41 # writing a large amount of data to the console.
42 42 limit = 16000
43 43 l = len(s)
44 44 start = 0
45 45 self.softspace = 0
46 46 while start < l:
47 47 end = start + limit
48 48 self.fp.write(s[start:end])
49 49 start = end
50 50 except IOError, inst:
51 51 if inst.errno != 0:
52 52 raise
53 53 self.close()
54 54 raise IOError(errno.EPIPE, 'Broken pipe')
55 55
56 56 def flush(self):
57 57 try:
58 58 return self.fp.flush()
59 59 except IOError, inst:
60 60 if inst.errno != errno.EINVAL:
61 61 raise
62 62 self.close()
63 63 raise IOError(errno.EPIPE, 'Broken pipe')
64 64
65 65 sys.stdout = winstdout(sys.stdout)
66 66
67 67 def _is_win_9x():
68 68 '''return true if running on Windows 95, 98 or ME.'''
69 69 try:
70 70 return sys.getwindowsversion()[3] == 1
71 71 except AttributeError:
72 72 return 'command' in os.environ.get('comspec', '')
73 73
74 74 def openhardlinks():
75 75 return not _is_win_9x()
76 76
77 77 def parsepatchoutput(output_line):
78 78 """parses the output produced by patch and returns the filename"""
79 79 pf = output_line[14:]
80 80 if pf[0] == '`':
81 81 pf = pf[1:-1] # Remove the quotes
82 82 return pf
83 83
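# Illustrative sketch (not part of the original source): the slice above
# assumes GNU patch output of the form "patching file NAME", where NAME may
# be wrapped in `...' quotes:
#
#   parsepatchoutput("patching file foo/bar.c")   -> 'foo/bar.c'
#   parsepatchoutput("patching file `foo bar.c'") -> 'foo bar.c'
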
84 84 def sshargs(sshcmd, host, user, port):
85 85 '''Build argument list for ssh or Plink'''
86 86 pflag = 'plink' in sshcmd.lower() and '-P' or '-p'
87 87 args = user and ("%s@%s" % (user, host)) or host
88 88 return port and ("%s %s %s" % (args, pflag, port)) or args
89 89
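# Illustrative sketch (not part of the original source), with hypothetical
# hosts and ports; Plink expects -P for the port, OpenSSH expects -p:
#
#   sshargs('ssh', 'example.com', 'joe', '2222')       -> 'joe@example.com -p 2222'
#   sshargs('plink.exe', 'example.com', 'joe', '2222') -> 'joe@example.com -P 2222'
#   sshargs('ssh', 'example.com', None, None)          -> 'example.com'
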
90 90 def setflags(f, l, x):
91 91 pass
92 92
93 93 def checkexec(path):
94 94 return False
95 95
96 96 def checklink(path):
97 97 return False
98 98
99 99 def setbinary(fd):
100 100 # When run without console, pipes may expose invalid
101 101 # fileno(), usually set to -1.
102 102 if hasattr(fd, 'fileno') and fd.fileno() >= 0:
103 103 msvcrt.setmode(fd.fileno(), os.O_BINARY)
104 104
105 105 def pconvert(path):
106 106 return '/'.join(path.split(os.sep))
107 107
108 108 def localpath(path):
109 109 return path.replace('/', '\\')
110 110
111 111 def normpath(path):
112 112 return pconvert(os.path.normpath(path))
113 113
114 114 def realpath(path):
115 115 '''
116 116 Returns the true, canonical file system path equivalent to the given
117 117 path.
118 118 '''
119 119 # TODO: There may be a more clever way to do this that also handles other,
120 120 # less common file systems.
121 121 return os.path.normpath(os.path.normcase(os.path.realpath(path)))
122 122
123 123 def samestat(s1, s2):
124 124 return False
125 125
126 126 # A sequence of backslashes is special iff it precedes a double quote:
127 127 # - if there's an even number of backslashes, the double quote is not
128 128 # quoted (i.e. it ends the quoted region)
129 129 # - if there's an odd number of backslashes, the double quote is quoted
130 130 # - in both cases, every pair of backslashes is unquoted into a single
131 131 # backslash
132 132 # (See http://msdn2.microsoft.com/en-us/library/a1y7w461.aspx )
133 133 # So, to quote a string, we must surround it in double quotes, double
134 134 # the number of backslashes that precede double quotes and add another
135 135 # backslash before every double quote (being careful with the double
136 136 # quote we've appended to the end)
137 137 _quotere = None
138 138 def shellquote(s):
139 139 global _quotere
140 140 if _quotere is None:
141 141 _quotere = re.compile(r'(\\*)("|\\$)')
142 142 return '"%s"' % _quotere.sub(r'\1\1\\\2', s)
143 143
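# Illustrative sketch (not part of the original source), following the
# quoting rules described above:
#
#   >>> print shellquote(r'say "hi"')
#   "say \"hi\""
#   >>> print shellquote('C:\\path\\')
#   "C:\path\\"
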
144 144 def quotecommand(cmd):
145 145 """Build a command string suitable for os.popen* calls."""
146 146 if sys.version_info < (2, 7, 1):
147 147 # Python versions since 2.7.1 do this extra quoting themselves
148 148 return '"' + cmd + '"'
149 149 return cmd
150 150
151 151 def popen(command, mode='r'):
152 152 # Work around "popen spawned process may not write to stdout
153 153 # under windows"
154 154 # http://bugs.python.org/issue1366
155 155 command += " 2> %s" % nulldev
156 156 return os.popen(quotecommand(command), mode)
157 157
158 def explain_exit(code):
158 def explainexit(code):
159 159 return _("exited with status %d") % code, code
160 160
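# Illustrative sketch (not part of the original source): the return value
# pairs a translated message with the raw code, e.g.
#
#   explainexit(1) -> ('exited with status 1', 1)
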
161 161 # if you change this stub into a real check, please try to implement the
162 162 # username and groupname functions above, too.
163 163 def isowner(st):
164 164 return True
165 165
166 166 def find_exe(command):
167 167 '''Find executable for command searching like cmd.exe does.
168 168 If command is a basename then PATH is searched for command.
169 169 PATH isn't searched if command is an absolute or relative path.
170 170 An extension from PATHEXT is found and added if not present.
171 171 If command isn't found None is returned.'''
172 172 pathext = os.environ.get('PATHEXT', '.COM;.EXE;.BAT;.CMD')
173 173 pathexts = [ext for ext in pathext.lower().split(os.pathsep)]
174 174 if os.path.splitext(command)[1].lower() in pathexts:
175 175 pathexts = ['']
176 176
177 177 def findexisting(pathcommand):
178 178 'Will append extension (if needed) and return existing file'
179 179 for ext in pathexts:
180 180 executable = pathcommand + ext
181 181 if os.path.exists(executable):
182 182 return executable
183 183 return None
184 184
185 185 if os.sep in command:
186 186 return findexisting(command)
187 187
188 188 for path in os.environ.get('PATH', '').split(os.pathsep):
189 189 executable = findexisting(os.path.join(path, command))
190 190 if executable is not None:
191 191 return executable
192 192 return findexisting(os.path.expanduser(os.path.expandvars(command)))
193 193
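# Illustrative sketch (not part of the original source); PATH, PATHEXT and
# the resulting paths are hypothetical:
#
#   find_exe('python')            # searches PATH, may yield
#                                 # 'C:\\Python27\\python.exe'
#   find_exe('C:\\Tools\\hg.bat') # no PATH search; returned as-is if the
#                                 # file exists, else None
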
194 194 def statfiles(files):
195 195 '''Stat each file in files and yield stat or None if file does not exist.
196 196 Cluster and cache stat per directory to minimize the number of OS stat calls.'''
197 197 ncase = os.path.normcase
198 198 dircache = {} # dirname -> filename -> status | None if file does not exist
199 199 for nf in files:
200 200 nf = ncase(nf)
201 201 dir, base = os.path.split(nf)
202 202 if not dir:
203 203 dir = '.'
204 204 cache = dircache.get(dir, None)
205 205 if cache is None:
206 206 try:
207 207 dmap = dict([(ncase(n), s)
208 208 for n, k, s in osutil.listdir(dir, True)])
209 209 except OSError, err:
210 210 # handle directory not found in Python versions prior to 2.5
211 211 # Python <= 2.4 returns native Windows code 3 in errno
212 212 # Python >= 2.5 returns ENOENT and adds winerror field
213 213 # EINVAL is raised if dir is not a directory.
214 214 if err.errno not in (3, errno.ENOENT, errno.EINVAL,
215 215 errno.ENOTDIR):
216 216 raise
217 217 dmap = {}
218 218 cache = dircache.setdefault(dir, dmap)
219 219 yield cache.get(base, None)
220 220
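# Illustrative sketch (not part of the original source), with hypothetical
# file names; results come back in input order, None marking missing files:
#
#   for st in statfiles(['a/x.txt', 'a/y.txt', 'b/z.txt']):
#       print st is not None
#
# Only two osutil.listdir() calls are made (one per directory), no matter
# how many files each directory holds.
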
221 221 def username(uid=None):
222 222 """Return the name of the user with the given uid.
223 223
224 224 If uid is None, return the name of the current user."""
225 225 return None
226 226
227 227 def groupname(gid=None):
228 228 """Return the name of the group with the given gid.
229 229
230 230 If gid is None, return the name of the current group."""
231 231 return None
232 232
233 233 def _removedirs(name):
234 234 """special version of os.removedirs that does not remove symlinked
235 235 directories or junction points if they actually contain files"""
236 236 if osutil.listdir(name):
237 237 return
238 238 os.rmdir(name)
239 239 head, tail = os.path.split(name)
240 240 if not tail:
241 241 head, tail = os.path.split(head)
242 242 while head and tail:
243 243 try:
244 244 if osutil.listdir(head):
245 245 return
246 246 os.rmdir(head)
247 247 except (ValueError, OSError):
248 248 break
249 249 head, tail = os.path.split(head)
250 250
251 251 def unlinkpath(f):
252 252 """unlink and remove the directory if it is empty"""
253 253 unlink(f)
254 254 # try removing directories that might now be empty
255 255 try:
256 256 _removedirs(os.path.dirname(f))
257 257 except OSError:
258 258 pass
259 259
260 260 def rename(src, dst):
261 261 '''atomically rename file src to dst, replacing dst if it exists'''
262 262 try:
263 263 os.rename(src, dst)
264 264 except OSError, e:
265 265 if e.errno != errno.EEXIST:
266 266 raise
267 267 unlink(dst)
268 268 os.rename(src, dst)
269 269
270 270 def gethgcmd():
271 271 return [sys.executable] + sys.argv[:1]
272 272
273 273 def termwidth():
274 274 # cmd.exe does not handle CR like a unix console, the CR is
275 275 # counted in the line length. On 80 columns consoles, if 80
276 276 # characters are written, the following CR won't apply on the
277 277 # current line but on the new one. Keep room for it.
278 278 return 79
279 279
280 280 def groupmembers(name):
281 281 # Don't support groups on Windows for now
282 282 raise KeyError()
283 283
284 284 from win32 import *
285 285
286 286 expandglobs = True