rename explain_exit to explainexit
Adrian Buehlmann
r14234:600e6400 default
@@ -1,756 +1,756 @@
# bugzilla.py - bugzilla integration for mercurial
#
# Copyright 2006 Vadim Gelfer <vadim.gelfer@gmail.com>
# Copyright 2011 Jim Hague <jim.hague@acm.org>
#
# This software may be used and distributed according to the terms of the
# GNU General Public License version 2 or any later version.

'''hooks for integrating with the Bugzilla bug tracker

This hook extension adds comments on bugs in Bugzilla when changesets
that refer to bugs by Bugzilla ID are seen. The comment is formatted using
the Mercurial template mechanism.

The hook does not change bug status.

Three basic modes of access to Bugzilla are provided:

1. Access via the Bugzilla XMLRPC interface. Requires Bugzilla 3.4 or later.

2. Check data via the Bugzilla XMLRPC interface and submit bug change
   via email to Bugzilla email interface. Requires Bugzilla 3.4 or later.

3. Writing directly to the Bugzilla database. Only Bugzilla installations
   using MySQL are supported. Requires Python MySQLdb.

Writing directly to the database is susceptible to schema changes, and
relies on a Bugzilla contrib script to send out bug change
notification emails. This script runs as the user running Mercurial,
must be run on the host with the Bugzilla install, and requires
permission to read Bugzilla configuration details and the necessary
MySQL user and password to have full access rights to the Bugzilla
database. For these reasons this access mode is now considered
deprecated, and will not be updated for new Bugzilla versions going
forward.

Access via XMLRPC needs a Bugzilla username and password to be specified
in the configuration. Comments are added under that username. Since the
configuration must be readable by all Mercurial users, it is recommended
that the rights of that user are restricted in Bugzilla to the minimum
necessary to add comments.

Access via XMLRPC/email uses XMLRPC to query Bugzilla, but sends
email to the Bugzilla email interface to submit comments to bugs.
The From: address in the email is set to the email address of the Mercurial
user, so the comment appears to come from the Mercurial user. In the event
that the Mercurial user email is not recognised by Bugzilla as a Bugzilla
user, the email associated with the Bugzilla username used to log into
Bugzilla is used instead as the source of the comment.

Configuration items common to all access modes:

bugzilla.version
  The access type to use. Values recognised are:

  :``xmlrpc``: Bugzilla XMLRPC interface.
  :``xmlrpc+email``: Bugzilla XMLRPC and email interfaces.
  :``3.0``: MySQL access, Bugzilla 3.0 and later.
  :``2.18``: MySQL access, Bugzilla 2.18 and up to but not
    including 3.0.
  :``2.16``: MySQL access, Bugzilla 2.16 and up to but not
    including 2.18.

bugzilla.regexp
  Regular expression to match bug IDs in changeset commit message.
  Must contain one "()" group. The default expression matches ``Bug
  1234``, ``Bug no. 1234``, ``Bug number 1234``, ``Bugs 1234,5678``,
  ``Bug 1234 and 5678`` and variations thereof. Matching is case
  insensitive.

bugzilla.style
  The style file to use when formatting comments.

bugzilla.template
  Template to use when formatting comments. Overrides style if
  specified. In addition to the usual Mercurial keywords, the
  extension specifies:

  :``{bug}``: The Bugzilla bug ID.
  :``{root}``: The full pathname of the Mercurial repository.
  :``{webroot}``: Stripped pathname of the Mercurial repository.
  :``{hgweb}``: Base URL for browsing Mercurial repositories.

  Default ``changeset {node|short} in repo {root} refers to bug
  {bug}.\\ndetails:\\n\\t{desc|tabindent}``

bugzilla.strip
  The number of path separator characters to strip from the front of
  the Mercurial repository path (``{root}`` in templates) to produce
  ``{webroot}``. For example, a repository with ``{root}``
  ``/var/local/my-project`` with a strip of 2 gives a value for
  ``{webroot}`` of ``my-project``. Default 0.

web.baseurl
  Base URL for browsing Mercurial repositories. Referenced from
  templates as ``{hgweb}``.

Configuration items common to XMLRPC+email and MySQL access modes:

bugzilla.usermap
  Path of file containing Mercurial committer email to Bugzilla user email
  mappings. If specified, the file should contain one mapping per
  line::

    committer = Bugzilla user

  See also the ``[usermap]`` section.

The ``[usermap]`` section is used to specify mappings of Mercurial
committer email to Bugzilla user email. See also ``bugzilla.usermap``.
Contains entries of the form ``committer = Bugzilla user``.

XMLRPC access mode configuration:

bugzilla.bzurl
  The base URL for the Bugzilla installation.
  Default ``http://localhost/bugzilla``.

bugzilla.user
  The username to use to log into Bugzilla via XMLRPC. Default
  ``bugs``.

bugzilla.password
  The password for Bugzilla login.

XMLRPC+email access mode uses the XMLRPC access mode configuration items,
and also:

bugzilla.bzemail
  The Bugzilla email address.

In addition, the Mercurial email settings must be configured. See the
documentation in hgrc(5), sections ``[email]`` and ``[smtp]``.

MySQL access mode configuration:

bugzilla.host
  Hostname of the MySQL server holding the Bugzilla database.
  Default ``localhost``.

bugzilla.db
  Name of the Bugzilla database in MySQL. Default ``bugs``.

bugzilla.user
  Username to use to access MySQL server. Default ``bugs``.

bugzilla.password
  Password to use to access MySQL server.

bugzilla.timeout
  Database connection timeout (seconds). Default 5.

bugzilla.bzuser
  Fallback Bugzilla user name to record comments with, if changeset
  committer cannot be found as a Bugzilla user.

bugzilla.bzdir
  Bugzilla install directory. Used by default notify. Default
  ``/var/www/html/bugzilla``.

bugzilla.notify
  The command to run to get Bugzilla to send bug change notification
  emails. Substitutes from a map with 3 keys, ``bzdir``, ``id`` (bug
  id) and ``user`` (committer bugzilla email). Default depends on
  version; from 2.18 it is "cd %(bzdir)s && perl -T
  contrib/sendbugmail.pl %(id)s %(user)s".

Activating the extension::

    [extensions]
    bugzilla =

    [hooks]
    # run bugzilla hook on every change pulled or pushed in here
    incoming.bugzilla = python:hgext.bugzilla.hook

Example configurations:

XMLRPC example configuration. This uses the Bugzilla at
``http://my-project.org/bugzilla``, logging in as user
``bugmail@my-project.org`` with password ``plugh``. It is used with a
collection of Mercurial repositories in ``/var/local/hg/repos/``,
with a web interface at ``http://my-project.org/hg``. ::

    [bugzilla]
    bzurl=http://my-project.org/bugzilla
    user=bugmail@my-project.org
    password=plugh
    version=xmlrpc
    template=Changeset {node|short} in {root|basename}.
             {hgweb}/{webroot}/rev/{node|short}\\n
             {desc}\\n
    strip=5

    [web]
    baseurl=http://my-project.org/hg

XMLRPC+email example configuration. This uses the Bugzilla at
``http://my-project.org/bugzilla``, logging in as user
``bugmail@my-project.org`` with password ``plugh``. It is used with a
collection of Mercurial repositories in ``/var/local/hg/repos/``,
with a web interface at ``http://my-project.org/hg``. Bug comments
are sent to the Bugzilla email address
``bugzilla@my-project.org``. ::

    [bugzilla]
    bzurl=http://my-project.org/bugzilla
    user=bugmail@my-project.org
    password=plugh
    version=xmlrpc
    bzemail=bugzilla@my-project.org
    template=Changeset {node|short} in {root|basename}.
             {hgweb}/{webroot}/rev/{node|short}\\n
             {desc}\\n
    strip=5

    [web]
    baseurl=http://my-project.org/hg

    [usermap]
    user@emaildomain.com=user.name@bugzilladomain.com

MySQL example configuration. This has a local Bugzilla 3.2 installation
in ``/opt/bugzilla-3.2``. The MySQL database is on ``localhost``,
the Bugzilla database name is ``bugs`` and MySQL is
accessed with MySQL username ``bugs`` password ``XYZZY``. It is used
with a collection of Mercurial repositories in ``/var/local/hg/repos/``,
with a web interface at ``http://my-project.org/hg``. ::

    [bugzilla]
    host=localhost
    password=XYZZY
    version=3.0
    bzuser=unknown@domain.com
    bzdir=/opt/bugzilla-3.2
    template=Changeset {node|short} in {root|basename}.
             {hgweb}/{webroot}/rev/{node|short}\\n
             {desc}\\n
    strip=5

    [web]
    baseurl=http://my-project.org/hg

    [usermap]
    user@emaildomain.com=user.name@bugzilladomain.com

All the above add a comment to the Bugzilla bug record of the form::

    Changeset 3b16791d6642 in repository-name.
    http://my-project.org/hg/repository-name/rev/3b16791d6642

    Changeset commit comment. Bug 1234.
'''

from mercurial.i18n import _
from mercurial.node import short
from mercurial import cmdutil, mail, templater, util
import re, time, xmlrpclib

class bzaccess(object):
    '''Base class for access to Bugzilla.'''

    def __init__(self, ui):
        self.ui = ui
        usermap = self.ui.config('bugzilla', 'usermap')
        if usermap:
            self.ui.readconfig(usermap, sections=['usermap'])

    def map_committer(self, user):
        '''map name of committer to Bugzilla user name.'''
        for committer, bzuser in self.ui.configitems('usermap'):
            if committer.lower() == user.lower():
                return bzuser
        return user

    # Methods to be implemented by access classes.
    def filter_real_bug_ids(self, ids):
        '''remove bug IDs that do not exist in Bugzilla from set.'''
        pass

    def filter_cset_known_bug_ids(self, node, ids):
        '''remove bug IDs where node occurs in comment text from set.'''
        pass

    def add_comment(self, bugid, text, committer):
        '''add comment to bug.

        If possible add the comment as being from the committer of
        the changeset. Otherwise use the default Bugzilla user.
        '''
        pass

    def notify(self, ids, committer):
        '''Force sending of Bugzilla notification emails.'''
        pass

# Bugzilla via direct access to MySQL database.
class bzmysql(bzaccess):
    '''Support for direct MySQL access to Bugzilla.

    The earliest Bugzilla version this is tested with is version 2.16.

    If your Bugzilla is version 3.2 or above, you are strongly
    recommended to use the XMLRPC access method instead.
    '''

    @staticmethod
    def sql_buglist(ids):
        '''return SQL-friendly list of bug ids'''
        return '(' + ','.join(map(str, ids)) + ')'

    _MySQLdb = None

    def __init__(self, ui):
        try:
            import MySQLdb as mysql
            bzmysql._MySQLdb = mysql
        except ImportError, err:
            raise util.Abort(_('python mysql support not available: %s') % err)

        bzaccess.__init__(self, ui)

        host = self.ui.config('bugzilla', 'host', 'localhost')
        user = self.ui.config('bugzilla', 'user', 'bugs')
        passwd = self.ui.config('bugzilla', 'password')
        db = self.ui.config('bugzilla', 'db', 'bugs')
        timeout = int(self.ui.config('bugzilla', 'timeout', 5))
        self.ui.note(_('connecting to %s:%s as %s, password %s\n') %
                     (host, db, user, '*' * len(passwd)))
        self.conn = bzmysql._MySQLdb.connect(host=host,
                                             user=user, passwd=passwd,
                                             db=db,
                                             connect_timeout=timeout)
        self.cursor = self.conn.cursor()
        self.longdesc_id = self.get_longdesc_id()
        self.user_ids = {}
        self.default_notify = "cd %(bzdir)s && ./processmail %(id)s %(user)s"

    def run(self, *args, **kwargs):
        '''run a query.'''
        self.ui.note(_('query: %s %s\n') % (args, kwargs))
        try:
            self.cursor.execute(*args, **kwargs)
        except bzmysql._MySQLdb.MySQLError:
            self.ui.note(_('failed query: %s %s\n') % (args, kwargs))
            raise

    def get_longdesc_id(self):
        '''get identity of longdesc field'''
        self.run('select fieldid from fielddefs where name = "longdesc"')
        ids = self.cursor.fetchall()
        if len(ids) != 1:
            raise util.Abort(_('unknown database schema'))
        return ids[0][0]

    def filter_real_bug_ids(self, ids):
        '''filter not-existing bug ids from set.'''
        self.run('select bug_id from bugs where bug_id in %s' %
                 bzmysql.sql_buglist(ids))
        return set([c[0] for c in self.cursor.fetchall()])

    def filter_cset_known_bug_ids(self, node, ids):
        '''filter bug ids that already refer to this changeset from set.'''

        self.run('''select bug_id from longdescs where
                    bug_id in %s and thetext like "%%%s%%"''' %
                 (bzmysql.sql_buglist(ids), short(node)))
        for (id,) in self.cursor.fetchall():
            self.ui.status(_('bug %d already knows about changeset %s\n') %
                           (id, short(node)))
            ids.discard(id)
        return ids

    def notify(self, ids, committer):
        '''tell bugzilla to send mail.'''

        self.ui.status(_('telling bugzilla to send mail:\n'))
        (user, userid) = self.get_bugzilla_user(committer)
        for id in ids:
            self.ui.status(_(' bug %s\n') % id)
            cmdfmt = self.ui.config('bugzilla', 'notify', self.default_notify)
            bzdir = self.ui.config('bugzilla', 'bzdir', '/var/www/html/bugzilla')
            try:
                # Backwards-compatible with old notify string, which
                # took one string. This will throw with a new format
                # string.
                cmd = cmdfmt % id
            except TypeError:
                cmd = cmdfmt % {'bzdir': bzdir, 'id': id, 'user': user}
            self.ui.note(_('running notify command %s\n') % cmd)
            fp = util.popen('(%s) 2>&1' % cmd)
            out = fp.read()
            ret = fp.close()
            if ret:
                self.ui.warn(out)
                raise util.Abort(_('bugzilla notify command %s') %
-                                util.explain_exit(ret)[0])
+                                util.explainexit(ret)[0])
        self.ui.status(_('done\n'))

    def get_user_id(self, user):
        '''look up numeric bugzilla user id.'''
        try:
            return self.user_ids[user]
        except KeyError:
            try:
                userid = int(user)
            except ValueError:
                self.ui.note(_('looking up user %s\n') % user)
                self.run('''select userid from profiles
                            where login_name like %s''', user)
                all = self.cursor.fetchall()
                if len(all) != 1:
                    raise KeyError(user)
                userid = int(all[0][0])
            self.user_ids[user] = userid
            return userid

    def get_bugzilla_user(self, committer):
        '''See if committer is a registered bugzilla user. Return
        bugzilla username and userid if so. If not, return default
        bugzilla username and userid.'''
        user = self.map_committer(committer)
        try:
            userid = self.get_user_id(user)
        except KeyError:
            try:
                defaultuser = self.ui.config('bugzilla', 'bzuser')
                if not defaultuser:
                    raise util.Abort(_('cannot find bugzilla user id for %s') %
                                     user)
                userid = self.get_user_id(defaultuser)
                user = defaultuser
            except KeyError:
                raise util.Abort(_('cannot find bugzilla user id for %s or %s') %
                                 (user, defaultuser))
        return (user, userid)

    def add_comment(self, bugid, text, committer):
        '''add comment to bug. try adding comment as committer of
        changeset, otherwise as default bugzilla user.'''
        (user, userid) = self.get_bugzilla_user(committer)
        now = time.strftime('%Y-%m-%d %H:%M:%S')
        self.run('''insert into longdescs
                    (bug_id, who, bug_when, thetext)
                    values (%s, %s, %s, %s)''',
                 (bugid, userid, now, text))
        self.run('''insert into bugs_activity (bug_id, who, bug_when, fieldid)
                    values (%s, %s, %s, %s)''',
                 (bugid, userid, now, self.longdesc_id))
        self.conn.commit()

class bzmysql_2_18(bzmysql):
    '''support for bugzilla 2.18 series.'''

    def __init__(self, ui):
        bzmysql.__init__(self, ui)
        self.default_notify = \
            "cd %(bzdir)s && perl -T contrib/sendbugmail.pl %(id)s %(user)s"

class bzmysql_3_0(bzmysql_2_18):
    '''support for bugzilla 3.0 series.'''

    def __init__(self, ui):
        bzmysql_2_18.__init__(self, ui)

    def get_longdesc_id(self):
        '''get identity of longdesc field'''
        self.run('select id from fielddefs where name = "longdesc"')
        ids = self.cursor.fetchall()
        if len(ids) != 1:
            raise util.Abort(_('unknown database schema'))
        return ids[0][0]

# Bugzilla via XMLRPC interface.

class CookieSafeTransport(xmlrpclib.SafeTransport):
    """A SafeTransport that retains cookies over its lifetime.

    The regular xmlrpclib transports ignore cookies. Which causes
    a bit of a problem when you need a cookie-based login, as with
    the Bugzilla XMLRPC interface.

    So this is a SafeTransport which looks for cookies being set
    in responses and saves them to add to all future requests.
    It appears a SafeTransport can do both HTTP and HTTPS sessions,
    which saves us having to do a CookieTransport too.
    """

    # Inspiration drawn from
    # http://blog.godson.in/2010/09/how-to-make-python-xmlrpclib-client.html
    # http://www.itkovian.net/base/transport-class-for-pythons-xml-rpc-lib/

    cookies = []
    def send_cookies(self, connection):
        if self.cookies:
            for cookie in self.cookies:
                connection.putheader("Cookie", cookie)

    def request(self, host, handler, request_body, verbose=0):
        self.verbose = verbose

        # issue XML-RPC request
        h = self.make_connection(host)
        if verbose:
            h.set_debuglevel(1)

        self.send_request(h, handler, request_body)
        self.send_host(h, host)
        self.send_cookies(h)
        self.send_user_agent(h)
        self.send_content(h, request_body)

        # Deal with differences between Python 2.4-2.6 and 2.7.
        # In the former h is a HTTP(S). In the latter it's a
        # HTTP(S)Connection. Luckily, the 2.4-2.6 implementation of
        # HTTP(S) has an underlying HTTP(S)Connection, so extract
        # that and use it.
        try:
            response = h.getresponse()
        except AttributeError:
            response = h._conn.getresponse()

        # Add any cookie definitions to our list.
        for header in response.msg.getallmatchingheaders("Set-Cookie"):
            val = header.split(": ", 1)[1]
            cookie = val.split(";", 1)[0]
            self.cookies.append(cookie)

        if response.status != 200:
            raise xmlrpclib.ProtocolError(host + handler, response.status,
                                          response.reason, response.msg.headers)

        payload = response.read()
        parser, unmarshaller = self.getparser()
        parser.feed(payload)
        parser.close()

        return unmarshaller.close()

class bzxmlrpc(bzaccess):
    """Support for access to Bugzilla via the Bugzilla XMLRPC API.

    Requires a minimum Bugzilla version 3.4.
    """

    def __init__(self, ui):
        bzaccess.__init__(self, ui)

        bzweb = self.ui.config('bugzilla', 'bzurl',
                               'http://localhost/bugzilla/')
        bzweb = bzweb.rstrip("/") + "/xmlrpc.cgi"

        user = self.ui.config('bugzilla', 'user', 'bugs')
        passwd = self.ui.config('bugzilla', 'password')

        self.bzproxy = xmlrpclib.ServerProxy(bzweb, CookieSafeTransport())
        self.bzproxy.User.login(dict(login=user, password=passwd))

    def get_bug_comments(self, id):
        """Return a string with all comment text for a bug."""
        c = self.bzproxy.Bug.comments(dict(ids=[id]))
        return ''.join([t['text'] for t in c['bugs'][str(id)]['comments']])

    def filter_real_bug_ids(self, ids):
        res = set()
        bugs = self.bzproxy.Bug.get(dict(ids=sorted(ids), permissive=True))
        for bug in bugs['bugs']:
            res.add(bug['id'])
        return res

    def filter_cset_known_bug_ids(self, node, ids):
        for id in sorted(ids):
            if self.get_bug_comments(id).find(short(node)) != -1:
                self.ui.status(_('bug %d already knows about changeset %s\n') %
                               (id, short(node)))
                ids.discard(id)
        return ids

    def add_comment(self, bugid, text, committer):
        self.bzproxy.Bug.add_comment(dict(id=bugid, comment=text))

class bzxmlrpcemail(bzxmlrpc):
    """Read data from Bugzilla via XMLRPC, send updates via email.

    Advantages of sending updates via email:
    1. Comments can be added as any user, not just logged in user.
    2. Bug statuses and other fields not accessible via XMLRPC can
       be updated. This is not currently used.
    """

    def __init__(self, ui):
        bzxmlrpc.__init__(self, ui)

        self.bzemail = self.ui.config('bugzilla', 'bzemail')
        if not self.bzemail:
            raise util.Abort(_("configuration 'bzemail' missing"))
        mail.validateconfig(self.ui)

    def send_bug_modify_email(self, bugid, commands, comment, committer):
        '''send modification message to Bugzilla bug via email.

        The message format is documented in the Bugzilla email_in.pl
        specification. commands is a list of command lines, comment is the
        comment text.

        To stop users from crafting commit comments with
        Bugzilla commands, specify the bug ID via the message body, rather
        than the subject line, and leave a blank line after it.
        '''
        user = self.map_committer(committer)
        matches = self.bzproxy.User.get(dict(match=[user]))
        if not matches['users']:
            user = self.ui.config('bugzilla', 'user', 'bugs')
            matches = self.bzproxy.User.get(dict(match=[user]))
            if not matches['users']:
                raise util.Abort(_("default bugzilla user %s email not found") %
                                 user)
        user = matches['users'][0]['email']

        text = "\n".join(commands) + "\n@bug_id = %d\n\n" % bugid + comment

        _charsets = mail._charsets(self.ui)
        user = mail.addressencode(self.ui, user, _charsets)
        bzemail = mail.addressencode(self.ui, self.bzemail, _charsets)
        msg = mail.mimeencode(self.ui, text, _charsets)
        msg['From'] = user
        msg['To'] = bzemail
        msg['Subject'] = mail.headencode(self.ui, "Bug modification", _charsets)
        sendmail = mail.connect(self.ui)
        sendmail(user, bzemail, msg.as_string())

    def add_comment(self, bugid, text, committer):
        self.send_bug_modify_email(bugid, [], text, committer)

class bugzilla(object):
    # supported versions of bugzilla. different versions have
    # different schemas.
    _versions = {
        '2.16': bzmysql,
        '2.18': bzmysql_2_18,
        '3.0': bzmysql_3_0,
        'xmlrpc': bzxmlrpc,
        'xmlrpc+email': bzxmlrpcemail
        }

    _default_bug_re = (r'bugs?\s*,?\s*(?:#|nos?\.?|num(?:ber)?s?)?\s*'
                       r'((?:\d+\s*(?:,?\s*(?:and)?)?\s*)+)')

    _bz = None

    def __init__(self, ui, repo):
        self.ui = ui
        self.repo = repo

    def bz(self):
        '''return object that knows how to talk to bugzilla version in
        use.'''

        if bugzilla._bz is None:
            bzversion = self.ui.config('bugzilla', 'version')
            try:
                bzclass = bugzilla._versions[bzversion]
            except KeyError:
                raise util.Abort(_('bugzilla version %s not supported') %
                                 bzversion)
            bugzilla._bz = bzclass(self.ui)
        return bugzilla._bz

    def __getattr__(self, key):
        return getattr(self.bz(), key)

    _bug_re = None
    _split_re = None

    def find_bug_ids(self, ctx):
        '''return set of integer bug IDs from commit comment.

        Extract bug IDs from changeset comments. Filter out any that are
        not known to Bugzilla, and any that already have a reference to
        the given changeset in their comments.
        '''
        if bugzilla._bug_re is None:
            bugzilla._bug_re = re.compile(
                self.ui.config('bugzilla', 'regexp', bugzilla._default_bug_re),
                re.IGNORECASE)
            bugzilla._split_re = re.compile(r'\D+')
        start = 0
        ids = set()
        while True:
            m = bugzilla._bug_re.search(ctx.description(), start)
            if not m:
                break
            start = m.end()
            for id in bugzilla._split_re.split(m.group(1)):
                if not id:
                    continue
                ids.add(int(id))
        if ids:
            ids = self.filter_real_bug_ids(ids)
        if ids:
            ids = self.filter_cset_known_bug_ids(ctx.node(), ids)
        return ids

    def update(self, bugid, ctx):
        '''update bugzilla bug with reference to changeset.'''

        def webroot(root):
            '''strip leading prefix of repo root and turn into
            url-safe path.'''
            count = int(self.ui.config('bugzilla', 'strip', 0))
            root = util.pconvert(root)
            while count > 0:
                c = root.find('/')
                if c == -1:
                    break
                root = root[c + 1:]
                count -= 1
            return root

        mapfile = self.ui.config('bugzilla', 'style')
        tmpl = self.ui.config('bugzilla', 'template')
        t = cmdutil.changeset_templater(self.ui, self.repo,
                                        False, None, mapfile, False)
        if not mapfile and not tmpl:
            tmpl = _('changeset {node|short} in repo {root} refers '
                     'to bug {bug}.\ndetails:\n\t{desc|tabindent}')
        if tmpl:
            tmpl = templater.parsestring(tmpl, quoted=False)
            t.use_template(tmpl)
        self.ui.pushbuffer()
        t.show(ctx, changes=ctx.changeset(),
               bug=str(bugid),
               hgweb=self.ui.config('web', 'baseurl'),
               root=self.repo.root,
               webroot=webroot(self.repo.root))
        data = self.ui.popbuffer()
        self.add_comment(bugid, data, util.email(ctx.user()))

def hook(ui, repo, hooktype, node=None, **kwargs):
    '''add comment to bugzilla for each changeset that refers to a
    bugzilla bug id. only add a comment once per bug, so same change
    seen multiple times does not fill bug with duplicate data.'''
    if node is None:
        raise util.Abort(_('hook type %s does not pass a changeset id') %
                         hooktype)
    try:
        bz = bugzilla(ui, repo)
        ctx = repo[node]
        ids = bz.find_bug_ids(ctx)
        if ids:
            for id in ids:
                bz.update(id, ctx)
            bz.notify(ids, util.email(ctx.user()))
    except Exception, e:
        raise util.Abort(_('Bugzilla error: %s') % e)

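The default ``bugzilla.regexp`` pattern documented above can be exercised on
its own. A minimal standalone sketch follows; the pattern and the ID-splitting
loop are quoted from ``_default_bug_re`` and ``find_bug_ids`` in the file
above, while the sample commit messages are made up for illustration::

    import re

    # Pattern copied from bugzilla._default_bug_re above.
    bug_re = re.compile(r'bugs?\s*,?\s*(?:#|nos?\.?|num(?:ber)?s?)?\s*'
                        r'((?:\d+\s*(?:,?\s*(?:and)?)?\s*)+)', re.IGNORECASE)
    split_re = re.compile(r'\D+')

    def find_bug_ids(description):
        '''Collect integer bug IDs the way bugzilla.find_bug_ids() does.'''
        ids = set()
        start = 0
        while True:
            m = bug_re.search(description, start)
            if not m:
                break
            start = m.end()
            for i in split_re.split(m.group(1)):
                if i:
                    ids.add(int(i))
        return ids

    print(sorted(find_bug_ids('Fix crash on empty input. Bug 1234.')))
    # [1234]
    print(sorted(find_bug_ids('Bugs 1234,5678 and 9012: update the parser.')))
    # [1234, 5678, 9012]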
@@ -1,411 +1,411 @@
# common.py - common code for the convert extension
#
# Copyright 2005-2009 Matt Mackall <mpm@selenic.com> and others
#
# This software may be used and distributed according to the terms of the
# GNU General Public License version 2 or any later version.

import base64, errno
import os
import cPickle as pickle
from mercurial import util
from mercurial.i18n import _

def encodeargs(args):
    def encodearg(s):
        lines = base64.encodestring(s)
        lines = [l.splitlines()[0] for l in lines]
        return ''.join(lines)

    s = pickle.dumps(args)
    return encodearg(s)

def decodeargs(s):
    s = base64.decodestring(s)
    return pickle.loads(s)

class MissingTool(Exception):
    pass

def checktool(exe, name=None, abort=True):
    name = name or exe
    if not util.find_exe(exe):
        exc = abort and util.Abort or MissingTool
        raise exc(_('cannot find required "%s" tool') % name)

class NoRepo(Exception):
    pass

SKIPREV = 'SKIP'

class commit(object):
    def __init__(self, author, date, desc, parents, branch=None, rev=None,
                 extra={}, sortkey=None):
        self.author = author or 'unknown'
        self.date = date or '0 0'
        self.desc = desc
        self.parents = parents
        self.branch = branch
        self.rev = rev
        self.extra = extra
        self.sortkey = sortkey

class converter_source(object):
    """Conversion source interface"""

    def __init__(self, ui, path=None, rev=None):
        """Initialize conversion source (or raise NoRepo("message")
        exception if path is not a valid repository)"""
        self.ui = ui
        self.path = path
        self.rev = rev

        self.encoding = 'utf-8'

    def before(self):
        pass

    def after(self):
        pass

    def setrevmap(self, revmap):
        """set the map of already-converted revisions"""
        pass

    def getheads(self):
        """Return a list of this repository's heads"""
        raise NotImplementedError()

    def getfile(self, name, rev):
        """Return a pair (data, mode) where data is the file content
        as a string and mode one of '', 'x' or 'l'. rev is the
        identifier returned by a previous call to getchanges(). Raise
        IOError to indicate that name was deleted in rev.
        """
        raise NotImplementedError()

    def getchanges(self, version):
        """Returns a tuple of (files, copies).

        files is a sorted list of (filename, id) tuples for all files
        changed between version and its first parent returned by
        getcommit(). id is the source revision id of the file.

        copies is a dictionary of dest: source
        """
        raise NotImplementedError()

    def getcommit(self, version):
        """Return the commit object for version"""
        raise NotImplementedError()

    def gettags(self):
        """Return the tags as a dictionary of name: revision

        Tag names must be UTF-8 strings.
        """
        raise NotImplementedError()

    def recode(self, s, encoding=None):
        if not encoding:
            encoding = self.encoding or 'utf-8'

        if isinstance(s, unicode):
            return s.encode("utf-8")
        try:
            return s.decode(encoding).encode("utf-8")
        except:
            try:
                return s.decode("latin-1").encode("utf-8")
            except:
                return s.decode(encoding, "replace").encode("utf-8")

    def getchangedfiles(self, rev, i):
        """Return the files changed by rev compared to parent[i].

        i is an index selecting one of the parents of rev. The return
        value should be the list of files that are different in rev and
        this parent.

        If rev has no parents, i is None.

        This function is only needed to support --filemap
        """
        raise NotImplementedError()

    def converted(self, rev, sinkrev):
        '''Notify the source that a revision has been converted.'''
        pass

    def hasnativeorder(self):
        """Return true if this source has a meaningful, native revision
142 order. For instance, Mercurial revisions are stored sequentially
142 order. For instance, Mercurial revisions are stored sequentially
143 while there is no such global ordering with Darcs.
143 while there is no such global ordering with Darcs.
144 """
144 """
145 return False
145 return False
146
146
147 def lookuprev(self, rev):
147 def lookuprev(self, rev):
148 """If rev is a meaningful revision reference in source, return
148 """If rev is a meaningful revision reference in source, return
149 the referenced identifier in the same format used by getcommit().
149 the referenced identifier in the same format used by getcommit().
150 return None otherwise.
150 return None otherwise.
151 """
151 """
152 return None
152 return None
153
153
154 def getbookmarks(self):
154 def getbookmarks(self):
155 """Return the bookmarks as a dictionary of name: revision
155 """Return the bookmarks as a dictionary of name: revision
156
156
157 Bookmark names are to be UTF-8 strings.
157 Bookmark names are to be UTF-8 strings.
158 """
158 """
159 return {}
159 return {}
160
160
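To make the converter_source contract more concrete, here is a minimal, purely hypothetical in-memory source that satisfies the methods documented above. It is only a sketch; real backends (CVS, Subversion, git, ...) are far more involved:

    class memsource(converter_source):
        """Toy source: a single revision 'r1' containing one file."""
        def __init__(self, ui, path=None, rev=None):
            super(memsource, self).__init__(ui, path, rev)
            self._data = {('a.txt', 'r1'): ('hello\n', '')}  # (data, mode)

        def getheads(self):
            return ['r1']

        def getcommit(self, version):
            return commit(author='toy@example.com', date='0 0',
                          desc='toy revision', parents=[], rev=version)

        def getchanges(self, version):
            files = sorted((f, r) for f, r in self._data if r == version)
            return files, {}                 # no copies in this toy source

        def getfile(self, name, rev):
            try:
                return self._data[(name, rev)]
            except KeyError:
                # the contract above asks for IOError on deleted files
                raise IOError('%s is not present in %s' % (name, rev))

        def gettags(self):
            return {}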
161 class converter_sink(object):
161 class converter_sink(object):
162 """Conversion sink (target) interface"""
162 """Conversion sink (target) interface"""
163
163
164 def __init__(self, ui, path):
164 def __init__(self, ui, path):
165 """Initialize conversion sink (or raise NoRepo("message")
165 """Initialize conversion sink (or raise NoRepo("message")
166 exception if path is not a valid repository)
166 exception if path is not a valid repository)
167
167
168 created is a list of paths to remove if a fatal error occurs
168 created is a list of paths to remove if a fatal error occurs
169 later"""
169 later"""
170 self.ui = ui
170 self.ui = ui
171 self.path = path
171 self.path = path
172 self.created = []
172 self.created = []
173
173
174 def getheads(self):
174 def getheads(self):
175 """Return a list of this repository's heads"""
175 """Return a list of this repository's heads"""
176 raise NotImplementedError()
176 raise NotImplementedError()
177
177
178 def revmapfile(self):
178 def revmapfile(self):
179 """Path to a file that will contain lines
179 """Path to a file that will contain lines
180 source_rev_id sink_rev_id
180 source_rev_id sink_rev_id
181 mapping equivalent revision identifiers for each system."""
181 mapping equivalent revision identifiers for each system."""
182 raise NotImplementedError()
182 raise NotImplementedError()
183
183
184 def authorfile(self):
184 def authorfile(self):
185 """Path to a file that will contain lines
185 """Path to a file that will contain lines
186 srcauthor=dstauthor
186 srcauthor=dstauthor
187 mapping equivalent author identifiers for each system."""
187 mapping equivalent author identifiers for each system."""
188 return None
188 return None
189
189
190 def putcommit(self, files, copies, parents, commit, source, revmap):
190 def putcommit(self, files, copies, parents, commit, source, revmap):
191 """Create a revision with all changed files listed in 'files'
191 """Create a revision with all changed files listed in 'files'
192 and having listed parents. 'commit' is a commit object
192 and having listed parents. 'commit' is a commit object
193 containing at a minimum the author, date, and message for this
193 containing at a minimum the author, date, and message for this
194 changeset. 'files' is a list of (path, version) tuples,
194 changeset. 'files' is a list of (path, version) tuples,
195 'copies' is a dictionary mapping destinations to sources,
195 'copies' is a dictionary mapping destinations to sources,
196 'source' is the source repository, and 'revmap' is a mapfile
196 'source' is the source repository, and 'revmap' is a mapfile
197 of source revisions to converted revisions. Only getfile() and
197 of source revisions to converted revisions. Only getfile() and
198 lookuprev() should be called on 'source'.
198 lookuprev() should be called on 'source'.
199
199
200 Note that the sink repository is not told to update itself to
200 Note that the sink repository is not told to update itself to
201 a particular revision (or even what that revision would be)
201 a particular revision (or even what that revision would be)
202 before it receives the file data.
202 before it receives the file data.
203 """
203 """
204 raise NotImplementedError()
204 raise NotImplementedError()
205
205
206 def puttags(self, tags):
206 def puttags(self, tags):
207 """Put tags into sink.
207 """Put tags into sink.
208
208
209 tags: {tagname: sink_rev_id, ...} where tagname is a UTF-8 string.
209 tags: {tagname: sink_rev_id, ...} where tagname is a UTF-8 string.
210 Return a pair (tag_revision, tag_parent_revision), or (None, None)
210 Return a pair (tag_revision, tag_parent_revision), or (None, None)
211 if nothing was changed.
211 if nothing was changed.
212 """
212 """
213 raise NotImplementedError()
213 raise NotImplementedError()
214
214
215 def setbranch(self, branch, pbranches):
215 def setbranch(self, branch, pbranches):
216 """Set the current branch name. Called before the first putcommit
216 """Set the current branch name. Called before the first putcommit
217 on the branch.
217 on the branch.
218 branch: branch name for subsequent commits
218 branch: branch name for subsequent commits
219 pbranches: (converted parent revision, parent branch) tuples"""
219 pbranches: (converted parent revision, parent branch) tuples"""
220 pass
220 pass
221
221
222 def setfilemapmode(self, active):
222 def setfilemapmode(self, active):
223 """Tell the destination that we're using a filemap
223 """Tell the destination that we're using a filemap
224
224
225 Some converter_sources (svn in particular) can claim that a file
225 Some converter_sources (svn in particular) can claim that a file
226 was changed in a revision, even if there was no change. This method
226 was changed in a revision, even if there was no change. This method
227 tells the destination that we're using a filemap and that it should
227 tells the destination that we're using a filemap and that it should
228 filter empty revisions.
228 filter empty revisions.
229 """
229 """
230 pass
230 pass
231
231
232 def before(self):
232 def before(self):
233 pass
233 pass
234
234
235 def after(self):
235 def after(self):
236 pass
236 pass
237
237
238 def putbookmarks(self, bookmarks):
238 def putbookmarks(self, bookmarks):
239 """Put bookmarks into sink.
239 """Put bookmarks into sink.
240
240
241 bookmarks: {bookmarkname: sink_rev_id, ...}
241 bookmarks: {bookmarkname: sink_rev_id, ...}
242 where bookmarkname is a UTF-8 string.
242 where bookmarkname is a UTF-8 string.
243 """
243 """
244 pass
244 pass
245
245
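The two map files described by revmapfile() and authorfile() are plain line-oriented text. A sketch of how an author map in the "srcauthor=dstauthor" form could be read (illustrative only; skipping '#' comment lines is an assumption of this sketch, not a statement about the real parser):

    def readauthormap(path):
        """Parse 'srcauthor=dstauthor' lines into a dict (sketch)."""
        authors = {}
        for line in open(path):
            line = line.strip()
            if not line or line.startswith('#') or '=' not in line:
                continue
            src, dst = line.split('=', 1)
            authors[src.strip()] = dst.strip()
        return authors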
246 class commandline(object):
246 class commandline(object):
247 def __init__(self, ui, command):
247 def __init__(self, ui, command):
248 self.ui = ui
248 self.ui = ui
249 self.command = command
249 self.command = command
250
250
251 def prerun(self):
251 def prerun(self):
252 pass
252 pass
253
253
254 def postrun(self):
254 def postrun(self):
255 pass
255 pass
256
256
257 def _cmdline(self, cmd, closestdin, *args, **kwargs):
257 def _cmdline(self, cmd, closestdin, *args, **kwargs):
258 cmdline = [self.command, cmd] + list(args)
258 cmdline = [self.command, cmd] + list(args)
259 for k, v in kwargs.iteritems():
259 for k, v in kwargs.iteritems():
260 if len(k) == 1:
260 if len(k) == 1:
261 cmdline.append('-' + k)
261 cmdline.append('-' + k)
262 else:
262 else:
263 cmdline.append('--' + k.replace('_', '-'))
263 cmdline.append('--' + k.replace('_', '-'))
264 try:
264 try:
265 if len(k) == 1:
265 if len(k) == 1:
266 cmdline.append('' + v)
266 cmdline.append('' + v)
267 else:
267 else:
268 cmdline[-1] += '=' + v
268 cmdline[-1] += '=' + v
269 except TypeError:
269 except TypeError:
270 pass
270 pass
271 cmdline = [util.shellquote(arg) for arg in cmdline]
271 cmdline = [util.shellquote(arg) for arg in cmdline]
272 if not self.ui.debugflag:
272 if not self.ui.debugflag:
273 cmdline += ['2>', util.nulldev]
273 cmdline += ['2>', util.nulldev]
274 if closestdin:
274 if closestdin:
275 cmdline += ['<', util.nulldev]
275 cmdline += ['<', util.nulldev]
276 cmdline = ' '.join(cmdline)
276 cmdline = ' '.join(cmdline)
277 return cmdline
277 return cmdline
278
278
279 def _run(self, cmd, *args, **kwargs):
279 def _run(self, cmd, *args, **kwargs):
280 return self._dorun(util.popen, cmd, True, *args, **kwargs)
280 return self._dorun(util.popen, cmd, True, *args, **kwargs)
281
281
282 def _run2(self, cmd, *args, **kwargs):
282 def _run2(self, cmd, *args, **kwargs):
283 return self._dorun(util.popen2, cmd, False, *args, **kwargs)
283 return self._dorun(util.popen2, cmd, False, *args, **kwargs)
284
284
285 def _dorun(self, openfunc, cmd, closestdin, *args, **kwargs):
285 def _dorun(self, openfunc, cmd, closestdin, *args, **kwargs):
286 cmdline = self._cmdline(cmd, closestdin, *args, **kwargs)
286 cmdline = self._cmdline(cmd, closestdin, *args, **kwargs)
287 self.ui.debug('running: %s\n' % (cmdline,))
287 self.ui.debug('running: %s\n' % (cmdline,))
288 self.prerun()
288 self.prerun()
289 try:
289 try:
290 return openfunc(cmdline)
290 return openfunc(cmdline)
291 finally:
291 finally:
292 self.postrun()
292 self.postrun()
293
293
294 def run(self, cmd, *args, **kwargs):
294 def run(self, cmd, *args, **kwargs):
295 fp = self._run(cmd, *args, **kwargs)
295 fp = self._run(cmd, *args, **kwargs)
296 output = fp.read()
296 output = fp.read()
297 self.ui.debug(output)
297 self.ui.debug(output)
298 return output, fp.close()
298 return output, fp.close()
299
299
300 def runlines(self, cmd, *args, **kwargs):
300 def runlines(self, cmd, *args, **kwargs):
301 fp = self._run(cmd, *args, **kwargs)
301 fp = self._run(cmd, *args, **kwargs)
302 output = fp.readlines()
302 output = fp.readlines()
303 self.ui.debug(''.join(output))
303 self.ui.debug(''.join(output))
304 return output, fp.close()
304 return output, fp.close()
305
305
306 def checkexit(self, status, output=''):
306 def checkexit(self, status, output=''):
307 if status:
307 if status:
308 if output:
308 if output:
309 self.ui.warn(_('%s error:\n') % self.command)
309 self.ui.warn(_('%s error:\n') % self.command)
310 self.ui.warn(output)
310 self.ui.warn(output)
311 msg = util.explain_exit(status)[0]
311 msg = util.explainexit(status)[0]
312 raise util.Abort('%s %s' % (self.command, msg))
312 raise util.Abort('%s %s' % (self.command, msg))
313
313
314 def run0(self, cmd, *args, **kwargs):
314 def run0(self, cmd, *args, **kwargs):
315 output, status = self.run(cmd, *args, **kwargs)
315 output, status = self.run(cmd, *args, **kwargs)
316 self.checkexit(status, output)
316 self.checkexit(status, output)
317 return output
317 return output
318
318
319 def runlines0(self, cmd, *args, **kwargs):
319 def runlines0(self, cmd, *args, **kwargs):
320 output, status = self.runlines(cmd, *args, **kwargs)
320 output, status = self.runlines(cmd, *args, **kwargs)
321 self.checkexit(status, ''.join(output))
321 self.checkexit(status, ''.join(output))
322 return output
322 return output
323
323
324 def getargmax(self):
324 def getargmax(self):
325 if '_argmax' in self.__dict__:
325 if '_argmax' in self.__dict__:
326 return self._argmax
326 return self._argmax
327
327
328 # POSIX requires at least 4096 bytes for ARG_MAX
328 # POSIX requires at least 4096 bytes for ARG_MAX
329 self._argmax = 4096
329 self._argmax = 4096
330 try:
330 try:
331 self._argmax = os.sysconf("SC_ARG_MAX")
331 self._argmax = os.sysconf("SC_ARG_MAX")
332 except:
332 except:
333 pass
333 pass
334
334
335 # Windows shells impose their own limits on command line length,
335 # Windows shells impose their own limits on command line length,
336 # down to 2047 bytes for cmd.exe under Windows NT/2k and 2500 bytes
336 # down to 2047 bytes for cmd.exe under Windows NT/2k and 2500 bytes
337 # for older 4nt.exe. See http://support.microsoft.com/kb/830473 for
337 # for older 4nt.exe. See http://support.microsoft.com/kb/830473 for
338 # details about cmd.exe limitations.
338 # details about cmd.exe limitations.
339
339
340 # Since ARG_MAX is for command line _and_ environment, lower our limit
340 # Since ARG_MAX is for command line _and_ environment, lower our limit
341 # (and make happy Windows shells while doing this).
341 # (and make happy Windows shells while doing this).
342
342
343 self._argmax = self._argmax / 2 - 1
343 self._argmax = self._argmax / 2 - 1
344 return self._argmax
344 return self._argmax
345
345
346 def limit_arglist(self, arglist, cmd, closestdin, *args, **kwargs):
346 def limit_arglist(self, arglist, cmd, closestdin, *args, **kwargs):
347 cmdlen = len(self._cmdline(cmd, closestdin, *args, **kwargs))
347 cmdlen = len(self._cmdline(cmd, closestdin, *args, **kwargs))
348 limit = self.getargmax() - cmdlen
348 limit = self.getargmax() - cmdlen
349 bytes = 0
349 bytes = 0
350 fl = []
350 fl = []
351 for fn in arglist:
351 for fn in arglist:
352 b = len(fn) + 3
352 b = len(fn) + 3
353 if bytes + b < limit or len(fl) == 0:
353 if bytes + b < limit or len(fl) == 0:
354 fl.append(fn)
354 fl.append(fn)
355 bytes += b
355 bytes += b
356 else:
356 else:
357 yield fl
357 yield fl
358 fl = [fn]
358 fl = [fn]
359 bytes = b
359 bytes = b
360 if fl:
360 if fl:
361 yield fl
361 yield fl
362
362
363 def xargs(self, arglist, cmd, *args, **kwargs):
363 def xargs(self, arglist, cmd, *args, **kwargs):
364 for l in self.limit_arglist(arglist, cmd, True, *args, **kwargs):
364 for l in self.limit_arglist(arglist, cmd, True, *args, **kwargs):
365 self.run0(cmd, *(list(args) + l), **kwargs)
365 self.run0(cmd, *(list(args) + l), **kwargs)
366
366
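commandline.xargs() batches long argument lists so that each spawned command stays under the ARG_MAX estimate computed in getargmax(). Stripped of the class plumbing, the batching idea looks roughly like this (a sketch; the per-argument overhead of 3 bytes mirrors the '+ 3' in limit_arglist above):

    def batch_args(arglist, limit):
        """Yield sublists whose summed cost stays under 'limit' (sketch)."""
        batch, used = [], 0
        for arg in arglist:
            cost = len(arg) + 3       # argument plus quoting/separator slack
            if batch and used + cost >= limit:
                yield batch
                batch, used = [], 0
            batch.append(arg)
            used += cost
        if batch:
            yield batch

checkexit() above then turns a non-zero exit status into util.Abort, taking the human-readable description from the first element returned by util.explainexit() — the function whose rename from explain_exit is visible in this hunk.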
367 class mapfile(dict):
367 class mapfile(dict):
368 def __init__(self, ui, path):
368 def __init__(self, ui, path):
369 super(mapfile, self).__init__()
369 super(mapfile, self).__init__()
370 self.ui = ui
370 self.ui = ui
371 self.path = path
371 self.path = path
372 self.fp = None
372 self.fp = None
373 self.order = []
373 self.order = []
374 self._read()
374 self._read()
375
375
376 def _read(self):
376 def _read(self):
377 if not self.path:
377 if not self.path:
378 return
378 return
379 try:
379 try:
380 fp = open(self.path, 'r')
380 fp = open(self.path, 'r')
381 except IOError, err:
381 except IOError, err:
382 if err.errno != errno.ENOENT:
382 if err.errno != errno.ENOENT:
383 raise
383 raise
384 return
384 return
385 for i, line in enumerate(fp):
385 for i, line in enumerate(fp):
386 try:
386 try:
387 key, value = line.splitlines()[0].rsplit(' ', 1)
387 key, value = line.splitlines()[0].rsplit(' ', 1)
388 except ValueError:
388 except ValueError:
389 raise util.Abort(
389 raise util.Abort(
390 _('syntax error in %s(%d): key/value pair expected')
390 _('syntax error in %s(%d): key/value pair expected')
391 % (self.path, i + 1))
391 % (self.path, i + 1))
392 if key not in self:
392 if key not in self:
393 self.order.append(key)
393 self.order.append(key)
394 super(mapfile, self).__setitem__(key, value)
394 super(mapfile, self).__setitem__(key, value)
395 fp.close()
395 fp.close()
396
396
397 def __setitem__(self, key, value):
397 def __setitem__(self, key, value):
398 if self.fp is None:
398 if self.fp is None:
399 try:
399 try:
400 self.fp = open(self.path, 'a')
400 self.fp = open(self.path, 'a')
401 except IOError, err:
401 except IOError, err:
402 raise util.Abort(_('could not open map file %r: %s') %
402 raise util.Abort(_('could not open map file %r: %s') %
403 (self.path, err.strerror))
403 (self.path, err.strerror))
404 self.fp.write('%s %s\n' % (key, value))
404 self.fp.write('%s %s\n' % (key, value))
405 self.fp.flush()
405 self.fp.flush()
406 super(mapfile, self).__setitem__(key, value)
406 super(mapfile, self).__setitem__(key, value)
407
407
408 def close(self):
408 def close(self):
409 if self.fp:
409 if self.fp:
410 self.fp.close()
410 self.fp.close()
411 self.fp = None
411 self.fp = None
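mapfile persists a simple append-only "key value" text file, flushing after every write so that an interrupted conversion can pick up where it left off. A hypothetical round trip over such a file ('revmap.txt' is a made-up name):

    # append one mapping, as __setitem__ does
    fp = open('revmap.txt', 'a')
    fp.write('%s %s\n' % ('sourcerev123', 'sinkrevabc'))
    fp.flush()
    fp.close()

    # reload it, as _read() does: later entries override earlier ones
    revmap = {}
    for line in open('revmap.txt'):
        key, value = line.splitlines()[0].rsplit(' ', 1)
        revmap[key] = value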
@@ -1,159 +1,159 @@
1 # hook.py - hook support for mercurial
1 # hook.py - hook support for mercurial
2 #
2 #
3 # Copyright 2007 Matt Mackall <mpm@selenic.com>
3 # Copyright 2007 Matt Mackall <mpm@selenic.com>
4 #
4 #
5 # This software may be used and distributed according to the terms of the
5 # This software may be used and distributed according to the terms of the
6 # GNU General Public License version 2 or any later version.
6 # GNU General Public License version 2 or any later version.
7
7
8 from i18n import _
8 from i18n import _
9 import os, sys
9 import os, sys
10 import extensions, util
10 import extensions, util
11
11
12 def _pythonhook(ui, repo, name, hname, funcname, args, throw):
12 def _pythonhook(ui, repo, name, hname, funcname, args, throw):
13 '''call python hook. hook is callable object, looked up as
13 '''call python hook. hook is callable object, looked up as
14 name in python module. if callable returns "true", hook
14 name in python module. if callable returns "true", hook
15 fails, else passes. if hook raises exception, treated as
15 fails, else passes. if hook raises exception, treated as
16 hook failure. exception propagates if throw is "true".
16 hook failure. exception propagates if throw is "true".
17
17
18 reason for "true" meaning "hook failed" is so that
18 reason for "true" meaning "hook failed" is so that
19 unmodified commands (e.g. mercurial.commands.update) can
19 unmodified commands (e.g. mercurial.commands.update) can
20 be run as hooks without wrappers to convert return values.'''
20 be run as hooks without wrappers to convert return values.'''
21
21
22 ui.note(_("calling hook %s: %s\n") % (hname, funcname))
22 ui.note(_("calling hook %s: %s\n") % (hname, funcname))
23 obj = funcname
23 obj = funcname
24 if not hasattr(obj, '__call__'):
24 if not hasattr(obj, '__call__'):
25 d = funcname.rfind('.')
25 d = funcname.rfind('.')
26 if d == -1:
26 if d == -1:
27 raise util.Abort(_('%s hook is invalid ("%s" not in '
27 raise util.Abort(_('%s hook is invalid ("%s" not in '
28 'a module)') % (hname, funcname))
28 'a module)') % (hname, funcname))
29 modname = funcname[:d]
29 modname = funcname[:d]
30 oldpaths = sys.path
30 oldpaths = sys.path
31 if hasattr(sys, "frozen"):
31 if hasattr(sys, "frozen"):
32 # binary installs require sys.path manipulation
32 # binary installs require sys.path manipulation
33 modpath, modfile = os.path.split(modname)
33 modpath, modfile = os.path.split(modname)
34 if modpath and modfile:
34 if modpath and modfile:
35 sys.path = sys.path[:] + [modpath]
35 sys.path = sys.path[:] + [modpath]
36 modname = modfile
36 modname = modfile
37 try:
37 try:
38 obj = __import__(modname)
38 obj = __import__(modname)
39 except ImportError:
39 except ImportError:
40 e1 = sys.exc_type, sys.exc_value, sys.exc_traceback
40 e1 = sys.exc_type, sys.exc_value, sys.exc_traceback
41 try:
41 try:
42 # extensions are loaded with hgext_ prefix
42 # extensions are loaded with hgext_ prefix
43 obj = __import__("hgext_%s" % modname)
43 obj = __import__("hgext_%s" % modname)
44 except ImportError:
44 except ImportError:
45 e2 = sys.exc_type, sys.exc_value, sys.exc_traceback
45 e2 = sys.exc_type, sys.exc_value, sys.exc_traceback
46 if ui.tracebackflag:
46 if ui.tracebackflag:
47 ui.warn(_('exception from first failed import attempt:\n'))
47 ui.warn(_('exception from first failed import attempt:\n'))
48 ui.traceback(e1)
48 ui.traceback(e1)
49 if ui.tracebackflag:
49 if ui.tracebackflag:
50 ui.warn(_('exception from second failed import attempt:\n'))
50 ui.warn(_('exception from second failed import attempt:\n'))
51 ui.traceback(e2)
51 ui.traceback(e2)
52 raise util.Abort(_('%s hook is invalid '
52 raise util.Abort(_('%s hook is invalid '
53 '(import of "%s" failed)') %
53 '(import of "%s" failed)') %
54 (hname, modname))
54 (hname, modname))
55 sys.path = oldpaths
55 sys.path = oldpaths
56 try:
56 try:
57 for p in funcname.split('.')[1:]:
57 for p in funcname.split('.')[1:]:
58 obj = getattr(obj, p)
58 obj = getattr(obj, p)
59 except AttributeError:
59 except AttributeError:
60 raise util.Abort(_('%s hook is invalid '
60 raise util.Abort(_('%s hook is invalid '
61 '("%s" is not defined)') %
61 '("%s" is not defined)') %
62 (hname, funcname))
62 (hname, funcname))
63 if not hasattr(obj, '__call__'):
63 if not hasattr(obj, '__call__'):
64 raise util.Abort(_('%s hook is invalid '
64 raise util.Abort(_('%s hook is invalid '
65 '("%s" is not callable)') %
65 '("%s" is not callable)') %
66 (hname, funcname))
66 (hname, funcname))
67 try:
67 try:
68 r = obj(ui=ui, repo=repo, hooktype=name, **args)
68 r = obj(ui=ui, repo=repo, hooktype=name, **args)
69 except KeyboardInterrupt:
69 except KeyboardInterrupt:
70 raise
70 raise
71 except Exception, exc:
71 except Exception, exc:
72 if isinstance(exc, util.Abort):
72 if isinstance(exc, util.Abort):
73 ui.warn(_('error: %s hook failed: %s\n') %
73 ui.warn(_('error: %s hook failed: %s\n') %
74 (hname, exc.args[0]))
74 (hname, exc.args[0]))
75 else:
75 else:
76 ui.warn(_('error: %s hook raised an exception: '
76 ui.warn(_('error: %s hook raised an exception: '
77 '%s\n') % (hname, exc))
77 '%s\n') % (hname, exc))
78 if throw:
78 if throw:
79 raise
79 raise
80 ui.traceback()
80 ui.traceback()
81 return True
81 return True
82 if r:
82 if r:
83 if throw:
83 if throw:
84 raise util.Abort(_('%s hook failed') % hname)
84 raise util.Abort(_('%s hook failed') % hname)
85 ui.warn(_('warning: %s hook failed\n') % hname)
85 ui.warn(_('warning: %s hook failed\n') % hname)
86 return r
86 return r
87
87
88 def _exthook(ui, repo, name, cmd, args, throw):
88 def _exthook(ui, repo, name, cmd, args, throw):
89 ui.note(_("running hook %s: %s\n") % (name, cmd))
89 ui.note(_("running hook %s: %s\n") % (name, cmd))
90
90
91 env = {}
91 env = {}
92 for k, v in args.iteritems():
92 for k, v in args.iteritems():
93 if hasattr(v, '__call__'):
93 if hasattr(v, '__call__'):
94 v = v()
94 v = v()
95 if isinstance(v, dict):
95 if isinstance(v, dict):
96 # make the dictionary element order stable across Python
96 # make the dictionary element order stable across Python
97 # implementations
97 # implementations
98 v = ('{' +
98 v = ('{' +
99 ', '.join('%r: %r' % i for i in sorted(v.iteritems())) +
99 ', '.join('%r: %r' % i for i in sorted(v.iteritems())) +
100 '}')
100 '}')
101 env['HG_' + k.upper()] = v
101 env['HG_' + k.upper()] = v
102
102
103 if repo:
103 if repo:
104 cwd = repo.root
104 cwd = repo.root
105 else:
105 else:
106 cwd = os.getcwd()
106 cwd = os.getcwd()
107 if 'HG_URL' in env and env['HG_URL'].startswith('remote:http'):
107 if 'HG_URL' in env and env['HG_URL'].startswith('remote:http'):
108 r = util.system(cmd, environ=env, cwd=cwd, out=ui)
108 r = util.system(cmd, environ=env, cwd=cwd, out=ui)
109 else:
109 else:
110 r = util.system(cmd, environ=env, cwd=cwd)
110 r = util.system(cmd, environ=env, cwd=cwd)
111 if r:
111 if r:
112 desc, r = util.explain_exit(r)
112 desc, r = util.explainexit(r)
113 if throw:
113 if throw:
114 raise util.Abort(_('%s hook %s') % (name, desc))
114 raise util.Abort(_('%s hook %s') % (name, desc))
115 ui.warn(_('warning: %s hook %s\n') % (name, desc))
115 ui.warn(_('warning: %s hook %s\n') % (name, desc))
116 return r
116 return r
117
117
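_exthook() exposes the hook arguments to the external command as HG_* environment variables, flattening dictionaries into a deterministically ordered string. The same idea as a standalone sketch (callable() stands in for the hasattr check used above):

    def hookenv(args):
        """Build the HG_* environment a shell hook would see (sketch)."""
        env = {}
        for k, v in args.items():
            if callable(v):
                v = v()                  # lazily computed values
            if isinstance(v, dict):
                # stable element order across Python implementations
                v = ('{' +
                     ', '.join('%r: %r' % i for i in sorted(v.items())) +
                     '}')
            env['HG_' + k.upper()] = v
        return env

    hookenv({'source': 'push'})          # -> {'HG_SOURCE': 'push'}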
118 _redirect = False
118 _redirect = False
119 def redirect(state):
119 def redirect(state):
120 global _redirect
120 global _redirect
121 _redirect = state
121 _redirect = state
122
122
123 def hook(ui, repo, name, throw=False, **args):
123 def hook(ui, repo, name, throw=False, **args):
124 r = False
124 r = False
125
125
126 oldstdout = -1
126 oldstdout = -1
127 if _redirect:
127 if _redirect:
128 stdoutno = sys.__stdout__.fileno()
128 stdoutno = sys.__stdout__.fileno()
129 stderrno = sys.__stderr__.fileno()
129 stderrno = sys.__stderr__.fileno()
130 # temporarily redirect stdout to stderr, if possible
130 # temporarily redirect stdout to stderr, if possible
131 if stdoutno >= 0 and stderrno >= 0:
131 if stdoutno >= 0 and stderrno >= 0:
132 oldstdout = os.dup(stdoutno)
132 oldstdout = os.dup(stdoutno)
133 os.dup2(stderrno, stdoutno)
133 os.dup2(stderrno, stdoutno)
134
134
135 try:
135 try:
136 for hname, cmd in ui.configitems('hooks'):
136 for hname, cmd in ui.configitems('hooks'):
137 if hname.split('.')[0] != name or not cmd:
137 if hname.split('.')[0] != name or not cmd:
138 continue
138 continue
139 if hasattr(cmd, '__call__'):
139 if hasattr(cmd, '__call__'):
140 r = _pythonhook(ui, repo, name, hname, cmd, args, throw) or r
140 r = _pythonhook(ui, repo, name, hname, cmd, args, throw) or r
141 elif cmd.startswith('python:'):
141 elif cmd.startswith('python:'):
142 if cmd.count(':') >= 2:
142 if cmd.count(':') >= 2:
143 path, cmd = cmd[7:].rsplit(':', 1)
143 path, cmd = cmd[7:].rsplit(':', 1)
144 path = util.expandpath(path)
144 path = util.expandpath(path)
145 if repo:
145 if repo:
146 path = os.path.join(repo.root, path)
146 path = os.path.join(repo.root, path)
147 mod = extensions.loadpath(path, 'hghook.%s' % hname)
147 mod = extensions.loadpath(path, 'hghook.%s' % hname)
148 hookfn = getattr(mod, cmd)
148 hookfn = getattr(mod, cmd)
149 else:
149 else:
150 hookfn = cmd[7:].strip()
150 hookfn = cmd[7:].strip()
151 r = _pythonhook(ui, repo, name, hname, hookfn, args, throw) or r
151 r = _pythonhook(ui, repo, name, hname, hookfn, args, throw) or r
152 else:
152 else:
153 r = _exthook(ui, repo, hname, cmd, args, throw) or r
153 r = _exthook(ui, repo, hname, cmd, args, throw) or r
154 finally:
154 finally:
155 if _redirect and oldstdout >= 0:
155 if _redirect and oldstdout >= 0:
156 os.dup2(oldstdout, stdoutno)
156 os.dup2(oldstdout, stdoutno)
157 os.close(oldstdout)
157 os.close(oldstdout)
158
158
159 return r
159 return r
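hook() dispatches each matching [hooks] entry in one of three ways: an already-callable object runs in-process, a 'python:' prefixed string is resolved to a Python function (optionally loaded from a file when a second ':' is present), and anything else is handed to _exthook() as a shell command. A small sketch of that decision:

    def classify_hook(cmd):
        """Mirror hook()'s dispatch decision for one [hooks] value (sketch)."""
        if callable(cmd):
            return 'in-process python hook'
        if cmd.startswith('python:'):
            spec = cmd[7:]
            if ':' in spec:
                path, func = spec.rsplit(':', 1)
                return 'python hook %s loaded from %s' % (func, path)
            return 'python hook resolved by dotted name %s' % spec
        return 'external command run through the shell'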
@@ -1,233 +1,233 @@
1 # mail.py - mail sending bits for mercurial
1 # mail.py - mail sending bits for mercurial
2 #
2 #
3 # Copyright 2006 Matt Mackall <mpm@selenic.com>
3 # Copyright 2006 Matt Mackall <mpm@selenic.com>
4 #
4 #
5 # This software may be used and distributed according to the terms of the
5 # This software may be used and distributed according to the terms of the
6 # GNU General Public License version 2 or any later version.
6 # GNU General Public License version 2 or any later version.
7
7
8 from i18n import _
8 from i18n import _
9 import util, encoding
9 import util, encoding
10 import os, smtplib, socket, quopri
10 import os, smtplib, socket, quopri
11 import email.Header, email.MIMEText, email.Utils
11 import email.Header, email.MIMEText, email.Utils
12
12
13 _oldheaderinit = email.Header.Header.__init__
13 _oldheaderinit = email.Header.Header.__init__
14 def _unifiedheaderinit(self, *args, **kw):
14 def _unifiedheaderinit(self, *args, **kw):
15 """
15 """
16 Python2.7 introduces a backwards incompatible change
16 Python2.7 introduces a backwards incompatible change
17 (Python issue1974, r70772) in email.Generator.Generator code:
17 (Python issue1974, r70772) in email.Generator.Generator code:
18 pre-2.7 code passed "continuation_ws='\t'" to the Header
18 pre-2.7 code passed "continuation_ws='\t'" to the Header
19 constructor, and 2.7 removed this parameter.
19 constructor, and 2.7 removed this parameter.
20
20
21 Default argument is continuation_ws=' ', which means that the
21 Default argument is continuation_ws=' ', which means that the
22 behaviour is different in <2.7 and 2.7
22 behaviour is different in <2.7 and 2.7
23
23
24 We consider the 2.7 behaviour to be preferable, but need
24 We consider the 2.7 behaviour to be preferable, but need
25 to have a unified behaviour for versions 2.4 to 2.7
25 to have a unified behaviour for versions 2.4 to 2.7
26 """
26 """
27 # override continuation_ws
27 # override continuation_ws
28 kw['continuation_ws'] = ' '
28 kw['continuation_ws'] = ' '
29 _oldheaderinit(self, *args, **kw)
29 _oldheaderinit(self, *args, **kw)
30
30
31 email.Header.Header.__dict__['__init__'] = _unifiedheaderinit
31 email.Header.Header.__dict__['__init__'] = _unifiedheaderinit
32
32
33 def _smtp(ui):
33 def _smtp(ui):
34 '''build an smtp connection and return a function to send mail'''
34 '''build an smtp connection and return a function to send mail'''
35 local_hostname = ui.config('smtp', 'local_hostname')
35 local_hostname = ui.config('smtp', 'local_hostname')
36 tls = ui.config('smtp', 'tls', 'none')
36 tls = ui.config('smtp', 'tls', 'none')
37 # backward compatible: when tls = true, we use starttls.
37 # backward compatible: when tls = true, we use starttls.
38 starttls = tls == 'starttls' or util.parsebool(tls)
38 starttls = tls == 'starttls' or util.parsebool(tls)
39 smtps = tls == 'smtps'
39 smtps = tls == 'smtps'
40 if (starttls or smtps) and not hasattr(socket, 'ssl'):
40 if (starttls or smtps) and not hasattr(socket, 'ssl'):
41 raise util.Abort(_("can't use TLS: Python SSL support not installed"))
41 raise util.Abort(_("can't use TLS: Python SSL support not installed"))
42 if smtps:
42 if smtps:
43 ui.note(_('(using smtps)\n'))
43 ui.note(_('(using smtps)\n'))
44 s = smtplib.SMTP_SSL(local_hostname=local_hostname)
44 s = smtplib.SMTP_SSL(local_hostname=local_hostname)
45 else:
45 else:
46 s = smtplib.SMTP(local_hostname=local_hostname)
46 s = smtplib.SMTP(local_hostname=local_hostname)
47 mailhost = ui.config('smtp', 'host')
47 mailhost = ui.config('smtp', 'host')
48 if not mailhost:
48 if not mailhost:
49 raise util.Abort(_('smtp.host not configured - cannot send mail'))
49 raise util.Abort(_('smtp.host not configured - cannot send mail'))
50 mailport = util.getport(ui.config('smtp', 'port', 25))
50 mailport = util.getport(ui.config('smtp', 'port', 25))
51 ui.note(_('sending mail: smtp host %s, port %s\n') %
51 ui.note(_('sending mail: smtp host %s, port %s\n') %
52 (mailhost, mailport))
52 (mailhost, mailport))
53 s.connect(host=mailhost, port=mailport)
53 s.connect(host=mailhost, port=mailport)
54 if starttls:
54 if starttls:
55 ui.note(_('(using starttls)\n'))
55 ui.note(_('(using starttls)\n'))
56 s.ehlo()
56 s.ehlo()
57 s.starttls()
57 s.starttls()
58 s.ehlo()
58 s.ehlo()
59 username = ui.config('smtp', 'username')
59 username = ui.config('smtp', 'username')
60 password = ui.config('smtp', 'password')
60 password = ui.config('smtp', 'password')
61 if username and not password:
61 if username and not password:
62 password = ui.getpass()
62 password = ui.getpass()
63 if username and password:
63 if username and password:
64 ui.note(_('(authenticating to mail server as %s)\n') %
64 ui.note(_('(authenticating to mail server as %s)\n') %
65 (username))
65 (username))
66 try:
66 try:
67 s.login(username, password)
67 s.login(username, password)
68 except smtplib.SMTPException, inst:
68 except smtplib.SMTPException, inst:
69 raise util.Abort(inst)
69 raise util.Abort(inst)
70
70
71 def send(sender, recipients, msg):
71 def send(sender, recipients, msg):
72 try:
72 try:
73 return s.sendmail(sender, recipients, msg)
73 return s.sendmail(sender, recipients, msg)
74 except smtplib.SMTPRecipientsRefused, inst:
74 except smtplib.SMTPRecipientsRefused, inst:
75 recipients = [r[1] for r in inst.recipients.values()]
75 recipients = [r[1] for r in inst.recipients.values()]
76 raise util.Abort('\n' + '\n'.join(recipients))
76 raise util.Abort('\n' + '\n'.join(recipients))
77 except smtplib.SMTPException, inst:
77 except smtplib.SMTPException, inst:
78 raise util.Abort(inst)
78 raise util.Abort(inst)
79
79
80 return send
80 return send
81
81
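_smtp() accepts three spellings of the smtp.tls setting: 'smtps' (SSL from the start), 'starttls', and, for backward compatibility, a boolean meaning STARTTLS. The decision can be summarised as follows (a sketch reusing util.parsebool as the code above does):

    def tlsmode(value):
        """Summarise how _smtp() interprets smtp.tls (sketch)."""
        if value == 'smtps':
            return 'SSL-wrapped connection (SMTP_SSL)'
        if value == 'starttls' or util.parsebool(value):
            return 'plain connection, then upgrade via STARTTLS'
        return 'no TLS'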
82 def _sendmail(ui, sender, recipients, msg):
82 def _sendmail(ui, sender, recipients, msg):
83 '''send mail using sendmail.'''
83 '''send mail using sendmail.'''
84 program = ui.config('email', 'method')
84 program = ui.config('email', 'method')
85 cmdline = '%s -f %s %s' % (program, util.email(sender),
85 cmdline = '%s -f %s %s' % (program, util.email(sender),
86 ' '.join(map(util.email, recipients)))
86 ' '.join(map(util.email, recipients)))
87 ui.note(_('sending mail: %s\n') % cmdline)
87 ui.note(_('sending mail: %s\n') % cmdline)
88 fp = util.popen(cmdline, 'w')
88 fp = util.popen(cmdline, 'w')
89 fp.write(msg)
89 fp.write(msg)
90 ret = fp.close()
90 ret = fp.close()
91 if ret:
91 if ret:
92 raise util.Abort('%s %s' % (
92 raise util.Abort('%s %s' % (
93 os.path.basename(program.split(None, 1)[0]),
93 os.path.basename(program.split(None, 1)[0]),
94 util.explain_exit(ret)[0]))
94 util.explainexit(ret)[0]))
95
95
96 def connect(ui):
96 def connect(ui):
97 '''make a mail connection. return a function to send mail.
97 '''make a mail connection. return a function to send mail.
98 call as sendmail(sender, list-of-recipients, msg).'''
98 call as sendmail(sender, list-of-recipients, msg).'''
99 if ui.config('email', 'method', 'smtp') == 'smtp':
99 if ui.config('email', 'method', 'smtp') == 'smtp':
100 return _smtp(ui)
100 return _smtp(ui)
101 return lambda s, r, m: _sendmail(ui, s, r, m)
101 return lambda s, r, m: _sendmail(ui, s, r, m)
102
102
103 def sendmail(ui, sender, recipients, msg):
103 def sendmail(ui, sender, recipients, msg):
104 send = connect(ui)
104 send = connect(ui)
105 return send(sender, recipients, msg)
105 return send(sender, recipients, msg)
106
106
107 def validateconfig(ui):
107 def validateconfig(ui):
108 '''determine if we have enough config data to try sending email.'''
108 '''determine if we have enough config data to try sending email.'''
109 method = ui.config('email', 'method', 'smtp')
109 method = ui.config('email', 'method', 'smtp')
110 if method == 'smtp':
110 if method == 'smtp':
111 if not ui.config('smtp', 'host'):
111 if not ui.config('smtp', 'host'):
112 raise util.Abort(_('smtp specified as email transport, '
112 raise util.Abort(_('smtp specified as email transport, '
113 'but no smtp host configured'))
113 'but no smtp host configured'))
114 else:
114 else:
115 if not util.find_exe(method):
115 if not util.find_exe(method):
116 raise util.Abort(_('%r specified as email transport, '
116 raise util.Abort(_('%r specified as email transport, '
117 'but not in PATH') % method)
117 'but not in PATH') % method)
118
118
119 def mimetextpatch(s, subtype='plain', display=False):
119 def mimetextpatch(s, subtype='plain', display=False):
120 '''If the patch is in utf-8, transfer-encode it.'''
120 '''If the patch is in utf-8, transfer-encode it.'''
121
121
122 enc = None
122 enc = None
123 for line in s.splitlines():
123 for line in s.splitlines():
124 if len(line) > 950:
124 if len(line) > 950:
125 s = quopri.encodestring(s)
125 s = quopri.encodestring(s)
126 enc = "quoted-printable"
126 enc = "quoted-printable"
127 break
127 break
128
128
129 cs = 'us-ascii'
129 cs = 'us-ascii'
130 if not display:
130 if not display:
131 try:
131 try:
132 s.decode('us-ascii')
132 s.decode('us-ascii')
133 except UnicodeDecodeError:
133 except UnicodeDecodeError:
134 try:
134 try:
135 s.decode('utf-8')
135 s.decode('utf-8')
136 cs = 'utf-8'
136 cs = 'utf-8'
137 except UnicodeDecodeError:
137 except UnicodeDecodeError:
138 # We'll go with us-ascii as a fallback.
138 # We'll go with us-ascii as a fallback.
139 pass
139 pass
140
140
141 msg = email.MIMEText.MIMEText(s, subtype, cs)
141 msg = email.MIMEText.MIMEText(s, subtype, cs)
142 if enc:
142 if enc:
143 del msg['Content-Transfer-Encoding']
143 del msg['Content-Transfer-Encoding']
144 msg['Content-Transfer-Encoding'] = enc
144 msg['Content-Transfer-Encoding'] = enc
145 return msg
145 return msg
146
146
147 def _charsets(ui):
147 def _charsets(ui):
148 '''Obtains charsets to send mail parts not containing patches.'''
148 '''Obtains charsets to send mail parts not containing patches.'''
149 charsets = [cs.lower() for cs in ui.configlist('email', 'charsets')]
149 charsets = [cs.lower() for cs in ui.configlist('email', 'charsets')]
150 fallbacks = [encoding.fallbackencoding.lower(),
150 fallbacks = [encoding.fallbackencoding.lower(),
151 encoding.encoding.lower(), 'utf-8']
151 encoding.encoding.lower(), 'utf-8']
152 for cs in fallbacks: # find unique charsets while keeping order
152 for cs in fallbacks: # find unique charsets while keeping order
153 if cs not in charsets:
153 if cs not in charsets:
154 charsets.append(cs)
154 charsets.append(cs)
155 return [cs for cs in charsets if not cs.endswith('ascii')]
155 return [cs for cs in charsets if not cs.endswith('ascii')]
156
156
157 def _encode(ui, s, charsets):
157 def _encode(ui, s, charsets):
158 '''Returns (converted) string, charset tuple.
158 '''Returns (converted) string, charset tuple.
159 Finds out best charset by cycling through sendcharsets in descending
159 Finds out best charset by cycling through sendcharsets in descending
160 order. Tries both encoding and fallbackencoding for input. Only as
160 order. Tries both encoding and fallbackencoding for input. Only as
161 last resort send as is in fake ascii.
161 last resort send as is in fake ascii.
162 Caveat: Do not use for mail parts containing patches!'''
162 Caveat: Do not use for mail parts containing patches!'''
163 try:
163 try:
164 s.decode('ascii')
164 s.decode('ascii')
165 except UnicodeDecodeError:
165 except UnicodeDecodeError:
166 sendcharsets = charsets or _charsets(ui)
166 sendcharsets = charsets or _charsets(ui)
167 for ics in (encoding.encoding, encoding.fallbackencoding):
167 for ics in (encoding.encoding, encoding.fallbackencoding):
168 try:
168 try:
169 u = s.decode(ics)
169 u = s.decode(ics)
170 except UnicodeDecodeError:
170 except UnicodeDecodeError:
171 continue
171 continue
172 for ocs in sendcharsets:
172 for ocs in sendcharsets:
173 try:
173 try:
174 return u.encode(ocs), ocs
174 return u.encode(ocs), ocs
175 except UnicodeEncodeError:
175 except UnicodeEncodeError:
176 pass
176 pass
177 except LookupError:
177 except LookupError:
178 ui.warn(_('ignoring invalid sendcharset: %s\n') % ocs)
178 ui.warn(_('ignoring invalid sendcharset: %s\n') % ocs)
179 # if ascii, or all conversion attempts fail, send (broken) ascii
179 # if ascii, or all conversion attempts fail, send (broken) ascii
180 return s, 'us-ascii'
180 return s, 'us-ascii'
181
181
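_encode() tries progressively weaker options: pure ASCII is sent untouched; otherwise the input is decoded with encoding.encoding and then encoding.fallbackencoding, re-encoded with the first configured send charset that can represent it, and as a last resort the raw bytes go out labelled us-ascii. A self-contained Python 2 sketch of that cascade, with assumed charset lists:

    def pick_charset(s, inputs=('utf-8', 'iso-8859-1'), outputs=('utf-8',)):
        """Minimal stand-in for _encode()'s fallback order (sketch)."""
        try:
            s.decode('ascii')
            return s, 'us-ascii'              # plain ascii: send as is
        except UnicodeDecodeError:
            pass
        for ics in inputs:                    # guess the input encoding
            try:
                u = s.decode(ics)
            except UnicodeDecodeError:
                continue
            for ocs in outputs:               # first send charset that fits
                try:
                    return u.encode(ocs), ocs
                except UnicodeEncodeError:
                    continue
        return s, 'us-ascii'                  # last resort: fake ascii

    pick_charset('caf\xe9')                   # -> ('caf\xc3\xa9', 'utf-8')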
182 def headencode(ui, s, charsets=None, display=False):
182 def headencode(ui, s, charsets=None, display=False):
183 '''Returns RFC-2047 compliant header from given string.'''
183 '''Returns RFC-2047 compliant header from given string.'''
184 if not display:
184 if not display:
185 # split into words?
185 # split into words?
186 s, cs = _encode(ui, s, charsets)
186 s, cs = _encode(ui, s, charsets)
187 return str(email.Header.Header(s, cs))
187 return str(email.Header.Header(s, cs))
188 return s
188 return s
189
189
190 def _addressencode(ui, name, addr, charsets=None):
190 def _addressencode(ui, name, addr, charsets=None):
191 name = headencode(ui, name, charsets)
191 name = headencode(ui, name, charsets)
192 try:
192 try:
193 acc, dom = addr.split('@')
193 acc, dom = addr.split('@')
194 acc = acc.encode('ascii')
194 acc = acc.encode('ascii')
195 dom = dom.decode(encoding.encoding).encode('idna')
195 dom = dom.decode(encoding.encoding).encode('idna')
196 addr = '%s@%s' % (acc, dom)
196 addr = '%s@%s' % (acc, dom)
197 except UnicodeDecodeError:
197 except UnicodeDecodeError:
198 raise util.Abort(_('invalid email address: %s') % addr)
198 raise util.Abort(_('invalid email address: %s') % addr)
199 except ValueError:
199 except ValueError:
200 try:
200 try:
201 # too strict?
201 # too strict?
202 addr = addr.encode('ascii')
202 addr = addr.encode('ascii')
203 except UnicodeDecodeError:
203 except UnicodeDecodeError:
204 raise util.Abort(_('invalid local address: %s') % addr)
204 raise util.Abort(_('invalid local address: %s') % addr)
205 return email.Utils.formataddr((name, addr))
205 return email.Utils.formataddr((name, addr))
206
206
207 def addressencode(ui, address, charsets=None, display=False):
207 def addressencode(ui, address, charsets=None, display=False):
208 '''Turns address into RFC-2047 compliant header.'''
208 '''Turns address into RFC-2047 compliant header.'''
209 if display or not address:
209 if display or not address:
210 return address or ''
210 return address or ''
211 name, addr = email.Utils.parseaddr(address)
211 name, addr = email.Utils.parseaddr(address)
212 return _addressencode(ui, name, addr, charsets)
212 return _addressencode(ui, name, addr, charsets)
213
213
214 def addrlistencode(ui, addrs, charsets=None, display=False):
214 def addrlistencode(ui, addrs, charsets=None, display=False):
215 '''Turns a list of addresses into a list of RFC-2047 compliant headers.
215 '''Turns a list of addresses into a list of RFC-2047 compliant headers.
216 A single element of input list may contain multiple addresses, but output
216 A single element of input list may contain multiple addresses, but output
217 always has one address per item'''
217 always has one address per item'''
218 if display:
218 if display:
219 return [a.strip() for a in addrs if a.strip()]
219 return [a.strip() for a in addrs if a.strip()]
220
220
221 result = []
221 result = []
222 for name, addr in email.Utils.getaddresses(addrs):
222 for name, addr in email.Utils.getaddresses(addrs):
223 if name or addr:
223 if name or addr:
224 result.append(_addressencode(ui, name, addr, charsets))
224 result.append(_addressencode(ui, name, addr, charsets))
225 return result
225 return result
226
226
227 def mimeencode(ui, s, charsets=None, display=False):
227 def mimeencode(ui, s, charsets=None, display=False):
228 '''creates mime text object, encodes it if needed, and sets
228 '''creates mime text object, encodes it if needed, and sets
229 charset and transfer-encoding accordingly.'''
229 charset and transfer-encoding accordingly.'''
230 cs = 'us-ascii'
230 cs = 'us-ascii'
231 if not display:
231 if not display:
232 s, cs = _encode(ui, s, charsets)
232 s, cs = _encode(ui, s, charsets)
233 return email.MIMEText.MIMEText(s, 'plain', cs)
233 return email.MIMEText.MIMEText(s, 'plain', cs)
@@ -1,1618 +1,1618 @@
1 # patch.py - patch file parsing routines
1 # patch.py - patch file parsing routines
2 #
2 #
3 # Copyright 2006 Brendan Cully <brendan@kublai.com>
3 # Copyright 2006 Brendan Cully <brendan@kublai.com>
4 # Copyright 2007 Chris Mason <chris.mason@oracle.com>
4 # Copyright 2007 Chris Mason <chris.mason@oracle.com>
5 #
5 #
6 # This software may be used and distributed according to the terms of the
6 # This software may be used and distributed according to the terms of the
7 # GNU General Public License version 2 or any later version.
7 # GNU General Public License version 2 or any later version.
8
8
9 import cStringIO, email.Parser, os, errno, re
9 import cStringIO, email.Parser, os, errno, re
10 import tempfile, zlib
10 import tempfile, zlib
11
11
12 from i18n import _
12 from i18n import _
13 from node import hex, nullid, short
13 from node import hex, nullid, short
14 import base85, mdiff, scmutil, util, diffhelpers, copies, encoding
14 import base85, mdiff, scmutil, util, diffhelpers, copies, encoding
15
15
16 gitre = re.compile('diff --git a/(.*) b/(.*)')
16 gitre = re.compile('diff --git a/(.*) b/(.*)')
17
17
18 class PatchError(Exception):
18 class PatchError(Exception):
19 pass
19 pass
20
20
21 # helper functions
21 # helper functions
22
22
23 def copyfile(src, dst, basedir):
23 def copyfile(src, dst, basedir):
24 abssrc, absdst = [scmutil.canonpath(basedir, basedir, x)
24 abssrc, absdst = [scmutil.canonpath(basedir, basedir, x)
25 for x in [src, dst]]
25 for x in [src, dst]]
26 if os.path.lexists(absdst):
26 if os.path.lexists(absdst):
27 raise util.Abort(_("cannot create %s: destination already exists") %
27 raise util.Abort(_("cannot create %s: destination already exists") %
28 dst)
28 dst)
29
29
30 dstdir = os.path.dirname(absdst)
30 dstdir = os.path.dirname(absdst)
31 if dstdir and not os.path.isdir(dstdir):
31 if dstdir and not os.path.isdir(dstdir):
32 try:
32 try:
33 os.makedirs(dstdir)
33 os.makedirs(dstdir)
34 except IOError:
34 except IOError:
35 raise util.Abort(
35 raise util.Abort(
36 _("cannot create %s: unable to create destination directory")
36 _("cannot create %s: unable to create destination directory")
37 % dst)
37 % dst)
38
38
39 util.copyfile(abssrc, absdst)
39 util.copyfile(abssrc, absdst)
40
40
41 # public functions
41 # public functions
42
42
43 def split(stream):
43 def split(stream):
44 '''return an iterator of individual patches from a stream'''
44 '''return an iterator of individual patches from a stream'''
45 def isheader(line, inheader):
45 def isheader(line, inheader):
46 if inheader and line[0] in (' ', '\t'):
46 if inheader and line[0] in (' ', '\t'):
47 # continuation
47 # continuation
48 return True
48 return True
49 if line[0] in (' ', '-', '+'):
49 if line[0] in (' ', '-', '+'):
50 # diff line - don't check for header pattern in there
50 # diff line - don't check for header pattern in there
51 return False
51 return False
52 l = line.split(': ', 1)
52 l = line.split(': ', 1)
53 return len(l) == 2 and ' ' not in l[0]
53 return len(l) == 2 and ' ' not in l[0]
54
54
55 def chunk(lines):
55 def chunk(lines):
56 return cStringIO.StringIO(''.join(lines))
56 return cStringIO.StringIO(''.join(lines))
57
57
58 def hgsplit(stream, cur):
58 def hgsplit(stream, cur):
59 inheader = True
59 inheader = True
60
60
61 for line in stream:
61 for line in stream:
62 if not line.strip():
62 if not line.strip():
63 inheader = False
63 inheader = False
64 if not inheader and line.startswith('# HG changeset patch'):
64 if not inheader and line.startswith('# HG changeset patch'):
65 yield chunk(cur)
65 yield chunk(cur)
66 cur = []
66 cur = []
67 inheader = True
67 inheader = True
68
68
69 cur.append(line)
69 cur.append(line)
70
70
71 if cur:
71 if cur:
72 yield chunk(cur)
72 yield chunk(cur)
73
73
74 def mboxsplit(stream, cur):
74 def mboxsplit(stream, cur):
75 for line in stream:
75 for line in stream:
76 if line.startswith('From '):
76 if line.startswith('From '):
77 for c in split(chunk(cur[1:])):
77 for c in split(chunk(cur[1:])):
78 yield c
78 yield c
79 cur = []
79 cur = []
80
80
81 cur.append(line)
81 cur.append(line)
82
82
83 if cur:
83 if cur:
84 for c in split(chunk(cur[1:])):
84 for c in split(chunk(cur[1:])):
85 yield c
85 yield c
86
86
87 def mimesplit(stream, cur):
87 def mimesplit(stream, cur):
88 def msgfp(m):
88 def msgfp(m):
89 fp = cStringIO.StringIO()
89 fp = cStringIO.StringIO()
90 g = email.Generator.Generator(fp, mangle_from_=False)
90 g = email.Generator.Generator(fp, mangle_from_=False)
91 g.flatten(m)
91 g.flatten(m)
92 fp.seek(0)
92 fp.seek(0)
93 return fp
93 return fp
94
94
95 for line in stream:
95 for line in stream:
96 cur.append(line)
96 cur.append(line)
97 c = chunk(cur)
97 c = chunk(cur)
98
98
99 m = email.Parser.Parser().parse(c)
99 m = email.Parser.Parser().parse(c)
100 if not m.is_multipart():
100 if not m.is_multipart():
101 yield msgfp(m)
101 yield msgfp(m)
102 else:
102 else:
103 ok_types = ('text/plain', 'text/x-diff', 'text/x-patch')
103 ok_types = ('text/plain', 'text/x-diff', 'text/x-patch')
104 for part in m.walk():
104 for part in m.walk():
105 ct = part.get_content_type()
105 ct = part.get_content_type()
106 if ct not in ok_types:
106 if ct not in ok_types:
107 continue
107 continue
108 yield msgfp(part)
108 yield msgfp(part)
109
109
110 def headersplit(stream, cur):
110 def headersplit(stream, cur):
111 inheader = False
111 inheader = False
112
112
113 for line in stream:
113 for line in stream:
114 if not inheader and isheader(line, inheader):
114 if not inheader and isheader(line, inheader):
115 yield chunk(cur)
115 yield chunk(cur)
116 cur = []
116 cur = []
117 inheader = True
117 inheader = True
118 if inheader and not isheader(line, inheader):
118 if inheader and not isheader(line, inheader):
119 inheader = False
119 inheader = False
120
120
121 cur.append(line)
121 cur.append(line)
122
122
123 if cur:
123 if cur:
124 yield chunk(cur)
124 yield chunk(cur)
125
125
126 def remainder(cur):
126 def remainder(cur):
127 yield chunk(cur)
127 yield chunk(cur)
128
128
129 class fiter(object):
129 class fiter(object):
130 def __init__(self, fp):
130 def __init__(self, fp):
131 self.fp = fp
131 self.fp = fp
132
132
133 def __iter__(self):
133 def __iter__(self):
134 return self
134 return self
135
135
136 def next(self):
136 def next(self):
137 l = self.fp.readline()
137 l = self.fp.readline()
138 if not l:
138 if not l:
139 raise StopIteration
139 raise StopIteration
140 return l
140 return l
141
141
142 inheader = False
142 inheader = False
143 cur = []
143 cur = []
144
144
145 mimeheaders = ['content-type']
145 mimeheaders = ['content-type']
146
146
147 if not hasattr(stream, 'next'):
147 if not hasattr(stream, 'next'):
148 # http responses, for example, have readline but not next
148 # http responses, for example, have readline but not next
149 stream = fiter(stream)
149 stream = fiter(stream)
150
150
151 for line in stream:
151 for line in stream:
152 cur.append(line)
152 cur.append(line)
153 if line.startswith('# HG changeset patch'):
153 if line.startswith('# HG changeset patch'):
154 return hgsplit(stream, cur)
154 return hgsplit(stream, cur)
155 elif line.startswith('From '):
155 elif line.startswith('From '):
156 return mboxsplit(stream, cur)
156 return mboxsplit(stream, cur)
157 elif isheader(line, inheader):
157 elif isheader(line, inheader):
158 inheader = True
158 inheader = True
159 if line.split(':', 1)[0].lower() in mimeheaders:
159 if line.split(':', 1)[0].lower() in mimeheaders:
160 # let email parser handle this
160 # let email parser handle this
161 return mimesplit(stream, cur)
161 return mimesplit(stream, cur)
162 elif line.startswith('--- ') and inheader:
162 elif line.startswith('--- ') and inheader:
163 # No evil headers seen by diff start, split by hand
163 # No evil headers seen by diff start, split by hand
164 return headersplit(stream, cur)
164 return headersplit(stream, cur)
165 # Not enough info, keep reading
165 # Not enough info, keep reading
166
166
167 # if we are here, we have a very plain patch
167 # if we are here, we have a very plain patch
168 return remainder(cur)
168 return remainder(cur)
169
169
170 def extract(ui, fileobj):
170 def extract(ui, fileobj):
171 '''extract patch from data read from fileobj.
171 '''extract patch from data read from fileobj.
172
172
173 patch can be a normal patch or contained in an email message.
173 patch can be a normal patch or contained in an email message.
174
174
175 return tuple (filename, message, user, date, branch, node, p1, p2).
175 return tuple (filename, message, user, date, branch, node, p1, p2).
176 Any item in the returned tuple can be None. If filename is None,
176 Any item in the returned tuple can be None. If filename is None,
177 fileobj did not contain a patch. Caller must unlink filename when done.'''
177 fileobj did not contain a patch. Caller must unlink filename when done.'''
178
178
179 # attempt to detect the start of a patch
179 # attempt to detect the start of a patch
180 # (this heuristic is borrowed from quilt)
180 # (this heuristic is borrowed from quilt)
181 diffre = re.compile(r'^(?:Index:[ \t]|diff[ \t]|RCS file: |'
181 diffre = re.compile(r'^(?:Index:[ \t]|diff[ \t]|RCS file: |'
182 r'retrieving revision [0-9]+(\.[0-9]+)*$|'
182 r'retrieving revision [0-9]+(\.[0-9]+)*$|'
183 r'---[ \t].*?^\+\+\+[ \t]|'
183 r'---[ \t].*?^\+\+\+[ \t]|'
184 r'\*\*\*[ \t].*?^---[ \t])', re.MULTILINE|re.DOTALL)
184 r'\*\*\*[ \t].*?^---[ \t])', re.MULTILINE|re.DOTALL)
185
185
186 fd, tmpname = tempfile.mkstemp(prefix='hg-patch-')
186 fd, tmpname = tempfile.mkstemp(prefix='hg-patch-')
187 tmpfp = os.fdopen(fd, 'w')
187 tmpfp = os.fdopen(fd, 'w')
188 try:
188 try:
189 msg = email.Parser.Parser().parse(fileobj)
189 msg = email.Parser.Parser().parse(fileobj)
190
190
191 subject = msg['Subject']
191 subject = msg['Subject']
192 user = msg['From']
192 user = msg['From']
193 if not subject and not user:
193 if not subject and not user:
194 # Not an email, restore parsed headers if any
194 # Not an email, restore parsed headers if any
195 subject = '\n'.join(': '.join(h) for h in msg.items()) + '\n'
195 subject = '\n'.join(': '.join(h) for h in msg.items()) + '\n'
196
196
197 gitsendmail = 'git-send-email' in msg.get('X-Mailer', '')
197 gitsendmail = 'git-send-email' in msg.get('X-Mailer', '')
198 # should try to parse msg['Date']
198 # should try to parse msg['Date']
199 date = None
199 date = None
200 nodeid = None
200 nodeid = None
201 branch = None
201 branch = None
202 parents = []
202 parents = []
203
203
204 if subject:
204 if subject:
205 if subject.startswith('[PATCH'):
205 if subject.startswith('[PATCH'):
206 pend = subject.find(']')
206 pend = subject.find(']')
207 if pend >= 0:
207 if pend >= 0:
208 subject = subject[pend + 1:].lstrip()
208 subject = subject[pend + 1:].lstrip()
209 subject = subject.replace('\n\t', ' ')
209 subject = subject.replace('\n\t', ' ')
210 ui.debug('Subject: %s\n' % subject)
210 ui.debug('Subject: %s\n' % subject)
211 if user:
211 if user:
212 ui.debug('From: %s\n' % user)
212 ui.debug('From: %s\n' % user)
213 diffs_seen = 0
213 diffs_seen = 0
214 ok_types = ('text/plain', 'text/x-diff', 'text/x-patch')
214 ok_types = ('text/plain', 'text/x-diff', 'text/x-patch')
215 message = ''
215 message = ''
216 for part in msg.walk():
216 for part in msg.walk():
217 content_type = part.get_content_type()
217 content_type = part.get_content_type()
218 ui.debug('Content-Type: %s\n' % content_type)
218 ui.debug('Content-Type: %s\n' % content_type)
219 if content_type not in ok_types:
219 if content_type not in ok_types:
220 continue
220 continue
221 payload = part.get_payload(decode=True)
221 payload = part.get_payload(decode=True)
222 m = diffre.search(payload)
222 m = diffre.search(payload)
223 if m:
223 if m:
224 hgpatch = False
224 hgpatch = False
225 hgpatchheader = False
225 hgpatchheader = False
226 ignoretext = False
226 ignoretext = False
227
227
228 ui.debug('found patch at byte %d\n' % m.start(0))
228 ui.debug('found patch at byte %d\n' % m.start(0))
229 diffs_seen += 1
229 diffs_seen += 1
230 cfp = cStringIO.StringIO()
230 cfp = cStringIO.StringIO()
231 for line in payload[:m.start(0)].splitlines():
231 for line in payload[:m.start(0)].splitlines():
232 if line.startswith('# HG changeset patch') and not hgpatch:
232 if line.startswith('# HG changeset patch') and not hgpatch:
233 ui.debug('patch generated by hg export\n')
233 ui.debug('patch generated by hg export\n')
234 hgpatch = True
234 hgpatch = True
235 hgpatchheader = True
235 hgpatchheader = True
236 # drop earlier commit message content
236 # drop earlier commit message content
237 cfp.seek(0)
237 cfp.seek(0)
238 cfp.truncate()
238 cfp.truncate()
239 subject = None
239 subject = None
240 elif hgpatchheader:
240 elif hgpatchheader:
241 if line.startswith('# User '):
241 if line.startswith('# User '):
242 user = line[7:]
242 user = line[7:]
243 ui.debug('From: %s\n' % user)
243 ui.debug('From: %s\n' % user)
244 elif line.startswith("# Date "):
244 elif line.startswith("# Date "):
245 date = line[7:]
245 date = line[7:]
246 elif line.startswith("# Branch "):
246 elif line.startswith("# Branch "):
247 branch = line[9:]
247 branch = line[9:]
248 elif line.startswith("# Node ID "):
248 elif line.startswith("# Node ID "):
249 nodeid = line[10:]
249 nodeid = line[10:]
250 elif line.startswith("# Parent "):
250 elif line.startswith("# Parent "):
251 parents.append(line[10:])
251 parents.append(line[10:])
252 elif not line.startswith("# "):
252 elif not line.startswith("# "):
253 hgpatchheader = False
253 hgpatchheader = False
254 elif line == '---' and gitsendmail:
254 elif line == '---' and gitsendmail:
255 ignoretext = True
255 ignoretext = True
256 if not hgpatchheader and not ignoretext:
256 if not hgpatchheader and not ignoretext:
257 cfp.write(line)
257 cfp.write(line)
258 cfp.write('\n')
258 cfp.write('\n')
259 message = cfp.getvalue()
259 message = cfp.getvalue()
260 if tmpfp:
260 if tmpfp:
261 tmpfp.write(payload)
261 tmpfp.write(payload)
262 if not payload.endswith('\n'):
262 if not payload.endswith('\n'):
263 tmpfp.write('\n')
263 tmpfp.write('\n')
264 elif not diffs_seen and message and content_type == 'text/plain':
264 elif not diffs_seen and message and content_type == 'text/plain':
265 message += '\n' + payload
265 message += '\n' + payload
266 except:
266 except:
267 tmpfp.close()
267 tmpfp.close()
268 os.unlink(tmpname)
268 os.unlink(tmpname)
269 raise
269 raise
270
270
271 if subject and not message.startswith(subject):
271 if subject and not message.startswith(subject):
272 message = '%s\n%s' % (subject, message)
272 message = '%s\n%s' % (subject, message)
273 tmpfp.close()
273 tmpfp.close()
274 if not diffs_seen:
274 if not diffs_seen:
275 os.unlink(tmpname)
275 os.unlink(tmpname)
276 return None, message, user, date, branch, None, None, None
276 return None, message, user, date, branch, None, None, None
277 p1 = parents and parents.pop(0) or None
277 p1 = parents and parents.pop(0) or None
278 p2 = parents and parents.pop(0) or None
278 p2 = parents and parents.pop(0) or None
279 return tmpname, message, user, date, branch, nodeid, p1, p2
279 return tmpname, message, user, date, branch, nodeid, p1, p2
280
280
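# Illustrative sketch, not part of the original file: the extraction helper
# above (assumed to be the module's extract(ui, fileobj); its def line falls
# above this excerpt) returns an 8-tuple (tmpname, message, user, date,
# branch, nodeid, p1, p2).  A mailed 'hg export' patch it can fully parse
# looks like this invented sample:
#
#     sample = cStringIO.StringIO(
#         "Subject: [PATCH 1 of 2] frob the widget\n"
#         "From: Example Hacker <hacker@example.com>\n"
#         "\n"
#         "# HG changeset patch\n"
#         "# User Example Hacker <hacker@example.com>\n"
#         "# Date 1300000000 0\n"
#         "# Branch default\n"
#         "# Node ID 0123456789abcdef0123456789abcdef01234567\n"
#         "# Parent  fedcba9876543210fedcba9876543210fedcba98\n"
#         "frob the widget\n"
#         "\n"
#         "diff -r fedcba987654 -r 0123456789ab widget.py\n"
#         "--- a/widget.py\n"
#         "+++ b/widget.py\n"
#         "@@ -1,1 +1,1 @@\n"
#         "-old\n"
#         "+new\n")
#     # tmpname, message, user, date, branch, nodeid, p1, p2 = extract(ui, sample)
#
# The '# ...' header lines set user, date, branch, nodeid and the parent; the
# text before the first diff marker becomes the commit message.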
281 class patchmeta(object):
281 class patchmeta(object):
282 """Patched file metadata
282 """Patched file metadata
283
283
284 'op' is the performed operation within ADD, DELETE, RENAME, MODIFY
284 'op' is the performed operation within ADD, DELETE, RENAME, MODIFY
285 or COPY. 'path' is patched file path. 'oldpath' is set to the
285 or COPY. 'path' is patched file path. 'oldpath' is set to the
286 origin file when 'op' is either COPY or RENAME, None otherwise. If
286 origin file when 'op' is either COPY or RENAME, None otherwise. If
287 file mode is changed, 'mode' is a tuple (islink, isexec) where
287 file mode is changed, 'mode' is a tuple (islink, isexec) where
288 'islink' is True if the file is a symlink and 'isexec' is True if
288 'islink' is True if the file is a symlink and 'isexec' is True if
289 the file is executable. Otherwise, 'mode' is None.
289 the file is executable. Otherwise, 'mode' is None.
290 """
290 """
291 def __init__(self, path):
291 def __init__(self, path):
292 self.path = path
292 self.path = path
293 self.oldpath = None
293 self.oldpath = None
294 self.mode = None
294 self.mode = None
295 self.op = 'MODIFY'
295 self.op = 'MODIFY'
296 self.binary = False
296 self.binary = False
297
297
298 def setmode(self, mode):
298 def setmode(self, mode):
299 islink = mode & 020000
299 islink = mode & 020000
300 isexec = mode & 0100
300 isexec = mode & 0100
301 self.mode = (islink, isexec)
301 self.mode = (islink, isexec)
302
302
303 def __repr__(self):
303 def __repr__(self):
304 return "<patchmeta %s %r>" % (self.op, self.path)
304 return "<patchmeta %s %r>" % (self.op, self.path)
305
305
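# A minimal usage sketch, assuming nothing beyond the class above; the path
# and mode value are invented for illustration.  This mirrors how
# readgitpatch() below fills in a record for a newly added executable file.
gp = patchmeta('tools/run.sh')
gp.op = 'ADD'
gp.setmode(0100755)   # islink stays falsy, isexec becomes truthy
print gp              # <patchmeta ADD 'tools/run.sh'>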
306 def readgitpatch(lr):
306 def readgitpatch(lr):
307 """extract git-style metadata about patches from <patchname>"""
307 """extract git-style metadata about patches from <patchname>"""
308
308
309 # Filter patch for git information
309 # Filter patch for git information
310 gp = None
310 gp = None
311 gitpatches = []
311 gitpatches = []
312 for line in lr:
312 for line in lr:
313 line = line.rstrip(' \r\n')
313 line = line.rstrip(' \r\n')
314 if line.startswith('diff --git'):
314 if line.startswith('diff --git'):
315 m = gitre.match(line)
315 m = gitre.match(line)
316 if m:
316 if m:
317 if gp:
317 if gp:
318 gitpatches.append(gp)
318 gitpatches.append(gp)
319 dst = m.group(2)
319 dst = m.group(2)
320 gp = patchmeta(dst)
320 gp = patchmeta(dst)
321 elif gp:
321 elif gp:
322 if line.startswith('--- '):
322 if line.startswith('--- '):
323 gitpatches.append(gp)
323 gitpatches.append(gp)
324 gp = None
324 gp = None
325 continue
325 continue
326 if line.startswith('rename from '):
326 if line.startswith('rename from '):
327 gp.op = 'RENAME'
327 gp.op = 'RENAME'
328 gp.oldpath = line[12:]
328 gp.oldpath = line[12:]
329 elif line.startswith('rename to '):
329 elif line.startswith('rename to '):
330 gp.path = line[10:]
330 gp.path = line[10:]
331 elif line.startswith('copy from '):
331 elif line.startswith('copy from '):
332 gp.op = 'COPY'
332 gp.op = 'COPY'
333 gp.oldpath = line[10:]
333 gp.oldpath = line[10:]
334 elif line.startswith('copy to '):
334 elif line.startswith('copy to '):
335 gp.path = line[8:]
335 gp.path = line[8:]
336 elif line.startswith('deleted file'):
336 elif line.startswith('deleted file'):
337 gp.op = 'DELETE'
337 gp.op = 'DELETE'
338 elif line.startswith('new file mode '):
338 elif line.startswith('new file mode '):
339 gp.op = 'ADD'
339 gp.op = 'ADD'
340 gp.setmode(int(line[-6:], 8))
340 gp.setmode(int(line[-6:], 8))
341 elif line.startswith('new mode '):
341 elif line.startswith('new mode '):
342 gp.setmode(int(line[-6:], 8))
342 gp.setmode(int(line[-6:], 8))
343 elif line.startswith('GIT binary patch'):
343 elif line.startswith('GIT binary patch'):
344 gp.binary = True
344 gp.binary = True
345 if gp:
345 if gp:
346 gitpatches.append(gp)
346 gitpatches.append(gp)
347
347
348 return gitpatches
348 return gitpatches
349
349
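# Illustration only: readgitpatch() iterates over any sequence of lines, so a
# plain list can stand in for a linereader here.  The header below is an
# invented rename-plus-chmod, not taken from the changeset.
sample = ['diff --git a/old.py b/new.py\n',
          'rename from old.py\n',
          'rename to new.py\n',
          'new mode 100755\n']
gps = readgitpatch(sample)
# gps[0].op == 'RENAME', gps[0].oldpath == 'old.py', gps[0].path == 'new.py'
# gps[0].mode is (islink, isexec) with a truthy isexec from the 0755 bits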
350 class linereader(object):
350 class linereader(object):
351 # simple class to allow pushing lines back into the input stream
351 # simple class to allow pushing lines back into the input stream
352 def __init__(self, fp, textmode=False):
352 def __init__(self, fp, textmode=False):
353 self.fp = fp
353 self.fp = fp
354 self.buf = []
354 self.buf = []
355 self.textmode = textmode
355 self.textmode = textmode
356 self.eol = None
356 self.eol = None
357
357
358 def push(self, line):
358 def push(self, line):
359 if line is not None:
359 if line is not None:
360 self.buf.append(line)
360 self.buf.append(line)
361
361
362 def readline(self):
362 def readline(self):
363 if self.buf:
363 if self.buf:
364 l = self.buf[0]
364 l = self.buf[0]
365 del self.buf[0]
365 del self.buf[0]
366 return l
366 return l
367 l = self.fp.readline()
367 l = self.fp.readline()
368 if not self.eol:
368 if not self.eol:
369 if l.endswith('\r\n'):
369 if l.endswith('\r\n'):
370 self.eol = '\r\n'
370 self.eol = '\r\n'
371 elif l.endswith('\n'):
371 elif l.endswith('\n'):
372 self.eol = '\n'
372 self.eol = '\n'
373 if self.textmode and l.endswith('\r\n'):
373 if self.textmode and l.endswith('\r\n'):
374 l = l[:-2] + '\n'
374 l = l[:-2] + '\n'
375 return l
375 return l
376
376
377 def __iter__(self):
377 def __iter__(self):
378 while 1:
378 while 1:
379 l = self.readline()
379 l = self.readline()
380 if not l:
380 if not l:
381 break
381 break
382 yield l
382 yield l
383
383
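# Small usage sketch (illustration only): push() lets the scanners below peek
# at a line and put it back.  cStringIO is already imported by this module.
lr = linereader(cStringIO.StringIO('first\r\nsecond\r\n'), textmode=True)
l = lr.readline()        # 'first\n' -- CRLF is normalized because textmode
lr.push(l)               # put it back for the next reader
assert lr.readline() == l
assert lr.eol == '\r\n'  # the original line ending is still remembered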
384 # @@ -start,len +start,len @@ or @@ -start +start @@ if len is 1
384 # @@ -start,len +start,len @@ or @@ -start +start @@ if len is 1
385 unidesc = re.compile('@@ -(\d+)(,(\d+))? \+(\d+)(,(\d+))? @@')
385 unidesc = re.compile('@@ -(\d+)(,(\d+))? \+(\d+)(,(\d+))? @@')
386 contextdesc = re.compile('(---|\*\*\*) (\d+)(,(\d+))? (---|\*\*\*)')
386 contextdesc = re.compile('(---|\*\*\*) (\d+)(,(\d+))? (---|\*\*\*)')
387 eolmodes = ['strict', 'crlf', 'lf', 'auto']
387 eolmodes = ['strict', 'crlf', 'lf', 'auto']
388
388
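# Quick illustration of the hunk header regexes above (made-up headers):
m = unidesc.match('@@ -12,3 +14,4 @@')
# m.group(1, 4) == ('12', '14')   -- old/new start lines
# m.group(3, 6) == ('3', '4')     -- old/new lengths
m = unidesc.match('@@ -5 +5 @@')
# m.group(3) is None and m.group(6) is None -- a length of 1 is implied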
389 class patchfile(object):
389 class patchfile(object):
390 def __init__(self, ui, fname, opener, missing=False, eolmode='strict'):
390 def __init__(self, ui, fname, opener, missing=False, eolmode='strict'):
391 self.fname = fname
391 self.fname = fname
392 self.eolmode = eolmode
392 self.eolmode = eolmode
393 self.eol = None
393 self.eol = None
394 self.opener = opener
394 self.opener = opener
395 self.ui = ui
395 self.ui = ui
396 self.lines = []
396 self.lines = []
397 self.exists = False
397 self.exists = False
398 self.missing = missing
398 self.missing = missing
399 if not missing:
399 if not missing:
400 try:
400 try:
401 self.lines = self.readlines(fname)
401 self.lines = self.readlines(fname)
402 self.exists = True
402 self.exists = True
403 except IOError:
403 except IOError:
404 pass
404 pass
405 else:
405 else:
406 self.ui.warn(_("unable to find '%s' for patching\n") % self.fname)
406 self.ui.warn(_("unable to find '%s' for patching\n") % self.fname)
407
407
408 self.hash = {}
408 self.hash = {}
409 self.dirty = False
409 self.dirty = False
410 self.offset = 0
410 self.offset = 0
411 self.skew = 0
411 self.skew = 0
412 self.rej = []
412 self.rej = []
413 self.fileprinted = False
413 self.fileprinted = False
414 self.printfile(False)
414 self.printfile(False)
415 self.hunks = 0
415 self.hunks = 0
416
416
417 def readlines(self, fname):
417 def readlines(self, fname):
418 if os.path.islink(fname):
418 if os.path.islink(fname):
419 return [os.readlink(fname)]
419 return [os.readlink(fname)]
420 fp = self.opener(fname, 'r')
420 fp = self.opener(fname, 'r')
421 try:
421 try:
422 lr = linereader(fp, self.eolmode != 'strict')
422 lr = linereader(fp, self.eolmode != 'strict')
423 lines = list(lr)
423 lines = list(lr)
424 self.eol = lr.eol
424 self.eol = lr.eol
425 return lines
425 return lines
426 finally:
426 finally:
427 fp.close()
427 fp.close()
428
428
429 def writelines(self, fname, lines):
429 def writelines(self, fname, lines):
430 # Ensure the supplied data ends up in fname, whether a regular file or
430 # Ensure the supplied data ends up in fname, whether a regular file or
431 # a symlink. cmdutil.updatedir will -too magically- take care
431 # a symlink. cmdutil.updatedir will -too magically- take care
432 # of setting it to the proper type afterwards.
432 # of setting it to the proper type afterwards.
433 st_mode = None
433 st_mode = None
434 islink = os.path.islink(fname)
434 islink = os.path.islink(fname)
435 if islink:
435 if islink:
436 fp = cStringIO.StringIO()
436 fp = cStringIO.StringIO()
437 else:
437 else:
438 try:
438 try:
439 st_mode = os.lstat(fname).st_mode & 0777
439 st_mode = os.lstat(fname).st_mode & 0777
440 except OSError, e:
440 except OSError, e:
441 if e.errno != errno.ENOENT:
441 if e.errno != errno.ENOENT:
442 raise
442 raise
443 fp = self.opener(fname, 'w')
443 fp = self.opener(fname, 'w')
444 try:
444 try:
445 if self.eolmode == 'auto':
445 if self.eolmode == 'auto':
446 eol = self.eol
446 eol = self.eol
447 elif self.eolmode == 'crlf':
447 elif self.eolmode == 'crlf':
448 eol = '\r\n'
448 eol = '\r\n'
449 else:
449 else:
450 eol = '\n'
450 eol = '\n'
451
451
452 if self.eolmode != 'strict' and eol and eol != '\n':
452 if self.eolmode != 'strict' and eol and eol != '\n':
453 for l in lines:
453 for l in lines:
454 if l and l[-1] == '\n':
454 if l and l[-1] == '\n':
455 l = l[:-1] + eol
455 l = l[:-1] + eol
456 fp.write(l)
456 fp.write(l)
457 else:
457 else:
458 fp.writelines(lines)
458 fp.writelines(lines)
459 if islink:
459 if islink:
460 self.opener.symlink(fp.getvalue(), fname)
460 self.opener.symlink(fp.getvalue(), fname)
461 if st_mode is not None:
461 if st_mode is not None:
462 os.chmod(fname, st_mode)
462 os.chmod(fname, st_mode)
463 finally:
463 finally:
464 fp.close()
464 fp.close()
465
465
466 def unlink(self, fname):
466 def unlink(self, fname):
467 os.unlink(fname)
467 os.unlink(fname)
468
468
469 def printfile(self, warn):
469 def printfile(self, warn):
470 if self.fileprinted:
470 if self.fileprinted:
471 return
471 return
472 if warn or self.ui.verbose:
472 if warn or self.ui.verbose:
473 self.fileprinted = True
473 self.fileprinted = True
474 s = _("patching file %s\n") % self.fname
474 s = _("patching file %s\n") % self.fname
475 if warn:
475 if warn:
476 self.ui.warn(s)
476 self.ui.warn(s)
477 else:
477 else:
478 self.ui.note(s)
478 self.ui.note(s)
479
479
480
480
481 def findlines(self, l, linenum):
481 def findlines(self, l, linenum):
482 # looks through the hash and finds candidate lines. The
482 # looks through the hash and finds candidate lines. The
483 # result is a list of line numbers sorted based on distance
483 # result is a list of line numbers sorted based on distance
484 # from linenum
484 # from linenum
485
485
486 cand = self.hash.get(l, [])
486 cand = self.hash.get(l, [])
487 if len(cand) > 1:
487 if len(cand) > 1:
488 # resort our list of potentials forward then back.
488 # resort our list of potentials forward then back.
489 cand.sort(key=lambda x: abs(x - linenum))
489 cand.sort(key=lambda x: abs(x - linenum))
490 return cand
490 return cand
491
491
492 def makerejlines(self, fname):
492 def makerejlines(self, fname):
493 base = os.path.basename(fname)
493 base = os.path.basename(fname)
494 yield "--- %s\n+++ %s\n" % (base, base)
494 yield "--- %s\n+++ %s\n" % (base, base)
495 for x in self.rej:
495 for x in self.rej:
496 for l in x.hunk:
496 for l in x.hunk:
497 yield l
497 yield l
498 if l[-1] != '\n':
498 if l[-1] != '\n':
499 yield "\n\ No newline at end of file\n"
499 yield "\n\ No newline at end of file\n"
500
500
501 def write_rej(self):
501 def write_rej(self):
502 # our rejects are a little different from patch(1). This always
502 # our rejects are a little different from patch(1). This always
503 # creates rejects in the same form as the original patch. A file
503 # creates rejects in the same form as the original patch. A file
504 # header is inserted so that you can run the reject through patch again
504 # header is inserted so that you can run the reject through patch again
505 # without having to type the filename.
505 # without having to type the filename.
506
506
507 if not self.rej:
507 if not self.rej:
508 return
508 return
509
509
510 fname = self.fname + ".rej"
510 fname = self.fname + ".rej"
511 self.ui.warn(
511 self.ui.warn(
512 _("%d out of %d hunks FAILED -- saving rejects to file %s\n") %
512 _("%d out of %d hunks FAILED -- saving rejects to file %s\n") %
513 (len(self.rej), self.hunks, fname))
513 (len(self.rej), self.hunks, fname))
514
514
515 fp = self.opener(fname, 'w')
515 fp = self.opener(fname, 'w')
516 fp.writelines(self.makerejlines(self.fname))
516 fp.writelines(self.makerejlines(self.fname))
517 fp.close()
517 fp.close()
518
518
519 def apply(self, h):
519 def apply(self, h):
520 if not h.complete():
520 if not h.complete():
521 raise PatchError(_("bad hunk #%d %s (%d %d %d %d)") %
521 raise PatchError(_("bad hunk #%d %s (%d %d %d %d)") %
522 (h.number, h.desc, len(h.a), h.lena, len(h.b),
522 (h.number, h.desc, len(h.a), h.lena, len(h.b),
523 h.lenb))
523 h.lenb))
524
524
525 self.hunks += 1
525 self.hunks += 1
526
526
527 if self.missing:
527 if self.missing:
528 self.rej.append(h)
528 self.rej.append(h)
529 return -1
529 return -1
530
530
531 if self.exists and h.createfile():
531 if self.exists and h.createfile():
532 self.ui.warn(_("file %s already exists\n") % self.fname)
532 self.ui.warn(_("file %s already exists\n") % self.fname)
533 self.rej.append(h)
533 self.rej.append(h)
534 return -1
534 return -1
535
535
536 if isinstance(h, binhunk):
536 if isinstance(h, binhunk):
537 if h.rmfile():
537 if h.rmfile():
538 self.unlink(self.fname)
538 self.unlink(self.fname)
539 else:
539 else:
540 self.lines[:] = h.new()
540 self.lines[:] = h.new()
541 self.offset += len(h.new())
541 self.offset += len(h.new())
542 self.dirty = True
542 self.dirty = True
543 return 0
543 return 0
544
544
545 horig = h
545 horig = h
546 if (self.eolmode in ('crlf', 'lf')
546 if (self.eolmode in ('crlf', 'lf')
547 or self.eolmode == 'auto' and self.eol):
547 or self.eolmode == 'auto' and self.eol):
548 # If new eols are going to be normalized, then normalize
548 # If new eols are going to be normalized, then normalize
549 # hunk data before patching. Otherwise, preserve input
549 # hunk data before patching. Otherwise, preserve input
550 # line-endings.
550 # line-endings.
551 h = h.getnormalized()
551 h = h.getnormalized()
552
552
553 # fast case first, no offsets, no fuzz
553 # fast case first, no offsets, no fuzz
554 old = h.old()
554 old = h.old()
555 # patch starts counting at 1 unless we are adding the file
555 # patch starts counting at 1 unless we are adding the file
556 if h.starta == 0:
556 if h.starta == 0:
557 start = 0
557 start = 0
558 else:
558 else:
559 start = h.starta + self.offset - 1
559 start = h.starta + self.offset - 1
560 orig_start = start
560 orig_start = start
561 # if there's skew we want to emit the "(offset %d lines)" even
561 # if there's skew we want to emit the "(offset %d lines)" even
562 # when the hunk cleanly applies at start + skew, so skip the
562 # when the hunk cleanly applies at start + skew, so skip the
563 # fast case code
563 # fast case code
564 if self.skew == 0 and diffhelpers.testhunk(old, self.lines, start) == 0:
564 if self.skew == 0 and diffhelpers.testhunk(old, self.lines, start) == 0:
565 if h.rmfile():
565 if h.rmfile():
566 self.unlink(self.fname)
566 self.unlink(self.fname)
567 else:
567 else:
568 self.lines[start : start + h.lena] = h.new()
568 self.lines[start : start + h.lena] = h.new()
569 self.offset += h.lenb - h.lena
569 self.offset += h.lenb - h.lena
570 self.dirty = True
570 self.dirty = True
571 return 0
571 return 0
572
572
573 # OK, we couldn't match the hunk. Let's look for offsets and fuzz it
573 # OK, we couldn't match the hunk. Let's look for offsets and fuzz it
574 self.hash = {}
574 self.hash = {}
575 for x, s in enumerate(self.lines):
575 for x, s in enumerate(self.lines):
576 self.hash.setdefault(s, []).append(x)
576 self.hash.setdefault(s, []).append(x)
577 if h.hunk[-1][0] != ' ':
577 if h.hunk[-1][0] != ' ':
578 # if the hunk tried to put something at the bottom of the file
578 # if the hunk tried to put something at the bottom of the file
579 # override the start line and use eof here
579 # override the start line and use eof here
580 search_start = len(self.lines)
580 search_start = len(self.lines)
581 else:
581 else:
582 search_start = orig_start + self.skew
582 search_start = orig_start + self.skew
583
583
584 for fuzzlen in xrange(3):
584 for fuzzlen in xrange(3):
585 for toponly in [True, False]:
585 for toponly in [True, False]:
586 old = h.old(fuzzlen, toponly)
586 old = h.old(fuzzlen, toponly)
587
587
588 cand = self.findlines(old[0][1:], search_start)
588 cand = self.findlines(old[0][1:], search_start)
589 for l in cand:
589 for l in cand:
590 if diffhelpers.testhunk(old, self.lines, l) == 0:
590 if diffhelpers.testhunk(old, self.lines, l) == 0:
591 newlines = h.new(fuzzlen, toponly)
591 newlines = h.new(fuzzlen, toponly)
592 self.lines[l : l + len(old)] = newlines
592 self.lines[l : l + len(old)] = newlines
593 self.offset += len(newlines) - len(old)
593 self.offset += len(newlines) - len(old)
594 self.skew = l - orig_start
594 self.skew = l - orig_start
595 self.dirty = True
595 self.dirty = True
596 offset = l - orig_start - fuzzlen
596 offset = l - orig_start - fuzzlen
597 if fuzzlen:
597 if fuzzlen:
598 msg = _("Hunk #%d succeeded at %d "
598 msg = _("Hunk #%d succeeded at %d "
599 "with fuzz %d "
599 "with fuzz %d "
600 "(offset %d lines).\n")
600 "(offset %d lines).\n")
601 self.printfile(True)
601 self.printfile(True)
602 self.ui.warn(msg %
602 self.ui.warn(msg %
603 (h.number, l + 1, fuzzlen, offset))
603 (h.number, l + 1, fuzzlen, offset))
604 else:
604 else:
605 msg = _("Hunk #%d succeeded at %d "
605 msg = _("Hunk #%d succeeded at %d "
606 "(offset %d lines).\n")
606 "(offset %d lines).\n")
607 self.ui.note(msg % (h.number, l + 1, offset))
607 self.ui.note(msg % (h.number, l + 1, offset))
608 return fuzzlen
608 return fuzzlen
609 self.printfile(True)
609 self.printfile(True)
610 self.ui.warn(_("Hunk #%d FAILED at %d\n") % (h.number, orig_start))
610 self.ui.warn(_("Hunk #%d FAILED at %d\n") % (h.number, orig_start))
611 self.rej.append(horig)
611 self.rej.append(horig)
612 return -1
612 return -1
613
613
614 def close(self):
614 def close(self):
615 if self.dirty:
615 if self.dirty:
616 self.writelines(self.fname, self.lines)
616 self.writelines(self.fname, self.lines)
617 self.write_rej()
617 self.write_rej()
618 return len(self.rej)
618 return len(self.rej)
619
619
620 class hunk(object):
620 class hunk(object):
621 def __init__(self, desc, num, lr, context, create=False, remove=False):
621 def __init__(self, desc, num, lr, context, create=False, remove=False):
622 self.number = num
622 self.number = num
623 self.desc = desc
623 self.desc = desc
624 self.hunk = [desc]
624 self.hunk = [desc]
625 self.a = []
625 self.a = []
626 self.b = []
626 self.b = []
627 self.starta = self.lena = None
627 self.starta = self.lena = None
628 self.startb = self.lenb = None
628 self.startb = self.lenb = None
629 if lr is not None:
629 if lr is not None:
630 if context:
630 if context:
631 self.read_context_hunk(lr)
631 self.read_context_hunk(lr)
632 else:
632 else:
633 self.read_unified_hunk(lr)
633 self.read_unified_hunk(lr)
634 self.create = create
634 self.create = create
635 self.remove = remove and not create
635 self.remove = remove and not create
636
636
637 def getnormalized(self):
637 def getnormalized(self):
638 """Return a copy with line endings normalized to LF."""
638 """Return a copy with line endings normalized to LF."""
639
639
640 def normalize(lines):
640 def normalize(lines):
641 nlines = []
641 nlines = []
642 for line in lines:
642 for line in lines:
643 if line.endswith('\r\n'):
643 if line.endswith('\r\n'):
644 line = line[:-2] + '\n'
644 line = line[:-2] + '\n'
645 nlines.append(line)
645 nlines.append(line)
646 return nlines
646 return nlines
647
647
648 # Dummy object; it is rebuilt manually
648 # Dummy object; it is rebuilt manually
649 nh = hunk(self.desc, self.number, None, None, False, False)
649 nh = hunk(self.desc, self.number, None, None, False, False)
650 nh.number = self.number
650 nh.number = self.number
651 nh.desc = self.desc
651 nh.desc = self.desc
652 nh.hunk = self.hunk
652 nh.hunk = self.hunk
653 nh.a = normalize(self.a)
653 nh.a = normalize(self.a)
654 nh.b = normalize(self.b)
654 nh.b = normalize(self.b)
655 nh.starta = self.starta
655 nh.starta = self.starta
656 nh.startb = self.startb
656 nh.startb = self.startb
657 nh.lena = self.lena
657 nh.lena = self.lena
658 nh.lenb = self.lenb
658 nh.lenb = self.lenb
659 nh.create = self.create
659 nh.create = self.create
660 nh.remove = self.remove
660 nh.remove = self.remove
661 return nh
661 return nh
662
662
663 def read_unified_hunk(self, lr):
663 def read_unified_hunk(self, lr):
664 m = unidesc.match(self.desc)
664 m = unidesc.match(self.desc)
665 if not m:
665 if not m:
666 raise PatchError(_("bad hunk #%d") % self.number)
666 raise PatchError(_("bad hunk #%d") % self.number)
667 self.starta, foo, self.lena, self.startb, foo2, self.lenb = m.groups()
667 self.starta, foo, self.lena, self.startb, foo2, self.lenb = m.groups()
668 if self.lena is None:
668 if self.lena is None:
669 self.lena = 1
669 self.lena = 1
670 else:
670 else:
671 self.lena = int(self.lena)
671 self.lena = int(self.lena)
672 if self.lenb is None:
672 if self.lenb is None:
673 self.lenb = 1
673 self.lenb = 1
674 else:
674 else:
675 self.lenb = int(self.lenb)
675 self.lenb = int(self.lenb)
676 self.starta = int(self.starta)
676 self.starta = int(self.starta)
677 self.startb = int(self.startb)
677 self.startb = int(self.startb)
678 diffhelpers.addlines(lr, self.hunk, self.lena, self.lenb, self.a, self.b)
678 diffhelpers.addlines(lr, self.hunk, self.lena, self.lenb, self.a, self.b)
679 # if we hit EOF before finishing out the hunk, the last line will
679 # if we hit EOF before finishing out the hunk, the last line will
680 # be zero length. Let's try to fix it up.
680 # be zero length. Let's try to fix it up.
681 while len(self.hunk[-1]) == 0:
681 while len(self.hunk[-1]) == 0:
682 del self.hunk[-1]
682 del self.hunk[-1]
683 del self.a[-1]
683 del self.a[-1]
684 del self.b[-1]
684 del self.b[-1]
685 self.lena -= 1
685 self.lena -= 1
686 self.lenb -= 1
686 self.lenb -= 1
687 self._fixnewline(lr)
687 self._fixnewline(lr)
688
688
689 def read_context_hunk(self, lr):
689 def read_context_hunk(self, lr):
690 self.desc = lr.readline()
690 self.desc = lr.readline()
691 m = contextdesc.match(self.desc)
691 m = contextdesc.match(self.desc)
692 if not m:
692 if not m:
693 raise PatchError(_("bad hunk #%d") % self.number)
693 raise PatchError(_("bad hunk #%d") % self.number)
694 foo, self.starta, foo2, aend, foo3 = m.groups()
694 foo, self.starta, foo2, aend, foo3 = m.groups()
695 self.starta = int(self.starta)
695 self.starta = int(self.starta)
696 if aend is None:
696 if aend is None:
697 aend = self.starta
697 aend = self.starta
698 self.lena = int(aend) - self.starta
698 self.lena = int(aend) - self.starta
699 if self.starta:
699 if self.starta:
700 self.lena += 1
700 self.lena += 1
701 for x in xrange(self.lena):
701 for x in xrange(self.lena):
702 l = lr.readline()
702 l = lr.readline()
703 if l.startswith('---'):
703 if l.startswith('---'):
704 # lines addition, old block is empty
704 # lines addition, old block is empty
705 lr.push(l)
705 lr.push(l)
706 break
706 break
707 s = l[2:]
707 s = l[2:]
708 if l.startswith('- ') or l.startswith('! '):
708 if l.startswith('- ') or l.startswith('! '):
709 u = '-' + s
709 u = '-' + s
710 elif l.startswith(' '):
710 elif l.startswith(' '):
711 u = ' ' + s
711 u = ' ' + s
712 else:
712 else:
713 raise PatchError(_("bad hunk #%d old text line %d") %
713 raise PatchError(_("bad hunk #%d old text line %d") %
714 (self.number, x))
714 (self.number, x))
715 self.a.append(u)
715 self.a.append(u)
716 self.hunk.append(u)
716 self.hunk.append(u)
717
717
718 l = lr.readline()
718 l = lr.readline()
719 if l.startswith('\ '):
719 if l.startswith('\ '):
720 s = self.a[-1][:-1]
720 s = self.a[-1][:-1]
721 self.a[-1] = s
721 self.a[-1] = s
722 self.hunk[-1] = s
722 self.hunk[-1] = s
723 l = lr.readline()
723 l = lr.readline()
724 m = contextdesc.match(l)
724 m = contextdesc.match(l)
725 if not m:
725 if not m:
726 raise PatchError(_("bad hunk #%d") % self.number)
726 raise PatchError(_("bad hunk #%d") % self.number)
727 foo, self.startb, foo2, bend, foo3 = m.groups()
727 foo, self.startb, foo2, bend, foo3 = m.groups()
728 self.startb = int(self.startb)
728 self.startb = int(self.startb)
729 if bend is None:
729 if bend is None:
730 bend = self.startb
730 bend = self.startb
731 self.lenb = int(bend) - self.startb
731 self.lenb = int(bend) - self.startb
732 if self.startb:
732 if self.startb:
733 self.lenb += 1
733 self.lenb += 1
734 hunki = 1
734 hunki = 1
735 for x in xrange(self.lenb):
735 for x in xrange(self.lenb):
736 l = lr.readline()
736 l = lr.readline()
737 if l.startswith('\ '):
737 if l.startswith('\ '):
738 # XXX: the only way to hit this is with an invalid line range.
738 # XXX: the only way to hit this is with an invalid line range.
739 # The no-eol marker is not counted in the line range, but I
739 # The no-eol marker is not counted in the line range, but I
740 # guess there are diff(1) implementations out there which behave differently.
740 # guess there are diff(1) implementations out there which behave differently.
741 s = self.b[-1][:-1]
741 s = self.b[-1][:-1]
742 self.b[-1] = s
742 self.b[-1] = s
743 self.hunk[hunki - 1] = s
743 self.hunk[hunki - 1] = s
744 continue
744 continue
745 if not l:
745 if not l:
746 # line deletions, new block is empty and we hit EOF
746 # line deletions, new block is empty and we hit EOF
747 lr.push(l)
747 lr.push(l)
748 break
748 break
749 s = l[2:]
749 s = l[2:]
750 if l.startswith('+ ') or l.startswith('! '):
750 if l.startswith('+ ') or l.startswith('! '):
751 u = '+' + s
751 u = '+' + s
752 elif l.startswith(' '):
752 elif l.startswith(' '):
753 u = ' ' + s
753 u = ' ' + s
754 elif len(self.b) == 0:
754 elif len(self.b) == 0:
755 # line deletions, new block is empty
755 # line deletions, new block is empty
756 lr.push(l)
756 lr.push(l)
757 break
757 break
758 else:
758 else:
759 raise PatchError(_("bad hunk #%d old text line %d") %
759 raise PatchError(_("bad hunk #%d old text line %d") %
760 (self.number, x))
760 (self.number, x))
761 self.b.append(s)
761 self.b.append(s)
762 while True:
762 while True:
763 if hunki >= len(self.hunk):
763 if hunki >= len(self.hunk):
764 h = ""
764 h = ""
765 else:
765 else:
766 h = self.hunk[hunki]
766 h = self.hunk[hunki]
767 hunki += 1
767 hunki += 1
768 if h == u:
768 if h == u:
769 break
769 break
770 elif h.startswith('-'):
770 elif h.startswith('-'):
771 continue
771 continue
772 else:
772 else:
773 self.hunk.insert(hunki - 1, u)
773 self.hunk.insert(hunki - 1, u)
774 break
774 break
775
775
776 if not self.a:
776 if not self.a:
777 # this happens when lines were only added to the hunk
777 # this happens when lines were only added to the hunk
778 for x in self.hunk:
778 for x in self.hunk:
779 if x.startswith('-') or x.startswith(' '):
779 if x.startswith('-') or x.startswith(' '):
780 self.a.append(x)
780 self.a.append(x)
781 if not self.b:
781 if not self.b:
782 # this happens when lines were only deleted from the hunk
782 # this happens when lines were only deleted from the hunk
783 for x in self.hunk:
783 for x in self.hunk:
784 if x.startswith('+') or x.startswith(' '):
784 if x.startswith('+') or x.startswith(' '):
785 self.b.append(x[1:])
785 self.b.append(x[1:])
786 # @@ -start,len +start,len @@
786 # @@ -start,len +start,len @@
787 self.desc = "@@ -%d,%d +%d,%d @@\n" % (self.starta, self.lena,
787 self.desc = "@@ -%d,%d +%d,%d @@\n" % (self.starta, self.lena,
788 self.startb, self.lenb)
788 self.startb, self.lenb)
789 self.hunk[0] = self.desc
789 self.hunk[0] = self.desc
790 self._fixnewline(lr)
790 self._fixnewline(lr)
791
791
792 def _fixnewline(self, lr):
792 def _fixnewline(self, lr):
793 l = lr.readline()
793 l = lr.readline()
794 if l.startswith('\ '):
794 if l.startswith('\ '):
795 diffhelpers.fix_newline(self.hunk, self.a, self.b)
795 diffhelpers.fix_newline(self.hunk, self.a, self.b)
796 else:
796 else:
797 lr.push(l)
797 lr.push(l)
798
798
799 def complete(self):
799 def complete(self):
800 return len(self.a) == self.lena and len(self.b) == self.lenb
800 return len(self.a) == self.lena and len(self.b) == self.lenb
801
801
802 def createfile(self):
802 def createfile(self):
803 return self.starta == 0 and self.lena == 0 and self.create
803 return self.starta == 0 and self.lena == 0 and self.create
804
804
805 def rmfile(self):
805 def rmfile(self):
806 return self.startb == 0 and self.lenb == 0 and self.remove
806 return self.startb == 0 and self.lenb == 0 and self.remove
807
807
808 def fuzzit(self, l, fuzz, toponly):
808 def fuzzit(self, l, fuzz, toponly):
809 # this removes context lines from the top and bottom of list 'l'. It
809 # this removes context lines from the top and bottom of list 'l'. It
810 # checks the hunk to make sure only context lines are removed, and then
810 # checks the hunk to make sure only context lines are removed, and then
811 # returns a new shortened list of lines.
811 # returns a new shortened list of lines.
812 fuzz = min(fuzz, len(l)-1)
812 fuzz = min(fuzz, len(l)-1)
813 if fuzz:
813 if fuzz:
814 top = 0
814 top = 0
815 bot = 0
815 bot = 0
816 hlen = len(self.hunk)
816 hlen = len(self.hunk)
817 for x in xrange(hlen - 1):
817 for x in xrange(hlen - 1):
818 # the hunk starts with the @@ line, so use x+1
818 # the hunk starts with the @@ line, so use x+1
819 if self.hunk[x + 1][0] == ' ':
819 if self.hunk[x + 1][0] == ' ':
820 top += 1
820 top += 1
821 else:
821 else:
822 break
822 break
823 if not toponly:
823 if not toponly:
824 for x in xrange(hlen - 1):
824 for x in xrange(hlen - 1):
825 if self.hunk[hlen - bot - 1][0] == ' ':
825 if self.hunk[hlen - bot - 1][0] == ' ':
826 bot += 1
826 bot += 1
827 else:
827 else:
828 break
828 break
829
829
830 # top and bot now count context in the hunk
830 # top and bot now count context in the hunk
831 # adjust them if either one is short
831 # adjust them if either one is short
832 context = max(top, bot, 3)
832 context = max(top, bot, 3)
833 if bot < context:
833 if bot < context:
834 bot = max(0, fuzz - (context - bot))
834 bot = max(0, fuzz - (context - bot))
835 else:
835 else:
836 bot = min(fuzz, bot)
836 bot = min(fuzz, bot)
837 if top < context:
837 if top < context:
838 top = max(0, fuzz - (context - top))
838 top = max(0, fuzz - (context - top))
839 else:
839 else:
840 top = min(fuzz, top)
840 top = min(fuzz, top)
841
841
842 return l[top:len(l)-bot]
842 return l[top:len(l)-bot]
843 return l
843 return l
844
844
845 def old(self, fuzz=0, toponly=False):
845 def old(self, fuzz=0, toponly=False):
846 return self.fuzzit(self.a, fuzz, toponly)
846 return self.fuzzit(self.a, fuzz, toponly)
847
847
848 def new(self, fuzz=0, toponly=False):
848 def new(self, fuzz=0, toponly=False):
849 return self.fuzzit(self.b, fuzz, toponly)
849 return self.fuzzit(self.b, fuzz, toponly)
850
850
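# Hedged sketch of the fuzz machinery above: build a one-hunk unified diff by
# hand and ask the hunk for its old/new lines.  Every value here is invented.
lr = linereader(cStringIO.StringIO('@@ -1,3 +1,3 @@\n'
                                   ' context top\n'
                                   '-old middle\n'
                                   '+new middle\n'
                                   ' context bottom\n'))
h = hunk(lr.readline(), 1, lr, context=False)
# h.old()  -> [' context top\n', '-old middle\n', ' context bottom\n']
# h.new()  -> ['context top\n', 'new middle\n', 'context bottom\n']
# h.old(fuzz=1) only trims context when the hunk carries the usual three
# context lines on that side; with a single context line it is a no-op.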
851 class binhunk:
851 class binhunk:
852 'A binary patch file. Only understands literals so far.'
852 'A binary patch file. Only understands literals so far.'
853 def __init__(self, gitpatch):
853 def __init__(self, gitpatch):
854 self.gitpatch = gitpatch
854 self.gitpatch = gitpatch
855 self.text = None
855 self.text = None
856 self.hunk = ['GIT binary patch\n']
856 self.hunk = ['GIT binary patch\n']
857
857
858 def createfile(self):
858 def createfile(self):
859 return self.gitpatch.op in ('ADD', 'RENAME', 'COPY')
859 return self.gitpatch.op in ('ADD', 'RENAME', 'COPY')
860
860
861 def rmfile(self):
861 def rmfile(self):
862 return self.gitpatch.op == 'DELETE'
862 return self.gitpatch.op == 'DELETE'
863
863
864 def complete(self):
864 def complete(self):
865 return self.text is not None
865 return self.text is not None
866
866
867 def new(self):
867 def new(self):
868 return [self.text]
868 return [self.text]
869
869
870 def extract(self, lr):
870 def extract(self, lr):
871 line = lr.readline()
871 line = lr.readline()
872 self.hunk.append(line)
872 self.hunk.append(line)
873 while line and not line.startswith('literal '):
873 while line and not line.startswith('literal '):
874 line = lr.readline()
874 line = lr.readline()
875 self.hunk.append(line)
875 self.hunk.append(line)
876 if not line:
876 if not line:
877 raise PatchError(_('could not extract binary patch'))
877 raise PatchError(_('could not extract binary patch'))
878 size = int(line[8:].rstrip())
878 size = int(line[8:].rstrip())
879 dec = []
879 dec = []
880 line = lr.readline()
880 line = lr.readline()
881 self.hunk.append(line)
881 self.hunk.append(line)
882 while len(line) > 1:
882 while len(line) > 1:
883 l = line[0]
883 l = line[0]
884 if l <= 'Z' and l >= 'A':
884 if l <= 'Z' and l >= 'A':
885 l = ord(l) - ord('A') + 1
885 l = ord(l) - ord('A') + 1
886 else:
886 else:
887 l = ord(l) - ord('a') + 27
887 l = ord(l) - ord('a') + 27
888 dec.append(base85.b85decode(line[1:-1])[:l])
888 dec.append(base85.b85decode(line[1:-1])[:l])
889 line = lr.readline()
889 line = lr.readline()
890 self.hunk.append(line)
890 self.hunk.append(line)
891 text = zlib.decompress(''.join(dec))
891 text = zlib.decompress(''.join(dec))
892 if len(text) != size:
892 if len(text) != size:
893 raise PatchError(_('binary patch is %d bytes, not %d') %
893 raise PatchError(_('binary patch is %d bytes, not %d') %
894 (len(text), size))
894 (len(text), size))
895 self.text = text
895 self.text = text
896
896
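# Sketch of the per-line length prefix used in 'literal' sections, mirroring
# the decoding in binhunk.extract() above and the encoding in b85diff() below:
# 'A'-'Z' stand for 1-26 payload bytes, 'a'-'z' for 27-52.  Illustration only.
def _linelen(c):
    if 'A' <= c <= 'Z':
        return ord(c) - ord('A') + 1
    return ord(c) - ord('a') + 27
# _linelen('A') == 1, _linelen('Z') == 26, _linelen('a') == 27, _linelen('z') == 52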
897 def parsefilename(str):
897 def parsefilename(str):
898 # --- filename \t|space stuff
898 # --- filename \t|space stuff
899 s = str[4:].rstrip('\r\n')
899 s = str[4:].rstrip('\r\n')
900 i = s.find('\t')
900 i = s.find('\t')
901 if i < 0:
901 if i < 0:
902 i = s.find(' ')
902 i = s.find(' ')
903 if i < 0:
903 if i < 0:
904 return s
904 return s
905 return s[:i]
905 return s[:i]
906
906
907 def pathstrip(path, strip):
907 def pathstrip(path, strip):
908 pathlen = len(path)
908 pathlen = len(path)
909 i = 0
909 i = 0
910 if strip == 0:
910 if strip == 0:
911 return '', path.rstrip()
911 return '', path.rstrip()
912 count = strip
912 count = strip
913 while count > 0:
913 while count > 0:
914 i = path.find('/', i)
914 i = path.find('/', i)
915 if i == -1:
915 if i == -1:
916 raise PatchError(_("unable to strip away %d of %d dirs from %s") %
916 raise PatchError(_("unable to strip away %d of %d dirs from %s") %
917 (count, strip, path))
917 (count, strip, path))
918 i += 1
918 i += 1
919 # consume '//' in the path
919 # consume '//' in the path
920 while i < pathlen - 1 and path[i] == '/':
920 while i < pathlen - 1 and path[i] == '/':
921 i += 1
921 i += 1
922 count -= 1
922 count -= 1
923 return path[:i].lstrip(), path[i:].rstrip()
923 return path[:i].lstrip(), path[i:].rstrip()
924
924
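# Worked illustration of the two helpers above, with made-up paths:
#
#   parsefilename('--- a/src/foo.c\tMon May 02 2011')  -> 'a/src/foo.c'
#   pathstrip('a/src/foo.c', 1)                         -> ('a/', 'src/foo.c')
#   pathstrip('a/src/foo.c', 0)                         -> ('', 'a/src/foo.c')
#
# Asking pathstrip() to remove more components than the path has raises
# PatchError, which is how a wrong strip value surfaces to the user.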
925 def selectfile(afile_orig, bfile_orig, hunk, strip):
925 def selectfile(afile_orig, bfile_orig, hunk, strip):
926 nulla = afile_orig == "/dev/null"
926 nulla = afile_orig == "/dev/null"
927 nullb = bfile_orig == "/dev/null"
927 nullb = bfile_orig == "/dev/null"
928 abase, afile = pathstrip(afile_orig, strip)
928 abase, afile = pathstrip(afile_orig, strip)
929 gooda = not nulla and os.path.lexists(afile)
929 gooda = not nulla and os.path.lexists(afile)
930 bbase, bfile = pathstrip(bfile_orig, strip)
930 bbase, bfile = pathstrip(bfile_orig, strip)
931 if afile == bfile:
931 if afile == bfile:
932 goodb = gooda
932 goodb = gooda
933 else:
933 else:
934 goodb = not nullb and os.path.lexists(bfile)
934 goodb = not nullb and os.path.lexists(bfile)
935 createfunc = hunk.createfile
935 createfunc = hunk.createfile
936 missing = not goodb and not gooda and not createfunc()
936 missing = not goodb and not gooda and not createfunc()
937
937
938 # some diff programs apparently produce patches where the afile is
938 # some diff programs apparently produce patches where the afile is
939 # not /dev/null, but afile starts with bfile
939 # not /dev/null, but afile starts with bfile
940 abasedir = afile[:afile.rfind('/') + 1]
940 abasedir = afile[:afile.rfind('/') + 1]
941 bbasedir = bfile[:bfile.rfind('/') + 1]
941 bbasedir = bfile[:bfile.rfind('/') + 1]
942 if missing and abasedir == bbasedir and afile.startswith(bfile):
942 if missing and abasedir == bbasedir and afile.startswith(bfile):
943 # this isn't very pretty
943 # this isn't very pretty
944 hunk.create = True
944 hunk.create = True
945 if createfunc():
945 if createfunc():
946 missing = False
946 missing = False
947 else:
947 else:
948 hunk.create = False
948 hunk.create = False
949
949
950 # If afile is "a/b/foo" and bfile is "a/b/foo.orig" we assume the
950 # If afile is "a/b/foo" and bfile is "a/b/foo.orig" we assume the
951 # diff is between a file and its backup. In this case, the original
951 # diff is between a file and its backup. In this case, the original
952 # file should be patched (see original mpatch code).
952 # file should be patched (see original mpatch code).
953 isbackup = (abase == bbase and bfile.startswith(afile))
953 isbackup = (abase == bbase and bfile.startswith(afile))
954 fname = None
954 fname = None
955 if not missing:
955 if not missing:
956 if gooda and goodb:
956 if gooda and goodb:
957 fname = isbackup and afile or bfile
957 fname = isbackup and afile or bfile
958 elif gooda:
958 elif gooda:
959 fname = afile
959 fname = afile
960
960
961 if not fname:
961 if not fname:
962 if not nullb:
962 if not nullb:
963 fname = isbackup and afile or bfile
963 fname = isbackup and afile or bfile
964 elif not nulla:
964 elif not nulla:
965 fname = afile
965 fname = afile
966 else:
966 else:
967 raise PatchError(_("undefined source and destination files"))
967 raise PatchError(_("undefined source and destination files"))
968
968
969 return fname, missing
969 return fname, missing
970
970
971 def scangitpatch(lr, firstline):
971 def scangitpatch(lr, firstline):
972 """
972 """
973 Git patches can emit:
973 Git patches can emit:
974 - rename a to b
974 - rename a to b
975 - change b
975 - change b
976 - copy a to c
976 - copy a to c
977 - change c
977 - change c
978
978
979 We cannot apply this sequence as-is: the renamed 'a' could not be
979 We cannot apply this sequence as-is: the renamed 'a' could not be
980 found, because it would have been renamed already. And we cannot copy
980 found, because it would have been renamed already. And we cannot copy
981 from 'b' instead because 'b' would have been changed already. So
981 from 'b' instead because 'b' would have been changed already. So
982 we scan the git patch for copy and rename commands so we can
982 we scan the git patch for copy and rename commands so we can
983 perform the copies ahead of time.
983 perform the copies ahead of time.
984 """
984 """
985 pos = 0
985 pos = 0
986 try:
986 try:
987 pos = lr.fp.tell()
987 pos = lr.fp.tell()
988 fp = lr.fp
988 fp = lr.fp
989 except IOError:
989 except IOError:
990 fp = cStringIO.StringIO(lr.fp.read())
990 fp = cStringIO.StringIO(lr.fp.read())
991 gitlr = linereader(fp, lr.textmode)
991 gitlr = linereader(fp, lr.textmode)
992 gitlr.push(firstline)
992 gitlr.push(firstline)
993 gitpatches = readgitpatch(gitlr)
993 gitpatches = readgitpatch(gitlr)
994 fp.seek(pos)
994 fp.seek(pos)
995 return gitpatches
995 return gitpatches
996
996
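# Illustration of why the pre-scan above exists.  An invented git patch of
# this shape renames 'a' and then copies from it, so the copy targets have to
# be created before any hunk is applied:
#
#   diff --git a/a b/b
#   rename from a
#   rename to b
#   diff --git a/a b/c
#   copy from a
#   copy to c
#
# scangitpatch() rewinds the underlying file afterwards (or buffers it in a
# cStringIO when it is not seekable) so iterhunks() below can still read the
# hunks from the start.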
997 def iterhunks(ui, fp):
997 def iterhunks(ui, fp):
998 """Read a patch and yield the following events:
998 """Read a patch and yield the following events:
999 - ("file", afile, bfile, firsthunk): select a new target file.
999 - ("file", afile, bfile, firsthunk): select a new target file.
1000 - ("hunk", hunk): a new hunk is ready to be applied, follows a
1000 - ("hunk", hunk): a new hunk is ready to be applied, follows a
1001 "file" event.
1001 "file" event.
1002 - ("git", gitchanges): current diff is in git format, gitchanges
1002 - ("git", gitchanges): current diff is in git format, gitchanges
1003 maps filenames to gitpatch records. Unique event.
1003 maps filenames to gitpatch records. Unique event.
1004 """
1004 """
1005 changed = {}
1005 changed = {}
1006 afile = ""
1006 afile = ""
1007 bfile = ""
1007 bfile = ""
1008 state = None
1008 state = None
1009 hunknum = 0
1009 hunknum = 0
1010 emitfile = newfile = False
1010 emitfile = newfile = False
1011 git = False
1011 git = False
1012
1012
1013 # our states
1013 # our states
1014 BFILE = 1
1014 BFILE = 1
1015 context = None
1015 context = None
1016 lr = linereader(fp)
1016 lr = linereader(fp)
1017
1017
1018 while True:
1018 while True:
1019 x = lr.readline()
1019 x = lr.readline()
1020 if not x:
1020 if not x:
1021 break
1021 break
1022 if (state == BFILE and ((not context and x[0] == '@') or
1022 if (state == BFILE and ((not context and x[0] == '@') or
1023 ((context is not False) and x.startswith('***************')))):
1023 ((context is not False) and x.startswith('***************')))):
1024 if context is None and x.startswith('***************'):
1024 if context is None and x.startswith('***************'):
1025 context = True
1025 context = True
1026 gpatch = changed.get(bfile)
1026 gpatch = changed.get(bfile)
1027 create = afile == '/dev/null' or gpatch and gpatch.op == 'ADD'
1027 create = afile == '/dev/null' or gpatch and gpatch.op == 'ADD'
1028 remove = bfile == '/dev/null' or gpatch and gpatch.op == 'DELETE'
1028 remove = bfile == '/dev/null' or gpatch and gpatch.op == 'DELETE'
1029 h = hunk(x, hunknum + 1, lr, context, create, remove)
1029 h = hunk(x, hunknum + 1, lr, context, create, remove)
1030 hunknum += 1
1030 hunknum += 1
1031 if emitfile:
1031 if emitfile:
1032 emitfile = False
1032 emitfile = False
1033 yield 'file', (afile, bfile, h)
1033 yield 'file', (afile, bfile, h)
1034 yield 'hunk', h
1034 yield 'hunk', h
1035 elif state == BFILE and x.startswith('GIT binary patch'):
1035 elif state == BFILE and x.startswith('GIT binary patch'):
1036 h = binhunk(changed[bfile])
1036 h = binhunk(changed[bfile])
1037 hunknum += 1
1037 hunknum += 1
1038 if emitfile:
1038 if emitfile:
1039 emitfile = False
1039 emitfile = False
1040 yield 'file', ('a/' + afile, 'b/' + bfile, h)
1040 yield 'file', ('a/' + afile, 'b/' + bfile, h)
1041 h.extract(lr)
1041 h.extract(lr)
1042 yield 'hunk', h
1042 yield 'hunk', h
1043 elif x.startswith('diff --git'):
1043 elif x.startswith('diff --git'):
1044 # check for git diff, scanning the whole patch file if needed
1044 # check for git diff, scanning the whole patch file if needed
1045 m = gitre.match(x)
1045 m = gitre.match(x)
1046 if m:
1046 if m:
1047 afile, bfile = m.group(1, 2)
1047 afile, bfile = m.group(1, 2)
1048 if not git:
1048 if not git:
1049 git = True
1049 git = True
1050 gitpatches = scangitpatch(lr, x)
1050 gitpatches = scangitpatch(lr, x)
1051 yield 'git', gitpatches
1051 yield 'git', gitpatches
1052 for gp in gitpatches:
1052 for gp in gitpatches:
1053 changed[gp.path] = gp
1053 changed[gp.path] = gp
1054 # else error?
1054 # else error?
1055 # copy/rename + modify should modify target, not source
1055 # copy/rename + modify should modify target, not source
1056 gp = changed.get(bfile)
1056 gp = changed.get(bfile)
1057 if gp and (gp.op in ('COPY', 'DELETE', 'RENAME', 'ADD')
1057 if gp and (gp.op in ('COPY', 'DELETE', 'RENAME', 'ADD')
1058 or gp.mode):
1058 or gp.mode):
1059 afile = bfile
1059 afile = bfile
1060 newfile = True
1060 newfile = True
1061 elif x.startswith('---'):
1061 elif x.startswith('---'):
1062 # check for a unified diff
1062 # check for a unified diff
1063 l2 = lr.readline()
1063 l2 = lr.readline()
1064 if not l2.startswith('+++'):
1064 if not l2.startswith('+++'):
1065 lr.push(l2)
1065 lr.push(l2)
1066 continue
1066 continue
1067 newfile = True
1067 newfile = True
1068 context = False
1068 context = False
1069 afile = parsefilename(x)
1069 afile = parsefilename(x)
1070 bfile = parsefilename(l2)
1070 bfile = parsefilename(l2)
1071 elif x.startswith('***'):
1071 elif x.startswith('***'):
1072 # check for a context diff
1072 # check for a context diff
1073 l2 = lr.readline()
1073 l2 = lr.readline()
1074 if not l2.startswith('---'):
1074 if not l2.startswith('---'):
1075 lr.push(l2)
1075 lr.push(l2)
1076 continue
1076 continue
1077 l3 = lr.readline()
1077 l3 = lr.readline()
1078 lr.push(l3)
1078 lr.push(l3)
1079 if not l3.startswith("***************"):
1079 if not l3.startswith("***************"):
1080 lr.push(l2)
1080 lr.push(l2)
1081 continue
1081 continue
1082 newfile = True
1082 newfile = True
1083 context = True
1083 context = True
1084 afile = parsefilename(x)
1084 afile = parsefilename(x)
1085 bfile = parsefilename(l2)
1085 bfile = parsefilename(l2)
1086
1086
1087 if newfile:
1087 if newfile:
1088 newfile = False
1088 newfile = False
1089 emitfile = True
1089 emitfile = True
1090 state = BFILE
1090 state = BFILE
1091 hunknum = 0
1091 hunknum = 0
1092
1092
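# Minimal consumption sketch for iterhunks(); 'ui' and the open patch file
# 'fp' are assumed to exist.  This mirrors what _applydiff() below does,
# without touching the working directory.
for state, values in iterhunks(ui, fp):
    if state == 'file':
        afile, bfile, firsthunk = values   # select a new target file
    elif state == 'hunk':
        pass                               # values is a hunk/binhunk to apply
    elif state == 'git':
        gitpatches = values                # patchmeta list, emitted only once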
1093 def applydiff(ui, fp, changed, strip=1, eolmode='strict'):
1093 def applydiff(ui, fp, changed, strip=1, eolmode='strict'):
1094 """Reads a patch from fp and tries to apply it.
1094 """Reads a patch from fp and tries to apply it.
1095
1095
1096 The dict 'changed' is filled in with all of the filenames changed
1096 The dict 'changed' is filled in with all of the filenames changed
1097 by the patch. Returns 0 for a clean patch, -1 if any rejects were
1097 by the patch. Returns 0 for a clean patch, -1 if any rejects were
1098 found and 1 if there was any fuzz.
1098 found and 1 if there was any fuzz.
1099
1099
1100 If 'eolmode' is 'strict', the patch content and patched file are
1100 If 'eolmode' is 'strict', the patch content and patched file are
1101 read in binary mode. Otherwise, line endings are ignored when
1101 read in binary mode. Otherwise, line endings are ignored when
1102 patching then normalized according to 'eolmode'.
1102 patching then normalized according to 'eolmode'.
1103
1103
1104 Callers probably want to call 'cmdutil.updatedir' after this to
1104 Callers probably want to call 'cmdutil.updatedir' after this to
1105 apply certain categories of changes not done by this function.
1105 apply certain categories of changes not done by this function.
1106 """
1106 """
1107 return _applydiff(ui, fp, patchfile, copyfile, changed, strip=strip,
1107 return _applydiff(ui, fp, patchfile, copyfile, changed, strip=strip,
1108 eolmode=eolmode)
1108 eolmode=eolmode)
1109
1109
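# Hedged driving sketch for applydiff(); the ui object and 'example.diff' are
# assumptions.  internalpatch() below wraps essentially this pattern.
changed = {}
ret = applydiff(ui, open('example.diff', 'rb'), changed, strip=1)
if ret < 0:
    pass    # at least one hunk was rejected (.rej files were written)
elif ret > 0:
    pass    # everything applied, but with fuzz
# 'changed' now maps each touched filename to its patchmeta record (or None)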
1110 def _applydiff(ui, fp, patcher, copyfn, changed, strip=1, eolmode='strict'):
1110 def _applydiff(ui, fp, patcher, copyfn, changed, strip=1, eolmode='strict'):
1111 rejects = 0
1111 rejects = 0
1112 err = 0
1112 err = 0
1113 current_file = None
1113 current_file = None
1114 cwd = os.getcwd()
1114 cwd = os.getcwd()
1115 opener = scmutil.opener(cwd)
1115 opener = scmutil.opener(cwd)
1116
1116
1117 for state, values in iterhunks(ui, fp):
1117 for state, values in iterhunks(ui, fp):
1118 if state == 'hunk':
1118 if state == 'hunk':
1119 if not current_file:
1119 if not current_file:
1120 continue
1120 continue
1121 ret = current_file.apply(values)
1121 ret = current_file.apply(values)
1122 if ret >= 0:
1122 if ret >= 0:
1123 changed.setdefault(current_file.fname, None)
1123 changed.setdefault(current_file.fname, None)
1124 if ret > 0:
1124 if ret > 0:
1125 err = 1
1125 err = 1
1126 elif state == 'file':
1126 elif state == 'file':
1127 if current_file:
1127 if current_file:
1128 rejects += current_file.close()
1128 rejects += current_file.close()
1129 afile, bfile, first_hunk = values
1129 afile, bfile, first_hunk = values
1130 try:
1130 try:
1131 current_file, missing = selectfile(afile, bfile,
1131 current_file, missing = selectfile(afile, bfile,
1132 first_hunk, strip)
1132 first_hunk, strip)
1133 current_file = patcher(ui, current_file, opener,
1133 current_file = patcher(ui, current_file, opener,
1134 missing=missing, eolmode=eolmode)
1134 missing=missing, eolmode=eolmode)
1135 except PatchError, inst:
1135 except PatchError, inst:
1136 ui.warn(str(inst) + '\n')
1136 ui.warn(str(inst) + '\n')
1137 current_file = None
1137 current_file = None
1138 rejects += 1
1138 rejects += 1
1139 continue
1139 continue
1140 elif state == 'git':
1140 elif state == 'git':
1141 for gp in values:
1141 for gp in values:
1142 gp.path = pathstrip(gp.path, strip - 1)[1]
1142 gp.path = pathstrip(gp.path, strip - 1)[1]
1143 if gp.oldpath:
1143 if gp.oldpath:
1144 gp.oldpath = pathstrip(gp.oldpath, strip - 1)[1]
1144 gp.oldpath = pathstrip(gp.oldpath, strip - 1)[1]
1145 # Binary patches really overwrite target files, copying them
1145 # Binary patches really overwrite target files, copying them
1146 # will just make it fail with "target file exists"
1146 # will just make it fail with "target file exists"
1147 if gp.op in ('COPY', 'RENAME') and not gp.binary:
1147 if gp.op in ('COPY', 'RENAME') and not gp.binary:
1148 copyfn(gp.oldpath, gp.path, cwd)
1148 copyfn(gp.oldpath, gp.path, cwd)
1149 changed[gp.path] = gp
1149 changed[gp.path] = gp
1150 else:
1150 else:
1151 raise util.Abort(_('unsupported parser state: %s') % state)
1151 raise util.Abort(_('unsupported parser state: %s') % state)
1152
1152
1153 if current_file:
1153 if current_file:
1154 rejects += current_file.close()
1154 rejects += current_file.close()
1155
1155
1156 if rejects:
1156 if rejects:
1157 return -1
1157 return -1
1158 return err
1158 return err
1159
1159
1160 def _externalpatch(patcher, patchname, ui, strip, cwd, files):
1160 def _externalpatch(patcher, patchname, ui, strip, cwd, files):
1161 """use <patcher> to apply <patchname> to the working directory.
1161 """use <patcher> to apply <patchname> to the working directory.
1162 returns whether patch was applied with fuzz factor."""
1162 returns whether patch was applied with fuzz factor."""
1163
1163
1164 fuzz = False
1164 fuzz = False
1165 args = []
1165 args = []
1166 if cwd:
1166 if cwd:
1167 args.append('-d %s' % util.shellquote(cwd))
1167 args.append('-d %s' % util.shellquote(cwd))
1168 fp = util.popen('%s %s -p%d < %s' % (patcher, ' '.join(args), strip,
1168 fp = util.popen('%s %s -p%d < %s' % (patcher, ' '.join(args), strip,
1169 util.shellquote(patchname)))
1169 util.shellquote(patchname)))
1170
1170
1171 for line in fp:
1171 for line in fp:
1172 line = line.rstrip()
1172 line = line.rstrip()
1173 ui.note(line + '\n')
1173 ui.note(line + '\n')
1174 if line.startswith('patching file '):
1174 if line.startswith('patching file '):
1175 pf = util.parsepatchoutput(line)
1175 pf = util.parsepatchoutput(line)
1176 printed_file = False
1176 printed_file = False
1177 files.setdefault(pf, None)
1177 files.setdefault(pf, None)
1178 elif line.find('with fuzz') >= 0:
1178 elif line.find('with fuzz') >= 0:
1179 fuzz = True
1179 fuzz = True
1180 if not printed_file:
1180 if not printed_file:
1181 ui.warn(pf + '\n')
1181 ui.warn(pf + '\n')
1182 printed_file = True
1182 printed_file = True
1183 ui.warn(line + '\n')
1183 ui.warn(line + '\n')
1184 elif line.find('saving rejects to file') >= 0:
1184 elif line.find('saving rejects to file') >= 0:
1185 ui.warn(line + '\n')
1185 ui.warn(line + '\n')
1186 elif line.find('FAILED') >= 0:
1186 elif line.find('FAILED') >= 0:
1187 if not printed_file:
1187 if not printed_file:
1188 ui.warn(pf + '\n')
1188 ui.warn(pf + '\n')
1189 printed_file = True
1189 printed_file = True
1190 ui.warn(line + '\n')
1190 ui.warn(line + '\n')
1191 code = fp.close()
1191 code = fp.close()
1192 if code:
1192 if code:
1193 raise PatchError(_("patch command failed: %s") %
1193 raise PatchError(_("patch command failed: %s") %
1194 util.explain_exit(code)[0])
1194 util.explainexit(code)[0])
1195 return fuzz
1195 return fuzz
1196
1196
1197 def internalpatch(patchobj, ui, strip, cwd, files=None, eolmode='strict'):
1197 def internalpatch(patchobj, ui, strip, cwd, files=None, eolmode='strict'):
1198 """use builtin patch to apply <patchobj> to the working directory.
1198 """use builtin patch to apply <patchobj> to the working directory.
1199 returns whether patch was applied with fuzz factor."""
1199 returns whether patch was applied with fuzz factor."""
1200
1200
1201 if files is None:
1201 if files is None:
1202 files = {}
1202 files = {}
1203 if eolmode is None:
1203 if eolmode is None:
1204 eolmode = ui.config('patch', 'eol', 'strict')
1204 eolmode = ui.config('patch', 'eol', 'strict')
1205 if eolmode.lower() not in eolmodes:
1205 if eolmode.lower() not in eolmodes:
1206 raise util.Abort(_('unsupported line endings type: %s') % eolmode)
1206 raise util.Abort(_('unsupported line endings type: %s') % eolmode)
1207 eolmode = eolmode.lower()
1207 eolmode = eolmode.lower()
1208
1208
1209 try:
1209 try:
1210 fp = open(patchobj, 'rb')
1210 fp = open(patchobj, 'rb')
1211 except TypeError:
1211 except TypeError:
1212 fp = patchobj
1212 fp = patchobj
1213 if cwd:
1213 if cwd:
1214 curdir = os.getcwd()
1214 curdir = os.getcwd()
1215 os.chdir(cwd)
1215 os.chdir(cwd)
1216 try:
1216 try:
1217 ret = applydiff(ui, fp, files, strip=strip, eolmode=eolmode)
1217 ret = applydiff(ui, fp, files, strip=strip, eolmode=eolmode)
1218 finally:
1218 finally:
1219 if cwd:
1219 if cwd:
1220 os.chdir(curdir)
1220 os.chdir(curdir)
1221 if fp != patchobj:
1221 if fp != patchobj:
1222 fp.close()
1222 fp.close()
1223 if ret < 0:
1223 if ret < 0:
1224 raise PatchError(_('patch failed to apply'))
1224 raise PatchError(_('patch failed to apply'))
1225 return ret > 0
1225 return ret > 0
1226
1226
1227 def patch(patchname, ui, strip=1, cwd=None, files=None, eolmode='strict'):
1227 def patch(patchname, ui, strip=1, cwd=None, files=None, eolmode='strict'):
1228 """Apply <patchname> to the working directory.
1228 """Apply <patchname> to the working directory.
1229
1229
1230 'eolmode' specifies how end of lines should be handled. It can be:
1230 'eolmode' specifies how end of lines should be handled. It can be:
1231 - 'strict': inputs are read in binary mode, EOLs are preserved
1231 - 'strict': inputs are read in binary mode, EOLs are preserved
1232 - 'crlf': EOLs are ignored when patching and reset to CRLF
1232 - 'crlf': EOLs are ignored when patching and reset to CRLF
1233 - 'lf': EOLs are ignored when patching and reset to LF
1233 - 'lf': EOLs are ignored when patching and reset to LF
1234 - None: get it from user settings, default to 'strict'
1234 - None: get it from user settings, default to 'strict'
1235 'eolmode' is ignored when using an external patcher program.
1235 'eolmode' is ignored when using an external patcher program.
1236
1236
1237 Returns whether patch was applied with fuzz factor.
1237 Returns whether patch was applied with fuzz factor.
1238 """
1238 """
1239 patcher = ui.config('ui', 'patch')
1239 patcher = ui.config('ui', 'patch')
1240 if files is None:
1240 if files is None:
1241 files = {}
1241 files = {}
1242 try:
1242 try:
1243 if patcher:
1243 if patcher:
1244 return _externalpatch(patcher, patchname, ui, strip, cwd, files)
1244 return _externalpatch(patcher, patchname, ui, strip, cwd, files)
1245 return internalpatch(patchname, ui, strip, cwd, files, eolmode)
1245 return internalpatch(patchname, ui, strip, cwd, files, eolmode)
1246 except PatchError, err:
1246 except PatchError, err:
1247 raise util.Abort(str(err))
1247 raise util.Abort(str(err))
1248
1248
1249 def b85diff(to, tn):
1249 def b85diff(to, tn):
1250 '''print base85-encoded binary diff'''
1250 '''print base85-encoded binary diff'''
1251 def gitindex(text):
1251 def gitindex(text):
1252 if not text:
1252 if not text:
1253 return hex(nullid)
1253 return hex(nullid)
1254 l = len(text)
1254 l = len(text)
1255 s = util.sha1('blob %d\0' % l)
1255 s = util.sha1('blob %d\0' % l)
1256 s.update(text)
1256 s.update(text)
1257 return s.hexdigest()
1257 return s.hexdigest()
1258
1258
1259 def fmtline(line):
1259 def fmtline(line):
1260 l = len(line)
1260 l = len(line)
1261 if l <= 26:
1261 if l <= 26:
1262 l = chr(ord('A') + l - 1)
1262 l = chr(ord('A') + l - 1)
1263 else:
1263 else:
1264 l = chr(l - 26 + ord('a') - 1)
1264 l = chr(l - 26 + ord('a') - 1)
1265 return '%c%s\n' % (l, base85.b85encode(line, True))
1265 return '%c%s\n' % (l, base85.b85encode(line, True))
1266
1266
1267 def chunk(text, csize=52):
1267 def chunk(text, csize=52):
1268 l = len(text)
1268 l = len(text)
1269 i = 0
1269 i = 0
1270 while i < l:
1270 while i < l:
1271 yield text[i:i + csize]
1271 yield text[i:i + csize]
1272 i += csize
1272 i += csize
1273
1273
1274 tohash = gitindex(to)
1274 tohash = gitindex(to)
1275 tnhash = gitindex(tn)
1275 tnhash = gitindex(tn)
1276 if tohash == tnhash:
1276 if tohash == tnhash:
1277 return ""
1277 return ""
1278
1278
1279 # TODO: deltas
1279 # TODO: deltas
1280 ret = ['index %s..%s\nGIT binary patch\nliteral %s\n' %
1280 ret = ['index %s..%s\nGIT binary patch\nliteral %s\n' %
1281 (tohash, tnhash, len(tn))]
1281 (tohash, tnhash, len(tn))]
1282 for l in chunk(zlib.compress(tn)):
1282 for l in chunk(zlib.compress(tn)):
1283 ret.append(fmtline(l))
1283 ret.append(fmtline(l))
1284 ret.append('\n')
1284 ret.append('\n')
1285 return ''.join(ret)
1285 return ''.join(ret)
1286
1286
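
The gitindex helper above is Git's blob hashing: SHA-1 over a "blob <size>\0" header followed by the raw content, which is why the index line of a binary patch matches what git itself would compute; fmtline then encodes each base85 line's length as a single letter, 'A'-'Z' for 1-26 bytes and 'a' onward for longer lines. A standalone check of the blob-id part only (plain hashlib instead of util.sha1; the sample bytes are made up):

import hashlib

def git_blob_id(data):
    # same scheme as gitindex() above: sha1 of b"blob <size>\0" + content
    header = b"blob %d\0" % len(data)
    return hashlib.sha1(header + data).hexdigest()

print(git_blob_id(b"hello\n"))
# should match `printf 'hello\n' | git hash-object --stdin` on a machine with git installed
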
1287 class GitDiffRequired(Exception):
1287 class GitDiffRequired(Exception):
1288 pass
1288 pass
1289
1289
1290 def diffopts(ui, opts=None, untrusted=False):
1290 def diffopts(ui, opts=None, untrusted=False):
1291 def get(key, name=None, getter=ui.configbool):
1291 def get(key, name=None, getter=ui.configbool):
1292 return ((opts and opts.get(key)) or
1292 return ((opts and opts.get(key)) or
1293 getter('diff', name or key, None, untrusted=untrusted))
1293 getter('diff', name or key, None, untrusted=untrusted))
1294 return mdiff.diffopts(
1294 return mdiff.diffopts(
1295 text=opts and opts.get('text'),
1295 text=opts and opts.get('text'),
1296 git=get('git'),
1296 git=get('git'),
1297 nodates=get('nodates'),
1297 nodates=get('nodates'),
1298 showfunc=get('show_function', 'showfunc'),
1298 showfunc=get('show_function', 'showfunc'),
1299 ignorews=get('ignore_all_space', 'ignorews'),
1299 ignorews=get('ignore_all_space', 'ignorews'),
1300 ignorewsamount=get('ignore_space_change', 'ignorewsamount'),
1300 ignorewsamount=get('ignore_space_change', 'ignorewsamount'),
1301 ignoreblanklines=get('ignore_blank_lines', 'ignoreblanklines'),
1301 ignoreblanklines=get('ignore_blank_lines', 'ignoreblanklines'),
1302 context=get('unified', getter=ui.config))
1302 context=get('unified', getter=ui.config))
1303
1303
1304 def diff(repo, node1=None, node2=None, match=None, changes=None, opts=None,
1304 def diff(repo, node1=None, node2=None, match=None, changes=None, opts=None,
1305 losedatafn=None, prefix=''):
1305 losedatafn=None, prefix=''):
1306 '''yields diff of changes to files between two nodes, or node and
1306 '''yields diff of changes to files between two nodes, or node and
1307 working directory.
1307 working directory.
1308
1308
1309 if node1 is None, use first dirstate parent instead.
1309 if node1 is None, use first dirstate parent instead.
1310 if node2 is None, compare node1 with working directory.
1310 if node2 is None, compare node1 with working directory.
1311
1311
1312 losedatafn(**kwarg) is a callable run when opts.upgrade=True and
1312 losedatafn(**kwarg) is a callable run when opts.upgrade=True and
1313 every time some change cannot be represented with the current
1313 every time some change cannot be represented with the current
1314 patch format. Return False to upgrade to git patch format, True to
1314 patch format. Return False to upgrade to git patch format, True to
1315 accept the loss or raise an exception to abort the diff. It is
1315 accept the loss or raise an exception to abort the diff. It is
1316 called with the name of the current file being diffed as 'fn'. If set
1316 called with the name of the current file being diffed as 'fn'. If set

1317 to None, patches will always be upgraded to git format when
1317 to None, patches will always be upgraded to git format when
1318 necessary.
1318 necessary.
1319
1319
1320 prefix is a filename prefix that is prepended to all filenames on
1320 prefix is a filename prefix that is prepended to all filenames on
1321 display (used for subrepos).
1321 display (used for subrepos).
1322 '''
1322 '''
1323
1323
1324 if opts is None:
1324 if opts is None:
1325 opts = mdiff.defaultopts
1325 opts = mdiff.defaultopts
1326
1326
1327 if not node1 and not node2:
1327 if not node1 and not node2:
1328 node1 = repo.dirstate.p1()
1328 node1 = repo.dirstate.p1()
1329
1329
1330 def lrugetfilectx():
1330 def lrugetfilectx():
1331 cache = {}
1331 cache = {}
1332 order = []
1332 order = []
1333 def getfilectx(f, ctx):
1333 def getfilectx(f, ctx):
1334 fctx = ctx.filectx(f, filelog=cache.get(f))
1334 fctx = ctx.filectx(f, filelog=cache.get(f))
1335 if f not in cache:
1335 if f not in cache:
1336 if len(cache) > 20:
1336 if len(cache) > 20:
1337 del cache[order.pop(0)]
1337 del cache[order.pop(0)]
1338 cache[f] = fctx.filelog()
1338 cache[f] = fctx.filelog()
1339 else:
1339 else:
1340 order.remove(f)
1340 order.remove(f)
1341 order.append(f)
1341 order.append(f)
1342 return fctx
1342 return fctx
1343 return getfilectx
1343 return getfilectx
1344 getfilectx = lrugetfilectx()
1344 getfilectx = lrugetfilectx()
1345
1345
1346 ctx1 = repo[node1]
1346 ctx1 = repo[node1]
1347 ctx2 = repo[node2]
1347 ctx2 = repo[node2]
1348
1348
1349 if not changes:
1349 if not changes:
1350 changes = repo.status(ctx1, ctx2, match=match)
1350 changes = repo.status(ctx1, ctx2, match=match)
1351 modified, added, removed = changes[:3]
1351 modified, added, removed = changes[:3]
1352
1352
1353 if not modified and not added and not removed:
1353 if not modified and not added and not removed:
1354 return []
1354 return []
1355
1355
1356 revs = None
1356 revs = None
1357 if not repo.ui.quiet:
1357 if not repo.ui.quiet:
1358 hexfunc = repo.ui.debugflag and hex or short
1358 hexfunc = repo.ui.debugflag and hex or short
1359 revs = [hexfunc(node) for node in [node1, node2] if node]
1359 revs = [hexfunc(node) for node in [node1, node2] if node]
1360
1360
1361 copy = {}
1361 copy = {}
1362 if opts.git or opts.upgrade:
1362 if opts.git or opts.upgrade:
1363 copy = copies.copies(repo, ctx1, ctx2, repo[nullid])[0]
1363 copy = copies.copies(repo, ctx1, ctx2, repo[nullid])[0]
1364
1364
1365 difffn = lambda opts, losedata: trydiff(repo, revs, ctx1, ctx2,
1365 difffn = lambda opts, losedata: trydiff(repo, revs, ctx1, ctx2,
1366 modified, added, removed, copy, getfilectx, opts, losedata, prefix)
1366 modified, added, removed, copy, getfilectx, opts, losedata, prefix)
1367 if opts.upgrade and not opts.git:
1367 if opts.upgrade and not opts.git:
1368 try:
1368 try:
1369 def losedata(fn):
1369 def losedata(fn):
1370 if not losedatafn or not losedatafn(fn=fn):
1370 if not losedatafn or not losedatafn(fn=fn):
1371 raise GitDiffRequired()
1371 raise GitDiffRequired()
1372 # Buffer the whole output until we are sure it can be generated
1372 # Buffer the whole output until we are sure it can be generated
1373 return list(difffn(opts.copy(git=False), losedata))
1373 return list(difffn(opts.copy(git=False), losedata))
1374 except GitDiffRequired:
1374 except GitDiffRequired:
1375 return difffn(opts.copy(git=True), None)
1375 return difffn(opts.copy(git=True), None)
1376 else:
1376 else:
1377 return difffn(opts, None)
1377 return difffn(opts, None)
1378
1378
1379 def difflabel(func, *args, **kw):
1379 def difflabel(func, *args, **kw):
1380 '''yields 2-tuples of (output, label) based on the output of func()'''
1380 '''yields 2-tuples of (output, label) based on the output of func()'''
1381 prefixes = [('diff', 'diff.diffline'),
1381 prefixes = [('diff', 'diff.diffline'),
1382 ('copy', 'diff.extended'),
1382 ('copy', 'diff.extended'),
1383 ('rename', 'diff.extended'),
1383 ('rename', 'diff.extended'),
1384 ('old', 'diff.extended'),
1384 ('old', 'diff.extended'),
1385 ('new', 'diff.extended'),
1385 ('new', 'diff.extended'),
1386 ('deleted', 'diff.extended'),
1386 ('deleted', 'diff.extended'),
1387 ('---', 'diff.file_a'),
1387 ('---', 'diff.file_a'),
1388 ('+++', 'diff.file_b'),
1388 ('+++', 'diff.file_b'),
1389 ('@@', 'diff.hunk'),
1389 ('@@', 'diff.hunk'),
1390 ('-', 'diff.deleted'),
1390 ('-', 'diff.deleted'),
1391 ('+', 'diff.inserted')]
1391 ('+', 'diff.inserted')]
1392
1392
1393 for chunk in func(*args, **kw):
1393 for chunk in func(*args, **kw):
1394 lines = chunk.split('\n')
1394 lines = chunk.split('\n')
1395 for i, line in enumerate(lines):
1395 for i, line in enumerate(lines):
1396 if i != 0:
1396 if i != 0:
1397 yield ('\n', '')
1397 yield ('\n', '')
1398 stripline = line
1398 stripline = line
1399 if line and line[0] in '+-':
1399 if line and line[0] in '+-':
1400 # highlight trailing whitespace, but only in changed lines
1400 # highlight trailing whitespace, but only in changed lines
1401 stripline = line.rstrip()
1401 stripline = line.rstrip()
1402 for prefix, label in prefixes:
1402 for prefix, label in prefixes:
1403 if stripline.startswith(prefix):
1403 if stripline.startswith(prefix):
1404 yield (stripline, label)
1404 yield (stripline, label)
1405 break
1405 break
1406 else:
1406 else:
1407 yield (line, '')
1407 yield (line, '')
1408 if line != stripline:
1408 if line != stripline:
1409 yield (line[len(stripline):], 'diff.trailingwhitespace')
1409 yield (line[len(stripline):], 'diff.trailingwhitespace')
1410
1410
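
difflabel colours output by matching each line against an ordered prefix table; the order matters, since '---' and '+++' have to win before the bare '-' and '+' entries do. A cut-down, runnable version of the same lookup (the sample lines are invented):

prefixes = [('diff', 'diff.diffline'), ('---', 'diff.file_a'),
            ('+++', 'diff.file_b'), ('@@', 'diff.hunk'),
            ('-', 'diff.deleted'), ('+', 'diff.inserted')]

def label(line):
    # first matching prefix wins, exactly as in difflabel() above
    for prefix, lab in prefixes:
        if line.startswith(prefix):
            return lab
    return ''

for line in ('diff -r 0 -r 1 a', '--- a/a', '+++ b/a', '@@ -1 +1 @@', '-x', '+y', ' ctx'):
    print(repr(line), '->', label(line))
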
1411 def diffui(*args, **kw):
1411 def diffui(*args, **kw):
1412 '''like diff(), but yields 2-tuples of (output, label) for ui.write()'''
1412 '''like diff(), but yields 2-tuples of (output, label) for ui.write()'''
1413 return difflabel(diff, *args, **kw)
1413 return difflabel(diff, *args, **kw)
1414
1414
1415
1415
1416 def _addmodehdr(header, omode, nmode):
1416 def _addmodehdr(header, omode, nmode):
1417 if omode != nmode:
1417 if omode != nmode:
1418 header.append('old mode %s\n' % omode)
1418 header.append('old mode %s\n' % omode)
1419 header.append('new mode %s\n' % nmode)
1419 header.append('new mode %s\n' % nmode)
1420
1420
1421 def trydiff(repo, revs, ctx1, ctx2, modified, added, removed,
1421 def trydiff(repo, revs, ctx1, ctx2, modified, added, removed,
1422 copy, getfilectx, opts, losedatafn, prefix):
1422 copy, getfilectx, opts, losedatafn, prefix):
1423
1423
1424 def join(f):
1424 def join(f):
1425 return os.path.join(prefix, f)
1425 return os.path.join(prefix, f)
1426
1426
1427 date1 = util.datestr(ctx1.date())
1427 date1 = util.datestr(ctx1.date())
1428 man1 = ctx1.manifest()
1428 man1 = ctx1.manifest()
1429
1429
1430 gone = set()
1430 gone = set()
1431 gitmode = {'l': '120000', 'x': '100755', '': '100644'}
1431 gitmode = {'l': '120000', 'x': '100755', '': '100644'}
1432
1432
1433 copyto = dict([(v, k) for k, v in copy.items()])
1433 copyto = dict([(v, k) for k, v in copy.items()])
1434
1434
1435 if opts.git:
1435 if opts.git:
1436 revs = None
1436 revs = None
1437
1437
1438 for f in sorted(modified + added + removed):
1438 for f in sorted(modified + added + removed):
1439 to = None
1439 to = None
1440 tn = None
1440 tn = None
1441 dodiff = True
1441 dodiff = True
1442 header = []
1442 header = []
1443 if f in man1:
1443 if f in man1:
1444 to = getfilectx(f, ctx1).data()
1444 to = getfilectx(f, ctx1).data()
1445 if f not in removed:
1445 if f not in removed:
1446 tn = getfilectx(f, ctx2).data()
1446 tn = getfilectx(f, ctx2).data()
1447 a, b = f, f
1447 a, b = f, f
1448 if opts.git or losedatafn:
1448 if opts.git or losedatafn:
1449 if f in added:
1449 if f in added:
1450 mode = gitmode[ctx2.flags(f)]
1450 mode = gitmode[ctx2.flags(f)]
1451 if f in copy or f in copyto:
1451 if f in copy or f in copyto:
1452 if opts.git:
1452 if opts.git:
1453 if f in copy:
1453 if f in copy:
1454 a = copy[f]
1454 a = copy[f]
1455 else:
1455 else:
1456 a = copyto[f]
1456 a = copyto[f]
1457 omode = gitmode[man1.flags(a)]
1457 omode = gitmode[man1.flags(a)]
1458 _addmodehdr(header, omode, mode)
1458 _addmodehdr(header, omode, mode)
1459 if a in removed and a not in gone:
1459 if a in removed and a not in gone:
1460 op = 'rename'
1460 op = 'rename'
1461 gone.add(a)
1461 gone.add(a)
1462 else:
1462 else:
1463 op = 'copy'
1463 op = 'copy'
1464 header.append('%s from %s\n' % (op, join(a)))
1464 header.append('%s from %s\n' % (op, join(a)))
1465 header.append('%s to %s\n' % (op, join(f)))
1465 header.append('%s to %s\n' % (op, join(f)))
1466 to = getfilectx(a, ctx1).data()
1466 to = getfilectx(a, ctx1).data()
1467 else:
1467 else:
1468 losedatafn(f)
1468 losedatafn(f)
1469 else:
1469 else:
1470 if opts.git:
1470 if opts.git:
1471 header.append('new file mode %s\n' % mode)
1471 header.append('new file mode %s\n' % mode)
1472 elif ctx2.flags(f):
1472 elif ctx2.flags(f):
1473 losedatafn(f)
1473 losedatafn(f)
1474 # In theory, if tn was copied or renamed we should check
1474 # In theory, if tn was copied or renamed we should check
1475 # if the source is binary too but the copy record already
1475 # if the source is binary too but the copy record already
1476 # forces git mode.
1476 # forces git mode.
1477 if util.binary(tn):
1477 if util.binary(tn):
1478 if opts.git:
1478 if opts.git:
1479 dodiff = 'binary'
1479 dodiff = 'binary'
1480 else:
1480 else:
1481 losedatafn(f)
1481 losedatafn(f)
1482 if not opts.git and not tn:
1482 if not opts.git and not tn:
1483 # regular diffs cannot represent new empty file
1483 # regular diffs cannot represent new empty file
1484 losedatafn(f)
1484 losedatafn(f)
1485 elif f in removed:
1485 elif f in removed:
1486 if opts.git:
1486 if opts.git:
1487 # have we already reported a copy above?
1487 # have we already reported a copy above?
1488 if ((f in copy and copy[f] in added
1488 if ((f in copy and copy[f] in added
1489 and copyto[copy[f]] == f) or
1489 and copyto[copy[f]] == f) or
1490 (f in copyto and copyto[f] in added
1490 (f in copyto and copyto[f] in added
1491 and copy[copyto[f]] == f)):
1491 and copy[copyto[f]] == f)):
1492 dodiff = False
1492 dodiff = False
1493 else:
1493 else:
1494 header.append('deleted file mode %s\n' %
1494 header.append('deleted file mode %s\n' %
1495 gitmode[man1.flags(f)])
1495 gitmode[man1.flags(f)])
1496 elif not to or util.binary(to):
1496 elif not to or util.binary(to):
1497 # regular diffs cannot represent empty file deletion
1497 # regular diffs cannot represent empty file deletion
1498 losedatafn(f)
1498 losedatafn(f)
1499 else:
1499 else:
1500 oflag = man1.flags(f)
1500 oflag = man1.flags(f)
1501 nflag = ctx2.flags(f)
1501 nflag = ctx2.flags(f)
1502 binary = util.binary(to) or util.binary(tn)
1502 binary = util.binary(to) or util.binary(tn)
1503 if opts.git:
1503 if opts.git:
1504 _addmodehdr(header, gitmode[oflag], gitmode[nflag])
1504 _addmodehdr(header, gitmode[oflag], gitmode[nflag])
1505 if binary:
1505 if binary:
1506 dodiff = 'binary'
1506 dodiff = 'binary'
1507 elif binary or nflag != oflag:
1507 elif binary or nflag != oflag:
1508 losedatafn(f)
1508 losedatafn(f)
1509 if opts.git:
1509 if opts.git:
1510 header.insert(0, mdiff.diffline(revs, join(a), join(b), opts))
1510 header.insert(0, mdiff.diffline(revs, join(a), join(b), opts))
1511
1511
1512 if dodiff:
1512 if dodiff:
1513 if dodiff == 'binary':
1513 if dodiff == 'binary':
1514 text = b85diff(to, tn)
1514 text = b85diff(to, tn)
1515 else:
1515 else:
1516 text = mdiff.unidiff(to, date1,
1516 text = mdiff.unidiff(to, date1,
1517 # ctx2 date may be dynamic
1517 # ctx2 date may be dynamic
1518 tn, util.datestr(ctx2.date()),
1518 tn, util.datestr(ctx2.date()),
1519 join(a), join(b), revs, opts=opts)
1519 join(a), join(b), revs, opts=opts)
1520 if header and (text or len(header) > 1):
1520 if header and (text or len(header) > 1):
1521 yield ''.join(header)
1521 yield ''.join(header)
1522 if text:
1522 if text:
1523 yield text
1523 yield text
1524
1524
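
The gitmode table used in trydiff maps Mercurial's flag characters to git mode strings: 'l' (symlink) to 120000, 'x' (executable) to 100755, and the empty flag to 100644; _addmodehdr then emits the old/new mode header only when the two differ. A small sketch of that header logic (the modehdr name is invented for this sketch):

gitmode = {'l': '120000', 'x': '100755', '': '100644'}

def modehdr(omode, nmode):
    # mirrors _addmodehdr(): emit old/new mode lines only when the mode changed
    if omode == nmode:
        return []
    return ['old mode %s\n' % omode, 'new mode %s\n' % nmode]

# a file gaining the executable bit between ctx1 and ctx2:
print(''.join(modehdr(gitmode[''], gitmode['x'])), end='')
# old mode 100644
# new mode 100755
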
1525 def diffstatdata(lines):
1525 def diffstatdata(lines):
1526 diffre = re.compile('^diff .*-r [a-z0-9]+\s(.*)$')
1526 diffre = re.compile('^diff .*-r [a-z0-9]+\s(.*)$')
1527
1527
1528 filename, adds, removes = None, 0, 0
1528 filename, adds, removes = None, 0, 0
1529 for line in lines:
1529 for line in lines:
1530 if line.startswith('diff'):
1530 if line.startswith('diff'):
1531 if filename:
1531 if filename:
1532 isbinary = adds == 0 and removes == 0
1532 isbinary = adds == 0 and removes == 0
1533 yield (filename, adds, removes, isbinary)
1533 yield (filename, adds, removes, isbinary)
1534 # set numbers to 0 anyway when starting new file
1534 # set numbers to 0 anyway when starting new file
1535 adds, removes = 0, 0
1535 adds, removes = 0, 0
1536 if line.startswith('diff --git'):
1536 if line.startswith('diff --git'):
1537 filename = gitre.search(line).group(1)
1537 filename = gitre.search(line).group(1)
1538 elif line.startswith('diff -r'):
1538 elif line.startswith('diff -r'):
1539 # format: "diff -r ... -r ... filename"
1539 # format: "diff -r ... -r ... filename"
1540 filename = diffre.search(line).group(1)
1540 filename = diffre.search(line).group(1)
1541 elif line.startswith('+') and not line.startswith('+++'):
1541 elif line.startswith('+') and not line.startswith('+++'):
1542 adds += 1
1542 adds += 1
1543 elif line.startswith('-') and not line.startswith('---'):
1543 elif line.startswith('-') and not line.startswith('---'):
1544 removes += 1
1544 removes += 1
1545 if filename:
1545 if filename:
1546 isbinary = adds == 0 and removes == 0
1546 isbinary = adds == 0 and removes == 0
1547 yield (filename, adds, removes, isbinary)
1547 yield (filename, adds, removes, isbinary)
1548
1548
1549 def diffstat(lines, width=80, git=False):
1549 def diffstat(lines, width=80, git=False):
1550 output = []
1550 output = []
1551 stats = list(diffstatdata(lines))
1551 stats = list(diffstatdata(lines))
1552
1552
1553 maxtotal, maxname = 0, 0
1553 maxtotal, maxname = 0, 0
1554 totaladds, totalremoves = 0, 0
1554 totaladds, totalremoves = 0, 0
1555 hasbinary = False
1555 hasbinary = False
1556
1556
1557 sized = [(filename, adds, removes, isbinary, encoding.colwidth(filename))
1557 sized = [(filename, adds, removes, isbinary, encoding.colwidth(filename))
1558 for filename, adds, removes, isbinary in stats]
1558 for filename, adds, removes, isbinary in stats]
1559
1559
1560 for filename, adds, removes, isbinary, namewidth in sized:
1560 for filename, adds, removes, isbinary, namewidth in sized:
1561 totaladds += adds
1561 totaladds += adds
1562 totalremoves += removes
1562 totalremoves += removes
1563 maxname = max(maxname, namewidth)
1563 maxname = max(maxname, namewidth)
1564 maxtotal = max(maxtotal, adds + removes)
1564 maxtotal = max(maxtotal, adds + removes)
1565 if isbinary:
1565 if isbinary:
1566 hasbinary = True
1566 hasbinary = True
1567
1567
1568 countwidth = len(str(maxtotal))
1568 countwidth = len(str(maxtotal))
1569 if hasbinary and countwidth < 3:
1569 if hasbinary and countwidth < 3:
1570 countwidth = 3
1570 countwidth = 3
1571 graphwidth = width - countwidth - maxname - 6
1571 graphwidth = width - countwidth - maxname - 6
1572 if graphwidth < 10:
1572 if graphwidth < 10:
1573 graphwidth = 10
1573 graphwidth = 10
1574
1574
1575 def scale(i):
1575 def scale(i):
1576 if maxtotal <= graphwidth:
1576 if maxtotal <= graphwidth:
1577 return i
1577 return i
1578 # If diffstat runs out of room it doesn't print anything,
1578 # If diffstat runs out of room it doesn't print anything,
1579 # which isn't very useful, so always print at least one + or -
1579 # which isn't very useful, so always print at least one + or -
1580 # if there were at least some changes.
1580 # if there were at least some changes.
1581 return max(i * graphwidth // maxtotal, int(bool(i)))
1581 return max(i * graphwidth // maxtotal, int(bool(i)))
1582
1582
1583 for filename, adds, removes, isbinary, namewidth in sized:
1583 for filename, adds, removes, isbinary, namewidth in sized:
1584 if git and isbinary:
1584 if git and isbinary:
1585 count = 'Bin'
1585 count = 'Bin'
1586 else:
1586 else:
1587 count = adds + removes
1587 count = adds + removes
1588 pluses = '+' * scale(adds)
1588 pluses = '+' * scale(adds)
1589 minuses = '-' * scale(removes)
1589 minuses = '-' * scale(removes)
1590 output.append(' %s%s | %*s %s%s\n' %
1590 output.append(' %s%s | %*s %s%s\n' %
1591 (filename, ' ' * (maxname - namewidth),
1591 (filename, ' ' * (maxname - namewidth),
1592 countwidth, count,
1592 countwidth, count,
1593 pluses, minuses))
1593 pluses, minuses))
1594
1594
1595 if stats:
1595 if stats:
1596 output.append(_(' %d files changed, %d insertions(+), %d deletions(-)\n')
1596 output.append(_(' %d files changed, %d insertions(+), %d deletions(-)\n')
1597 % (len(stats), totaladds, totalremoves))
1597 % (len(stats), totaladds, totalremoves))
1598
1598
1599 return ''.join(output)
1599 return ''.join(output)
1600
1600
1601 def diffstatui(*args, **kw):
1601 def diffstatui(*args, **kw):
1602 '''like diffstat(), but yields 2-tuples of (output, label) for
1602 '''like diffstat(), but yields 2-tuples of (output, label) for
1603 ui.write()
1603 ui.write()
1604 '''
1604 '''
1605
1605
1606 for line in diffstat(*args, **kw).splitlines():
1606 for line in diffstat(*args, **kw).splitlines():
1607 if line and line[-1] in '+-':
1607 if line and line[-1] in '+-':
1608 name, graph = line.rsplit(' ', 1)
1608 name, graph = line.rsplit(' ', 1)
1609 yield (name + ' ', '')
1609 yield (name + ' ', '')
1610 m = re.search(r'\++', graph)
1610 m = re.search(r'\++', graph)
1611 if m:
1611 if m:
1612 yield (m.group(0), 'diffstat.inserted')
1612 yield (m.group(0), 'diffstat.inserted')
1613 m = re.search(r'-+', graph)
1613 m = re.search(r'-+', graph)
1614 if m:
1614 if m:
1615 yield (m.group(0), 'diffstat.deleted')
1615 yield (m.group(0), 'diffstat.deleted')
1616 else:
1616 else:
1617 yield (line, '')
1617 yield (line, '')
1618 yield ('\n', '')
1618 yield ('\n', '')
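
diffstatdata walks the rendered diff once, counting '+' and '-' lines per file while skipping the '+++'/'---' file headers, and treats a file with zero counts as binary; diffstat then scales the +/- bar to the terminal width, always printing at least one character when anything changed. The counting rule applied to a tiny hand-written diff with placeholder revision ids (a standalone check, not the functions above):

sample = """\
diff -r 000000000000 -r 111111111111 foo.py
--- a/foo.py
+++ b/foo.py
@@ -1,2 +1,2 @@
-old line
+new line
 unchanged
""".splitlines()

adds = sum(1 for l in sample if l.startswith('+') and not l.startswith('+++'))
removes = sum(1 for l in sample if l.startswith('-') and not l.startswith('---'))
print(adds, removes)   # 1 1
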
@@ -1,331 +1,331 b''
1 # posix.py - Posix utility function implementations for Mercurial
1 # posix.py - Posix utility function implementations for Mercurial
2 #
2 #
3 # Copyright 2005-2009 Matt Mackall <mpm@selenic.com> and others
3 # Copyright 2005-2009 Matt Mackall <mpm@selenic.com> and others
4 #
4 #
5 # This software may be used and distributed according to the terms of the
5 # This software may be used and distributed according to the terms of the
6 # GNU General Public License version 2 or any later version.
6 # GNU General Public License version 2 or any later version.
7
7
8 from i18n import _
8 from i18n import _
9 import os, sys, errno, stat, getpass, pwd, grp, tempfile
9 import os, sys, errno, stat, getpass, pwd, grp, tempfile
10
10
11 posixfile = open
11 posixfile = open
12 nulldev = '/dev/null'
12 nulldev = '/dev/null'
13 normpath = os.path.normpath
13 normpath = os.path.normpath
14 samestat = os.path.samestat
14 samestat = os.path.samestat
15 os_link = os.link
15 os_link = os.link
16 unlink = os.unlink
16 unlink = os.unlink
17 rename = os.rename
17 rename = os.rename
18 expandglobs = False
18 expandglobs = False
19
19
20 umask = os.umask(0)
20 umask = os.umask(0)
21 os.umask(umask)
21 os.umask(umask)
22
22
23 def openhardlinks():
23 def openhardlinks():
24 '''return true if it is safe to hold open file handles to hardlinks'''
24 '''return true if it is safe to hold open file handles to hardlinks'''
25 return True
25 return True
26
26
27 def nlinks(name):
27 def nlinks(name):
28 '''return number of hardlinks for the given file'''
28 '''return number of hardlinks for the given file'''
29 return os.lstat(name).st_nlink
29 return os.lstat(name).st_nlink
30
30
31 def parsepatchoutput(output_line):
31 def parsepatchoutput(output_line):
32 """parses the output produced by patch and returns the filename"""
32 """parses the output produced by patch and returns the filename"""
33 pf = output_line[14:]
33 pf = output_line[14:]
34 if os.sys.platform == 'OpenVMS':
34 if os.sys.platform == 'OpenVMS':
35 if pf[0] == '`':
35 if pf[0] == '`':
36 pf = pf[1:-1] # Remove the quotes
36 pf = pf[1:-1] # Remove the quotes
37 else:
37 else:
38 if pf.startswith("'") and pf.endswith("'") and " " in pf:
38 if pf.startswith("'") and pf.endswith("'") and " " in pf:
39 pf = pf[1:-1] # Remove the quotes
39 pf = pf[1:-1] # Remove the quotes
40 return pf
40 return pf
41
41
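
parsepatchoutput assumes GNU patch's "patching file NAME" messages: the slice output_line[14:] drops exactly the fourteen characters of "patching file ", and the quotes are stripped only when the name was quoted because it contains a space. The same parsing on a couple of invented lines (the function name here is just for the sketch):

def parse_patch_output(output_line):
    # same idea as parsepatchoutput(): drop the "patching file " prefix
    pf = output_line[14:]
    # GNU patch quotes names containing spaces; strip the quotes in that case
    if pf.startswith("'") and pf.endswith("'") and " " in pf:
        pf = pf[1:-1]
    return pf

print(parse_patch_output("patching file mercurial/util.py"))   # mercurial/util.py
print(parse_patch_output("patching file 'has space.txt'"))     # has space.txt
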
42 def sshargs(sshcmd, host, user, port):
42 def sshargs(sshcmd, host, user, port):
43 '''Build argument list for ssh'''
43 '''Build argument list for ssh'''
44 args = user and ("%s@%s" % (user, host)) or host
44 args = user and ("%s@%s" % (user, host)) or host
45 return port and ("%s -p %s" % (args, port)) or args
45 return port and ("%s -p %s" % (args, port)) or args
46
46
47 def is_exec(f):
47 def is_exec(f):
48 """check whether a file is executable"""
48 """check whether a file is executable"""
49 return (os.lstat(f).st_mode & 0100 != 0)
49 return (os.lstat(f).st_mode & 0100 != 0)
50
50
51 def setflags(f, l, x):
51 def setflags(f, l, x):
52 s = os.lstat(f).st_mode
52 s = os.lstat(f).st_mode
53 if l:
53 if l:
54 if not stat.S_ISLNK(s):
54 if not stat.S_ISLNK(s):
55 # switch file to link
55 # switch file to link
56 fp = open(f)
56 fp = open(f)
57 data = fp.read()
57 data = fp.read()
58 fp.close()
58 fp.close()
59 os.unlink(f)
59 os.unlink(f)
60 try:
60 try:
61 os.symlink(data, f)
61 os.symlink(data, f)
62 except OSError:
62 except OSError:
63 # failed to make a link, rewrite file
63 # failed to make a link, rewrite file
64 fp = open(f, "w")
64 fp = open(f, "w")
65 fp.write(data)
65 fp.write(data)
66 fp.close()
66 fp.close()
67 # no chmod needed at this point
67 # no chmod needed at this point
68 return
68 return
69 if stat.S_ISLNK(s):
69 if stat.S_ISLNK(s):
70 # switch link to file
70 # switch link to file
71 data = os.readlink(f)
71 data = os.readlink(f)
72 os.unlink(f)
72 os.unlink(f)
73 fp = open(f, "w")
73 fp = open(f, "w")
74 fp.write(data)
74 fp.write(data)
75 fp.close()
75 fp.close()
76 s = 0666 & ~umask # avoid restatting for chmod
76 s = 0666 & ~umask # avoid restatting for chmod
77
77
78 sx = s & 0100
78 sx = s & 0100
79 if x and not sx:
79 if x and not sx:
80 # Turn on +x for every +r bit when making a file executable
80 # Turn on +x for every +r bit when making a file executable
81 # and obey umask.
81 # and obey umask.
82 os.chmod(f, s | (s & 0444) >> 2 & ~umask)
82 os.chmod(f, s | (s & 0444) >> 2 & ~umask)
83 elif not x and sx:
83 elif not x and sx:
84 # Turn off all +x bits
84 # Turn off all +x bits
85 os.chmod(f, s & 0666)
85 os.chmod(f, s & 0666)
86
86
87 def checkexec(path):
87 def checkexec(path):
88 """
88 """
89 Check whether the given path is on a filesystem with UNIX-like exec flags
89 Check whether the given path is on a filesystem with UNIX-like exec flags
90
90
91 Requires a directory (like /foo/.hg)
91 Requires a directory (like /foo/.hg)
92 """
92 """
93
93
94 # VFAT on some Linux versions can flip mode but it doesn't persist
94 # VFAT on some Linux versions can flip mode but it doesn't persist
95 # a FS remount. Frequently we can detect it if files are created
95 # a FS remount. Frequently we can detect it if files are created
96 # with exec bit on.
96 # with exec bit on.
97
97
98 try:
98 try:
99 EXECFLAGS = stat.S_IXUSR | stat.S_IXGRP | stat.S_IXOTH
99 EXECFLAGS = stat.S_IXUSR | stat.S_IXGRP | stat.S_IXOTH
100 fh, fn = tempfile.mkstemp(dir=path, prefix='hg-checkexec-')
100 fh, fn = tempfile.mkstemp(dir=path, prefix='hg-checkexec-')
101 try:
101 try:
102 os.close(fh)
102 os.close(fh)
103 m = os.stat(fn).st_mode & 0777
103 m = os.stat(fn).st_mode & 0777
104 new_file_has_exec = m & EXECFLAGS
104 new_file_has_exec = m & EXECFLAGS
105 os.chmod(fn, m ^ EXECFLAGS)
105 os.chmod(fn, m ^ EXECFLAGS)
106 exec_flags_cannot_flip = ((os.stat(fn).st_mode & 0777) == m)
106 exec_flags_cannot_flip = ((os.stat(fn).st_mode & 0777) == m)
107 finally:
107 finally:
108 os.unlink(fn)
108 os.unlink(fn)
109 except (IOError, OSError):
109 except (IOError, OSError):
110 # we don't care, the user probably won't be able to commit anyway
110 # we don't care, the user probably won't be able to commit anyway
111 return False
111 return False
112 return not (new_file_has_exec or exec_flags_cannot_flip)
112 return not (new_file_has_exec or exec_flags_cannot_flip)
113
113
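
checkexec decides whether the filesystem under .hg honours the exec bit: it creates a scratch file, flips the exec bits with chmod, and only reports support when new files start without the bit and the flip actually sticks. A simplified standalone probe along the same lines (error handling omitted; it probes whatever directory you pass, and the function name is made up):

import os, stat, tempfile

def exec_bit_supported(path):
    # flip the exec bits on a scratch file and see whether the change sticks
    execbits = stat.S_IXUSR | stat.S_IXGRP | stat.S_IXOTH
    fh, fn = tempfile.mkstemp(dir=path, prefix='exec-probe-')
    try:
        os.close(fh)
        mode = os.stat(fn).st_mode & 0o777
        os.chmod(fn, mode ^ execbits)
        return (os.stat(fn).st_mode & 0o777) != mode
    finally:
        os.unlink(fn)

print(exec_bit_supported('.'))   # True on ext4/APFS, typically False on a VFAT mount
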
114 def checklink(path):
114 def checklink(path):
115 """check whether the given path is on a symlink-capable filesystem"""
115 """check whether the given path is on a symlink-capable filesystem"""
116 # mktemp is not racy because symlink creation will fail if the
116 # mktemp is not racy because symlink creation will fail if the
117 # file already exists
117 # file already exists
118 name = tempfile.mktemp(dir=path, prefix='hg-checklink-')
118 name = tempfile.mktemp(dir=path, prefix='hg-checklink-')
119 try:
119 try:
120 os.symlink(".", name)
120 os.symlink(".", name)
121 os.unlink(name)
121 os.unlink(name)
122 return True
122 return True
123 except (OSError, AttributeError):
123 except (OSError, AttributeError):
124 return False
124 return False
125
125
126 def checkosfilename(path):
126 def checkosfilename(path):
127 '''Check that the base-relative path is a valid filename on this platform.
127 '''Check that the base-relative path is a valid filename on this platform.
128 Returns None if the path is ok, or a UI string describing the problem.'''
128 Returns None if the path is ok, or a UI string describing the problem.'''
129 pass # on posix platforms, every path is ok
129 pass # on posix platforms, every path is ok
130
130
131 def setbinary(fd):
131 def setbinary(fd):
132 pass
132 pass
133
133
134 def pconvert(path):
134 def pconvert(path):
135 return path
135 return path
136
136
137 def localpath(path):
137 def localpath(path):
138 return path
138 return path
139
139
140 def samefile(fpath1, fpath2):
140 def samefile(fpath1, fpath2):
141 """Returns whether path1 and path2 refer to the same file. This is only
141 """Returns whether path1 and path2 refer to the same file. This is only
142 guaranteed to work for files, not directories."""
142 guaranteed to work for files, not directories."""
143 return os.path.samefile(fpath1, fpath2)
143 return os.path.samefile(fpath1, fpath2)
144
144
145 def samedevice(fpath1, fpath2):
145 def samedevice(fpath1, fpath2):
146 """Returns whether fpath1 and fpath2 are on the same device. This is only
146 """Returns whether fpath1 and fpath2 are on the same device. This is only
147 guaranteed to work for files, not directories."""
147 guaranteed to work for files, not directories."""
148 st1 = os.lstat(fpath1)
148 st1 = os.lstat(fpath1)
149 st2 = os.lstat(fpath2)
149 st2 = os.lstat(fpath2)
150 return st1.st_dev == st2.st_dev
150 return st1.st_dev == st2.st_dev
151
151
152 if sys.platform == 'darwin':
152 if sys.platform == 'darwin':
153 import fcntl # only needed on darwin, missing on jython
153 import fcntl # only needed on darwin, missing on jython
154 def realpath(path):
154 def realpath(path):
155 '''
155 '''
156 Returns the true, canonical file system path equivalent to the given
156 Returns the true, canonical file system path equivalent to the given
157 path.
157 path.
158
158
159 Equivalent means, in this case, resulting in the same, unique
159 Equivalent means, in this case, resulting in the same, unique
160 file system link to the path. Every file system entry, whether a file,
160 file system link to the path. Every file system entry, whether a file,
161 directory, hard link or symbolic link or special, will have a single
161 directory, hard link or symbolic link or special, will have a single
162 path preferred by the system, but may allow multiple, differing path
162 path preferred by the system, but may allow multiple, differing path
163 lookups to point to it.
163 lookups to point to it.
164
164
165 Most regular UNIX file systems only allow a file system entry to be
165 Most regular UNIX file systems only allow a file system entry to be
166 looked up by its distinct path. Obviously, this does not apply to case
166 looked up by its distinct path. Obviously, this does not apply to case
167 insensitive file systems, whether case preserving or not. The most
167 insensitive file systems, whether case preserving or not. The most
168 complex issue to deal with is file systems transparently reencoding the
168 complex issue to deal with is file systems transparently reencoding the
169 path, such as the non-standard Unicode normalisation required for HFS+
169 path, such as the non-standard Unicode normalisation required for HFS+
170 and HFSX.
170 and HFSX.
171 '''
171 '''
172 # Constants copied from /usr/include/sys/fcntl.h
172 # Constants copied from /usr/include/sys/fcntl.h
173 F_GETPATH = 50
173 F_GETPATH = 50
174 O_SYMLINK = 0x200000
174 O_SYMLINK = 0x200000
175
175
176 try:
176 try:
177 fd = os.open(path, O_SYMLINK)
177 fd = os.open(path, O_SYMLINK)
178 except OSError, err:
178 except OSError, err:
179 if err.errno == errno.ENOENT:
179 if err.errno == errno.ENOENT:
180 return path
180 return path
181 raise
181 raise
182
182
183 try:
183 try:
184 return fcntl.fcntl(fd, F_GETPATH, '\0' * 1024).rstrip('\0')
184 return fcntl.fcntl(fd, F_GETPATH, '\0' * 1024).rstrip('\0')
185 finally:
185 finally:
186 os.close(fd)
186 os.close(fd)
187 else:
187 else:
188 # Fallback to the likely inadequate Python builtin function.
188 # Fallback to the likely inadequate Python builtin function.
189 realpath = os.path.realpath
189 realpath = os.path.realpath
190
190
191 def shellquote(s):
191 def shellquote(s):
192 if os.sys.platform == 'OpenVMS':
192 if os.sys.platform == 'OpenVMS':
193 return '"%s"' % s
193 return '"%s"' % s
194 else:
194 else:
195 return "'%s'" % s.replace("'", "'\\''")
195 return "'%s'" % s.replace("'", "'\\''")
196
196
197 def quotecommand(cmd):
197 def quotecommand(cmd):
198 return cmd
198 return cmd
199
199
200 def popen(command, mode='r'):
200 def popen(command, mode='r'):
201 return os.popen(command, mode)
201 return os.popen(command, mode)
202
202
203 def testpid(pid):
203 def testpid(pid):
204 '''return False if pid dead, True if running or not sure'''
204 '''return False if pid dead, True if running or not sure'''
205 if os.sys.platform == 'OpenVMS':
205 if os.sys.platform == 'OpenVMS':
206 return True
206 return True
207 try:
207 try:
208 os.kill(pid, 0)
208 os.kill(pid, 0)
209 return True
209 return True
210 except OSError, inst:
210 except OSError, inst:
211 return inst.errno != errno.ESRCH
211 return inst.errno != errno.ESRCH
212
212
213 def explain_exit(code):
213 def explainexit(code):
214 """return a 2-tuple (desc, code) describing a subprocess status
214 """return a 2-tuple (desc, code) describing a subprocess status
215 (codes from kill are negative - not os.system/wait encoding)"""
215 (codes from kill are negative - not os.system/wait encoding)"""
216 if code >= 0:
216 if code >= 0:
217 return _("exited with status %d") % code, code
217 return _("exited with status %d") % code, code
218 return _("killed by signal %d") % -code, -code
218 return _("killed by signal %d") % -code, -code
219
219
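
explainexit (the new name this changeset introduces) follows the convention that non-negative values are plain exit statuses and negative values carry the number of the signal that killed the child -- the subprocess returncode convention on POSIX, not the raw os.wait encoding. A standalone rendering of the same rule (the explain name is local to this sketch):

def explain(code):
    # mirrors the logic of explainexit() above
    if code >= 0:
        return "exited with status %d" % code, code
    return "killed by signal %d" % -code, -code

print(explain(0))    # ('exited with status 0', 0)
print(explain(1))    # ('exited with status 1', 1)
print(explain(-9))   # ('killed by signal 9', 9)
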
220 def isowner(st):
220 def isowner(st):
221 """Return True if the stat object st is from the current user."""
221 """Return True if the stat object st is from the current user."""
222 return st.st_uid == os.getuid()
222 return st.st_uid == os.getuid()
223
223
224 def find_exe(command):
224 def find_exe(command):
225 '''Find executable for command, searching like which does.
225 '''Find executable for command, searching like which does.
226 If command is a basename then PATH is searched for command.
226 If command is a basename then PATH is searched for command.
227 PATH isn't searched if command is an absolute or relative path.
227 PATH isn't searched if command is an absolute or relative path.
228 If command isn't found None is returned.'''
228 If command isn't found None is returned.'''
229 if sys.platform == 'OpenVMS':
229 if sys.platform == 'OpenVMS':
230 return command
230 return command
231
231
232 def findexisting(executable):
232 def findexisting(executable):
233 'Will return executable if existing file'
233 'Will return executable if existing file'
234 if os.path.exists(executable):
234 if os.path.exists(executable):
235 return executable
235 return executable
236 return None
236 return None
237
237
238 if os.sep in command:
238 if os.sep in command:
239 return findexisting(command)
239 return findexisting(command)
240
240
241 for path in os.environ.get('PATH', '').split(os.pathsep):
241 for path in os.environ.get('PATH', '').split(os.pathsep):
242 executable = findexisting(os.path.join(path, command))
242 executable = findexisting(os.path.join(path, command))
243 if executable is not None:
243 if executable is not None:
244 return executable
244 return executable
245 return None
245 return None
246
246
247 def set_signal_handler():
247 def set_signal_handler():
248 pass
248 pass
249
249
250 def statfiles(files):
250 def statfiles(files):
251 'Stat each file in files and yield stat or None if file does not exist.'
251 'Stat each file in files and yield stat or None if file does not exist.'
252 lstat = os.lstat
252 lstat = os.lstat
253 for nf in files:
253 for nf in files:
254 try:
254 try:
255 st = lstat(nf)
255 st = lstat(nf)
256 except OSError, err:
256 except OSError, err:
257 if err.errno not in (errno.ENOENT, errno.ENOTDIR):
257 if err.errno not in (errno.ENOENT, errno.ENOTDIR):
258 raise
258 raise
259 st = None
259 st = None
260 yield st
260 yield st
261
261
262 def getuser():
262 def getuser():
263 '''return name of current user'''
263 '''return name of current user'''
264 return getpass.getuser()
264 return getpass.getuser()
265
265
266 def expand_glob(pats):
266 def expand_glob(pats):
267 '''On Windows, expand the implicit globs in a list of patterns'''
267 '''On Windows, expand the implicit globs in a list of patterns'''
268 return list(pats)
268 return list(pats)
269
269
270 def username(uid=None):
270 def username(uid=None):
271 """Return the name of the user with the given uid.
271 """Return the name of the user with the given uid.
272
272
273 If uid is None, return the name of the current user."""
273 If uid is None, return the name of the current user."""
274
274
275 if uid is None:
275 if uid is None:
276 uid = os.getuid()
276 uid = os.getuid()
277 try:
277 try:
278 return pwd.getpwuid(uid)[0]
278 return pwd.getpwuid(uid)[0]
279 except KeyError:
279 except KeyError:
280 return str(uid)
280 return str(uid)
281
281
282 def groupname(gid=None):
282 def groupname(gid=None):
283 """Return the name of the group with the given gid.
283 """Return the name of the group with the given gid.
284
284
285 If gid is None, return the name of the current group."""
285 If gid is None, return the name of the current group."""
286
286
287 if gid is None:
287 if gid is None:
288 gid = os.getgid()
288 gid = os.getgid()
289 try:
289 try:
290 return grp.getgrgid(gid)[0]
290 return grp.getgrgid(gid)[0]
291 except KeyError:
291 except KeyError:
292 return str(gid)
292 return str(gid)
293
293
294 def groupmembers(name):
294 def groupmembers(name):
295 """Return the list of members of the group with the given
295 """Return the list of members of the group with the given
296 name, KeyError if the group does not exist.
296 name, KeyError if the group does not exist.
297 """
297 """
298 return list(grp.getgrnam(name).gr_mem)
298 return list(grp.getgrnam(name).gr_mem)
299
299
300 def spawndetached(args):
300 def spawndetached(args):
301 return os.spawnvp(os.P_NOWAIT | getattr(os, 'P_DETACH', 0),
301 return os.spawnvp(os.P_NOWAIT | getattr(os, 'P_DETACH', 0),
302 args[0], args)
302 args[0], args)
303
303
304 def gethgcmd():
304 def gethgcmd():
305 return sys.argv[:1]
305 return sys.argv[:1]
306
306
307 def termwidth():
307 def termwidth():
308 try:
308 try:
309 import termios, array, fcntl
309 import termios, array, fcntl
310 for dev in (sys.stderr, sys.stdout, sys.stdin):
310 for dev in (sys.stderr, sys.stdout, sys.stdin):
311 try:
311 try:
312 try:
312 try:
313 fd = dev.fileno()
313 fd = dev.fileno()
314 except AttributeError:
314 except AttributeError:
315 continue
315 continue
316 if not os.isatty(fd):
316 if not os.isatty(fd):
317 continue
317 continue
318 arri = fcntl.ioctl(fd, termios.TIOCGWINSZ, '\0' * 8)
318 arri = fcntl.ioctl(fd, termios.TIOCGWINSZ, '\0' * 8)
319 width = array.array('h', arri)[1]
319 width = array.array('h', arri)[1]
320 if width > 0:
320 if width > 0:
321 return width
321 return width
322 except ValueError:
322 except ValueError:
323 pass
323 pass
324 except IOError, e:
324 except IOError, e:
325 if e[0] == errno.EINVAL:
325 if e[0] == errno.EINVAL:
326 pass
326 pass
327 else:
327 else:
328 raise
328 raise
329 except ImportError:
329 except ImportError:
330 pass
330 pass
331 return 80
331 return 80
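
termwidth queries the terminal with the TIOCGWINSZ ioctl: the 8-byte buffer holds four shorts (rows, columns, x pixels, y pixels), index [1] is the column count, and 80 is the fallback when no attached stream is a tty. A standalone version of the same query (POSIX only; on Python 3.3+ shutil.get_terminal_size is the easier route):

import array, fcntl, os, sys, termios

def terminal_width(default=80):
    for stream in (sys.stderr, sys.stdout, sys.stdin):
        try:
            fd = stream.fileno()
            if not os.isatty(fd):
                continue
            # TIOCGWINSZ fills a struct winsize: rows, cols, xpixel, ypixel
            buf = fcntl.ioctl(fd, termios.TIOCGWINSZ, b'\0' * 8)
            cols = array.array('h', buf)[1]
            if cols > 0:
                return cols
        except (OSError, ValueError, AttributeError):
            continue
    return default

print(terminal_width())
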
@@ -1,1590 +1,1590 b''
1 # util.py - Mercurial utility functions and platform specific implementations
1 # util.py - Mercurial utility functions and platform specific implementations
2 #
2 #
3 # Copyright 2005 K. Thananchayan <thananck@yahoo.com>
3 # Copyright 2005 K. Thananchayan <thananck@yahoo.com>
4 # Copyright 2005-2007 Matt Mackall <mpm@selenic.com>
4 # Copyright 2005-2007 Matt Mackall <mpm@selenic.com>
5 # Copyright 2006 Vadim Gelfer <vadim.gelfer@gmail.com>
5 # Copyright 2006 Vadim Gelfer <vadim.gelfer@gmail.com>
6 #
6 #
7 # This software may be used and distributed according to the terms of the
7 # This software may be used and distributed according to the terms of the
8 # GNU General Public License version 2 or any later version.
8 # GNU General Public License version 2 or any later version.
9
9
10 """Mercurial utility functions and platform specfic implementations.
10 """Mercurial utility functions and platform specfic implementations.
11
11
12 This contains helper routines that are independent of the SCM core and
12 This contains helper routines that are independent of the SCM core and
13 hide platform-specific details from the core.
13 hide platform-specific details from the core.
14 """
14 """
15
15
16 from i18n import _
16 from i18n import _
17 import error, osutil, encoding
17 import error, osutil, encoding
18 import errno, re, shutil, sys, tempfile, traceback
18 import errno, re, shutil, sys, tempfile, traceback
19 import os, time, calendar, textwrap, unicodedata, signal
19 import os, time, calendar, textwrap, unicodedata, signal
20 import imp, socket, urllib
20 import imp, socket, urllib
21
21
22 # Python compatibility
22 # Python compatibility
23
23
24 def sha1(s):
24 def sha1(s):
25 return _fastsha1(s)
25 return _fastsha1(s)
26
26
27 def _fastsha1(s):
27 def _fastsha1(s):
28 # This function will import sha1 from hashlib or sha (whichever is
28 # This function will import sha1 from hashlib or sha (whichever is
29 # available) and overwrite itself with it on the first call.
29 # available) and overwrite itself with it on the first call.
30 # Subsequent calls will go directly to the imported function.
30 # Subsequent calls will go directly to the imported function.
31 if sys.version_info >= (2, 5):
31 if sys.version_info >= (2, 5):
32 from hashlib import sha1 as _sha1
32 from hashlib import sha1 as _sha1
33 else:
33 else:
34 from sha import sha as _sha1
34 from sha import sha as _sha1
35 global _fastsha1, sha1
35 global _fastsha1, sha1
36 _fastsha1 = sha1 = _sha1
36 _fastsha1 = sha1 = _sha1
37 return _sha1(s)
37 return _sha1(s)
38
38
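
_fastsha1 relies on a "replace yourself on first call" pattern: the first call decides which implementation to use and rebinds the module-level names, so every later call goes straight to the chosen function with no version check. The pattern in isolation (hashlib stands in for the conditional import; the name _fast_op is invented):

def _fast_op(x):
    # first call: do the one-time setup, then rebind the global name
    import hashlib            # stand-in for the expensive/conditional import
    impl = hashlib.sha1
    global _fast_op
    _fast_op = impl           # subsequent calls go straight to the real function
    return impl(x)

print(_fast_op(b"abc").hexdigest())   # setup happens on this call
print(_fast_op(b"abc").hexdigest())   # this call bypasses the wrapper entirely
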
39 import __builtin__
39 import __builtin__
40
40
41 if sys.version_info[0] < 3:
41 if sys.version_info[0] < 3:
42 def fakebuffer(sliceable, offset=0):
42 def fakebuffer(sliceable, offset=0):
43 return sliceable[offset:]
43 return sliceable[offset:]
44 else:
44 else:
45 def fakebuffer(sliceable, offset=0):
45 def fakebuffer(sliceable, offset=0):
46 return memoryview(sliceable)[offset:]
46 return memoryview(sliceable)[offset:]
47 try:
47 try:
48 buffer
48 buffer
49 except NameError:
49 except NameError:
50 __builtin__.buffer = fakebuffer
50 __builtin__.buffer = fakebuffer
51
51
52 import subprocess
52 import subprocess
53 closefds = os.name == 'posix'
53 closefds = os.name == 'posix'
54
54
55 def popen2(cmd, env=None, newlines=False):
55 def popen2(cmd, env=None, newlines=False):
56 # Setting bufsize to -1 lets the system decide the buffer size.
56 # Setting bufsize to -1 lets the system decide the buffer size.
57 # The default for bufsize is 0, meaning unbuffered. This leads to
57 # The default for bufsize is 0, meaning unbuffered. This leads to
58 # poor performance on Mac OS X: http://bugs.python.org/issue4194
58 # poor performance on Mac OS X: http://bugs.python.org/issue4194
59 p = subprocess.Popen(cmd, shell=True, bufsize=-1,
59 p = subprocess.Popen(cmd, shell=True, bufsize=-1,
60 close_fds=closefds,
60 close_fds=closefds,
61 stdin=subprocess.PIPE, stdout=subprocess.PIPE,
61 stdin=subprocess.PIPE, stdout=subprocess.PIPE,
62 universal_newlines=newlines,
62 universal_newlines=newlines,
63 env=env)
63 env=env)
64 return p.stdin, p.stdout
64 return p.stdin, p.stdout
65
65
66 def popen3(cmd, env=None, newlines=False):
66 def popen3(cmd, env=None, newlines=False):
67 p = subprocess.Popen(cmd, shell=True, bufsize=-1,
67 p = subprocess.Popen(cmd, shell=True, bufsize=-1,
68 close_fds=closefds,
68 close_fds=closefds,
69 stdin=subprocess.PIPE, stdout=subprocess.PIPE,
69 stdin=subprocess.PIPE, stdout=subprocess.PIPE,
70 stderr=subprocess.PIPE,
70 stderr=subprocess.PIPE,
71 universal_newlines=newlines,
71 universal_newlines=newlines,
72 env=env)
72 env=env)
73 return p.stdin, p.stdout, p.stderr
73 return p.stdin, p.stdout, p.stderr
74
74
75 def version():
75 def version():
76 """Return version information if available."""
76 """Return version information if available."""
77 try:
77 try:
78 import __version__
78 import __version__
79 return __version__.version
79 return __version__.version
80 except ImportError:
80 except ImportError:
81 return 'unknown'
81 return 'unknown'
82
82
83 # used by parsedate
83 # used by parsedate
84 defaultdateformats = (
84 defaultdateformats = (
85 '%Y-%m-%d %H:%M:%S',
85 '%Y-%m-%d %H:%M:%S',
86 '%Y-%m-%d %I:%M:%S%p',
86 '%Y-%m-%d %I:%M:%S%p',
87 '%Y-%m-%d %H:%M',
87 '%Y-%m-%d %H:%M',
88 '%Y-%m-%d %I:%M%p',
88 '%Y-%m-%d %I:%M%p',
89 '%Y-%m-%d',
89 '%Y-%m-%d',
90 '%m-%d',
90 '%m-%d',
91 '%m/%d',
91 '%m/%d',
92 '%m/%d/%y',
92 '%m/%d/%y',
93 '%m/%d/%Y',
93 '%m/%d/%Y',
94 '%a %b %d %H:%M:%S %Y',
94 '%a %b %d %H:%M:%S %Y',
95 '%a %b %d %I:%M:%S%p %Y',
95 '%a %b %d %I:%M:%S%p %Y',
96 '%a, %d %b %Y %H:%M:%S', # GNU coreutils "/bin/date --rfc-2822"
96 '%a, %d %b %Y %H:%M:%S', # GNU coreutils "/bin/date --rfc-2822"
97 '%b %d %H:%M:%S %Y',
97 '%b %d %H:%M:%S %Y',
98 '%b %d %I:%M:%S%p %Y',
98 '%b %d %I:%M:%S%p %Y',
99 '%b %d %H:%M:%S',
99 '%b %d %H:%M:%S',
100 '%b %d %I:%M:%S%p',
100 '%b %d %I:%M:%S%p',
101 '%b %d %H:%M',
101 '%b %d %H:%M',
102 '%b %d %I:%M%p',
102 '%b %d %I:%M%p',
103 '%b %d %Y',
103 '%b %d %Y',
104 '%b %d',
104 '%b %d',
105 '%H:%M:%S',
105 '%H:%M:%S',
106 '%I:%M:%S%p',
106 '%I:%M:%S%p',
107 '%H:%M',
107 '%H:%M',
108 '%I:%M%p',
108 '%I:%M%p',
109 )
109 )
110
110
111 extendeddateformats = defaultdateformats + (
111 extendeddateformats = defaultdateformats + (
112 "%Y",
112 "%Y",
113 "%Y-%m",
113 "%Y-%m",
114 "%b",
114 "%b",
115 "%b %Y",
115 "%b %Y",
116 )
116 )
117
117
118 def cachefunc(func):
118 def cachefunc(func):
119 '''cache the result of function calls'''
119 '''cache the result of function calls'''
120 # XXX doesn't handle keyword args
120 # XXX doesn't handle keyword args
121 cache = {}
121 cache = {}
122 if func.func_code.co_argcount == 1:
122 if func.func_code.co_argcount == 1:
123 # we gain a small amount of time because
123 # we gain a small amount of time because
124 # we don't need to pack/unpack the list
124 # we don't need to pack/unpack the list
125 def f(arg):
125 def f(arg):
126 if arg not in cache:
126 if arg not in cache:
127 cache[arg] = func(arg)
127 cache[arg] = func(arg)
128 return cache[arg]
128 return cache[arg]
129 else:
129 else:
130 def f(*args):
130 def f(*args):
131 if args not in cache:
131 if args not in cache:
132 cache[args] = func(*args)
132 cache[args] = func(*args)
133 return cache[args]
133 return cache[args]
134
134
135 return f
135 return f
136
136
137 def lrucachefunc(func):
137 def lrucachefunc(func):
138 '''cache most recent results of function calls'''
138 '''cache most recent results of function calls'''
139 cache = {}
139 cache = {}
140 order = []
140 order = []
141 if func.func_code.co_argcount == 1:
141 if func.func_code.co_argcount == 1:
142 def f(arg):
142 def f(arg):
143 if arg not in cache:
143 if arg not in cache:
144 if len(cache) > 20:
144 if len(cache) > 20:
145 del cache[order.pop(0)]
145 del cache[order.pop(0)]
146 cache[arg] = func(arg)
146 cache[arg] = func(arg)
147 else:
147 else:
148 order.remove(arg)
148 order.remove(arg)
149 order.append(arg)
149 order.append(arg)
150 return cache[arg]
150 return cache[arg]
151 else:
151 else:
152 def f(*args):
152 def f(*args):
153 if args not in cache:
153 if args not in cache:
154 if len(cache) > 20:
154 if len(cache) > 20:
155 del cache[order.pop(0)]
155 del cache[order.pop(0)]
156 cache[args] = func(*args)
156 cache[args] = func(*args)
157 else:
157 else:
158 order.remove(args)
158 order.remove(args)
159 order.append(args)
159 order.append(args)
160 return cache[args]
160 return cache[args]
161
161
162 return f
162 return f
163
163
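
cachefunc memoizes forever, while lrucachefunc keeps only the 20 most recently used argument tuples -- the same bound the filelog cache in patch.py's diff() uses. The behaviour is what functools.lru_cache provides out of the box, so a quick demonstration with the stdlib decorator (the counter only shows how many real calls happen):

from functools import lru_cache

calls = {"n": 0}

@lru_cache(maxsize=20)      # plays the role of lrucachefunc's 20-entry cache
def square(x):
    calls["n"] += 1
    return x * x

for x in (2, 3, 2, 2, 3):
    square(x)
print(calls["n"])            # 2 -- repeated arguments never reach square()
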
164 class propertycache(object):
164 class propertycache(object):
165 def __init__(self, func):
165 def __init__(self, func):
166 self.func = func
166 self.func = func
167 self.name = func.__name__
167 self.name = func.__name__
168 def __get__(self, obj, type=None):
168 def __get__(self, obj, type=None):
169 result = self.func(obj)
169 result = self.func(obj)
170 setattr(obj, self.name, result)
170 setattr(obj, self.name, result)
171 return result
171 return result
172
172
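
propertycache is a non-data descriptor: the first attribute access runs the function and then stores the result on the instance under the same name, so the instance attribute shadows the descriptor and later reads are plain dictionary lookups (functools.cached_property in Python 3.8+ works the same way). Demonstrated with a copy of the class under an invented name:

class cachedprop(object):
    # same shape as propertycache above
    def __init__(self, func):
        self.func = func
        self.name = func.__name__
    def __get__(self, obj, type=None):
        result = self.func(obj)
        setattr(obj, self.name, result)   # shadow the descriptor on the instance
        return result

class Repo(object):
    @cachedprop
    def expensive(self):
        print("computing...")
        return 42

r = Repo()
print(r.expensive)   # prints "computing..." then 42
print(r.expensive)   # 42 only -- the instance attribute now shadows the descriptor
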
173 def pipefilter(s, cmd):
173 def pipefilter(s, cmd):
174 '''filter string S through command CMD, returning its output'''
174 '''filter string S through command CMD, returning its output'''
175 p = subprocess.Popen(cmd, shell=True, close_fds=closefds,
175 p = subprocess.Popen(cmd, shell=True, close_fds=closefds,
176 stdin=subprocess.PIPE, stdout=subprocess.PIPE)
176 stdin=subprocess.PIPE, stdout=subprocess.PIPE)
177 pout, perr = p.communicate(s)
177 pout, perr = p.communicate(s)
178 return pout
178 return pout
179
179
180 def tempfilter(s, cmd):
180 def tempfilter(s, cmd):
181 '''filter string S through a pair of temporary files with CMD.
181 '''filter string S through a pair of temporary files with CMD.
182 CMD is used as a template to create the real command to be run,
182 CMD is used as a template to create the real command to be run,
183 with the strings INFILE and OUTFILE replaced by the real names of
183 with the strings INFILE and OUTFILE replaced by the real names of
184 the temporary files generated.'''
184 the temporary files generated.'''
185 inname, outname = None, None
185 inname, outname = None, None
186 try:
186 try:
187 infd, inname = tempfile.mkstemp(prefix='hg-filter-in-')
187 infd, inname = tempfile.mkstemp(prefix='hg-filter-in-')
188 fp = os.fdopen(infd, 'wb')
188 fp = os.fdopen(infd, 'wb')
189 fp.write(s)
189 fp.write(s)
190 fp.close()
190 fp.close()
191 outfd, outname = tempfile.mkstemp(prefix='hg-filter-out-')
191 outfd, outname = tempfile.mkstemp(prefix='hg-filter-out-')
192 os.close(outfd)
192 os.close(outfd)
193 cmd = cmd.replace('INFILE', inname)
193 cmd = cmd.replace('INFILE', inname)
194 cmd = cmd.replace('OUTFILE', outname)
194 cmd = cmd.replace('OUTFILE', outname)
195 code = os.system(cmd)
195 code = os.system(cmd)
196 if sys.platform == 'OpenVMS' and code & 1:
196 if sys.platform == 'OpenVMS' and code & 1:
197 code = 0
197 code = 0
198 if code:
198 if code:
199 raise Abort(_("command '%s' failed: %s") %
199 raise Abort(_("command '%s' failed: %s") %
200 (cmd, explain_exit(code)))
200 (cmd, explainexit(code)))
201 fp = open(outname, 'rb')
201 fp = open(outname, 'rb')
202 r = fp.read()
202 r = fp.read()
203 fp.close()
203 fp.close()
204 return r
204 return r
205 finally:
205 finally:
206 try:
206 try:
207 if inname:
207 if inname:
208 os.unlink(inname)
208 os.unlink(inname)
209 except OSError:
209 except OSError:
210 pass
210 pass
211 try:
211 try:
212 if outname:
212 if outname:
213 os.unlink(outname)
213 os.unlink(outname)
214 except OSError:
214 except OSError:
215 pass
215 pass
216
216
217 filtertable = {
217 filtertable = {
218 'tempfile:': tempfilter,
218 'tempfile:': tempfilter,
219 'pipe:': pipefilter,
219 'pipe:': pipefilter,
220 }
220 }
221
221
222 def filter(s, cmd):
222 def filter(s, cmd):
223 "filter a string through a command that transforms its input to its output"
223 "filter a string through a command that transforms its input to its output"
224 for name, fn in filtertable.iteritems():
224 for name, fn in filtertable.iteritems():
225 if cmd.startswith(name):
225 if cmd.startswith(name):
226 return fn(s, cmd[len(name):].lstrip())
226 return fn(s, cmd[len(name):].lstrip())
227 return pipefilter(s, cmd)
227 return pipefilter(s, cmd)
228
228
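
filter() dispatches on a prefix: 'tempfile:' runs the command with INFILE and OUTFILE substituted, for tools that cannot read a pipe, while 'pipe:' (and anything without a known prefix) streams the data through the command's stdin and stdout. A minimal pipe-style filter with subprocess (assumes a POSIX shell with tr on PATH):

import subprocess

def pipe_filter(data, cmd):
    # stream `data` through a shell command, like pipefilter() above
    p = subprocess.Popen(cmd, shell=True,
                         stdin=subprocess.PIPE, stdout=subprocess.PIPE)
    out, _ = p.communicate(data)
    return out

print(pipe_filter(b"mercurial\n", "tr a-z A-Z"))   # b'MERCURIAL\n'
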
229 def binary(s):
229 def binary(s):
230 """return true if a string is binary data"""
230 """return true if a string is binary data"""
231 return bool(s and '\0' in s)
231 return bool(s and '\0' in s)
232
232
233 def increasingchunks(source, min=1024, max=65536):
233 def increasingchunks(source, min=1024, max=65536):
234 '''return no less than min bytes per chunk while data remains,
234 '''return no less than min bytes per chunk while data remains,
235 doubling min after each chunk until it reaches max'''
235 doubling min after each chunk until it reaches max'''
236 def log2(x):
236 def log2(x):
237 if not x:
237 if not x:
238 return 0
238 return 0
239 i = 0
239 i = 0
240 while x:
240 while x:
241 x >>= 1
241 x >>= 1
242 i += 1
242 i += 1
243 return i - 1
243 return i - 1
244
244
245 buf = []
245 buf = []
246 blen = 0
246 blen = 0
247 for chunk in source:
247 for chunk in source:
248 buf.append(chunk)
248 buf.append(chunk)
249 blen += len(chunk)
249 blen += len(chunk)
250 if blen >= min:
250 if blen >= min:
251 if min < max:
251 if min < max:
252 min = min << 1
252 min = min << 1
253 nmin = 1 << log2(blen)
253 nmin = 1 << log2(blen)
254 if nmin > min:
254 if nmin > min:
255 min = nmin
255 min = nmin
256 if min > max:
256 if min > max:
257 min = max
257 min = max
258 yield ''.join(buf)
258 yield ''.join(buf)
259 blen = 0
259 blen = 0
260 buf = []
260 buf = []
261 if buf:
261 if buf:
262 yield ''.join(buf)
262 yield ''.join(buf)
263
263
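
increasingchunks re-buffers a stream so the emitted chunks start around min bytes and grow until they reach max, amortising per-chunk overhead without ever holding more than max bytes. A simplified re-creation that skips the log2 snap-up but shows the growth, fed with invented 1 kB pieces:

def growing_chunks(source, start=1024, cap=65536):
    # simplified take on increasingchunks(): emit at least `start` bytes,
    # doubling the threshold after each emitted chunk, up to `cap`
    buf, blen, minsize = [], 0, start
    for piece in source:
        buf.append(piece)
        blen += len(piece)
        if blen >= minsize:
            yield b''.join(buf)
            buf, blen = [], 0
            minsize = min(minsize * 2, cap)
    if buf:
        yield b''.join(buf)

stream = (b'x' * 1000 for _ in range(200))            # 200 kB in 1 kB pieces
print([len(c) for c in growing_chunks(stream)][:6])
# [2000, 3000, 5000, 9000, 17000, 33000] -- each chunk roughly doubles
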
264 Abort = error.Abort
264 Abort = error.Abort
265
265
266 def always(fn):
266 def always(fn):
267 return True
267 return True
268
268
269 def never(fn):
269 def never(fn):
270 return False
270 return False
271
271
272 def pathto(root, n1, n2):
272 def pathto(root, n1, n2):
273 '''return the relative path from one place to another.
273 '''return the relative path from one place to another.
274 root should use os.sep to separate directories
274 root should use os.sep to separate directories
275 n1 should use os.sep to separate directories
275 n1 should use os.sep to separate directories
276 n2 should use "/" to separate directories
276 n2 should use "/" to separate directories
277 returns an os.sep-separated path.
277 returns an os.sep-separated path.
278
278
279 If n1 is a relative path, it's assumed it's
279 If n1 is a relative path, it's assumed it's
280 relative to root.
280 relative to root.
281 n2 should always be relative to root.
281 n2 should always be relative to root.
282 '''
282 '''
283 if not n1:
283 if not n1:
284 return localpath(n2)
284 return localpath(n2)
285 if os.path.isabs(n1):
285 if os.path.isabs(n1):
286 if os.path.splitdrive(root)[0] != os.path.splitdrive(n1)[0]:
286 if os.path.splitdrive(root)[0] != os.path.splitdrive(n1)[0]:
287 return os.path.join(root, localpath(n2))
287 return os.path.join(root, localpath(n2))
288 n2 = '/'.join((pconvert(root), n2))
288 n2 = '/'.join((pconvert(root), n2))
289 a, b = splitpath(n1), n2.split('/')
289 a, b = splitpath(n1), n2.split('/')
290 a.reverse()
290 a.reverse()
291 b.reverse()
291 b.reverse()
292 while a and b and a[-1] == b[-1]:
292 while a and b and a[-1] == b[-1]:
293 a.pop()
293 a.pop()
294 b.pop()
294 b.pop()
295 b.reverse()
295 b.reverse()
296 return os.sep.join((['..'] * len(a)) + b) or '.'
296 return os.sep.join((['..'] * len(a)) + b) or '.'
297
297
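# Illustrative sketch (not from the original source): pathto() answers "how
# do I get from directory n1 to file n2, both relative to root?".  The
# paths below are invented for the example.
def _pathtodemo():
    # from the directory 'a/b' to the file 'a/c/other.txt'; on a POSIX
    # system this returns '../c/other.txt'
    return pathto('/repo', 'a/b', 'a/c/other.txt')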
298 _hgexecutable = None
298 _hgexecutable = None
299
299
300 def mainfrozen():
300 def mainfrozen():
301 """return True if we are a frozen executable.
301 """return True if we are a frozen executable.
302
302
303 The code supports py2exe (most common, Windows only) and tools/freeze
303 The code supports py2exe (most common, Windows only) and tools/freeze
304 (portable, not much used).
304 (portable, not much used).
305 """
305 """
306 return (hasattr(sys, "frozen") or # new py2exe
306 return (hasattr(sys, "frozen") or # new py2exe
307 hasattr(sys, "importers") or # old py2exe
307 hasattr(sys, "importers") or # old py2exe
308 imp.is_frozen("__main__")) # tools/freeze
308 imp.is_frozen("__main__")) # tools/freeze
309
309
310 def hgexecutable():
310 def hgexecutable():
311 """return location of the 'hg' executable.
311 """return location of the 'hg' executable.
312
312
313 Defaults to $HG or 'hg' in the search path.
313 Defaults to $HG or 'hg' in the search path.
314 """
314 """
315 if _hgexecutable is None:
315 if _hgexecutable is None:
316 hg = os.environ.get('HG')
316 hg = os.environ.get('HG')
317 if hg:
317 if hg:
318 _sethgexecutable(hg)
318 _sethgexecutable(hg)
319 elif mainfrozen():
319 elif mainfrozen():
320 _sethgexecutable(sys.executable)
320 _sethgexecutable(sys.executable)
321 else:
321 else:
322 exe = find_exe('hg') or os.path.basename(sys.argv[0])
322 exe = find_exe('hg') or os.path.basename(sys.argv[0])
323 _sethgexecutable(exe)
323 _sethgexecutable(exe)
324 return _hgexecutable
324 return _hgexecutable
325
325
326 def _sethgexecutable(path):
326 def _sethgexecutable(path):
327 """set location of the 'hg' executable"""
327 """set location of the 'hg' executable"""
328 global _hgexecutable
328 global _hgexecutable
329 _hgexecutable = path
329 _hgexecutable = path
330
330
331 def system(cmd, environ={}, cwd=None, onerr=None, errprefix=None, out=None):
331 def system(cmd, environ={}, cwd=None, onerr=None, errprefix=None, out=None):
332 '''enhanced shell command execution.
332 '''enhanced shell command execution.
333 run with environment maybe modified, maybe in different dir.
333 run with environment maybe modified, maybe in different dir.
334
334
335 if the command fails and onerr is None, return the exit status. if
336 onerr is a ui object, print an error message and return the status,
337 else raise onerr as an exception.
338
338
339 if out is specified, it is assumed to be a file-like object that has a
339 if out is specified, it is assumed to be a file-like object that has a
340 write() method. stdout and stderr will be redirected to out.'''
340 write() method. stdout and stderr will be redirected to out.'''
341 try:
341 try:
342 sys.stdout.flush()
342 sys.stdout.flush()
343 except Exception:
343 except Exception:
344 pass
344 pass
345 def py2shell(val):
345 def py2shell(val):
346 'convert python object into string that is useful to shell'
346 'convert python object into string that is useful to shell'
347 if val is None or val is False:
347 if val is None or val is False:
348 return '0'
348 return '0'
349 if val is True:
349 if val is True:
350 return '1'
350 return '1'
351 return str(val)
351 return str(val)
352 origcmd = cmd
352 origcmd = cmd
353 cmd = quotecommand(cmd)
353 cmd = quotecommand(cmd)
354 env = dict(os.environ)
354 env = dict(os.environ)
355 env.update((k, py2shell(v)) for k, v in environ.iteritems())
355 env.update((k, py2shell(v)) for k, v in environ.iteritems())
356 env['HG'] = hgexecutable()
356 env['HG'] = hgexecutable()
357 if out is None:
357 if out is None:
358 rc = subprocess.call(cmd, shell=True, close_fds=closefds,
358 rc = subprocess.call(cmd, shell=True, close_fds=closefds,
359 env=env, cwd=cwd)
359 env=env, cwd=cwd)
360 else:
360 else:
361 proc = subprocess.Popen(cmd, shell=True, close_fds=closefds,
361 proc = subprocess.Popen(cmd, shell=True, close_fds=closefds,
362 env=env, cwd=cwd, stdout=subprocess.PIPE,
362 env=env, cwd=cwd, stdout=subprocess.PIPE,
363 stderr=subprocess.STDOUT)
363 stderr=subprocess.STDOUT)
364 for line in proc.stdout:
364 for line in proc.stdout:
365 out.write(line)
365 out.write(line)
366 proc.wait()
366 proc.wait()
367 rc = proc.returncode
367 rc = proc.returncode
368 if sys.platform == 'OpenVMS' and rc & 1:
368 if sys.platform == 'OpenVMS' and rc & 1:
369 rc = 0
369 rc = 0
370 if rc and onerr:
370 if rc and onerr:
371 errmsg = '%s %s' % (os.path.basename(origcmd.split(None, 1)[0]),
371 errmsg = '%s %s' % (os.path.basename(origcmd.split(None, 1)[0]),
372 - explain_exit(rc)[0])
372 + explainexit(rc)[0])
373 if errprefix:
373 if errprefix:
374 errmsg = '%s: %s' % (errprefix, errmsg)
374 errmsg = '%s: %s' % (errprefix, errmsg)
375 try:
375 try:
376 onerr.warn(errmsg + '\n')
376 onerr.warn(errmsg + '\n')
377 except AttributeError:
377 except AttributeError:
378 raise onerr(errmsg)
378 raise onerr(errmsg)
379 return rc
379 return rc
380
380
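# Illustrative sketch (not from the original source): the onerr argument
# switches system() between "just report the status", "warn via the ui"
# and "raise".  The shell commands are placeholders; a POSIX shell with
# false(1) is assumed, as is a caller that has a ui object at hand.
def _systemdemo(ui):
    status = system('false')                      # returns a nonzero status
    system('false', onerr=ui, errprefix='demo')   # warns through ui.warn()
    # system('false', onerr=Abort) would raise Abort instead of returning
    return status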
381 def checksignature(func):
381 def checksignature(func):
382 '''wrap a function with code to check for calling errors'''
382 '''wrap a function with code to check for calling errors'''
383 def check(*args, **kwargs):
383 def check(*args, **kwargs):
384 try:
384 try:
385 return func(*args, **kwargs)
385 return func(*args, **kwargs)
386 except TypeError:
386 except TypeError:
387 if len(traceback.extract_tb(sys.exc_info()[2])) == 1:
387 if len(traceback.extract_tb(sys.exc_info()[2])) == 1:
388 raise error.SignatureError
388 raise error.SignatureError
389 raise
389 raise
390
390
391 return check
391 return check
392
392
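# Illustrative sketch (not from the original source): checksignature() turns
# a TypeError raised by the call itself (wrong number of arguments) into
# error.SignatureError, while a TypeError raised inside the function still
# propagates.  _sigdemo is a made-up helper.
def _sigdemo():
    def greet(name):
        return 'hello ' + name
    wrapped = checksignature(greet)
    wrapped('world')          # fine
    try:
        wrapped()             # wrong arity -> SignatureError
    except error.SignatureError:
        return True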
393 def makedir(path, notindexed):
393 def makedir(path, notindexed):
394 os.mkdir(path)
394 os.mkdir(path)
395
395
396 def unlinkpath(f):
396 def unlinkpath(f):
397 """unlink and remove the directory if it is empty"""
397 """unlink and remove the directory if it is empty"""
398 os.unlink(f)
398 os.unlink(f)
399 # try removing directories that might now be empty
399 # try removing directories that might now be empty
400 try:
400 try:
401 os.removedirs(os.path.dirname(f))
401 os.removedirs(os.path.dirname(f))
402 except OSError:
402 except OSError:
403 pass
403 pass
404
404
405 def copyfile(src, dest):
405 def copyfile(src, dest):
406 "copy a file, preserving mode and atime/mtime"
406 "copy a file, preserving mode and atime/mtime"
407 if os.path.islink(src):
407 if os.path.islink(src):
408 try:
408 try:
409 os.unlink(dest)
409 os.unlink(dest)
410 except OSError:
410 except OSError:
411 pass
411 pass
412 os.symlink(os.readlink(src), dest)
412 os.symlink(os.readlink(src), dest)
413 else:
413 else:
414 try:
414 try:
415 shutil.copyfile(src, dest)
415 shutil.copyfile(src, dest)
416 shutil.copymode(src, dest)
416 shutil.copymode(src, dest)
417 except shutil.Error, inst:
417 except shutil.Error, inst:
418 raise Abort(str(inst))
418 raise Abort(str(inst))
419
419
420 def copyfiles(src, dst, hardlink=None):
420 def copyfiles(src, dst, hardlink=None):
421 """Copy a directory tree using hardlinks if possible"""
421 """Copy a directory tree using hardlinks if possible"""
422
422
423 if hardlink is None:
423 if hardlink is None:
424 hardlink = (os.stat(src).st_dev ==
424 hardlink = (os.stat(src).st_dev ==
425 os.stat(os.path.dirname(dst)).st_dev)
425 os.stat(os.path.dirname(dst)).st_dev)
426
426
427 num = 0
427 num = 0
428 if os.path.isdir(src):
428 if os.path.isdir(src):
429 os.mkdir(dst)
429 os.mkdir(dst)
430 for name, kind in osutil.listdir(src):
430 for name, kind in osutil.listdir(src):
431 srcname = os.path.join(src, name)
431 srcname = os.path.join(src, name)
432 dstname = os.path.join(dst, name)
432 dstname = os.path.join(dst, name)
433 hardlink, n = copyfiles(srcname, dstname, hardlink)
433 hardlink, n = copyfiles(srcname, dstname, hardlink)
434 num += n
434 num += n
435 else:
435 else:
436 if hardlink:
436 if hardlink:
437 try:
437 try:
438 os_link(src, dst)
438 os_link(src, dst)
439 except (IOError, OSError):
439 except (IOError, OSError):
440 hardlink = False
440 hardlink = False
441 shutil.copy(src, dst)
441 shutil.copy(src, dst)
442 else:
442 else:
443 shutil.copy(src, dst)
443 shutil.copy(src, dst)
444 num += 1
444 num += 1
445
445
446 return hardlink, num
446 return hardlink, num
447
447
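# Illustrative sketch (not from the original source): copyfiles() hardlinks
# when source and destination sit on the same device and falls back to real
# copies otherwise.  The paths are invented for the example.
def _copydemo():
    hardlinked, copied = copyfiles('/tmp/src-tree', '/tmp/dst-tree')
    # hardlinked turns False as soon as one os_link() call fails and the
    # remaining files are copied with shutil.copy()
    return hardlinked, copied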
448 _windows_reserved_filenames = '''con prn aux nul
448 _windows_reserved_filenames = '''con prn aux nul
449 com1 com2 com3 com4 com5 com6 com7 com8 com9
449 com1 com2 com3 com4 com5 com6 com7 com8 com9
450 lpt1 lpt2 lpt3 lpt4 lpt5 lpt6 lpt7 lpt8 lpt9'''.split()
450 lpt1 lpt2 lpt3 lpt4 lpt5 lpt6 lpt7 lpt8 lpt9'''.split()
451 _windows_reserved_chars = ':*?"<>|'
451 _windows_reserved_chars = ':*?"<>|'
452 def checkwinfilename(path):
452 def checkwinfilename(path):
453 '''Check that the base-relative path is a valid filename on Windows.
453 '''Check that the base-relative path is a valid filename on Windows.
454 Returns None if the path is ok, or a UI string describing the problem.
454 Returns None if the path is ok, or a UI string describing the problem.
455
455
456 >>> checkwinfilename("just/a/normal/path")
456 >>> checkwinfilename("just/a/normal/path")
457 >>> checkwinfilename("foo/bar/con.xml")
457 >>> checkwinfilename("foo/bar/con.xml")
458 "filename contains 'con', which is reserved on Windows"
458 "filename contains 'con', which is reserved on Windows"
459 >>> checkwinfilename("foo/con.xml/bar")
459 >>> checkwinfilename("foo/con.xml/bar")
460 "filename contains 'con', which is reserved on Windows"
460 "filename contains 'con', which is reserved on Windows"
461 >>> checkwinfilename("foo/bar/xml.con")
461 >>> checkwinfilename("foo/bar/xml.con")
462 >>> checkwinfilename("foo/bar/AUX/bla.txt")
462 >>> checkwinfilename("foo/bar/AUX/bla.txt")
463 "filename contains 'AUX', which is reserved on Windows"
463 "filename contains 'AUX', which is reserved on Windows"
464 >>> checkwinfilename("foo/bar/bla:.txt")
464 >>> checkwinfilename("foo/bar/bla:.txt")
465 "filename contains ':', which is reserved on Windows"
465 "filename contains ':', which is reserved on Windows"
466 >>> checkwinfilename("foo/bar/b\07la.txt")
466 >>> checkwinfilename("foo/bar/b\07la.txt")
467 "filename contains '\\\\x07', which is invalid on Windows"
467 "filename contains '\\\\x07', which is invalid on Windows"
468 >>> checkwinfilename("foo/bar/bla ")
468 >>> checkwinfilename("foo/bar/bla ")
469 "filename ends with ' ', which is not allowed on Windows"
469 "filename ends with ' ', which is not allowed on Windows"
470 '''
470 '''
471 for n in path.replace('\\', '/').split('/'):
471 for n in path.replace('\\', '/').split('/'):
472 if not n:
472 if not n:
473 continue
473 continue
474 for c in n:
474 for c in n:
475 if c in _windows_reserved_chars:
475 if c in _windows_reserved_chars:
476 return _("filename contains '%s', which is reserved "
476 return _("filename contains '%s', which is reserved "
477 "on Windows") % c
477 "on Windows") % c
478 if ord(c) <= 31:
478 if ord(c) <= 31:
479 return _("filename contains %r, which is invalid "
479 return _("filename contains %r, which is invalid "
480 "on Windows") % c
480 "on Windows") % c
481 base = n.split('.')[0]
481 base = n.split('.')[0]
482 if base and base.lower() in _windows_reserved_filenames:
482 if base and base.lower() in _windows_reserved_filenames:
483 return _("filename contains '%s', which is reserved "
483 return _("filename contains '%s', which is reserved "
484 "on Windows") % base
484 "on Windows") % base
485 t = n[-1]
485 t = n[-1]
486 if t in '. ':
486 if t in '. ':
487 return _("filename ends with '%s', which is not allowed "
487 return _("filename ends with '%s', which is not allowed "
488 "on Windows") % t
488 "on Windows") % t
489
489
490 def lookupreg(key, name=None, scope=None):
490 def lookupreg(key, name=None, scope=None):
491 return None
491 return None
492
492
493 def hidewindow():
493 def hidewindow():
494 """Hide current shell window.
494 """Hide current shell window.
495
495
496 Used to hide the window opened when starting asynchronous
496 Used to hide the window opened when starting asynchronous
497 child process under Windows, unneeded on other systems.
497 child process under Windows, unneeded on other systems.
498 """
498 """
499 pass
499 pass
500
500
501 if os.name == 'nt':
501 if os.name == 'nt':
502 checkosfilename = checkwinfilename
502 checkosfilename = checkwinfilename
503 from windows import *
503 from windows import *
504 else:
504 else:
505 from posix import *
505 from posix import *
506
506
507 def makelock(info, pathname):
507 def makelock(info, pathname):
508 try:
508 try:
509 return os.symlink(info, pathname)
509 return os.symlink(info, pathname)
510 except OSError, why:
510 except OSError, why:
511 if why.errno == errno.EEXIST:
511 if why.errno == errno.EEXIST:
512 raise
512 raise
513 except AttributeError: # no symlink in os
513 except AttributeError: # no symlink in os
514 pass
514 pass
515
515
516 ld = os.open(pathname, os.O_CREAT | os.O_WRONLY | os.O_EXCL)
516 ld = os.open(pathname, os.O_CREAT | os.O_WRONLY | os.O_EXCL)
517 os.write(ld, info)
517 os.write(ld, info)
518 os.close(ld)
518 os.close(ld)
519
519
520 def readlock(pathname):
520 def readlock(pathname):
521 try:
521 try:
522 return os.readlink(pathname)
522 return os.readlink(pathname)
523 except OSError, why:
523 except OSError, why:
524 if why.errno not in (errno.EINVAL, errno.ENOSYS):
524 if why.errno not in (errno.EINVAL, errno.ENOSYS):
525 raise
525 raise
526 except AttributeError: # no symlink in os
526 except AttributeError: # no symlink in os
527 pass
527 pass
528 fp = posixfile(pathname)
528 fp = posixfile(pathname)
529 r = fp.read()
529 r = fp.read()
530 fp.close()
530 fp.close()
531 return r
531 return r
532
532
533 def fstat(fp):
533 def fstat(fp):
534 '''stat file object that may not have fileno method.'''
534 '''stat file object that may not have fileno method.'''
535 try:
535 try:
536 return os.fstat(fp.fileno())
536 return os.fstat(fp.fileno())
537 except AttributeError:
537 except AttributeError:
538 return os.stat(fp.name)
538 return os.stat(fp.name)
539
539
540 # File system features
540 # File system features
541
541
542 def checkcase(path):
542 def checkcase(path):
543 """
543 """
544 Check whether the given path is on a case-sensitive filesystem
544 Check whether the given path is on a case-sensitive filesystem
545
545
546 Requires a path (like /foo/.hg) ending with a foldable final
546 Requires a path (like /foo/.hg) ending with a foldable final
547 directory component.
547 directory component.
548 """
548 """
549 s1 = os.stat(path)
549 s1 = os.stat(path)
550 d, b = os.path.split(path)
550 d, b = os.path.split(path)
551 p2 = os.path.join(d, b.upper())
551 p2 = os.path.join(d, b.upper())
552 if path == p2:
552 if path == p2:
553 p2 = os.path.join(d, b.lower())
553 p2 = os.path.join(d, b.lower())
554 try:
554 try:
555 s2 = os.stat(p2)
555 s2 = os.stat(p2)
556 if s2 == s1:
556 if s2 == s1:
557 return False
557 return False
558 return True
558 return True
559 except OSError:
559 except OSError:
560 return True
560 return True
561
561
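# Illustrative sketch (not from the original source): checkcase() needs an
# existing path and reports True on case-sensitive filesystems.  The path
# below is invented.
def _casedemo():
    # typically True on ext4, False on default HFS+ or NTFS setups
    return checkcase('/tmp/some-repo/.hg')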
562 _fspathcache = {}
562 _fspathcache = {}
563 def fspath(name, root):
563 def fspath(name, root):
564 '''Get name in the case stored in the filesystem
564 '''Get name in the case stored in the filesystem
565
565
566 The name is either relative to root, or it is an absolute path starting
566 The name is either relative to root, or it is an absolute path starting
567 with root. Note that this function is unnecessary, and should not be
567 with root. Note that this function is unnecessary, and should not be
568 called, for case-sensitive filesystems (simply because it's expensive).
568 called, for case-sensitive filesystems (simply because it's expensive).
569 '''
569 '''
570 # If name is absolute, make it relative
570 # If name is absolute, make it relative
571 if name.lower().startswith(root.lower()):
571 if name.lower().startswith(root.lower()):
572 l = len(root)
572 l = len(root)
573 if name[l] == os.sep or name[l] == os.altsep:
573 if name[l] == os.sep or name[l] == os.altsep:
574 l = l + 1
574 l = l + 1
575 name = name[l:]
575 name = name[l:]
576
576
577 if not os.path.lexists(os.path.join(root, name)):
577 if not os.path.lexists(os.path.join(root, name)):
578 return None
578 return None
579
579
580 seps = os.sep
580 seps = os.sep
581 if os.altsep:
581 if os.altsep:
582 seps = seps + os.altsep
582 seps = seps + os.altsep
583 # Protect backslashes. This gets silly very quickly.
583 # Protect backslashes. This gets silly very quickly.
584 seps = seps.replace('\\','\\\\')
585 pattern = re.compile(r'([^%s]+)|([%s]+)' % (seps, seps))
585 pattern = re.compile(r'([^%s]+)|([%s]+)' % (seps, seps))
586 dir = os.path.normcase(os.path.normpath(root))
586 dir = os.path.normcase(os.path.normpath(root))
587 result = []
587 result = []
588 for part, sep in pattern.findall(name):
588 for part, sep in pattern.findall(name):
589 if sep:
589 if sep:
590 result.append(sep)
590 result.append(sep)
591 continue
591 continue
592
592
593 if dir not in _fspathcache:
593 if dir not in _fspathcache:
594 _fspathcache[dir] = os.listdir(dir)
594 _fspathcache[dir] = os.listdir(dir)
595 contents = _fspathcache[dir]
595 contents = _fspathcache[dir]
596
596
597 lpart = part.lower()
597 lpart = part.lower()
598 lenp = len(part)
598 lenp = len(part)
599 for n in contents:
599 for n in contents:
600 if lenp == len(n) and n.lower() == lpart:
600 if lenp == len(n) and n.lower() == lpart:
601 result.append(n)
601 result.append(n)
602 break
602 break
603 else:
603 else:
604 # Cannot happen, as the file exists!
604 # Cannot happen, as the file exists!
605 result.append(part)
605 result.append(part)
606 dir = os.path.join(dir, lpart)
606 dir = os.path.join(dir, lpart)
607
607
608 return ''.join(result)
608 return ''.join(result)
609
609
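# Illustrative sketch (not from the original source): on a case-folding
# filesystem fspath() maps a user-typed name back to the spelling stored on
# disk; it returns None when the name does not exist at all.  The paths are
# invented.
def _fspathdemo():
    # if '/tmp/some-repo' really contains 'README.txt', this returns
    # 'README.txt' even though the query is lower case
    return fspath('readme.txt', '/tmp/some-repo')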
610 def checknlink(testfile):
610 def checknlink(testfile):
611 '''check whether hardlink count reporting works properly'''
611 '''check whether hardlink count reporting works properly'''
612
612
613 # testfile may be open, so we need a separate file for checking to
613 # testfile may be open, so we need a separate file for checking to
614 # work around issue2543 (or testfile may get lost on Samba shares)
614 # work around issue2543 (or testfile may get lost on Samba shares)
615 f1 = testfile + ".hgtmp1"
615 f1 = testfile + ".hgtmp1"
616 if os.path.lexists(f1):
616 if os.path.lexists(f1):
617 return False
617 return False
618 try:
618 try:
619 posixfile(f1, 'w').close()
619 posixfile(f1, 'w').close()
620 except IOError:
620 except IOError:
621 return False
621 return False
622
622
623 f2 = testfile + ".hgtmp2"
623 f2 = testfile + ".hgtmp2"
624 fd = None
624 fd = None
625 try:
625 try:
626 try:
626 try:
627 os_link(f1, f2)
627 os_link(f1, f2)
628 except OSError:
628 except OSError:
629 return False
629 return False
630
630
631 # nlinks() may behave differently for files on Windows shares if
631 # nlinks() may behave differently for files on Windows shares if
632 # the file is open.
632 # the file is open.
633 fd = posixfile(f2)
633 fd = posixfile(f2)
634 return nlinks(f2) > 1
634 return nlinks(f2) > 1
635 finally:
635 finally:
636 if fd is not None:
636 if fd is not None:
637 fd.close()
637 fd.close()
638 for f in (f1, f2):
638 for f in (f1, f2):
639 try:
639 try:
640 os.unlink(f)
640 os.unlink(f)
641 except OSError:
641 except OSError:
642 pass
642 pass
643
643
644 return False
644 return False
645
645
646 def endswithsep(path):
646 def endswithsep(path):
647 '''Check path ends with os.sep or os.altsep.'''
647 '''Check path ends with os.sep or os.altsep.'''
648 return path.endswith(os.sep) or os.altsep and path.endswith(os.altsep)
648 return path.endswith(os.sep) or os.altsep and path.endswith(os.altsep)
649
649
650 def splitpath(path):
650 def splitpath(path):
651 '''Split path by os.sep.
651 '''Split path by os.sep.
652 Note that this function does not use os.altsep because it is meant
653 as a drop-in replacement for a simple "xxx.split(os.sep)".
654 It is recommended to run os.path.normpath() on the path before
655 using this function, if needed.'''
656 return path.split(os.sep)
656 return path.split(os.sep)
657
657
658 def gui():
658 def gui():
659 '''Are we running in a GUI?'''
659 '''Are we running in a GUI?'''
660 if sys.platform == 'darwin':
660 if sys.platform == 'darwin':
661 if 'SSH_CONNECTION' in os.environ:
661 if 'SSH_CONNECTION' in os.environ:
662 # handle SSH access to a box where the user is logged in
662 # handle SSH access to a box where the user is logged in
663 return False
663 return False
664 elif getattr(osutil, 'isgui', None):
664 elif getattr(osutil, 'isgui', None):
665 # check if a CoreGraphics session is available
665 # check if a CoreGraphics session is available
666 return osutil.isgui()
666 return osutil.isgui()
667 else:
667 else:
668 # pure build; use a safe default
668 # pure build; use a safe default
669 return True
669 return True
670 else:
670 else:
671 return os.name == "nt" or os.environ.get("DISPLAY")
671 return os.name == "nt" or os.environ.get("DISPLAY")
672
672
673 def mktempcopy(name, emptyok=False, createmode=None):
673 def mktempcopy(name, emptyok=False, createmode=None):
674 """Create a temporary file with the same contents from name
674 """Create a temporary file with the same contents from name
675
675
676 The permission bits are copied from the original file.
676 The permission bits are copied from the original file.
677
677
678 If the temporary file is going to be truncated immediately, you
678 If the temporary file is going to be truncated immediately, you
679 can use emptyok=True as an optimization.
679 can use emptyok=True as an optimization.
680
680
681 Returns the name of the temporary file.
681 Returns the name of the temporary file.
682 """
682 """
683 d, fn = os.path.split(name)
683 d, fn = os.path.split(name)
684 fd, temp = tempfile.mkstemp(prefix='.%s-' % fn, dir=d)
684 fd, temp = tempfile.mkstemp(prefix='.%s-' % fn, dir=d)
685 os.close(fd)
685 os.close(fd)
686 # Temporary files are created with mode 0600, which is usually not
686 # Temporary files are created with mode 0600, which is usually not
687 # what we want. If the original file already exists, just copy
687 # what we want. If the original file already exists, just copy
688 # its mode. Otherwise, manually obey umask.
688 # its mode. Otherwise, manually obey umask.
689 try:
689 try:
690 st_mode = os.lstat(name).st_mode & 0777
690 st_mode = os.lstat(name).st_mode & 0777
691 except OSError, inst:
691 except OSError, inst:
692 if inst.errno != errno.ENOENT:
692 if inst.errno != errno.ENOENT:
693 raise
693 raise
694 st_mode = createmode
694 st_mode = createmode
695 if st_mode is None:
695 if st_mode is None:
696 st_mode = ~umask
696 st_mode = ~umask
697 st_mode &= 0666
697 st_mode &= 0666
698 os.chmod(temp, st_mode)
698 os.chmod(temp, st_mode)
699 if emptyok:
699 if emptyok:
700 return temp
700 return temp
701 try:
701 try:
702 try:
702 try:
703 ifp = posixfile(name, "rb")
703 ifp = posixfile(name, "rb")
704 except IOError, inst:
704 except IOError, inst:
705 if inst.errno == errno.ENOENT:
705 if inst.errno == errno.ENOENT:
706 return temp
706 return temp
707 if not getattr(inst, 'filename', None):
707 if not getattr(inst, 'filename', None):
708 inst.filename = name
708 inst.filename = name
709 raise
709 raise
710 ofp = posixfile(temp, "wb")
710 ofp = posixfile(temp, "wb")
711 for chunk in filechunkiter(ifp):
711 for chunk in filechunkiter(ifp):
712 ofp.write(chunk)
712 ofp.write(chunk)
713 ifp.close()
713 ifp.close()
714 ofp.close()
714 ofp.close()
715 except:
715 except:
716 try: os.unlink(temp)
716 try: os.unlink(temp)
717 except: pass
717 except: pass
718 raise
718 raise
719 return temp
719 return temp
720
720
721 class atomictempfile(object):
721 class atomictempfile(object):
722 '''writeable file object that atomically updates a file
722 '''writeable file object that atomically updates a file
723
723
724 All writes will go to a temporary copy of the original file. Call
724 All writes will go to a temporary copy of the original file. Call
725 rename() when you are done writing, and atomictempfile will rename
725 rename() when you are done writing, and atomictempfile will rename
726 the temporary copy to the original name, making the changes visible.
726 the temporary copy to the original name, making the changes visible.
727
727
728 Unlike other file-like objects, close() discards your writes by
728 Unlike other file-like objects, close() discards your writes by
729 simply deleting the temporary file.
729 simply deleting the temporary file.
730 '''
730 '''
731 def __init__(self, name, mode='w+b', createmode=None):
731 def __init__(self, name, mode='w+b', createmode=None):
732 self.__name = name # permanent name
732 self.__name = name # permanent name
733 self._tempname = mktempcopy(name, emptyok=('w' in mode),
733 self._tempname = mktempcopy(name, emptyok=('w' in mode),
734 createmode=createmode)
734 createmode=createmode)
735 self._fp = posixfile(self._tempname, mode)
735 self._fp = posixfile(self._tempname, mode)
736
736
737 # delegated methods
737 # delegated methods
738 self.write = self._fp.write
738 self.write = self._fp.write
739 self.fileno = self._fp.fileno
739 self.fileno = self._fp.fileno
740
740
741 def rename(self):
741 def rename(self):
742 if not self._fp.closed:
742 if not self._fp.closed:
743 self._fp.close()
743 self._fp.close()
744 rename(self._tempname, localpath(self.__name))
744 rename(self._tempname, localpath(self.__name))
745
745
746 def close(self):
746 def close(self):
747 if not self._fp.closed:
747 if not self._fp.closed:
748 try:
748 try:
749 os.unlink(self._tempname)
749 os.unlink(self._tempname)
750 except OSError:
750 except OSError:
751 pass
751 pass
752 self._fp.close()
752 self._fp.close()
753
753
754 def __del__(self):
754 def __del__(self):
755 if hasattr(self, '_fp'): # constructor actually did something
755 if hasattr(self, '_fp'): # constructor actually did something
756 self.close()
756 self.close()
757
757
758 def makedirs(name, mode=None):
758 def makedirs(name, mode=None):
759 """recursive directory creation with parent mode inheritance"""
759 """recursive directory creation with parent mode inheritance"""
760 parent = os.path.abspath(os.path.dirname(name))
760 parent = os.path.abspath(os.path.dirname(name))
761 try:
761 try:
762 os.mkdir(name)
762 os.mkdir(name)
763 if mode is not None:
763 if mode is not None:
764 os.chmod(name, mode)
764 os.chmod(name, mode)
765 return
765 return
766 except OSError, err:
766 except OSError, err:
767 if err.errno == errno.EEXIST:
767 if err.errno == errno.EEXIST:
768 return
768 return
769 if not name or parent == name or err.errno != errno.ENOENT:
769 if not name or parent == name or err.errno != errno.ENOENT:
770 raise
770 raise
771 makedirs(parent, mode)
771 makedirs(parent, mode)
772 makedirs(name, mode)
772 makedirs(name, mode)
773
773
774 def readfile(path):
774 def readfile(path):
775 fp = open(path)
775 fp = open(path)
776 try:
776 try:
777 return fp.read()
777 return fp.read()
778 finally:
778 finally:
779 fp.close()
779 fp.close()
780
780
781 def writefile(path, text):
781 def writefile(path, text):
782 fp = open(path, 'wb')
782 fp = open(path, 'wb')
783 try:
783 try:
784 fp.write(text)
784 fp.write(text)
785 finally:
785 finally:
786 fp.close()
786 fp.close()
787
787
788 def appendfile(path, text):
788 def appendfile(path, text):
789 fp = open(path, 'ab')
789 fp = open(path, 'ab')
790 try:
790 try:
791 fp.write(text)
791 fp.write(text)
792 finally:
792 finally:
793 fp.close()
793 fp.close()
794
794
795 class chunkbuffer(object):
795 class chunkbuffer(object):
796 """Allow arbitrary sized chunks of data to be efficiently read from an
796 """Allow arbitrary sized chunks of data to be efficiently read from an
797 iterator over chunks of arbitrary size."""
797 iterator over chunks of arbitrary size."""
798
798
799 def __init__(self, in_iter):
799 def __init__(self, in_iter):
800 """in_iter is the iterator that's iterating over the input chunks;
801 chunks larger than 1M are split into 256k pieces before buffering."""
802 def splitbig(chunks):
802 def splitbig(chunks):
803 for chunk in chunks:
803 for chunk in chunks:
804 if len(chunk) > 2**20:
804 if len(chunk) > 2**20:
805 pos = 0
805 pos = 0
806 while pos < len(chunk):
806 while pos < len(chunk):
807 end = pos + 2 ** 18
807 end = pos + 2 ** 18
808 yield chunk[pos:end]
808 yield chunk[pos:end]
809 pos = end
809 pos = end
810 else:
810 else:
811 yield chunk
811 yield chunk
812 self.iter = splitbig(in_iter)
812 self.iter = splitbig(in_iter)
813 self._queue = []
813 self._queue = []
814
814
815 def read(self, l):
815 def read(self, l):
816 """Read L bytes of data from the iterator of chunks of data.
816 """Read L bytes of data from the iterator of chunks of data.
817 Returns less than L bytes if the iterator runs dry."""
817 Returns less than L bytes if the iterator runs dry."""
818 left = l
818 left = l
819 buf = ''
819 buf = ''
820 queue = self._queue
820 queue = self._queue
821 while left > 0:
821 while left > 0:
822 # refill the queue
822 # refill the queue
823 if not queue:
823 if not queue:
824 target = 2**18
824 target = 2**18
825 for chunk in self.iter:
825 for chunk in self.iter:
826 queue.append(chunk)
826 queue.append(chunk)
827 target -= len(chunk)
827 target -= len(chunk)
828 if target <= 0:
828 if target <= 0:
829 break
829 break
830 if not queue:
830 if not queue:
831 break
831 break
832
832
833 chunk = queue.pop(0)
833 chunk = queue.pop(0)
834 left -= len(chunk)
834 left -= len(chunk)
835 if left < 0:
835 if left < 0:
836 queue.insert(0, chunk[left:])
836 queue.insert(0, chunk[left:])
837 buf += chunk[:left]
837 buf += chunk[:left]
838 else:
838 else:
839 buf += chunk
839 buf += chunk
840
840
841 return buf
841 return buf
842
842
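# Illustrative sketch (not from the original source): chunkbuffer lets you
# read fixed-size records from an iterator that yields chunks of arbitrary
# and uneven size.  _bufdemo is a made-up helper.
def _bufdemo():
    pieces = iter(['ab', 'cdef', 'g'])
    cb = chunkbuffer(pieces)
    first = cb.read(3)     # 'abc'
    rest = cb.read(10)     # 'defg'; read() returns less once data runs out
    return first, rest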
843 def filechunkiter(f, size=65536, limit=None):
843 def filechunkiter(f, size=65536, limit=None):
844 """Create a generator that produces the data in the file size
844 """Create a generator that produces the data in the file size
845 (default 65536) bytes at a time, up to optional limit (default is
845 (default 65536) bytes at a time, up to optional limit (default is
846 to read all data). Chunks may be less than size bytes if the
846 to read all data). Chunks may be less than size bytes if the
847 chunk is the last chunk in the file, or the file is a socket or
847 chunk is the last chunk in the file, or the file is a socket or
848 some other type of file that sometimes reads less data than is
848 some other type of file that sometimes reads less data than is
849 requested."""
849 requested."""
850 assert size >= 0
850 assert size >= 0
851 assert limit is None or limit >= 0
851 assert limit is None or limit >= 0
852 while True:
852 while True:
853 if limit is None:
853 if limit is None:
854 nbytes = size
854 nbytes = size
855 else:
855 else:
856 nbytes = min(limit, size)
856 nbytes = min(limit, size)
857 s = nbytes and f.read(nbytes)
857 s = nbytes and f.read(nbytes)
858 if not s:
858 if not s:
859 break
859 break
860 if limit:
860 if limit:
861 limit -= len(s)
861 limit -= len(s)
862 yield s
862 yield s
863
863
864 def makedate():
864 def makedate():
865 lt = time.localtime()
865 lt = time.localtime()
866 if lt[8] == 1 and time.daylight:
866 if lt[8] == 1 and time.daylight:
867 tz = time.altzone
867 tz = time.altzone
868 else:
868 else:
869 tz = time.timezone
869 tz = time.timezone
870 t = time.mktime(lt)
870 t = time.mktime(lt)
871 if t < 0:
871 if t < 0:
872 hint = _("check your clock")
872 hint = _("check your clock")
873 raise Abort(_("negative timestamp: %d") % t, hint=hint)
873 raise Abort(_("negative timestamp: %d") % t, hint=hint)
874 return t, tz
874 return t, tz
875
875
876 def datestr(date=None, format='%a %b %d %H:%M:%S %Y %1%2'):
876 def datestr(date=None, format='%a %b %d %H:%M:%S %Y %1%2'):
877 """represent a (unixtime, offset) tuple as a localized time.
877 """represent a (unixtime, offset) tuple as a localized time.
878 unixtime is seconds since the epoch, and offset is the time zone's
878 unixtime is seconds since the epoch, and offset is the time zone's
879 number of seconds away from UTC. the time zone is only rendered
880 where the format string contains the "%1"/"%2" specifiers."""
881 t, tz = date or makedate()
881 t, tz = date or makedate()
882 if t < 0:
882 if t < 0:
883 t = 0 # time.gmtime(lt) fails on Windows for lt < -43200
883 t = 0 # time.gmtime(lt) fails on Windows for lt < -43200
884 tz = 0
884 tz = 0
885 if "%1" in format or "%2" in format:
885 if "%1" in format or "%2" in format:
886 sign = (tz > 0) and "-" or "+"
886 sign = (tz > 0) and "-" or "+"
887 minutes = abs(tz) // 60
887 minutes = abs(tz) // 60
888 format = format.replace("%1", "%c%02d" % (sign, minutes // 60))
888 format = format.replace("%1", "%c%02d" % (sign, minutes // 60))
889 format = format.replace("%2", "%02d" % (minutes % 60))
889 format = format.replace("%2", "%02d" % (minutes % 60))
890 s = time.strftime(format, time.gmtime(float(t) - tz))
890 s = time.strftime(format, time.gmtime(float(t) - tz))
891 return s
891 return s
892
892
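# Illustrative sketch (not from the original source): datestr() renders a
# (unixtime, offset) pair as local wall-clock time; the offset only shows
# up where the format contains the "%1"/"%2" specifiers.  The timestamp
# below (2011-05-05 12:00 UTC, two hours east of UTC) is arbitrary.
def _datedemo():
    # renders as '2011-05-05 14:00 +0200'
    return datestr((1304596800, -7200), format='%Y-%m-%d %H:%M %1%2')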
893 def shortdate(date=None):
893 def shortdate(date=None):
894 """turn (timestamp, tzoff) tuple into ISO 8601 date."""
895 return datestr(date, format='%Y-%m-%d')
895 return datestr(date, format='%Y-%m-%d')
896
896
897 def strdate(string, format, defaults=[]):
897 def strdate(string, format, defaults=[]):
898 """parse a localized time string and return a (unixtime, offset) tuple.
898 """parse a localized time string and return a (unixtime, offset) tuple.
899 if the string cannot be parsed, ValueError is raised."""
899 if the string cannot be parsed, ValueError is raised."""
900 def timezone(string):
900 def timezone(string):
901 tz = string.split()[-1]
901 tz = string.split()[-1]
902 if tz[0] in "+-" and len(tz) == 5 and tz[1:].isdigit():
902 if tz[0] in "+-" and len(tz) == 5 and tz[1:].isdigit():
903 sign = (tz[0] == "+") and 1 or -1
903 sign = (tz[0] == "+") and 1 or -1
904 hours = int(tz[1:3])
904 hours = int(tz[1:3])
905 minutes = int(tz[3:5])
905 minutes = int(tz[3:5])
906 return -sign * (hours * 60 + minutes) * 60
906 return -sign * (hours * 60 + minutes) * 60
907 if tz == "GMT" or tz == "UTC":
907 if tz == "GMT" or tz == "UTC":
908 return 0
908 return 0
909 return None
909 return None
910
910
911 # NOTE: unixtime = localunixtime + offset
911 # NOTE: unixtime = localunixtime + offset
912 offset, date = timezone(string), string
912 offset, date = timezone(string), string
913 if offset is not None:
913 if offset is not None:
914 date = " ".join(string.split()[:-1])
914 date = " ".join(string.split()[:-1])
915
915
916 # add missing elements from defaults
916 # add missing elements from defaults
917 usenow = False # default to using biased defaults
917 usenow = False # default to using biased defaults
918 for part in ("S", "M", "HI", "d", "mb", "yY"): # decreasing specificity
918 for part in ("S", "M", "HI", "d", "mb", "yY"): # decreasing specificity
919 found = [True for p in part if ("%"+p) in format]
919 found = [True for p in part if ("%"+p) in format]
920 if not found:
920 if not found:
921 date += "@" + defaults[part][usenow]
921 date += "@" + defaults[part][usenow]
922 format += "@%" + part[0]
922 format += "@%" + part[0]
923 else:
923 else:
924 # We've found a specific time element, less specific time
924 # We've found a specific time element, less specific time
925 # elements are relative to today
925 # elements are relative to today
926 usenow = True
926 usenow = True
927
927
928 timetuple = time.strptime(date, format)
928 timetuple = time.strptime(date, format)
929 localunixtime = int(calendar.timegm(timetuple))
929 localunixtime = int(calendar.timegm(timetuple))
930 if offset is None:
930 if offset is None:
931 # local timezone
931 # local timezone
932 unixtime = int(time.mktime(timetuple))
932 unixtime = int(time.mktime(timetuple))
933 offset = unixtime - localunixtime
933 offset = unixtime - localunixtime
934 else:
934 else:
935 unixtime = localunixtime + offset
935 unixtime = localunixtime + offset
936 return unixtime, offset
936 return unixtime, offset
937
937
938 def parsedate(date, formats=None, bias={}):
938 def parsedate(date, formats=None, bias={}):
939 """parse a localized date/time and return a (unixtime, offset) tuple.
939 """parse a localized date/time and return a (unixtime, offset) tuple.
940
940
941 The date may be a "unixtime offset" string or in one of the specified
941 The date may be a "unixtime offset" string or in one of the specified
942 formats. If the date already is a (unixtime, offset) tuple, it is returned.
942 formats. If the date already is a (unixtime, offset) tuple, it is returned.
943 """
943 """
944 if not date:
944 if not date:
945 return 0, 0
945 return 0, 0
946 if isinstance(date, tuple) and len(date) == 2:
946 if isinstance(date, tuple) and len(date) == 2:
947 return date
947 return date
948 if not formats:
948 if not formats:
949 formats = defaultdateformats
949 formats = defaultdateformats
950 date = date.strip()
950 date = date.strip()
951 try:
951 try:
952 when, offset = map(int, date.split(' '))
952 when, offset = map(int, date.split(' '))
953 except ValueError:
953 except ValueError:
954 # fill out defaults
954 # fill out defaults
955 now = makedate()
955 now = makedate()
956 defaults = {}
956 defaults = {}
957 for part in ("d", "mb", "yY", "HI", "M", "S"):
957 for part in ("d", "mb", "yY", "HI", "M", "S"):
958 # this piece is for rounding the specific end of unknowns
958 # this piece is for rounding the specific end of unknowns
959 b = bias.get(part)
959 b = bias.get(part)
960 if b is None:
960 if b is None:
961 if part[0] in "HMS":
961 if part[0] in "HMS":
962 b = "00"
962 b = "00"
963 else:
963 else:
964 b = "0"
964 b = "0"
965
965
966 # this piece is for matching the generic end to today's date
966 # this piece is for matching the generic end to today's date
967 n = datestr(now, "%" + part[0])
967 n = datestr(now, "%" + part[0])
968
968
969 defaults[part] = (b, n)
969 defaults[part] = (b, n)
970
970
971 for format in formats:
971 for format in formats:
972 try:
972 try:
973 when, offset = strdate(date, format, defaults)
973 when, offset = strdate(date, format, defaults)
974 except (ValueError, OverflowError):
974 except (ValueError, OverflowError):
975 pass
975 pass
976 else:
976 else:
977 break
977 break
978 else:
978 else:
979 raise Abort(_('invalid date: %r') % date)
979 raise Abort(_('invalid date: %r') % date)
980 # validate explicit (probably user-specified) date and
980 # validate explicit (probably user-specified) date and
981 # time zone offset. values must fit in signed 32 bits for
981 # time zone offset. values must fit in signed 32 bits for
982 # current 32-bit linux runtimes. timezones go from UTC-12
982 # current 32-bit linux runtimes. timezones go from UTC-12
983 # to UTC+14
983 # to UTC+14
984 if abs(when) > 0x7fffffff:
984 if abs(when) > 0x7fffffff:
985 raise Abort(_('date exceeds 32 bits: %d') % when)
985 raise Abort(_('date exceeds 32 bits: %d') % when)
986 if when < 0:
986 if when < 0:
987 raise Abort(_('negative date value: %d') % when)
987 raise Abort(_('negative date value: %d') % when)
988 if offset < -50400 or offset > 43200:
988 if offset < -50400 or offset > 43200:
989 raise Abort(_('impossible time zone offset: %d') % offset)
989 raise Abort(_('impossible time zone offset: %d') % offset)
990 return when, offset
990 return when, offset
991
991
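# Illustrative sketch (not from the original source): parsedate() accepts a
# raw "unixtime offset" string as well as the human-readable forms listed
# in defaultdateformats, and both spellings below should come back as the
# same (unixtime, offset) pair.
def _parsedemo():
    exact = parsedate('1304596800 -7200')
    fuzzy = parsedate('2011-05-05 14:00 +0200')
    return exact == fuzzy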
992 def matchdate(date):
992 def matchdate(date):
993 """Return a function that matches a given date match specifier
993 """Return a function that matches a given date match specifier
994
994
995 Formats include:
995 Formats include:
996
996
997 '{date}' match a given date to the accuracy provided
997 '{date}' match a given date to the accuracy provided
998
998
999 '<{date}' on or before a given date
999 '<{date}' on or before a given date
1000
1000
1001 '>{date}' on or after a given date
1001 '>{date}' on or after a given date
1002
1002
1003 >>> p1 = parsedate("10:29:59")
1003 >>> p1 = parsedate("10:29:59")
1004 >>> p2 = parsedate("10:30:00")
1004 >>> p2 = parsedate("10:30:00")
1005 >>> p3 = parsedate("10:30:59")
1005 >>> p3 = parsedate("10:30:59")
1006 >>> p4 = parsedate("10:31:00")
1006 >>> p4 = parsedate("10:31:00")
1007 >>> p5 = parsedate("Sep 15 10:30:00 1999")
1007 >>> p5 = parsedate("Sep 15 10:30:00 1999")
1008 >>> f = matchdate("10:30")
1008 >>> f = matchdate("10:30")
1009 >>> f(p1[0])
1009 >>> f(p1[0])
1010 False
1010 False
1011 >>> f(p2[0])
1011 >>> f(p2[0])
1012 True
1012 True
1013 >>> f(p3[0])
1013 >>> f(p3[0])
1014 True
1014 True
1015 >>> f(p4[0])
1015 >>> f(p4[0])
1016 False
1016 False
1017 >>> f(p5[0])
1017 >>> f(p5[0])
1018 False
1018 False
1019 """
1019 """
1020
1020
1021 def lower(date):
1021 def lower(date):
1022 d = dict(mb="1", d="1")
1022 d = dict(mb="1", d="1")
1023 return parsedate(date, extendeddateformats, d)[0]
1023 return parsedate(date, extendeddateformats, d)[0]
1024
1024
1025 def upper(date):
1025 def upper(date):
1026 d = dict(mb="12", HI="23", M="59", S="59")
1026 d = dict(mb="12", HI="23", M="59", S="59")
1027 for days in ("31", "30", "29"):
1027 for days in ("31", "30", "29"):
1028 try:
1028 try:
1029 d["d"] = days
1029 d["d"] = days
1030 return parsedate(date, extendeddateformats, d)[0]
1030 return parsedate(date, extendeddateformats, d)[0]
1031 except:
1031 except:
1032 pass
1032 pass
1033 d["d"] = "28"
1033 d["d"] = "28"
1034 return parsedate(date, extendeddateformats, d)[0]
1034 return parsedate(date, extendeddateformats, d)[0]
1035
1035
1036 date = date.strip()
1036 date = date.strip()
1037
1037
1038 if not date:
1038 if not date:
1039 raise Abort(_("dates cannot consist entirely of whitespace"))
1039 raise Abort(_("dates cannot consist entirely of whitespace"))
1040 elif date[0] == "<":
1040 elif date[0] == "<":
1041 if not date[1:]:
1041 if not date[1:]:
1042 raise Abort(_("invalid day spec, use '<DATE'"))
1042 raise Abort(_("invalid day spec, use '<DATE'"))
1043 when = upper(date[1:])
1043 when = upper(date[1:])
1044 return lambda x: x <= when
1044 return lambda x: x <= when
1045 elif date[0] == ">":
1045 elif date[0] == ">":
1046 if not date[1:]:
1046 if not date[1:]:
1047 raise Abort(_("invalid day spec, use '>DATE'"))
1047 raise Abort(_("invalid day spec, use '>DATE'"))
1048 when = lower(date[1:])
1048 when = lower(date[1:])
1049 return lambda x: x >= when
1049 return lambda x: x >= when
1050 elif date[0] == "-":
1050 elif date[0] == "-":
1051 try:
1051 try:
1052 days = int(date[1:])
1052 days = int(date[1:])
1053 except ValueError:
1053 except ValueError:
1054 raise Abort(_("invalid day spec: %s") % date[1:])
1054 raise Abort(_("invalid day spec: %s") % date[1:])
1055 if days < 0:
1055 if days < 0:
1056 raise Abort(_("%s must be nonnegative (see 'hg help dates')")
1056 raise Abort(_("%s must be nonnegative (see 'hg help dates')")
1057 % date[1:])
1057 % date[1:])
1058 when = makedate()[0] - days * 3600 * 24
1058 when = makedate()[0] - days * 3600 * 24
1059 return lambda x: x >= when
1059 return lambda x: x >= when
1060 elif " to " in date:
1060 elif " to " in date:
1061 a, b = date.split(" to ")
1061 a, b = date.split(" to ")
1062 start, stop = lower(a), upper(b)
1062 start, stop = lower(a), upper(b)
1063 return lambda x: x >= start and x <= stop
1063 return lambda x: x >= start and x <= stop
1064 else:
1064 else:
1065 start, stop = lower(date), upper(date)
1065 start, stop = lower(date), upper(date)
1066 return lambda x: x >= start and x <= stop
1066 return lambda x: x >= start and x <= stop
1067
1067
1068 def shortuser(user):
1068 def shortuser(user):
1069 """Return a short representation of a user name or email address."""
1069 """Return a short representation of a user name or email address."""
1070 f = user.find('@')
1070 f = user.find('@')
1071 if f >= 0:
1071 if f >= 0:
1072 user = user[:f]
1072 user = user[:f]
1073 f = user.find('<')
1073 f = user.find('<')
1074 if f >= 0:
1074 if f >= 0:
1075 user = user[f + 1:]
1075 user = user[f + 1:]
1076 f = user.find(' ')
1076 f = user.find(' ')
1077 if f >= 0:
1077 if f >= 0:
1078 user = user[:f]
1078 user = user[:f]
1079 f = user.find('.')
1079 f = user.find('.')
1080 if f >= 0:
1080 if f >= 0:
1081 user = user[:f]
1081 user = user[:f]
1082 return user
1082 return user
1083
1083
1084 def email(author):
1084 def email(author):
1085 '''get email of author.'''
1085 '''get email of author.'''
1086 r = author.find('>')
1086 r = author.find('>')
1087 if r == -1:
1087 if r == -1:
1088 r = None
1088 r = None
1089 return author[author.find('<') + 1:r]
1089 return author[author.find('<') + 1:r]
1090
1090
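# Illustrative sketch (not from the original source): how a full
# "Name <addr>" author string is reduced by email() and shortuser().
def _userdemo():
    author = 'J. Random Hacker <jrandom.hacker@example.com>'
    addr = email(author)        # 'jrandom.hacker@example.com'
    short = shortuser(author)   # 'jrandom'
    return addr, short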
1091 def _ellipsis(text, maxlength):
1091 def _ellipsis(text, maxlength):
1092 if len(text) <= maxlength:
1092 if len(text) <= maxlength:
1093 return text, False
1093 return text, False
1094 else:
1094 else:
1095 return "%s..." % (text[:maxlength - 3]), True
1095 return "%s..." % (text[:maxlength - 3]), True
1096
1096
1097 def ellipsis(text, maxlength=400):
1097 def ellipsis(text, maxlength=400):
1098 """Trim string to at most maxlength (default: 400) characters."""
1098 """Trim string to at most maxlength (default: 400) characters."""
1099 try:
1099 try:
1100 # use unicode not to split at intermediate multi-byte sequence
1100 # use unicode not to split at intermediate multi-byte sequence
1101 utext, truncated = _ellipsis(text.decode(encoding.encoding),
1101 utext, truncated = _ellipsis(text.decode(encoding.encoding),
1102 maxlength)
1102 maxlength)
1103 if not truncated:
1103 if not truncated:
1104 return text
1104 return text
1105 return utext.encode(encoding.encoding)
1105 return utext.encode(encoding.encoding)
1106 except (UnicodeDecodeError, UnicodeEncodeError):
1106 except (UnicodeDecodeError, UnicodeEncodeError):
1107 return _ellipsis(text, maxlength)[0]
1107 return _ellipsis(text, maxlength)[0]
1108
1108
1109 def bytecount(nbytes):
1109 def bytecount(nbytes):
1110 '''return byte count formatted as readable string, with units'''
1110 '''return byte count formatted as readable string, with units'''
1111
1111
1112 units = (
1112 units = (
1113 (100, 1 << 30, _('%.0f GB')),
1113 (100, 1 << 30, _('%.0f GB')),
1114 (10, 1 << 30, _('%.1f GB')),
1114 (10, 1 << 30, _('%.1f GB')),
1115 (1, 1 << 30, _('%.2f GB')),
1115 (1, 1 << 30, _('%.2f GB')),
1116 (100, 1 << 20, _('%.0f MB')),
1116 (100, 1 << 20, _('%.0f MB')),
1117 (10, 1 << 20, _('%.1f MB')),
1117 (10, 1 << 20, _('%.1f MB')),
1118 (1, 1 << 20, _('%.2f MB')),
1118 (1, 1 << 20, _('%.2f MB')),
1119 (100, 1 << 10, _('%.0f KB')),
1119 (100, 1 << 10, _('%.0f KB')),
1120 (10, 1 << 10, _('%.1f KB')),
1120 (10, 1 << 10, _('%.1f KB')),
1121 (1, 1 << 10, _('%.2f KB')),
1121 (1, 1 << 10, _('%.2f KB')),
1122 (1, 1, _('%.0f bytes')),
1122 (1, 1, _('%.0f bytes')),
1123 )
1123 )
1124
1124
1125 for multiplier, divisor, format in units:
1125 for multiplier, divisor, format in units:
1126 if nbytes >= divisor * multiplier:
1126 if nbytes >= divisor * multiplier:
1127 return format % (nbytes / float(divisor))
1127 return format % (nbytes / float(divisor))
1128 return units[-1][2] % nbytes
1128 return units[-1][2] % nbytes
1129
1129
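# Illustrative sketch (not from the original source): bytecount() picks the
# largest unit the value reaches and shows more decimal places the smaller
# the figure is in that unit.
def _bytedemo():
    # expected output is roughly:
    # ['500 bytes', '5.00 KB', '5.00 MB', '5.00 GB']
    return [bytecount(n) for n in (500, 5 * 1024, 5 * 1024 ** 2, 5 * 1024 ** 3)]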
1130 def uirepr(s):
1130 def uirepr(s):
1131 # Avoid double backslash in Windows path repr()
1131 # Avoid double backslash in Windows path repr()
1132 return repr(s).replace('\\\\', '\\')
1132 return repr(s).replace('\\\\', '\\')
1133
1133
1134 # delay import of textwrap
1134 # delay import of textwrap
1135 def MBTextWrapper(**kwargs):
1135 def MBTextWrapper(**kwargs):
1136 class tw(textwrap.TextWrapper):
1136 class tw(textwrap.TextWrapper):
1137 """
1137 """
1138 Extend TextWrapper for double-width characters.
1138 Extend TextWrapper for double-width characters.
1139
1139
1140 Some Asian characters use two terminal columns instead of one.
1140 Some Asian characters use two terminal columns instead of one.
1141 A good example of this behavior can be seen with u'\u65e5\u672c',
1141 A good example of this behavior can be seen with u'\u65e5\u672c',
1142 the two Japanese characters for "Japan":
1142 the two Japanese characters for "Japan":
1143 len() returns 2, but when printed to a terminal, they eat 4 columns.
1143 len() returns 2, but when printed to a terminal, they eat 4 columns.
1144
1144
1145 (Note that this has nothing to do whatsoever with unicode
1145 (Note that this has nothing to do whatsoever with unicode
1146 representation, or encoding of the underlying string)
1146 representation, or encoding of the underlying string)
1147 """
1147 """
1148 def __init__(self, **kwargs):
1148 def __init__(self, **kwargs):
1149 textwrap.TextWrapper.__init__(self, **kwargs)
1149 textwrap.TextWrapper.__init__(self, **kwargs)
1150
1150
1151 def _cutdown(self, str, space_left):
1151 def _cutdown(self, str, space_left):
1152 l = 0
1152 l = 0
1153 ucstr = unicode(str, encoding.encoding)
1153 ucstr = unicode(str, encoding.encoding)
1154 colwidth = unicodedata.east_asian_width
1154 colwidth = unicodedata.east_asian_width
1155 for i in xrange(len(ucstr)):
1155 for i in xrange(len(ucstr)):
1156 l += colwidth(ucstr[i]) in 'WFA' and 2 or 1
1156 l += colwidth(ucstr[i]) in 'WFA' and 2 or 1
1157 if space_left < l:
1157 if space_left < l:
1158 return (ucstr[:i].encode(encoding.encoding),
1158 return (ucstr[:i].encode(encoding.encoding),
1159 ucstr[i:].encode(encoding.encoding))
1159 ucstr[i:].encode(encoding.encoding))
1160 return str, ''
1160 return str, ''
1161
1161
1162 # overriding of base class
1162 # overriding of base class
1163 def _handle_long_word(self, reversed_chunks, cur_line, cur_len, width):
1163 def _handle_long_word(self, reversed_chunks, cur_line, cur_len, width):
1164 space_left = max(width - cur_len, 1)
1164 space_left = max(width - cur_len, 1)
1165
1165
1166 if self.break_long_words:
1166 if self.break_long_words:
1167 cut, res = self._cutdown(reversed_chunks[-1], space_left)
1167 cut, res = self._cutdown(reversed_chunks[-1], space_left)
1168 cur_line.append(cut)
1168 cur_line.append(cut)
1169 reversed_chunks[-1] = res
1169 reversed_chunks[-1] = res
1170 elif not cur_line:
1170 elif not cur_line:
1171 cur_line.append(reversed_chunks.pop())
1171 cur_line.append(reversed_chunks.pop())
1172
1172
1173 global MBTextWrapper
1173 global MBTextWrapper
1174 MBTextWrapper = tw
1174 MBTextWrapper = tw
1175 return tw(**kwargs)
1175 return tw(**kwargs)
1176
1176
1177 def wrap(line, width, initindent='', hangindent=''):
1177 def wrap(line, width, initindent='', hangindent=''):
1178 maxindent = max(len(hangindent), len(initindent))
1178 maxindent = max(len(hangindent), len(initindent))
1179 if width <= maxindent:
1179 if width <= maxindent:
1180 # adjust for weird terminal size
1180 # adjust for weird terminal size
1181 width = max(78, maxindent + 1)
1181 width = max(78, maxindent + 1)
1182 wrapper = MBTextWrapper(width=width,
1182 wrapper = MBTextWrapper(width=width,
1183 initial_indent=initindent,
1183 initial_indent=initindent,
1184 subsequent_indent=hangindent)
1184 subsequent_indent=hangindent)
1185 return wrapper.fill(line)
1185 return wrapper.fill(line)
1186
1186
1187 def iterlines(iterator):
1187 def iterlines(iterator):
1188 for chunk in iterator:
1188 for chunk in iterator:
1189 for line in chunk.splitlines():
1189 for line in chunk.splitlines():
1190 yield line
1190 yield line
1191
1191
1192 def expandpath(path):
1192 def expandpath(path):
1193 return os.path.expanduser(os.path.expandvars(path))
1193 return os.path.expanduser(os.path.expandvars(path))
1194
1194
1195 def hgcmd():
1195 def hgcmd():
1196 """Return the command used to execute current hg
1196 """Return the command used to execute current hg
1197
1197
1198 This is different from hgexecutable() because on Windows we want
1198 This is different from hgexecutable() because on Windows we want
1199 to avoid things opening new shell windows like batch files, so we
1199 to avoid things opening new shell windows like batch files, so we
1200 get either the python call or current executable.
1200 get either the python call or current executable.
1201 """
1201 """
1202 if mainfrozen():
1202 if mainfrozen():
1203 return [sys.executable]
1203 return [sys.executable]
1204 return gethgcmd()
1204 return gethgcmd()
1205
1205
1206 def rundetached(args, condfn):
1206 def rundetached(args, condfn):
1207 """Execute the argument list in a detached process.
1207 """Execute the argument list in a detached process.
1208
1208
1209 condfn is a callable which is called repeatedly and should return
1209 condfn is a callable which is called repeatedly and should return
1210 True once the child process is known to have started successfully.
1210 True once the child process is known to have started successfully.
1211 At this point, the child process PID is returned. If the child
1211 At this point, the child process PID is returned. If the child
1212 process fails to start or finishes before condfn() evaluates to
1212 process fails to start or finishes before condfn() evaluates to
1213 True, return -1.
1213 True, return -1.
1214 """
1214 """
1215 # Windows case is easier because the child process is either
1215 # Windows case is easier because the child process is either
1216 # successfully starting and validating the condition or exiting
1216 # successfully starting and validating the condition or exiting
1217 # on failure. We just poll on its PID. On Unix, if the child
1217 # on failure. We just poll on its PID. On Unix, if the child
1218 # process fails to start, it will be left in a zombie state until
1218 # process fails to start, it will be left in a zombie state until
1219 # the parent wait on it, which we cannot do since we expect a long
1219 # the parent wait on it, which we cannot do since we expect a long
1220 # running process on success. Instead we listen for SIGCHLD telling
1220 # running process on success. Instead we listen for SIGCHLD telling
1221 # us our child process terminated.
1221 # us our child process terminated.
1222 terminated = set()
1222 terminated = set()
1223 def handler(signum, frame):
1223 def handler(signum, frame):
1224 terminated.add(os.wait())
1224 terminated.add(os.wait())
1225 prevhandler = None
1225 prevhandler = None
1226 if hasattr(signal, 'SIGCHLD'):
1226 if hasattr(signal, 'SIGCHLD'):
1227 prevhandler = signal.signal(signal.SIGCHLD, handler)
1227 prevhandler = signal.signal(signal.SIGCHLD, handler)
1228 try:
1228 try:
1229 pid = spawndetached(args)
1229 pid = spawndetached(args)
1230 while not condfn():
1230 while not condfn():
1231 if ((pid in terminated or not testpid(pid))
1231 if ((pid in terminated or not testpid(pid))
1232 and not condfn()):
1232 and not condfn()):
1233 return -1
1233 return -1
1234 time.sleep(0.1)
1234 time.sleep(0.1)
1235 return pid
1235 return pid
1236 finally:
1236 finally:
1237 if prevhandler is not None:
1237 if prevhandler is not None:
1238 signal.signal(signal.SIGCHLD, prevhandler)
1238 signal.signal(signal.SIGCHLD, prevhandler)
1239
1239
1240 try:
1240 try:
1241 any, all = any, all
1241 any, all = any, all
1242 except NameError:
1242 except NameError:
1243 def any(iterable):
1243 def any(iterable):
1244 for i in iterable:
1244 for i in iterable:
1245 if i:
1245 if i:
1246 return True
1246 return True
1247 return False
1247 return False
1248
1248
1249 def all(iterable):
1249 def all(iterable):
1250 for i in iterable:
1250 for i in iterable:
1251 if not i:
1251 if not i:
1252 return False
1252 return False
1253 return True
1253 return True
1254
1254
1255 def interpolate(prefix, mapping, s, fn=None, escape_prefix=False):
1255 def interpolate(prefix, mapping, s, fn=None, escape_prefix=False):
1256 """Return the result of interpolating items in the mapping into string s.
1256 """Return the result of interpolating items in the mapping into string s.
1257
1257
1258 prefix is a single character string, or a two character string with
1258 prefix is a single character string, or a two character string with
1259 a backslash as the first character if the prefix needs to be escaped in
1259 a backslash as the first character if the prefix needs to be escaped in
1260 a regular expression.
1260 a regular expression.
1261
1261
1262 fn is an optional function that will be applied to the replacement text
1262 fn is an optional function that will be applied to the replacement text
1263 just before replacement.
1263 just before replacement.
1264
1264
1265 escape_prefix is an optional flag that allows a doubled prefix to be
1265 escape_prefix is an optional flag that allows a doubled prefix to be
1266 used as an escape for the prefix character itself.
1266 used as an escape for the prefix character itself.
1267 """
1267 """
1268 fn = fn or (lambda s: s)
1268 fn = fn or (lambda s: s)
1269 patterns = '|'.join(mapping.keys())
1269 patterns = '|'.join(mapping.keys())
1270 if escape_prefix:
1270 if escape_prefix:
1271 patterns += '|' + prefix
1271 patterns += '|' + prefix
1272 if len(prefix) > 1:
1272 if len(prefix) > 1:
1273 prefix_char = prefix[1:]
1273 prefix_char = prefix[1:]
1274 else:
1274 else:
1275 prefix_char = prefix
1275 prefix_char = prefix
1276 mapping[prefix_char] = prefix_char
1276 mapping[prefix_char] = prefix_char
1277 r = re.compile(r'%s(%s)' % (prefix, patterns))
1277 r = re.compile(r'%s(%s)' % (prefix, patterns))
1278 return r.sub(lambda x: fn(mapping[x.group()[1:]]), s)
1278 return r.sub(lambda x: fn(mapping[x.group()[1:]]), s)
1279
1279
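Illustrative calls (not in the original file), assuming interpolate is used from this module; the second example passes escape_prefix=True so a doubled prefix yields a literal prefix character:

    >>> interpolate('%', {'foo': 'bar'}, 'say %foo')
    'say bar'
    >>> interpolate(r'\$', {'foo': 'bar'}, 'cost $foo, literal $$',
    ...             escape_prefix=True)
    'cost bar, literal $'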
1280 def getport(port):
1280 def getport(port):
1281 """Return the port for a given network service.
1281 """Return the port for a given network service.
1282
1282
1283 If port is an integer, it's returned as is. If it's a string, it's
1283 If port is an integer, it's returned as is. If it's a string, it's
1284 looked up using socket.getservbyname(). If there's no matching
1284 looked up using socket.getservbyname(). If there's no matching
1285 service, util.Abort is raised.
1285 service, util.Abort is raised.
1286 """
1286 """
1287 try:
1287 try:
1288 return int(port)
1288 return int(port)
1289 except ValueError:
1289 except ValueError:
1290 pass
1290 pass
1291
1291
1292 try:
1292 try:
1293 return socket.getservbyname(port)
1293 return socket.getservbyname(port)
1294 except socket.error:
1294 except socket.error:
1295 raise Abort(_("no port number associated with service '%s'") % port)
1295 raise Abort(_("no port number associated with service '%s'") % port)
1296
1296
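For example (illustrative, not from the original file; the name lookup assumes a standard services database where 'http' maps to 80):

    >>> getport(8080)
    8080
    >>> getport('8080')
    8080
    >>> getport('http')
    80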
1297 _booleans = {'1': True, 'yes': True, 'true': True, 'on': True, 'always': True,
1297 _booleans = {'1': True, 'yes': True, 'true': True, 'on': True, 'always': True,
1298 '0': False, 'no': False, 'false': False, 'off': False,
1298 '0': False, 'no': False, 'false': False, 'off': False,
1299 'never': False}
1299 'never': False}
1300
1300
1301 def parsebool(s):
1301 def parsebool(s):
1302 """Parse s into a boolean.
1302 """Parse s into a boolean.
1303
1303
1304 If s is not a valid boolean, returns None.
1304 If s is not a valid boolean, returns None.
1305 """
1305 """
1306 return _booleans.get(s.lower(), None)
1306 return _booleans.get(s.lower(), None)
1307
1307
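Illustrative behaviour (not in the original file); matching is case-insensitive and unrecognized strings give None:

    >>> parsebool('on')
    True
    >>> parsebool('Never')
    False
    >>> parsebool('maybe') is None
    True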
1308 _hexdig = '0123456789ABCDEFabcdef'
1308 _hexdig = '0123456789ABCDEFabcdef'
1309 _hextochr = dict((a + b, chr(int(a + b, 16)))
1309 _hextochr = dict((a + b, chr(int(a + b, 16)))
1310 for a in _hexdig for b in _hexdig)
1310 for a in _hexdig for b in _hexdig)
1311
1311
1312 def _urlunquote(s):
1312 def _urlunquote(s):
1313 """unquote('abc%20def') -> 'abc def'."""
1313 """unquote('abc%20def') -> 'abc def'."""
1314 res = s.split('%')
1314 res = s.split('%')
1315 # fastpath
1315 # fastpath
1316 if len(res) == 1:
1316 if len(res) == 1:
1317 return s
1317 return s
1318 s = res[0]
1318 s = res[0]
1319 for item in res[1:]:
1319 for item in res[1:]:
1320 try:
1320 try:
1321 s += _hextochr[item[:2]] + item[2:]
1321 s += _hextochr[item[:2]] + item[2:]
1322 except KeyError:
1322 except KeyError:
1323 s += '%' + item
1323 s += '%' + item
1324 except UnicodeDecodeError:
1324 except UnicodeDecodeError:
1325 s += unichr(int(item[:2], 16)) + item[2:]
1325 s += unichr(int(item[:2], 16)) + item[2:]
1326 return s
1326 return s
1327
1327
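Two illustrative cases (not in the original file): a valid percent escape is decoded, while an invalid one is left untouched:

    >>> _urlunquote('abc%20def')
    'abc def'
    >>> _urlunquote('100%complete')
    '100%complete'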
1328 class url(object):
1328 class url(object):
1329 r"""Reliable URL parser.
1329 r"""Reliable URL parser.
1330
1330
1331 This parses URLs and provides attributes for the following
1331 This parses URLs and provides attributes for the following
1332 components:
1332 components:
1333
1333
1334 <scheme>://<user>:<passwd>@<host>:<port>/<path>?<query>#<fragment>
1334 <scheme>://<user>:<passwd>@<host>:<port>/<path>?<query>#<fragment>
1335
1335
1336 Missing components are set to None. The only exception is
1336 Missing components are set to None. The only exception is
1337 fragment, which is set to '' if present but empty.
1337 fragment, which is set to '' if present but empty.
1338
1338
1339 If parsefragment is False, fragment is included in query. If
1339 If parsefragment is False, fragment is included in query. If
1340 parsequery is False, query is included in path. If both are
1340 parsequery is False, query is included in path. If both are
1341 False, both fragment and query are included in path.
1341 False, both fragment and query are included in path.
1342
1342
1343 See http://www.ietf.org/rfc/rfc2396.txt for more information.
1343 See http://www.ietf.org/rfc/rfc2396.txt for more information.
1344
1344
1345 Note that for backward compatibility reasons, bundle URLs do not
1345 Note that for backward compatibility reasons, bundle URLs do not
1346 take host names. That means 'bundle://../' has a path of '../'.
1346 take host names. That means 'bundle://../' has a path of '../'.
1347
1347
1348 Examples:
1348 Examples:
1349
1349
1350 >>> url('http://www.ietf.org/rfc/rfc2396.txt')
1350 >>> url('http://www.ietf.org/rfc/rfc2396.txt')
1351 <url scheme: 'http', host: 'www.ietf.org', path: 'rfc/rfc2396.txt'>
1351 <url scheme: 'http', host: 'www.ietf.org', path: 'rfc/rfc2396.txt'>
1352 >>> url('ssh://[::1]:2200//home/joe/repo')
1352 >>> url('ssh://[::1]:2200//home/joe/repo')
1353 <url scheme: 'ssh', host: '[::1]', port: '2200', path: '/home/joe/repo'>
1353 <url scheme: 'ssh', host: '[::1]', port: '2200', path: '/home/joe/repo'>
1354 >>> url('file:///home/joe/repo')
1354 >>> url('file:///home/joe/repo')
1355 <url scheme: 'file', path: '/home/joe/repo'>
1355 <url scheme: 'file', path: '/home/joe/repo'>
1356 >>> url('bundle:foo')
1356 >>> url('bundle:foo')
1357 <url scheme: 'bundle', path: 'foo'>
1357 <url scheme: 'bundle', path: 'foo'>
1358 >>> url('bundle://../foo')
1358 >>> url('bundle://../foo')
1359 <url scheme: 'bundle', path: '../foo'>
1359 <url scheme: 'bundle', path: '../foo'>
1360 >>> url(r'c:\foo\bar')
1360 >>> url(r'c:\foo\bar')
1361 <url path: 'c:\\foo\\bar'>
1361 <url path: 'c:\\foo\\bar'>
1362
1362
1363 Authentication credentials:
1363 Authentication credentials:
1364
1364
1365 >>> url('ssh://joe:xyz@x/repo')
1365 >>> url('ssh://joe:xyz@x/repo')
1366 <url scheme: 'ssh', user: 'joe', passwd: 'xyz', host: 'x', path: 'repo'>
1366 <url scheme: 'ssh', user: 'joe', passwd: 'xyz', host: 'x', path: 'repo'>
1367 >>> url('ssh://joe@x/repo')
1367 >>> url('ssh://joe@x/repo')
1368 <url scheme: 'ssh', user: 'joe', host: 'x', path: 'repo'>
1368 <url scheme: 'ssh', user: 'joe', host: 'x', path: 'repo'>
1369
1369
1370 Query strings and fragments:
1370 Query strings and fragments:
1371
1371
1372 >>> url('http://host/a?b#c')
1372 >>> url('http://host/a?b#c')
1373 <url scheme: 'http', host: 'host', path: 'a', query: 'b', fragment: 'c'>
1373 <url scheme: 'http', host: 'host', path: 'a', query: 'b', fragment: 'c'>
1374 >>> url('http://host/a?b#c', parsequery=False, parsefragment=False)
1374 >>> url('http://host/a?b#c', parsequery=False, parsefragment=False)
1375 <url scheme: 'http', host: 'host', path: 'a?b#c'>
1375 <url scheme: 'http', host: 'host', path: 'a?b#c'>
1376 """
1376 """
1377
1377
1378 _safechars = "!~*'()+"
1378 _safechars = "!~*'()+"
1379 _safepchars = "/!~*'()+"
1379 _safepchars = "/!~*'()+"
1380 _matchscheme = re.compile(r'^[a-zA-Z0-9+.\-]+:').match
1380 _matchscheme = re.compile(r'^[a-zA-Z0-9+.\-]+:').match
1381
1381
1382 def __init__(self, path, parsequery=True, parsefragment=True):
1382 def __init__(self, path, parsequery=True, parsefragment=True):
1383 # We slowly chomp away at path until we have only the path left
1383 # We slowly chomp away at path until we have only the path left
1384 self.scheme = self.user = self.passwd = self.host = None
1384 self.scheme = self.user = self.passwd = self.host = None
1385 self.port = self.path = self.query = self.fragment = None
1385 self.port = self.path = self.query = self.fragment = None
1386 self._localpath = True
1386 self._localpath = True
1387 self._hostport = ''
1387 self._hostport = ''
1388 self._origpath = path
1388 self._origpath = path
1389
1389
1390 # special case for Windows drive letters
1390 # special case for Windows drive letters
1391 if hasdriveletter(path):
1391 if hasdriveletter(path):
1392 self.path = path
1392 self.path = path
1393 return
1393 return
1394
1394
1395 # For compatibility reasons, we can't handle bundle paths as
1395 # For compatibility reasons, we can't handle bundle paths as
1396 # normal URLs
1396 # normal URLs
1397 if path.startswith('bundle:'):
1397 if path.startswith('bundle:'):
1398 self.scheme = 'bundle'
1398 self.scheme = 'bundle'
1399 path = path[7:]
1399 path = path[7:]
1400 if path.startswith('//'):
1400 if path.startswith('//'):
1401 path = path[2:]
1401 path = path[2:]
1402 self.path = path
1402 self.path = path
1403 return
1403 return
1404
1404
1405 if self._matchscheme(path):
1405 if self._matchscheme(path):
1406 parts = path.split(':', 1)
1406 parts = path.split(':', 1)
1407 if parts[0]:
1407 if parts[0]:
1408 self.scheme, path = parts
1408 self.scheme, path = parts
1409 self._localpath = False
1409 self._localpath = False
1410
1410
1411 if not path:
1411 if not path:
1412 path = None
1412 path = None
1413 if self._localpath:
1413 if self._localpath:
1414 self.path = ''
1414 self.path = ''
1415 return
1415 return
1416 else:
1416 else:
1417 if parsefragment and '#' in path:
1417 if parsefragment and '#' in path:
1418 path, self.fragment = path.split('#', 1)
1418 path, self.fragment = path.split('#', 1)
1419 if not path:
1419 if not path:
1420 path = None
1420 path = None
1421 if self._localpath:
1421 if self._localpath:
1422 self.path = path
1422 self.path = path
1423 return
1423 return
1424
1424
1425 if parsequery and '?' in path:
1425 if parsequery and '?' in path:
1426 path, self.query = path.split('?', 1)
1426 path, self.query = path.split('?', 1)
1427 if not path:
1427 if not path:
1428 path = None
1428 path = None
1429 if not self.query:
1429 if not self.query:
1430 self.query = None
1430 self.query = None
1431
1431
1432 # // is required to specify a host/authority
1432 # // is required to specify a host/authority
1433 if path and path.startswith('//'):
1433 if path and path.startswith('//'):
1434 parts = path[2:].split('/', 1)
1434 parts = path[2:].split('/', 1)
1435 if len(parts) > 1:
1435 if len(parts) > 1:
1436 self.host, path = parts
1436 self.host, path = parts
1437 path = path
1437 path = path
1438 else:
1438 else:
1439 self.host = parts[0]
1439 self.host = parts[0]
1440 path = None
1440 path = None
1441 if not self.host:
1441 if not self.host:
1442 self.host = None
1442 self.host = None
1443 if path:
1443 if path:
1444 path = '/' + path
1444 path = '/' + path
1445
1445
1446 if self.host and '@' in self.host:
1446 if self.host and '@' in self.host:
1447 self.user, self.host = self.host.rsplit('@', 1)
1447 self.user, self.host = self.host.rsplit('@', 1)
1448 if ':' in self.user:
1448 if ':' in self.user:
1449 self.user, self.passwd = self.user.split(':', 1)
1449 self.user, self.passwd = self.user.split(':', 1)
1450 if not self.host:
1450 if not self.host:
1451 self.host = None
1451 self.host = None
1452
1452
1453 # Don't split on colons in IPv6 addresses without ports
1453 # Don't split on colons in IPv6 addresses without ports
1454 if (self.host and ':' in self.host and
1454 if (self.host and ':' in self.host and
1455 not (self.host.startswith('[') and self.host.endswith(']'))):
1455 not (self.host.startswith('[') and self.host.endswith(']'))):
1456 self._hostport = self.host
1456 self._hostport = self.host
1457 self.host, self.port = self.host.rsplit(':', 1)
1457 self.host, self.port = self.host.rsplit(':', 1)
1458 if not self.host:
1458 if not self.host:
1459 self.host = None
1459 self.host = None
1460
1460
1461 if (self.host and self.scheme == 'file' and
1461 if (self.host and self.scheme == 'file' and
1462 self.host not in ('localhost', '127.0.0.1', '[::1]')):
1462 self.host not in ('localhost', '127.0.0.1', '[::1]')):
1463 raise Abort(_('file:// URLs can only refer to localhost'))
1463 raise Abort(_('file:// URLs can only refer to localhost'))
1464
1464
1465 self.path = path
1465 self.path = path
1466
1466
1467 for a in ('user', 'passwd', 'host', 'port',
1467 for a in ('user', 'passwd', 'host', 'port',
1468 'path', 'query', 'fragment'):
1468 'path', 'query', 'fragment'):
1469 v = getattr(self, a)
1469 v = getattr(self, a)
1470 if v is not None:
1470 if v is not None:
1471 setattr(self, a, _urlunquote(v))
1471 setattr(self, a, _urlunquote(v))
1472
1472
1473 def __repr__(self):
1473 def __repr__(self):
1474 attrs = []
1474 attrs = []
1475 for a in ('scheme', 'user', 'passwd', 'host', 'port', 'path',
1475 for a in ('scheme', 'user', 'passwd', 'host', 'port', 'path',
1476 'query', 'fragment'):
1476 'query', 'fragment'):
1477 v = getattr(self, a)
1477 v = getattr(self, a)
1478 if v is not None:
1478 if v is not None:
1479 attrs.append('%s: %r' % (a, v))
1479 attrs.append('%s: %r' % (a, v))
1480 return '<url %s>' % ', '.join(attrs)
1480 return '<url %s>' % ', '.join(attrs)
1481
1481
1482 def __str__(self):
1482 def __str__(self):
1483 r"""Join the URL's components back into a URL string.
1483 r"""Join the URL's components back into a URL string.
1484
1484
1485 Examples:
1485 Examples:
1486
1486
1487 >>> str(url('http://user:pw@host:80/?foo#bar'))
1487 >>> str(url('http://user:pw@host:80/?foo#bar'))
1488 'http://user:pw@host:80/?foo#bar'
1488 'http://user:pw@host:80/?foo#bar'
1489 >>> str(url('ssh://user:pw@[::1]:2200//home/joe#'))
1489 >>> str(url('ssh://user:pw@[::1]:2200//home/joe#'))
1490 'ssh://user:pw@[::1]:2200//home/joe#'
1490 'ssh://user:pw@[::1]:2200//home/joe#'
1491 >>> str(url('http://localhost:80//'))
1491 >>> str(url('http://localhost:80//'))
1492 'http://localhost:80//'
1492 'http://localhost:80//'
1493 >>> str(url('http://localhost:80/'))
1493 >>> str(url('http://localhost:80/'))
1494 'http://localhost:80/'
1494 'http://localhost:80/'
1495 >>> str(url('http://localhost:80'))
1495 >>> str(url('http://localhost:80'))
1496 'http://localhost:80/'
1496 'http://localhost:80/'
1497 >>> str(url('bundle:foo'))
1497 >>> str(url('bundle:foo'))
1498 'bundle:foo'
1498 'bundle:foo'
1499 >>> str(url('bundle://../foo'))
1499 >>> str(url('bundle://../foo'))
1500 'bundle:../foo'
1500 'bundle:../foo'
1501 >>> str(url('path'))
1501 >>> str(url('path'))
1502 'path'
1502 'path'
1503 >>> print url(r'bundle:foo\bar')
1503 >>> print url(r'bundle:foo\bar')
1504 bundle:foo\bar
1504 bundle:foo\bar
1505 """
1505 """
1506 if self._localpath:
1506 if self._localpath:
1507 s = self.path
1507 s = self.path
1508 if self.scheme == 'bundle':
1508 if self.scheme == 'bundle':
1509 s = 'bundle:' + s
1509 s = 'bundle:' + s
1510 if self.fragment:
1510 if self.fragment:
1511 s += '#' + self.fragment
1511 s += '#' + self.fragment
1512 return s
1512 return s
1513
1513
1514 s = self.scheme + ':'
1514 s = self.scheme + ':'
1515 if (self.user or self.passwd or self.host or
1515 if (self.user or self.passwd or self.host or
1516 self.scheme and not self.path):
1516 self.scheme and not self.path):
1517 s += '//'
1517 s += '//'
1518 if self.user:
1518 if self.user:
1519 s += urllib.quote(self.user, safe=self._safechars)
1519 s += urllib.quote(self.user, safe=self._safechars)
1520 if self.passwd:
1520 if self.passwd:
1521 s += ':' + urllib.quote(self.passwd, safe=self._safechars)
1521 s += ':' + urllib.quote(self.passwd, safe=self._safechars)
1522 if self.user or self.passwd:
1522 if self.user or self.passwd:
1523 s += '@'
1523 s += '@'
1524 if self.host:
1524 if self.host:
1525 if not (self.host.startswith('[') and self.host.endswith(']')):
1525 if not (self.host.startswith('[') and self.host.endswith(']')):
1526 s += urllib.quote(self.host)
1526 s += urllib.quote(self.host)
1527 else:
1527 else:
1528 s += self.host
1528 s += self.host
1529 if self.port:
1529 if self.port:
1530 s += ':' + urllib.quote(self.port)
1530 s += ':' + urllib.quote(self.port)
1531 if self.host:
1531 if self.host:
1532 s += '/'
1532 s += '/'
1533 if self.path:
1533 if self.path:
1534 s += urllib.quote(self.path, safe=self._safepchars)
1534 s += urllib.quote(self.path, safe=self._safepchars)
1535 if self.query:
1535 if self.query:
1536 s += '?' + urllib.quote(self.query, safe=self._safepchars)
1536 s += '?' + urllib.quote(self.query, safe=self._safepchars)
1537 if self.fragment is not None:
1537 if self.fragment is not None:
1538 s += '#' + urllib.quote(self.fragment, safe=self._safepchars)
1538 s += '#' + urllib.quote(self.fragment, safe=self._safepchars)
1539 return s
1539 return s
1540
1540
1541 def authinfo(self):
1541 def authinfo(self):
1542 user, passwd = self.user, self.passwd
1542 user, passwd = self.user, self.passwd
1543 try:
1543 try:
1544 self.user, self.passwd = None, None
1544 self.user, self.passwd = None, None
1545 s = str(self)
1545 s = str(self)
1546 finally:
1546 finally:
1547 self.user, self.passwd = user, passwd
1547 self.user, self.passwd = user, passwd
1548 if not self.user:
1548 if not self.user:
1549 return (s, None)
1549 return (s, None)
1550 return (s, (None, (str(self), self.host),
1550 return (s, (None, (str(self), self.host),
1551 self.user, self.passwd or ''))
1551 self.user, self.passwd or ''))
1552
1552
1553 def localpath(self):
1553 def localpath(self):
1554 if self.scheme == 'file' or self.scheme == 'bundle':
1554 if self.scheme == 'file' or self.scheme == 'bundle':
1555 path = self.path or '/'
1555 path = self.path or '/'
1556 # For Windows, we need to promote hosts containing drive
1556 # For Windows, we need to promote hosts containing drive
1557 # letters to paths with drive letters.
1557 # letters to paths with drive letters.
1558 if hasdriveletter(self._hostport):
1558 if hasdriveletter(self._hostport):
1559 path = self._hostport + '/' + self.path
1559 path = self._hostport + '/' + self.path
1560 elif self.host is not None and self.path:
1560 elif self.host is not None and self.path:
1561 path = '/' + path
1561 path = '/' + path
1562 # We also need to handle the case of file:///C:/, which
1562 # We also need to handle the case of file:///C:/, which
1563 # should return C:/, not /C:/.
1563 # should return C:/, not /C:/.
1564 elif hasdriveletter(path):
1564 elif hasdriveletter(path):
1565 # Strip leading slash from paths with drive names
1565 # Strip leading slash from paths with drive names
1566 return path[1:]
1566 return path[1:]
1567 return path
1567 return path
1568 return self._origpath
1568 return self._origpath
1569
1569
1570 def hasscheme(path):
1570 def hasscheme(path):
1571 return bool(url(path).scheme)
1571 return bool(url(path).scheme)
1572
1572
1573 def hasdriveletter(path):
1573 def hasdriveletter(path):
1574 return path[1:2] == ':' and path[0:1].isalpha()
1574 return path[1:2] == ':' and path[0:1].isalpha()
1575
1575
1576 def localpath(path):
1576 def localpath(path):
1577 return url(path, parsequery=False, parsefragment=False).localpath()
1577 return url(path, parsequery=False, parsefragment=False).localpath()
1578
1578
1579 def hidepassword(u):
1579 def hidepassword(u):
1580 '''hide user credential in a url string'''
1580 '''hide user credential in a url string'''
1581 u = url(u)
1581 u = url(u)
1582 if u.passwd:
1582 if u.passwd:
1583 u.passwd = '***'
1583 u.passwd = '***'
1584 return str(u)
1584 return str(u)
1585
1585
1586 def removeauth(u):
1586 def removeauth(u):
1587 '''remove all authentication information from a url string'''
1587 '''remove all authentication information from a url string'''
1588 u = url(u)
1588 u = url(u)
1589 u.user = u.passwd = None
1589 u.user = u.passwd = None
1590 return str(u)
1590 return str(u)
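With a hypothetical URL, the two helpers behave as follows (illustrative, not part of the original file):

    >>> hidepassword('http://joe:secret@example.com/repo')
    'http://joe:***@example.com/repo'
    >>> removeauth('http://joe:secret@example.com/repo')
    'http://example.com/repo'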
@@ -1,286 +1,286 b''
1 # windows.py - Windows utility function implementations for Mercurial
1 # windows.py - Windows utility function implementations for Mercurial
2 #
2 #
3 # Copyright 2005-2009 Matt Mackall <mpm@selenic.com> and others
3 # Copyright 2005-2009 Matt Mackall <mpm@selenic.com> and others
4 #
4 #
5 # This software may be used and distributed according to the terms of the
5 # This software may be used and distributed according to the terms of the
6 # GNU General Public License version 2 or any later version.
6 # GNU General Public License version 2 or any later version.
7
7
8 from i18n import _
8 from i18n import _
9 import osutil
9 import osutil
10 import errno, msvcrt, os, re, sys
10 import errno, msvcrt, os, re, sys
11
11
12 nulldev = 'NUL:'
12 nulldev = 'NUL:'
13 umask = 002
13 umask = 002
14
14
15 # wrap osutil.posixfile to provide friendlier exceptions
15 # wrap osutil.posixfile to provide friendlier exceptions
16 def posixfile(name, mode='r', buffering=-1):
16 def posixfile(name, mode='r', buffering=-1):
17 try:
17 try:
18 return osutil.posixfile(name, mode, buffering)
18 return osutil.posixfile(name, mode, buffering)
19 except WindowsError, err:
19 except WindowsError, err:
20 raise IOError(err.errno, '%s: %s' % (name, err.strerror))
20 raise IOError(err.errno, '%s: %s' % (name, err.strerror))
21 posixfile.__doc__ = osutil.posixfile.__doc__
21 posixfile.__doc__ = osutil.posixfile.__doc__
22
22
23 class winstdout(object):
23 class winstdout(object):
24 '''stdout on windows misbehaves if sent through a pipe'''
24 '''stdout on windows misbehaves if sent through a pipe'''
25
25
26 def __init__(self, fp):
26 def __init__(self, fp):
27 self.fp = fp
27 self.fp = fp
28
28
29 def __getattr__(self, key):
29 def __getattr__(self, key):
30 return getattr(self.fp, key)
30 return getattr(self.fp, key)
31
31
32 def close(self):
32 def close(self):
33 try:
33 try:
34 self.fp.close()
34 self.fp.close()
35 except IOError:
35 except IOError:
36 pass
36 pass
37
37
38 def write(self, s):
38 def write(self, s):
39 try:
39 try:
40 # This is a workaround for the "Not enough space" error seen when
40 # This is a workaround for the "Not enough space" error seen when
41 # writing a large amount of data to the console.
41 # writing a large amount of data to the console.
42 limit = 16000
42 limit = 16000
43 l = len(s)
43 l = len(s)
44 start = 0
44 start = 0
45 self.softspace = 0
45 self.softspace = 0
46 while start < l:
46 while start < l:
47 end = start + limit
47 end = start + limit
48 self.fp.write(s[start:end])
48 self.fp.write(s[start:end])
49 start = end
49 start = end
50 except IOError, inst:
50 except IOError, inst:
51 if inst.errno != 0:
51 if inst.errno != 0:
52 raise
52 raise
53 self.close()
53 self.close()
54 raise IOError(errno.EPIPE, 'Broken pipe')
54 raise IOError(errno.EPIPE, 'Broken pipe')
55
55
56 def flush(self):
56 def flush(self):
57 try:
57 try:
58 return self.fp.flush()
58 return self.fp.flush()
59 except IOError, inst:
59 except IOError, inst:
60 if inst.errno != errno.EINVAL:
60 if inst.errno != errno.EINVAL:
61 raise
61 raise
62 self.close()
62 self.close()
63 raise IOError(errno.EPIPE, 'Broken pipe')
63 raise IOError(errno.EPIPE, 'Broken pipe')
64
64
65 sys.stdout = winstdout(sys.stdout)
65 sys.stdout = winstdout(sys.stdout)
66
66
67 def _is_win_9x():
67 def _is_win_9x():
68 '''return true if run on windows 95, 98 or me.'''
68 '''return true if run on windows 95, 98 or me.'''
69 try:
69 try:
70 return sys.getwindowsversion()[3] == 1
70 return sys.getwindowsversion()[3] == 1
71 except AttributeError:
71 except AttributeError:
72 return 'command' in os.environ.get('comspec', '')
72 return 'command' in os.environ.get('comspec', '')
73
73
74 def openhardlinks():
74 def openhardlinks():
75 return not _is_win_9x()
75 return not _is_win_9x()
76
76
77 def parsepatchoutput(output_line):
77 def parsepatchoutput(output_line):
78 """parses the output produced by patch and returns the filename"""
78 """parses the output produced by patch and returns the filename"""
79 pf = output_line[14:]
79 pf = output_line[14:]
80 if pf[0] == '`':
80 if pf[0] == '`':
81 pf = pf[1:-1] # Remove the quotes
81 pf = pf[1:-1] # Remove the quotes
82 return pf
82 return pf
83
83
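Illustrative input and output (not in the original file), based on the two forms of "patching file" lines that patch prints:

    >>> parsepatchoutput("patching file a/b.c")
    'a/b.c'
    >>> parsepatchoutput("patching file `a file.c'")
    'a file.c'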
84 def sshargs(sshcmd, host, user, port):
84 def sshargs(sshcmd, host, user, port):
85 '''Build argument list for ssh or Plink'''
85 '''Build argument list for ssh or Plink'''
86 pflag = 'plink' in sshcmd.lower() and '-P' or '-p'
86 pflag = 'plink' in sshcmd.lower() and '-P' or '-p'
87 args = user and ("%s@%s" % (user, host)) or host
87 args = user and ("%s@%s" % (user, host)) or host
88 return port and ("%s %s %s" % (args, pflag, port)) or args
88 return port and ("%s %s %s" % (args, pflag, port)) or args
89
89
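Illustrative calls (not in the original file); Plink takes -P where plain ssh takes -p, and user/port are simply omitted when absent:

    >>> sshargs('ssh', 'example.com', 'joe', '2222')
    'joe@example.com -p 2222'
    >>> sshargs('plink.exe', 'example.com', None, '2222')
    'example.com -P 2222'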
90 def setflags(f, l, x):
90 def setflags(f, l, x):
91 pass
91 pass
92
92
93 def checkexec(path):
93 def checkexec(path):
94 return False
94 return False
95
95
96 def checklink(path):
96 def checklink(path):
97 return False
97 return False
98
98
99 def setbinary(fd):
99 def setbinary(fd):
100 # When run without a console, pipes may expose invalid
100 # When run without a console, pipes may expose invalid
101 # fileno(), usually set to -1.
101 # fileno(), usually set to -1.
102 if hasattr(fd, 'fileno') and fd.fileno() >= 0:
102 if hasattr(fd, 'fileno') and fd.fileno() >= 0:
103 msvcrt.setmode(fd.fileno(), os.O_BINARY)
103 msvcrt.setmode(fd.fileno(), os.O_BINARY)
104
104
105 def pconvert(path):
105 def pconvert(path):
106 return '/'.join(path.split(os.sep))
106 return '/'.join(path.split(os.sep))
107
107
108 def localpath(path):
108 def localpath(path):
109 return path.replace('/', '\\')
109 return path.replace('/', '\\')
110
110
111 def normpath(path):
111 def normpath(path):
112 return pconvert(os.path.normpath(path))
112 return pconvert(os.path.normpath(path))
113
113
114 def realpath(path):
114 def realpath(path):
115 '''
115 '''
116 Returns the true, canonical file system path equivalent to the given
116 Returns the true, canonical file system path equivalent to the given
117 path.
117 path.
118 '''
118 '''
119 # TODO: There may be a more clever way to do this that also handles other,
119 # TODO: There may be a more clever way to do this that also handles other,
120 # less common file systems.
120 # less common file systems.
121 return os.path.normpath(os.path.normcase(os.path.realpath(path)))
121 return os.path.normpath(os.path.normcase(os.path.realpath(path)))
122
122
123 def samestat(s1, s2):
123 def samestat(s1, s2):
124 return False
124 return False
125
125
126 # A sequence of backslashes is special iff it precedes a double quote:
126 # A sequence of backslashes is special iff it precedes a double quote:
127 # - if there's an even number of backslashes, the double quote is not
127 # - if there's an even number of backslashes, the double quote is not
128 # quoted (i.e. it ends the quoted region)
128 # quoted (i.e. it ends the quoted region)
129 # - if there's an odd number of backslashes, the double quote is quoted
129 # - if there's an odd number of backslashes, the double quote is quoted
130 # - in both cases, every pair of backslashes is unquoted into a single
130 # - in both cases, every pair of backslashes is unquoted into a single
131 # backslash
131 # backslash
132 # (See http://msdn2.microsoft.com/en-us/library/a1y7w461.aspx )
132 # (See http://msdn2.microsoft.com/en-us/library/a1y7w461.aspx )
133 # So, to quote a string, we must surround it in double quotes, double
133 # So, to quote a string, we must surround it in double quotes, double
134 # the number of backslashes that precede double quotes and add another
134 # the number of backslashes that precede double quotes and add another
135 # backslash before every double quote (being careful with the double
135 # backslash before every double quote (being careful with the double
136 # quote we've appended to the end)
136 # quote we've appended to the end)
137 _quotere = None
137 _quotere = None
138 def shellquote(s):
138 def shellquote(s):
139 global _quotere
139 global _quotere
140 if _quotere is None:
140 if _quotere is None:
141 _quotere = re.compile(r'(\\*)("|\\$)')
141 _quotere = re.compile(r'(\\*)("|\\$)')
142 return '"%s"' % _quotere.sub(r'\1\1\\\2', s)
142 return '"%s"' % _quotere.sub(r'\1\1\\\2', s)
143
143
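Illustrative quoting (not in the original file); the print form shows the raw result without Python repr escaping:

    >>> shellquote('foo bar')
    '"foo bar"'
    >>> print shellquote('say "hi"')
    "say \"hi\""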
144 def quotecommand(cmd):
144 def quotecommand(cmd):
145 """Build a command string suitable for os.popen* calls."""
145 """Build a command string suitable for os.popen* calls."""
146 if sys.version_info < (2, 7, 1):
146 if sys.version_info < (2, 7, 1):
147 # Python versions since 2.7.1 do this extra quoting themselves
147 # Python versions since 2.7.1 do this extra quoting themselves
148 return '"' + cmd + '"'
148 return '"' + cmd + '"'
149 return cmd
149 return cmd
150
150
151 def popen(command, mode='r'):
151 def popen(command, mode='r'):
152 # Work around "popen spawned process may not write to stdout
152 # Work around "popen spawned process may not write to stdout
153 # under windows"
153 # under windows"
154 # http://bugs.python.org/issue1366
154 # http://bugs.python.org/issue1366
155 command += " 2> %s" % nulldev
155 command += " 2> %s" % nulldev
156 return os.popen(quotecommand(command), mode)
156 return os.popen(quotecommand(command), mode)
157
157
158 def explain_exit(code):
158 def explainexit(code):
159 return _("exited with status %d") % code, code
159 return _("exited with status %d") % code, code
160
160
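This is the function renamed by this changeset (explain_exit -> explainexit). An illustrative call, assuming an untranslated locale so that _() returns its argument unchanged:

    >>> explainexit(1)
    ('exited with status 1', 1)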
161 # if you change this stub into a real check, please try to implement the
161 # if you change this stub into a real check, please try to implement the
162 # username and groupname functions above, too.
162 # username and groupname functions above, too.
163 def isowner(st):
163 def isowner(st):
164 return True
164 return True
165
165
166 def find_exe(command):
166 def find_exe(command):
167 '''Find executable for command searching like cmd.exe does.
167 '''Find executable for command searching like cmd.exe does.
168 If command is a basename then PATH is searched for command.
168 If command is a basename then PATH is searched for command.
169 PATH isn't searched if command is an absolute or relative path.
169 PATH isn't searched if command is an absolute or relative path.
170 An extension from PATHEXT is found and added if not present.
170 An extension from PATHEXT is found and added if not present.
171 If command isn't found None is returned.'''
171 If command isn't found None is returned.'''
172 pathext = os.environ.get('PATHEXT', '.COM;.EXE;.BAT;.CMD')
172 pathext = os.environ.get('PATHEXT', '.COM;.EXE;.BAT;.CMD')
173 pathexts = [ext for ext in pathext.lower().split(os.pathsep)]
173 pathexts = [ext for ext in pathext.lower().split(os.pathsep)]
174 if os.path.splitext(command)[1].lower() in pathexts:
174 if os.path.splitext(command)[1].lower() in pathexts:
175 pathexts = ['']
175 pathexts = ['']
176
176
177 def findexisting(pathcommand):
177 def findexisting(pathcommand):
178 'Will append extension (if needed) and return existing file'
178 'Will append extension (if needed) and return existing file'
179 for ext in pathexts:
179 for ext in pathexts:
180 executable = pathcommand + ext
180 executable = pathcommand + ext
181 if os.path.exists(executable):
181 if os.path.exists(executable):
182 return executable
182 return executable
183 return None
183 return None
184
184
185 if os.sep in command:
185 if os.sep in command:
186 return findexisting(command)
186 return findexisting(command)
187
187
188 for path in os.environ.get('PATH', '').split(os.pathsep):
188 for path in os.environ.get('PATH', '').split(os.pathsep):
189 executable = findexisting(os.path.join(path, command))
189 executable = findexisting(os.path.join(path, command))
190 if executable is not None:
190 if executable is not None:
191 return executable
191 return executable
192 return findexisting(os.path.expanduser(os.path.expandvars(command)))
192 return findexisting(os.path.expanduser(os.path.expandvars(command)))
193
193
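Illustrative lookups only; the paths below are hypothetical and the real result depends on the local PATH and PATHEXT settings:

    >>> find_exe('python')                 # hypothetical result
    'C:\\Python26\\python.exe'
    >>> find_exe('no-such-tool') is None   # nothing matching on PATH
    True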
194 def statfiles(files):
194 def statfiles(files):
195 '''Stat each file in files and yield stat or None if file does not exist.
195 '''Stat each file in files and yield stat or None if file does not exist.
196 Cluster and cache stat per directory to minimize number of OS stat calls.'''
196 Cluster and cache stat per directory to minimize number of OS stat calls.'''
197 ncase = os.path.normcase
197 ncase = os.path.normcase
198 dircache = {} # dirname -> filename -> status | None if file does not exist
198 dircache = {} # dirname -> filename -> status | None if file does not exist
199 for nf in files:
199 for nf in files:
200 nf = ncase(nf)
200 nf = ncase(nf)
201 dir, base = os.path.split(nf)
201 dir, base = os.path.split(nf)
202 if not dir:
202 if not dir:
203 dir = '.'
203 dir = '.'
204 cache = dircache.get(dir, None)
204 cache = dircache.get(dir, None)
205 if cache is None:
205 if cache is None:
206 try:
206 try:
207 dmap = dict([(ncase(n), s)
207 dmap = dict([(ncase(n), s)
208 for n, k, s in osutil.listdir(dir, True)])
208 for n, k, s in osutil.listdir(dir, True)])
209 except OSError, err:
209 except OSError, err:
210 # handle directory not found in Python versions prior to 2.5
210 # handle directory not found in Python versions prior to 2.5
211 # Python <= 2.4 returns native Windows code 3 in errno
211 # Python <= 2.4 returns native Windows code 3 in errno
212 # Python >= 2.5 returns ENOENT and adds winerror field
212 # Python >= 2.5 returns ENOENT and adds winerror field
213 # EINVAL is raised if dir is not a directory.
213 # EINVAL is raised if dir is not a directory.
214 if err.errno not in (3, errno.ENOENT, errno.EINVAL,
214 if err.errno not in (3, errno.ENOENT, errno.EINVAL,
215 errno.ENOTDIR):
215 errno.ENOTDIR):
216 raise
216 raise
217 dmap = {}
217 dmap = {}
218 cache = dircache.setdefault(dir, dmap)
218 cache = dircache.setdefault(dir, dmap)
219 yield cache.get(base, None)
219 yield cache.get(base, None)
220
220
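A small usage sketch (not in the original file); the file names are hypothetical and the stat objects returned by osutil.listdir are assumed to expose st_size like os.stat results:

    files = ['a.txt', 'sub\\b.txt']
    for name, st in zip(files, statfiles(files)):
        if st is None:
            print '%s: missing' % name
        else:
            print '%s: %d bytes' % (name, st.st_size)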
221 def username(uid=None):
221 def username(uid=None):
222 """Return the name of the user with the given uid.
222 """Return the name of the user with the given uid.
223
223
224 If uid is None, return the name of the current user."""
224 If uid is None, return the name of the current user."""
225 return None
225 return None
226
226
227 def groupname(gid=None):
227 def groupname(gid=None):
228 """Return the name of the group with the given gid.
228 """Return the name of the group with the given gid.
229
229
230 If gid is None, return the name of the current group."""
230 If gid is None, return the name of the current group."""
231 return None
231 return None
232
232
233 def _removedirs(name):
233 def _removedirs(name):
234 """special version of os.removedirs that does not remove symlinked
234 """special version of os.removedirs that does not remove symlinked
235 directories or junction points if they actually contain files"""
235 directories or junction points if they actually contain files"""
236 if osutil.listdir(name):
236 if osutil.listdir(name):
237 return
237 return
238 os.rmdir(name)
238 os.rmdir(name)
239 head, tail = os.path.split(name)
239 head, tail = os.path.split(name)
240 if not tail:
240 if not tail:
241 head, tail = os.path.split(head)
241 head, tail = os.path.split(head)
242 while head and tail:
242 while head and tail:
243 try:
243 try:
244 if osutil.listdir(head):
244 if osutil.listdir(head):
245 return
245 return
246 os.rmdir(head)
246 os.rmdir(head)
247 except (ValueError, OSError):
247 except (ValueError, OSError):
248 break
248 break
249 head, tail = os.path.split(head)
249 head, tail = os.path.split(head)
250
250
251 def unlinkpath(f):
251 def unlinkpath(f):
252 """unlink and remove the directory if it is empty"""
252 """unlink and remove the directory if it is empty"""
253 unlink(f)
253 unlink(f)
254 # try removing directories that might now be empty
254 # try removing directories that might now be empty
255 try:
255 try:
256 _removedirs(os.path.dirname(f))
256 _removedirs(os.path.dirname(f))
257 except OSError:
257 except OSError:
258 pass
258 pass
259
259
260 def rename(src, dst):
260 def rename(src, dst):
261 '''atomically rename file src to dst, replacing dst if it exists'''
261 '''atomically rename file src to dst, replacing dst if it exists'''
262 try:
262 try:
263 os.rename(src, dst)
263 os.rename(src, dst)
264 except OSError, e:
264 except OSError, e:
265 if e.errno != errno.EEXIST:
265 if e.errno != errno.EEXIST:
266 raise
266 raise
267 unlink(dst)
267 unlink(dst)
268 os.rename(src, dst)
268 os.rename(src, dst)
269
269
270 def gethgcmd():
270 def gethgcmd():
271 return [sys.executable] + sys.argv[:1]
271 return [sys.executable] + sys.argv[:1]
272
272
273 def termwidth():
273 def termwidth():
274 # cmd.exe does not handle CR like a unix console, the CR is
274 # cmd.exe does not handle CR like a unix console, the CR is
275 # counted in the line length. On 80 columns consoles, if 80
275 # counted in the line length. On 80 columns consoles, if 80
276 # characters are written, the following CR won't apply on the
276 # characters are written, the following CR won't apply on the
277 # current line but on the new one. Keep room for it.
277 # current line but on the new one. Keep room for it.
278 return 79
278 return 79
279
279
280 def groupmembers(name):
280 def groupmembers(name):
281 # Don't support groups on Windows for now
281 # Don't support groups on Windows for now
282 raise KeyError()
282 raise KeyError()
283
283
284 from win32 import *
284 from win32 import *
285
285
286 expandglobs = True
286 expandglobs = True